Ex parte Bedingfield, No. 11/412,004 (P.T.A.B. Feb. 27, 2013)

UNITED STATES PATENT AND TRADEMARK OFFICE
UNITED STATES DEPARTMENT OF COMMERCE
Address: COMMISSIONER FOR PATENTS, P.O. Box 1450, Alexandria, Virginia 22313-1450
www.uspto.gov

APPLICATION NO.: 11/412,004
FILING DATE: 04/26/2006
FIRST NAMED INVENTOR: James Carlton Bedingfield SR.
ATTORNEY DOCKET NO.: 9400-188 (050037)
CONFIRMATION NO.: 4813

39072 7590 02/28/2013
AT&T Legal Department - MB
Attn: Patent Docketing
Room 2A-207
One AT&T Way
Bedminster, NJ 07921

EXAMINER: ENGLAND, SARA M
ART UNIT: 2172
MAIL DATE: 02/28/2013
DELIVERY MODE: PAPER

Please find below and/or attached an Office communication concerning this application or proceeding. The time period for reply, if any, is set in the attached communication.

PTOL-90A (Rev. 04/07)

UNITED STATES PATENT AND TRADEMARK OFFICE
____________
BEFORE THE PATENT TRIAL AND APPEAL BOARD
____________
Ex parte JAMES CARLTON BEDINGFIELD SR.
____________
Appeal 2011-000084
Application 11/412,004
Technology Center 2100
____________

Before CARL W. WHITEHEAD, JR., ERIC S. FRAHM, and ANDREW J. DILLON, Administrative Patent Judges.

DILLON, Administrative Patent Judge.

DECISION ON APPEAL

Appellant appeals under 35 U.S.C. § 134(a) from the Examiner’s rejection of claims 1-10 and 12-19. We have jurisdiction under 35 U.S.C. § 6(b).

We affirm.

STATEMENT OF THE CASE

“According to some embodiments of the present invention, recorded audio information is managed by using annotation markers.” Spec., ¶ 5. “[A]nnotating the audio information … comprises processing the audio information to convert the audio information to text information, electronically generating a concordance comprising selected words from the text information, and saving the text information and the concordance in the electronically searchable file.” Spec., ¶ 8.

The Examiner relies on the following references as evidence of unpatentability:

Moran       US 5,717,869           Feb. 10, 1998
Spielberg   US 2002/0129057 A1     Sep. 12, 2002

REJECTION

The Examiner rejected claims 1, 2, 6-10, 12-15, and 19 under 35 U.S.C. § 102(b) as anticipated by Moran. Ans. 3-6.[1]

The Examiner rejected claims 3-5 and 16-18 under 35 U.S.C. § 103(a) as unpatentable over Moran and Spielberg. Ans. 6-10.

[1] Throughout this opinion, we refer to the Appeal Brief filed June 1, 2010 (“App. Br.”), the Examiner’s Answer mailed July 8, 2010 (“Ans.”), and the Reply Brief filed September 8, 2010 (“Reply Br.”).

MORAN

Moran teaches a system for organizing business meeting data. Abstract; col. 5, ll. 15-21. Moran achieves this objective via three types of data – timestreams, events, and sessions. Col. 6, l. 34 – col. 9, l. 14. Essentially, a timestream constitutes a portion of media content, an event constitutes an occurrence within media content, and a session maps timestreams and their event(s) to one another. Id. All three data types can be stored within a Timestream Database. Id. at col. 9, ll. 16-17; col. 16, ll. 1-27. The session data and event data can be searched within the Timestream Database. Id. at col. 16, ll. 31-44 (“database querying capability to allow selective retrieval of Sessions and Events”).
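To make Moran’s three data types concrete, the following Python sketch models timestreams, events, and sessions, together with a minimal searchable store standing in for the Timestream Database. The sketch is purely illustrative: every class name, field, and method here is an assumption adopted for exposition, not a disclosure of Moran’s actual implementation.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Timestream:
    """A portion of recorded media content, e.g., one audio recording."""
    stream_id: str
    media: bytes  # the raw recorded content

@dataclass
class Event:
    """An occurrence within media content, usable as a replay index."""
    stream_id: str    # the timestream in which the event occurs
    timestamp: float  # offset into that timestream, in seconds
    name: str         # a searchable label for the occurrence

@dataclass
class Session:
    """Maps timestreams and their events to one another."""
    session_id: str
    timestream_ids: List[str] = field(default_factory=list)
    events: List[Event] = field(default_factory=list)

class TimestreamDatabase:
    """Stores all three data types and supports the selective retrieval
    of Sessions and Events described at Moran, col. 16, ll. 31-44."""

    def __init__(self) -> None:
        self.timestreams: Dict[str, Timestream] = {}
        self.sessions: Dict[str, Session] = {}

    def add_session(self, session: Session,
                    streams: List[Timestream]) -> None:
        self.sessions[session.session_id] = session
        for stream in streams:
            self.timestreams[stream.stream_id] = stream

    def query_events(self, name: str) -> List[Event]:
        """Selectively retrieve events carrying a given searchable label."""
        return [event
                for session in self.sessions.values()
                for event in session.events
                if event.name == name]
```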
ANALYSIS

Claims 1-3, 5-10, 12-16, 18, and 19

Claims 1-3, 5-10, 12-16, 18, and 19 stand or fall together. App. Br., pp. 6-7. We select claim 1, reproduced below, as representative. 37 C.F.R. § 41.37(c)(iv).

1. A method of managing information, comprising:
recording audio information;
annotating the audio information to include at least one marker so as to modify the audio information, the at least one marker being searchable; and
saving the annotated audio information including the at least one marker in an electronically searchable file.

The Examiner reads the claimed audio information on Moran’s Timestream Database and, particularly, on the stored “temporal data” comprised of timestreams and events. Ans., p. 10 (citing Moran, col. 16, ll. 23-31); see also id. at p. 3 (citing Moran, col. 3, ll. 1-8). The Examiner reads the claimed “marker” on the event data of the Timestream Database. Id. at p. 8 (citing Moran, col. 16, ll. 35-38).

Appellant’s arguments fail to address the Examiner’s reading of the claimed audio information on the Timestream Database’s temporal data. App. Br., pp. 5-7. The arguments instead incorrectly state that the claimed audio information is read strictly on Moran’s timestream data. Id. This mistake is best exemplified by the Appeal Brief’s asserted summary of the issues:

In summary, Moran discloses “annotating” an audio/video file through user interaction with a presentation while the presentation is being recorded. While these annotations may modify the audio/video information, they are not searchable. Moran also discloses generating session and event data from time stream data corresponding to a presentation. The session and event data, while searchable, do not modify the timestream data. They may be stored in the same or different databases, but the original time stream data remain intact. Thus, Moran does not disclose or suggest, at least, annotating audio information in such a way that it modifies the audio information and includes at least one searchable marker as recited in the pending independent claims.

App. Br., p. 7 (italic emphasis added).

Appellant repeats this mistake within the Reply Brief, arguing:

Moran does not describe modifying the bulk timestream data to include event and/or session data. Even if the bulk time stream data is stored in the same database, they still remain as separate entities within the database. The bulk timestream data is not modified by the event and/or session data. By contrast, independent Claims 1, 14, and 19 state that the audio information is modified with at least one marker.

Reply Br., p. 2 (italic emphasis added).

The record clearly reflects, in the several respects presented below, that the claimed audio information is read on the Timestream Database’s temporal data. First, in addressing the claimed step of recording audio information, the rejection cites to temporal data comprising both timestreams and events. Ans., p. 3 (citing Moran, col. 3, ll. 1-8); see also Final Rej., p. 2. Second, in addressing the claimed step of annotating/modifying the audio information, the Answer’s “Response to Arguments” section explains that Moran’s “[t]emporal data is a combination of the timestream and Events (Col. 6, lines 16-17)[;] therefore when the event is named, the temporal data is altered.” Ans., p. 12. And third, the “Response to Arguments” section block quotes a portion of Appellant’s Specification indicating that the claimed audio information may be any conglomeration of audio information bound within a cohesive unit and, moreover, emphasizes the passage’s example of “database document” in bold characters. Id. at p. 11 (citing Spec., ¶ 33).

In light of the above, Appellant should have understood the rejection as reading the claimed audio information on the Timestream Database’s temporal data. Instead, Appellant incorrectly addresses the rejection as though it reads the claimed audio information strictly on Moran’s timestream data. As such, the arguments do not address the rejection, much less identify a reversible error.

Accordingly, we sustain the anticipation rejection of claim 1 and the rejections of claims 2, 3, 5-10, 12-16, 18, and 19 falling therewith.

We find the Examiner’s reading of the claimed invention on Moran’s teachings to be reasonable. As reflected above, Appellant’s Specification presents an expansive interpretation of “audio information” that encompasses any conglomeration of audio information bound within a coherent unit. Spec., ¶ 33. Given this expansive scope, Moran’s system can be reasonably interpreted as: “recording audio information,” as claimed, by way of generating the Timestream Database’s temporal data; “annotating the audio information to include at least one marker so as to modify the audio information, the at least one marker being searchable,” as claimed, by way of including the searchable event data within the Timestream Database’s temporal data; and “saving the annotated audio information including the at least one marker in an electronically searchable file,” as claimed, by way of maintaining the Timestream Database as a searchable database.
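Building on the illustrative data model sketched above, the Examiner’s three-part mapping might be rendered as follows. This is a hedged sketch of the mapping as the panel describes it, not a construction of record; the function names and demo values are hypothetical.

```python
def record_audio_information(db: TimestreamDatabase, session_id: str,
                             audio: bytes) -> Session:
    """Step 1, 'recording audio information': generate the temporal data
    (timestream plus events) held by the Timestream Database."""
    stream = Timestream(stream_id=session_id + "-audio", media=audio)
    session = Session(session_id=session_id,
                      timestream_ids=[stream.stream_id])
    db.add_session(session, [stream])
    return session

def annotate(db: TimestreamDatabase, session_id: str,
             timestamp: float, marker_name: str) -> None:
    """Step 2, 'annotating ... so as to modify the audio information':
    adding a named, searchable event alters the temporal-data unit
    of which it is a part."""
    session = db.sessions[session_id]
    session.events.append(Event(stream_id=session.timestream_ids[0],
                                timestamp=timestamp, name=marker_name))

# Step 3, 'saving ... in an electronically searchable file', corresponds
# to maintaining the annotated temporal data in the searchable database.
db = TimestreamDatabase()
record_audio_information(db, "mtg-01", b"\x00")  # placeholder audio bytes
annotate(db, "mtg-01", timestamp=42.0, marker_name="budget")
assert db.query_events("budget")  # the marker is searchable
```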
Claims 4 and 17

Remaining claims 4 and 17 stand or fall together. App. Br., p. 8. We select claim 4, reproduced below, as representative. 37 C.F.R. § 41.37(c)(iv).

4. The method of Claim 1, wherein annotating the audio information and saving the annotated audio information comprises:
processing the audio information to convert the audio information to text information;
electronically generating a concordance comprising selected words from the text information; and
saving the text information and the concordance in the electronically searchable file.

The Examiner finds that the further subject matter is taught by Moran as follows:

Moran states “The timestreams are analyzed to create a set of events for each timestream. An event is subsequently used as an index for replaying the session” as described by the abstract. Moran further states “Events are used to create indices which provide direct access to a point or span in time during the collaborative activity. Timestreams may inherently define events, or alternatively may be analyzed to identify events[.]”

Ans., p. 13 (citing Moran, col. 3, ll. 15-18).

Appellant responds:

Claims 4 and 17 … state that the concordance is generated from selected words taken from the audio information – not from annotations made to the audio information. By contrast, it appears that the Examiner’s Answer is alleging that Moran teaches that a concordance can be made from textual annotations used for indexing underlying audio/video data as opposed to generating a concordance based on the underlying audio/video data.

Reply Br., p. 2.

Appellant misinterprets the rejection. As shown by the above block quote of the Answer, the Examiner finds that the subject matter of claim 4 is suggested by Moran’s analyzing of timestreams to create events, which are in turn used to create indices. Ans., p. 13 (citing Moran, col. 3, ll. 15-18). This process is generally described by Moran’s abstract, as discussed by the Examiner (see the Examiner’s block quote, above).

The “analyzing” part of this process is particularly described with respect to Moran’s Analyzer, as follows:

The analyzer accesses the data in timestreams of the given session, and creates events associated with the session. … For example the analyzer might be an audio word spotter which creates an event every time a given word (which would be part of the specs string) is spoken. Note that the Analyzer could be running after a session has been recorded[.]

Moran, col. 8, ll. 51-61.

An exemplary use of the analyzer would be to store timestreams of audio content within the Timestream Database as temporal data (id. at col. 13, ll. 50-54; col. 14, ll. 19-22), process the temporal data with a word-spotting analyzer so as to generate events for particular words (id. at col. 8, ll. 51-61; col. 14, ll. 26-31), and then use the events to generate indices for some of the spotted words (id. at col. 3, ll. 16-19; col. 5, ll. 15-21). The claimed text information and concordance read on the events and indices of the spotted words, respectively.

Accordingly, we sustain the obviousness rejection of claim 4 and claim 17 falling therewith.

We note the Examiner relies on Spielberg as teaching that audio content can be automatically converted to text indices. Ans., p. 8 (Moran “fail[s] to show converting audio to text, that text to be the input for the indices.”). We find this reliance on Spielberg unnecessary. Claim 4 more broadly recites converting the “audio information” to “text information” and, in turn, “electronically generating” a concordance comprising words of the text information. The use of an audio word spotter to “create[] an event every time a given word (which would be part of the specs string) is spoken” (see the above block quote of Moran, col. 8, ll. 51-61) implicitly entails some manner of converting audio information to text information, e.g., determining a text-information equivalent of the audio information. Otherwise, the audio information would not be recognized as including the selected words of the text information. Any ensuing generation of indices comprising the selected words – even by user input, for example – would in turn constitute “electronically generating a concordance,” as claimed.
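The word-spotting pipeline just described can be sketched in the same illustrative Python. Moran’s spotter operates on audio; consistent with the panel’s observation that spotting implicitly entails determining a text equivalent, this toy version starts from that text equivalent directly. The function names and the word-offset timestamps are assumptions made for illustration.

```python
def spot_words(transcript, selected_words):
    """Toy word spotter: emit one event per occurrence of a selected
    word, using the word's offset as a crude timestamp."""
    events = []
    for offset, word in enumerate(transcript):
        if word in selected_words:
            events.append((offset, word))
    return events

def build_concordance(events):
    """Generate indices (a concordance) from the spotted-word events:
    each selected word maps to the points at which it occurs."""
    concordance = {}
    for offset, word in events:
        concordance.setdefault(word, []).append(offset)
    return concordance

transcript = "we should revisit the budget before the next budget cycle".split()
events = spot_words(transcript, selected_words={"revisit", "budget"})
print(build_concordance(events))  # {'revisit': [2], 'budget': [4, 8]}
```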
ORDER

The Examiner’s decision rejecting claims 1-10 and 12-19 is affirmed.

No time period for taking any subsequent action in connection with this appeal may be extended under 37 C.F.R. § 1.136(a).

AFFIRMED

tkl