Gracenote, Inc., No. 15/620,440 (P.T.A.B. Oct. 23, 2020)

UNITED STATES PATENT AND TRADEMARK OFFICE
UNITED STATES DEPARTMENT OF COMMERCE
United States Patent and Trademark Office
Address: COMMISSIONER FOR PATENTS, P.O. Box 1450, Alexandria, Virginia 22313-1450
www.uspto.gov

APPLICATION NO.: 15/620,440
FILING DATE: 06/12/2017
FIRST NAMED INVENTOR: Dewey Ho Lee
ATTORNEY DOCKET NO.: 17-129
CONFIRMATION NO.: 5460

139404 7590 10/23/2020
McDonnell Boehnen Hulbert & Berghoff LLP/Gracenote
300 South Wacker Drive, Suite 3100
Chicago, IL 60606

EXAMINER: ANDRAMUNO, FRANKLIN S
ART UNIT: 2424
MAIL DATE: 10/23/2020
DELIVERY MODE: PAPER

Please find below and/or attached an Office communication concerning this application or proceeding. The time period for reply, if any, is set in the attached communication. PTOL-90A (Rev. 04/07)

UNITED STATES PATENT AND TRADEMARK OFFICE

BEFORE THE PATENT TRIAL AND APPEAL BOARD

Ex parte DEWEY HO LEE, SHASHANK C. MERCHANT, and MARKUS K. CREMER

Appeal 2020-000610
Application 15/620,440
Technology Center 2400

Before ST. JOHN COURTENAY III, ELENI MANTIS MERCADER, and JUSTIN BUSCH, Administrative Patent Judges.

BUSCH, Administrative Patent Judge.

DECISION ON APPEAL

Pursuant to 35 U.S.C. § 134(a), Appellant[1] appeals from the Examiner's decision to reject claims 1–20, which constitute all the claims pending. Oral argument was heard on September 17, 2020. A transcript of the hearing has been added to the record. We have jurisdiction over the pending claims under 35 U.S.C. § 6(b). We reverse.

[1] We use the word Appellant to refer to "applicant" as defined in 37 C.F.R. § 1.42. Appellant identifies the real party in interest as Gracenote, Inc., a subsidiary of The Nielsen Company (US), LLC. Appeal Br. 1.

STATEMENT OF THE CASE

Introduction

The invention generally relates to a video presentation device, alone or in combination with a server or computing system, identifying video content being rendered by evaluating the content without receiving information regarding the content from the video source. Spec. ¶ 8. More specifically, the described and claimed invention relates to using video content fingerprints, which have a first portion representing a pre-established video segment and a second portion representing a dynamically-defined video segment, to (1) identify, by matching the first portion to a reference fingerprint, the content being rendered and (2) detect, by applying a neural network to the second portion, that the content continues to be rendered. Spec. ¶¶ 11, 13–20, Figs. 2–4. Claims 1, 16, and 19 are independent claims.
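For orientation only, the following is a minimal sketch of the flow the Specification describes, under the assumptions that fingerprints can be treated as byte strings compared by Hamming distance and that the trained neural network can be treated as a scoring callable. Every name, type, and threshold below (QueryFingerprint, detect_and_respond, match_threshold, and so on) is invented for this illustration and is not drawn from the Specification or the record.

```python
# Hypothetical sketch of the claimed flow; nothing below comes from the record.
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class QueryFingerprint:
    first_portion: bytes   # represents the pre-established (non-interactive) segment
    second_portion: bytes  # represents the dynamically-defined (interactive) segment

def hamming(a: bytes, b: bytes) -> int:
    # Bit-level distance between two equal-length fingerprints.
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

def detect_and_respond(
    query: QueryFingerprint,
    references: Dict[str, bytes],                  # content id -> reference fingerprint
    continuation_model: Callable[[bytes], float],  # stand-in for a trained neural network
    act: Callable[[str], None],                    # content-specific action
    match_threshold: int = 8,
    continue_threshold: float = 0.5,
) -> None:
    # (a)/(b): match the first portion against reference fingerprints of
    # pre-established segments to identify the content being rendered.
    content_id = min(references, key=lambda cid: hamming(query.first_portion, references[cid]))
    if hamming(query.first_portion, references[content_id]) > match_threshold:
        return  # no identification, hence no action

    # (c)/(d): apply the trained model to the second (dynamic) portion to decide
    # whether rendering of the identified content continues.
    if continuation_model(query.second_portion) >= continue_threshold:
        act(content_id)  # take action specific to the identified content

# Toy usage: the first portion matches "show-1" and the model reports continuation.
q = QueryFingerprint(first_portion=bytes([0b1010]), second_portion=bytes([0b0110]))
detect_and_respond(q, {"show-1": bytes([0b1011])}, lambda p: 0.9, print)  # prints "show-1"
```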
Claim 1 is reproduced below:

1. A method of detecting and responding to rendering of video content by a video presentation device, wherein the video content includes (i) a pre-established video segment that does not vary based on user-interaction during the rendering and (ii) a dynamically-defined video segment that varies based on user-interaction during the rendering, the method comprising:

obtaining by a computing system a query fingerprint generated in real-time during the rendering as a representation of the video content being rendered, the query fingerprint including a first portion representing the pre-established video segment and a second portion representing the dynamically-defined video segment;

while obtaining the query fingerprint, the computing system (a) detecting a match between the first portion of the query fingerprint and a reference fingerprint that represents the pre-established video segment, (b) based on the detecting of the match, identifying the video content being rendered, (c) after identifying the video content being rendered, applying a trained neural network to at least the second portion of the query fingerprint, and (d) detecting, based on the applying of the neural network, that rendering of the identified video content continues; and

responsive to at least the detecting that rendering of the identified video content continues, the computing system taking action specific to the identified video content.

The Pending Rejections

Claims 1–6, 11, 12, 16, 17, and 19 stand rejected under 35 U.S.C. § 103 as obvious in view of Sakaguchi (US 2006/0252533 A1; Nov. 9, 2006), Burges (US 2006/0106867 A1; May 18, 2006), and Ham (US 2008/0318676 A1; Dec. 25, 2008). Final Act. 5–15.

Claims 7–10, 18, and 20 stand rejected under 35 U.S.C. § 103 as obvious in view of Sakaguchi, Burges, Ham, and Rose (US 2014/0373043 A1; Dec. 18, 2014). Final Act. 16–19.

Claim 13 stands rejected under 35 U.S.C. § 103 as obvious in view of Sakaguchi, Burges, Ham, and Soderstrom (US 2009/0150947 A1; June 11, 2009). Final Act. 20–21.

Claim 14 stands rejected under 35 U.S.C. § 103 as obvious in view of Sakaguchi, Burges, Ham, Soderstrom, and Lewis (US 9,872,076 B1; Jan. 16, 2018). Final Act. 21–22.

Claim 15 stands rejected under 35 U.S.C. § 103 as obvious in view of Sakaguchi, Burges, Ham, and Hua (US 2018/0041765 A1; Feb. 8, 2018). Final Act. 22–23.

ANALYSIS

The Examiner finds the combination of Sakaguchi, Burges, and Ham teaches or suggests every limitation recited in independent claims 1, 16, and 19. Final Act. 5–9, 11–15. Of particular relevance to this Appeal, the Examiner finds (1) Sakaguchi teaches or suggests a method of detecting and responding to rendering video content including a pre-established segment and a dynamic segment because Sakaguchi divides a game play screen into a portion containing interactive content (i.e., dynamic) and other portions containing non-playable cutscenes (i.e., pre-established) and (2) Burges teaches or suggests a query fingerprint representing streamed media, as recited in claim 1, because Burges's system identifies media objects in a media stream by comparing traces from sampled portions of the streamed media to a database having known fingerprints. Final Act. 5–6 (citing Sakaguchi ¶¶ 6, 76; Burges ¶¶ 10, 15); see also Ans. 3 ("Examiner is clear in pointing out the 'fingerprint' is taught by Burges (see page 2 paragraph (0015)) not by Ham.").
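As background to the Examiner's characterization of Burges (identifying media objects by comparing traces from sampled portions of a stream against a database of known fingerprints), the following toy nearest-neighbor lookup illustrates that general kind of comparison. It is a sketch under stated assumptions, not Burges's disclosed method; the function name, distance metric, and threshold are all invented.

```python
# Toy fingerprint-to-database matching of the general kind attributed to Burges.
# Not Burges's actual method; all names and thresholds are invented.
from typing import Dict, Optional, Tuple

def best_match(trace: bytes, database: Dict[str, bytes],
               max_distance: int = 10) -> Optional[Tuple[str, int]]:
    """Return (media id, distance) for the closest stored fingerprint within max_distance."""
    best: Optional[Tuple[str, int]] = None
    for media_id, stored in database.items():
        # Hamming distance between the sampled trace and the stored fingerprint.
        d = sum(bin(x ^ y).count("1") for x, y in zip(trace, stored))
        if best is None or d < best[1]:
            best = (media_id, d)
    return best if best is not None and best[1] <= max_distance else None

# Example: the sampled trace is closest to the stored fingerprint for "episode-42".
db = {"episode-42": bytes([0b1010, 0b0110]), "ad-7": bytes([0b1111, 0b0000])}
print(best_match(bytes([0b1010, 0b0111]), db))  # ('episode-42', 1)
```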
The Examiner finds the combination of Sakaguchi and Burges, however, is "silent in teaching a query first portion representing the pre-established video segment and a second portion representing the dynamically-defined video segment," and the Examiner therefore relies on Ham for these teachings. Final Act. 7 (citing Ham ¶ 52). The Examiner also finds Ham teaches the remaining limitations—i.e., the limitations regarding (1) detecting a match between a reference and a portion of a query (2) to identify the rendered content, (3) applying a neural network to the query (4) to detect the identified content continues to be rendered, and (5) taking an action in response to detecting rendering continues—with respect to a "query" and a "reference," but the Examiner does not find Ham teaches performing these actions with respect to query and reference fingerprints representing portions of the content. Final Act. 7–8 (citing Ham ¶¶ 55, 56, 58).[2]

[2] The Examiner appears to map Ham's "impression-to-response mapping (330)" to a first portion of a query and Ham's "action to response database (328)" to a reference. Ans. 4. These findings (i.e., that Ham's impression-to-response mapping 330 and action-to-response database 328 teach or suggest a first portion of a query and a reference, respectively) are not clearly explained, but for purposes of this Appeal, we accept the Examiner's findings.

The Examiner concludes it would have been obvious to include Ham's teachings "for a query first portion representing the pre-established video segment and a second portion representing the dynamically-defined video segment" with the Sakaguchi-Burges combination, finding that "[a] useful combination is found on Ham (page 1 paragraph (0004)) a determination is made that player's avatar has performed an action while an audio signal representing a narrative of a non-player character is being produced." Final Act. 8–9. The Examiner further states that "combining Sakaguchi Burges and Ham is proper because they are related to media stream combined to display content to users." Ans. 3.

Appellant argues the rejection fails to address the claim limitations as recited. Appeal Br. 5–12; Reply Br. 3–4. More specifically, Appellant contends the Examiner omits the "fingerprint" term from portions of the claim language such that the Examiner's findings regarding Sakaguchi, Burges, and Ham fail to address the particularly recited claim limitations. See, e.g., Appeal Br. 5–6. Appellant argues that, even assuming Burges teaches using fingerprints to identify content, the claims recite a query fingerprint with particularly recited portions representing different types of video content and using the different portions in particular ways. Appellant then asserts that, even accepting Ham teaches what the Examiner finds it teaches regarding various logic functions and Burges generally teaches using fingerprints, Ham does not relate to fingerprints and the Examiner has not explained how or why Ham's logic functions would be included in the proposed Sakaguchi-Burges system to result in (1) query fingerprints having the recited portions or (2) performing the recited actions on the respective portions of the query fingerprint. Appeal Br. 7–8; Reply Br. 3–4; see also Appeal Br. 11–12 (arguing that, even if combined, the Examiner has not sufficiently explained how the disclosures from Sakaguchi, Burges, and Ham that the Examiner relies on would have taught or suggested the recited limitations relating to analyzing the portions of the query fingerprint).
We agree with Appellant. Even accepting the Examiner's findings regarding the teachings of Sakaguchi, Burges, and Ham, it is unclear from the Examiner's findings and explanation how the Examiner proposes combining the cited teachings to result in at least (1) a query fingerprint with two distinct portions, respectively representing a pre-established video segment and a dynamically-defined video segment, and (2) after identifying the rendered content, detecting that the identified content continues to be rendered by applying a trained neural network to the second portion of the fingerprint (that represents a dynamically-defined segment).

"When a reference is complex or shows or describes inventions other than that claimed by the applicant, the particular part relied on must be designated as nearly as practicable" and "[t]he pertinence of each reference, if not apparent, must be clearly explained and each rejected claim specified." 37 C.F.R. § 1.104(c)(2) (emphases added). An agency is bound by its own regulations. See Service v. Dulles, 354 U.S. 363, 388 (1957).

Because the Examiner has not sufficiently explained how Sakaguchi, Burges, and Ham, in combination, teach or suggest the particular limitations recited in representative claim 1, we are constrained by this record to reverse the rejection of independent claim 1, and of claims 16 and 19, which recite commensurate limitations, as obvious under 35 U.S.C. § 103 in view of Sakaguchi, Burges, and Ham. Each of the dependent claims includes the same limitations via its ultimate dependency from one of claims 1, 16, and 19, and the Examiner does not find that any of the additionally cited art cures this deficiency. Accordingly, we also reverse the rejection of dependent claims 2–15, 17, 18, and 20 as obvious under 35 U.S.C. § 103 in view of the cited prior art for the same reasons.

DECISION SUMMARY

Claims Rejected          | 35 U.S.C. § | References                                 | Affirmed | Reversed
1–6, 11, 12, 16, 17, 19  | 103         | Sakaguchi, Burges, Ham                     |          | 1–6, 11, 12, 16, 17, 19
7–10, 18, 20             | 103         | Sakaguchi, Burges, Ham, Rose               |          | 7–10, 18, 20
13                       | 103         | Sakaguchi, Burges, Ham, Soderstrom         |          | 13
14                       | 103         | Sakaguchi, Burges, Ham, Soderstrom, Lewis  |          | 14
15                       | 103         | Sakaguchi, Burges, Ham, Hua                |          | 15
Overall Outcome          |             |                                            |          | 1–20

REVERSED