Ex Parte Venkitaraman et al., No. 13/723,176 (P.T.A.B. Mar. 1, 2017)

UNITED STATES DEPARTMENT OF COMMERCE
United States Patent and Trademark Office
Address: COMMISSIONER FOR PATENTS, P.O. Box 1450, Alexandria, Virginia 22313-1450, www.uspto.gov

APPLICATION NO.: 13/723,176
FILING DATE: 12/20/2012
FIRST NAMED INVENTOR: Narayanan Venkitaraman
ATTORNEY DOCKET NO.: CS40330
CONFIRMATION NO.: 1021

ARRIS Enterprises LLC
Legal Dept - Docketing
101 Tournament Drive
HORSHAM, PA 19044

EXAMINER: GOOD JOHNSON, MOTILEWA
ART UNIT: 2616
NOTIFICATION DATE: 03/03/2017
DELIVERY MODE: ELECTRONIC

Please find below and/or attached an Office communication concerning this application or proceeding. The time period for reply, if any, is set in the attached communication. Notice of the Office communication was sent electronically on the above-indicated “Notification Date” to the following e-mail address(es): arris.docketing@arris.com

UNITED STATES PATENT AND TRADEMARK OFFICE
BEFORE THE PATENT TRIAL AND APPEAL BOARD

Ex parte GENERAL INSTRUMENT CORPORATION

Appeal 2016-006053
Application 13/723,176
Technology Center 2600

Before JEREMY J. CURCURI, GREGG I. ANDERSON, and KARA L. SZPONDOWSKI, Administrative Patent Judges.

ANDERSON, Administrative Patent Judge.

DECISION ON APPEAL

Appellants appeal under 35 U.S.C. § 134(a) from the Examiner’s rejection of claims 1–24.[1] We have jurisdiction under 35 U.S.C. § 6(b). We reverse.

[1] In this Opinion, we refer to the Appeal Brief (“App. Br.,” filed October 19, 2015), the Reply Brief (“Reply Br.,” filed May 25, 2016), the Final Office Action (“Final Act.,” mailed May 22, 2015), the Examiner’s Answer (“Ans.,” mailed Mar. 25, 2016), and the original Specification (“Spec.,” filed Dec. 20, 2012).

STATEMENT OF THE CASE

A.
The Invention

Appellants’ invention relates to augmented reality (AR), a “combination of a captured real-world environment with computer generated data, thus creating an ‘augmented’ view of the captured real-world environment.” Spec. ¶ 1. An example of AR is “in a football game, the broadcaster may overlay an image of a line that represents the first down on a view of the football field.” Id. “As another example, video capture of an object using a mobile device camera may provide more information about the object overlaid on the video.” Id.

The AR system receives a media stream at a receiving device. Spec. ¶ 15. The media stream may be a live media feed or pre-recorded media. Id. The receiving device may be a television, a monitor on a computer system, or a display on a hand held device. Id. The AR system may communicate with a companion device, like a smartphone or tablet. Id. ¶ 16. The companion device includes a video capture unit and a display unit. Id. The video capture unit captures the neighborhood (e.g., a user’s surroundings) and generates a captured media stream. Id. The captured media stream is sent to the display unit and presented to the user as displayed scenes of the user’s neighborhood. Id.

Figure 4 of the application is reproduced below.

[FIG. 4]

Figure 4 is a workflow of the AR system. Spec. ¶ 26. At block 402, first context information is accessed “based on the content of the delivered media stream 104 that is delivered to the receiving device 122.” Id. The first context information may specify the activity that is occurring in the delivered media stream 104 (e.g., a battle scene, a baseball game, etc.), objects that are identified in the delivered media stream (e.g., characters and weaponry in the battle scene, players in the baseball game, etc.), events that are taking place, and so on. Id.
At block 404, the AR system accesses second context information based on the content of the captured media stream captured by the companion device in the form of video of the user’s neighborhood. Id. ¶ 33. At block 406, the AR system identifies one or more virtual objects to be presented on the companion device, which may be “images, a sequence of images, animations, and so on.” Spec. ¶ 37. At block 408, the AR system determines a set of transformational information (transforms) for each of the identified virtual objects. Spec. ¶ 42. The transforms specify spatial information for the virtual objects with respect to the captured media stream received from the companion device. Id. At block 410, the AR system determines “one or more points in time (times) in the media captured by the companion device 142 at which to introduce the identified virtual objects 234.” Id. ¶ 46. “At block 412, the AR system 100 may provide object metadata to the companion device 142, for example, as information 114.” Id. ¶ 49. “At block 414, the companion device 142 may render the virtual object on its display unit 146 to create an augmented reality experience for the user.” Id. ¶ 51.

B. The Claims

Independent claims 1, 16, and 24 are respectively a method claim, a computer device claim, and a computer medium claim. Claims 2–15 depend directly or indirectly from claim 1, and claims 17–23 depend directly or indirectly from claim 16. Claim 1 is illustrative:

1.
A computer-implemented method for augmented reality comprising a computer performing:

    accessing first context information that is based on content in a delivered media stream that is being delivered to a receiving device;

    accessing second context information that is based on content in a captured media stream representative of a neighborhood of a user device;

    determining a first virtual object using at least the first context information, wherein the first virtual object is based on the content in the delivered media stream;

    determining, using at least the second context information, and based at least on the content in the captured media stream, transformational information comprising at least one transformation to be performed on the first virtual object; and

    providing, to the user device, information representative of the first virtual object as transformed using the transformational information, wherein a field of view of the neighborhood seen using the user device is augmented with one or more images of the first virtual object by rendering the one or more images of the first virtual object using the information representative of the first virtual object as transformed using the transformational information.

C. The Rejection

1. The Examiner rejected claims 1–24 under pre-AIA 35 U.S.C. § 103(a) as being unpatentable over Soon-Shiong, U.S. Patent Publication 2012/0256954 A1, published Oct. 11, 2012 (hereinafter “Soon-Shiong”), and Osterhout et al., U.S. Patent Publication 2014/0063055 A1, published Mar. 6, 2014 (hereinafter “Osterhout”). Ans. 2.

D. Issue

Appellants’ arguments present the following issue, which is dispositive of this appeal: Has the Examiner erred in finding Soon-Shiong teaches a first context information and a second context information that are distinguished from one another, as recited in claim 1?

ANALYSIS

A.
The Rejection as it Relates to First and Second Context Information

The Examiner cites Soon-Shiong as teaching all the limitations of claim 1 with the exception of the claimed “media streams.” Answer 3–4. Osterhout is cited for the media stream limitations. Id. at 4. At issue here is whether the first and second context information limitations are disclosed as two distinct contexts. See App. Br. 7–8. For these limitations, the Examiner relies solely on Soon-Shiong. See Ans. 10–11.

The Examiner quotes from Soon-Shiong that the hosting platform recognizes “at least one element in a scene” and that “other elements” in the scene “could be recognized as well.” Ans. 10 (quoting Soon-Shiong ¶ 66). The Examiner concludes “analyzing one or more recognized element to determine a context that pertains to the recognized target object and multiple contexts could pertain the recognized target objects or scene based on all information available via digital representation, context 332A and 332B.” Id. (citing Soon-Shiong ¶¶ 66–67) (emphasis added).

Specifically as to the claimed “first context,” the Examiner cites to Soon-Shiong “figure 5, scene 695, with context related to scene captured as show[n] in figures 5 and 6.” Ans. 11. For the “second context information” the Examiner cites to the “neighborhood of a user device” as “context pertaining to a current environment or scene associated with the AR-capable device.” Id. (citing Soon-Shiong ¶ 55).
The Examiner concludes by contending Soon-Shiong discloses

    context can ebb or flow or even shift focus, from a first context to a second context, individuals can participate within an augmented reality experience associated with a gaming context and the augmented reality experience can incorporate a shopping context, and a shift could include retaining a previously identified context without discarding them in favor of new context, paragraph 0077, therefore Soon-Shiong discloses distinctly identifying a first context information and a second context information that are distinguished from one another.

Ans. 11.

B. Appellants’ Argument on First and Second Context Information

Appellants summarize the Examiner’s position by pointing to the reliance on recognition of a “target object” and the disclosure of multiple contexts 332A and 332B in Soon-Shiong. App. Br. 8 (citing Soon-Shiong ¶¶ 66–67). Appellants point to paragraph 67 of Soon-Shiong disclosing the “multiple contexts 332 could also pertain the recognized target objects or scene based on all information made available via digital representation 334.” Id. Appellants conclude the preceding disclosures “fail[s] to distinctly identify a first context information and a second context information that are distinguished from one another.” Id.

C. Whether the Examiner has Shown that Soon-Shiong Distinguishes First Context Information from Second Context Information

On this record, the Examiner erred in the obviousness rejection of representative claim 1 by failing to make a prima facie case that the recited “accessing first context information” is distinct from “accessing second context information.” The citations the Examiner made to Soon-Shiong do
The Examiner cites to Soon-Shiong “figure 5, scene 695, with context related to scene captured as show[n] in figures 5 and 6” for the “first context information.” Ans. 11. For the “second context information,” the Examiner contends that “other elements” in the scene “could be recognized as well.” Ans. 10 (quoting Soon-Shiong | 66) (emphasis added). This statement does not provide a rational basis upon which to conclude Soon-Shiong teaches the “second context information.” Neither are we persuaded that the general reference to the “neighborhood of a user device” as “context pertaining to a current environment or scene associated with the AR-capable device” shows the “second context information.” See id. (citing Soon-Shiong | 55). In summary, there is an insufficient showing of where Soon-Shiong teaches “accessing second context information” separate from “accessing first context information.” It is unclear which of the teachings of Soon- Shiong are relied on for which of the first two steps of claim 1. The Examiner relies on the showing for claim 1 to reject independent claims 16 and 24. See Ans. 8 (claim 16), 10 (claim 24). Accordingly, the rejections of independent claims 1,16, and 24 are not sustained. As a result of finding the Examiner erred with respect to the rejections of the independent claims, 1,16, and 24, the rejections of dependent claims 2—15 and 17—23 are also not sustained. CONCLUSION The Examiner erred in rejecting claims 1—24 under 35 U.S.C. § 103(a). 8 Appeal 2016-006053 Application 13/723,176 ORDER The Examiner’s decision rejecting claims 1—24 is reversed. REVERSED 9 Copy with citationCopy as parenthetical citation