Ex parte Turner et al., No. 12/650,800 (P.T.A.B. Mar. 22, 2017)

UNITED STATES PATENT AND TRADEMARK OFFICE
UNITED STATES DEPARTMENT OF COMMERCE
Address: COMMISSIONER FOR PATENTS, P.O. Box 1450, Alexandria, Virginia 22313-1450
www.uspto.gov

APPLICATION NO.: 12/650,800
FILING DATE: 12/31/2009
FIRST NAMED INVENTOR: Tara Handy Turner
ATTORNEY DOCKET NO.: P191590.US.01
CONFIRMATION NO.: 7917

Disney Enterprises, Inc.
c/o Dorsey & Whitney LLP
1400 Wewatta Street, Suite 400
Denver, CO 80202-5549

EXAMINER: LIU, ZHENGXI
ART UNIT: 2611
NOTIFICATION DATE: 03/24/2017
DELIVERY MODE: ELECTRONIC

Notice of the Office communication was sent electronically on the above-indicated "Notification Date" to the following e-mail address(es): Docketing-DV@dorsey.com

UNITED STATES PATENT AND TRADEMARK OFFICE
BEFORE THE PATENT TRIAL AND APPEAL BOARD

Ex parte TARA HANDY TURNER, MATTHEW F. SCHNITTKER, ROBERT M. NEUMAN, EVAN M. GOLDBERG, and JOSEPH W. LONGSON

Appeal 2017-000727
Application 12/650,800
Technology Center 2600

Before ALLEN R. MacDONALD, BETH Z. SHAW, and MICHAEL M. BARRY, Administrative Patent Judges.

SHAW, Administrative Patent Judge.

DECISION ON APPEAL

Appellants appeal under 35 U.S.C. § 134(a) from the Examiner's final rejection of claims 1-11 and 13-29, which are the only claims currently pending in this application. We have jurisdiction under 35 U.S.C. § 6(b).

We AFFIRM.

INVENTION

The invention is for displaying depth and volume information and for choreographing stereoscopic depth information between 3-D images. See Spec. ¶ 1.

Claim 1, which is illustrative, reads as follows, with disputed limitations in italics:

1.
A system for visualization and editing of one or more scenes, with each scene comprising a plurality of stereoscopic frames, comprising:

    one or more computing devices in communication with a display, the computing devices coupled with a storage medium storing a plurality of stereoscopic images, each of the stereoscopic images corresponding to at least one frame and including depth and volume information for a plurality of layers for the plurality of stereoscopic images;

    a visualization and editing interface for choreographing stereoscopic depth information between the plurality of stereoscopic frames stored on the storage medium and displayed on the display, the visualization and editing interface configured to:

        provide at least one editing control that provides for editing of the depth and volume information for a first layer and a second layer of the plurality of layers, wherein the editing control is configured to:

            receive a first user instruction to modify a first depth value for a first layer;

            modify the first depth value based on the first user instruction;

            modify a second depth value for a second layer based on the first user instruction;

            receive a second user instruction to modify a first volume value for a first object within the first layer;

            modify the first volume value based on the second user instruction; and

            modify a second volume value for a second object based on the second user instruction.

REJECTIONS AT ISSUE

The Examiner rejected claims 1, 2, 4-10, 13, 14, 16, 17, 19, 20, 24, 25, and 27-29 under 35 U.S.C. § 103 as being unpatentable over Harman (US 2002/0118275 A1; published Aug. 29, 2002), Brimelow ("New tutorial on parallax 3D effects"), Takahashi et al. (US 5,537,528; issued July 16, 1996), and Tam et al. ("3D-TV Content Generation: 2D-To-3D Conversion"). Final Act. 9-45.

The Examiner rejected claims 3, 11, 15, 18, and 20-22 under 35 U.S.C.
§ 103 as being unpatentable over Harman, Brimelow, Takahashi, Tam, and Simpson (US 2005/0271303 A1; published Dec. 8, 2005). Final Act. 45-50.

The Examiner rejected claim 23 under 35 U.S.C. § 103 as being unpatentable over Harman, Brimelow, Takahashi, Tam, and Cash et al. (US 2004/0015424 A1; published Jan. 22, 2004). Final Act. 50-51.

The Examiner rejected claim 26 under 35 U.S.C. § 103 as being unpatentable over Harman, Brimelow, Takahashi, Tam, and Engle ("Beowulf 3D: A Case Study"). Final Act. 51-52.

ANALYSIS

Appellants argue that the Examiner's rejections are in error. App. Br. 11-23; Reply Br. 2-6. We have reviewed Appellants' arguments in the Appeal Brief, the Examiner's rejections, and the Examiner's response to Appellants' arguments. For the reasons set forth below, we are not persuaded of Examiner error.

Claim 1

Appellants argue the Examiner erred in finding the combination of Harman, Brimelow, Takahashi, and Tam teaches or suggests "modify[ing] a second depth value for a second layer based on the first user instruction," as recited in claim 1. App. Br. 16-18. In particular, Appellants argue that in Harman, each layer is individually assigned a depth value by a separate input, and thus Harman does not teach using a single user input to modify depth values for multiple layers. App. Br. 18. Appellants argue that Tam also fails to teach this limitation because Tam merely generates different perspectives of the same image with the depth value remaining constant and unchanged between the views. App. Br. 17.

As the Examiner finds, and we agree, the combination of Harman, Brimelow, and Takahashi "teaches a GUI-based framework for updating depth related information of a layer-based 3D scene comprising a plurality of frames." Ans. 8. The Examiner finds Harman teaches modifying a second depth value for a second layer based on a user instruction. Final Act. 13 (citing Harman ¶ 40).
The Examiner also finds that Tam teaches that the depth information of any object, including the layers in the layer-based 3D scene, is updated according to basic imaging principles, articulated in Figure 1 and equation 1 of Tam. Ans. 8. The Examiner explains that Tam discloses that one may calculate depth values based on focal length, just as Appellants' Specification teaches in general that focal length can be used to calculate depth values. Id. at 9-10 (citing Tam, 1870; Spec. ¶ 72). The Examiner also points out that the Specification itself fails to provide further details on how the depth values are changed according to a changed focal length. Id. at 11. The Examiner relies on Tam to illustrate that a change to focal length may correspondingly update a depth value for multiple layers at once. Ans. 9-11.

Although Appellants argue that in claim 1, a user "can make a single adjustment to a single layer of a frame and that single adjustment is used to affect other layers within the frame as well as other layers in other frames" and that "a change to one layer is applied holistically to additional, non-edited layers," we are not persuaded that claim 1 is so narrow. Rather, claim 1 merely recites "modify a second depth value for a second layer based on the first user instruction." Although Appellants argue that "nowhere does Harman disclose using a single user input to modify depth values for multiple layers" and that it would not be "possible to modify Harman to do so because every object is selected individually, and separate input is required from the user," we are not persuaded by this characterization of Harman. App. Br. 18. Rather, contrary to Appellants' characterization of Harman as teaching that depth values "require[] individual entry on a per object or per layer basis by a user through 'manual depth definition'" (Reply Br.
11 (citing Harman ¶ 40)), Harman explains that the "depth of an object or objects may be determined either manually, automatically or semi-automatically." Harman ¶ 39. Also, a user may assign an object to have a range of depths that varies over time, object location, or based on motion. Harman ¶ 41. Moreover, Harman also gives an example of selecting depth characteristics and then applying them to a newly created depth layer along with modifying the original layer. Harman ¶ 79; Ans. 11. Thus, an ordinarily skilled artisan would understand Harman to teach modifying depth values on more than one layer based on a single instruction. Appellants provide insufficient evidence showing the Specification or claims limit "the first user instruction" such that, under a broad but reasonable interpretation, it is not encompassed by Harman and Tam's teachings of modifying a depth value for a second layer based on a single instruction.

Appellants argue the rejections are improper because they fail to consider the claimed invention as a whole. App. Br. 21. In particular, Appellants argue that the Examiner improperly crossed out sub-elements in the claim and failed to interpret the full context of the claims. Id. at 21-22. However, we find that the Examiner sufficiently explains how the Examiner properly considered and analyzed the full scope of the claims as well as the prior art. Ans. 12-14.

Appellants also argue that the combination of Harman and Brimelow is based on improper hindsight and is conclusory. App. Br. 23. We find this unpersuasive. The Examiner finds that one skilled in the art would have been motivated to combine Harman and Brimelow, which both involve 3-D image technology, because the combination would make it easier to edit information. Final Act. 18; Ans. 16. The Examiner explains that Brimelow teaches the use of graphical user interfaces to edit information. Id.
We agree with the Examiner that one skilled in the art would have combined Harman's 2-D and 3-D image conversion and encoding techniques with Brimelow's graphical user interfaces for 3-D images because it would make it easier to edit information about the 3-D images. See id. In the absence of sufficient evidence or a line of technical reasoning to the contrary, we find the Examiner's findings are reasonable, and we find no reversible error. Moreover, "[t]he combination of familiar elements according to known methods is likely to be obvious when it does no more than yield predictable results." KSR Int'l Co. v. Teleflex Inc., 550 U.S. 398, 416 (2007). "If a person of ordinary skill can implement a predictable variation, § 103 likely bars its patentability." Id. at 417.

For these reasons, we sustain the Examiner's rejection of claim 1.

Claim 23

Appellants argue that Cash is not analogous art because it is not in the same field of endeavor as the invention and it is not reasonably pertinent to the problem solved by the inventor. App. Br. 20. The Examiner finds, however, that Cash is reasonably pertinent to a particular problem with which the Appellants were concerned. Ans. 12. In particular, the Examiner applies Cash to show an "apply to all" feature of a graphical user interface (GUI) recited in claim 23. Id. The Examiner finds that the "apply to all" feature is a general teaching of a feature used in GUIs. Id. We agree with the Examiner that Cash's general teachings related to GUIs are reasonably pertinent to the problem of applying information to each feature in a GUI and, thus, Cash qualifies as analogous art. Accordingly, we sustain the rejection of claim 23.

Remaining Pending Claims

With respect to claim 26, Appellants argue that Engle does not cure the deficiencies of the references discussed above with respect to claim 1. App. Br. 21.
Because Appellants either have not presented separate patentability arguments or have reiterated substantially the same arguments as those previously discussed for claim 26 and the remaining pending claims, those claims all fall for the same reasons as claim 1. See 37 C.F.R. § 41.37(c)(1)(iv).

DECISION

The decision of the Examiner to reject claims 1-11 and 13-29 is affirmed.

No time period for taking any subsequent action in connection with this appeal may be extended under 37 C.F.R. § 1.136(a)(1)(iv).

AFFIRMED