Ex Parte Kornmann et al., Application No. 12/546,274 (P.T.A.B. Feb. 26, 2016)

UNITED STATES PATENT AND TRADEMARK OFFICE

APPLICATION NO.: 12/546,274
FILING DATE: 08/24/2009
FIRST NAMED INVENTOR: David Kornmann
ATTORNEY DOCKET NO.: GGL-346C
CONFIRMATION NO.: 3093
EXAMINER: GOOD JOHNSON, MOTILEWA
ART UNIT: 2616
MAIL DATE: 02/29/2016
DELIVERY MODE: PAPER

Correspondence: Dority & Manning P.A. and Google Inc., Post Office Box 1449, Greenville, SC 29602

UNITED STATES DEPARTMENT OF COMMERCE
United States Patent and Trademark Office
Address: COMMISSIONER FOR PATENTS, P.O. Box 1450, Alexandria, Virginia 22313-1450, www.uspto.gov

Please find below and/or attached an Office communication concerning this application or proceeding. The time period for reply, if any, is set in the attached communication.

PTOL-90A (Rev. 04/07)

UNITED STATES PATENT AND TRADEMARK OFFICE
BEFORE THE PATENT TRIAL AND APPEAL BOARD

Ex parte DAVID KORNMANN and PETER BIRCH

Appeal 2014-000489
Application 12/546,274
Technology Center 2600

Before JOHN G. NEW, JEFFREY A. STEPHENS, and KEVIN C. TROCK, Administrative Patent Judges.

STEPHENS, Administrative Patent Judge.

DECISION ON APPEAL

STATEMENT OF THE CASE

Appellants¹ seek our review under 35 U.S.C. § 134(a) from the Examiner's final rejection of claims 1-39 and 41-47, which are all the claims pending in the application. We have jurisdiction under 35 U.S.C. § 6(b).

We affirm.

¹ The real party in interest is identified as Google Inc. (Br. 3.)

Claimed Subject Matter

The invention generally relates to navigation in a three dimensional environment on a mobile device. (Title.) Claims 1 and 39, reproduced below, are illustrative:

1.
A computer-implemented method for navigating a virtual camera in a three dimensional environment on a mobile device having a touch screen, comprising:

(a) receiving a first user input indicating that a first object is approximately stationary on a touch screen of the mobile device;
(b) receiving a second user input indicating that a second object has moved on the touch screen while the first object is maintained approximately stationary on the touch screen; and
(c) changing an orientation of the virtual camera in the three dimensional environment according to the second user input.

39. A computer-implemented method for navigating a virtual camera in a three dimensional environment on a mobile device having a touch screen, comprising:

(a) receiving a first user input indicating that a first object has touched a first point on a touch screen of a mobile device;
(b) receiving a second user input indicating that a second object has touched a second point on the touch screen after the first object touched the first point on the screen; and
(c) determining a navigation mode from a plurality of navigation modes based on the position of the first point relative to the second point, wherein the plurality of navigation modes includes a first navigation mode that changes a position of the virtual camera in the three dimensional environment and a second navigation mode that changes an orientation of the virtual camera in the three dimensional environment.

Rejections

Claims 1-5, 7-13, 15-20, 22, 23, 25-32, 34, 35, 37-39, and 41-47 stand rejected under 35 U.S.C. § 103(a) as unpatentable over Ogawa et al. (US 2004/0051709 A1, published Mar. 18, 2004) ("Ogawa"), Sullivan (US 2008/0094358 A1, published Apr. 24, 2008), and Han et al. (US 2008/0180406 A1, published July 31, 2008) ("Han"). (Final Act. 2-21.)

Claims 6, 14, 21, and 33 stand rejected under 35 U.S.C. § 103(a) as unpatentable over Ogawa, Sullivan, Han, and Wong et al.
(US 7,159,194 B2, issued Jan. 2, 2007). (Final Act. 21-22.)

Claims 24 and 36 stand rejected under 35 U.S.C. § 103(a) as unpatentable over Ogawa, Sullivan, Han, and Schradi (US 2004/0128071 A1, published July 1, 2004). (Final Act. 22-23.)

ISSUES

Appellants' arguments present us with the following issues (see Br. 10):

1. Whether the Examiner improperly relies on Ogawa as teaching determining a navigation mode from a plurality of navigation modes, as recited in independent claims 39 and 41.

2. Whether the Examiner improperly relies on Han as teaching changing an orientation of a virtual camera as recited in independent claims 1, 9, 16, 28, 39, 41, and 42.

3. Whether the Examiner improperly relies on Ogawa as teaching changing a tilt or an azimuth value of a virtual camera relative to a vector directed upwards from a target location, as recited in independent claim 38.

4. Whether the Examiner improperly relies on Ogawa as teaching receiving a user input indicating that two objects have moved on the touch screen approximately the same distance in approximately the same direction, as recited in independent claim 42.

ANALYSIS

We have reviewed the Examiner's rejections in light of Appellants' arguments that the Examiner erred (Br. 9-22). We are not persuaded by Appellants' arguments. We adopt as our own the findings and reasons set forth by the Examiner in the action from which this appeal is taken and in the Answer (see Ans. 24-29). We highlight and address specific arguments and findings for emphasis as follows.

Issue 1

Appellants argue the Examiner improperly relies on Ogawa as teaching determining a navigation mode from a plurality of navigation modes, as recited in independent claims 39 and 41. (Br. 11-15.)
In particular, Appellants argue Ogawa considers the number of fingers touching the screen to signal a particular navigation operation, and does not consider the position of the second point touched relative to the first. (Br. 11-12.)

We are not persuaded by Appellants' argument because paragraphs 68 and 69 of Ogawa, cited by the Examiner (Ans. 25), teach shifting the viewpoint based on the distance and direction of a second touch position relative to a first touch position. Appellants argue "this operation, at most, involves a single navigation mode where a viewpoint is shifted in a particular direction." (Br. 14.) The direction of viewpoint shift is encompassed by the Examiner's broad, but reasonable, interpretation of navigation mode, and Appellants have not presented persuasive evidence or argument that the Specification defines navigation mode to exclude the direction of a navigation shift or different navigation operations as taught in Ogawa.

Furthermore, the Examiner finds, and we agree, Ogawa teaches the input operation and the first claimed navigation mode (i.e., changes a position of the virtual camera), and Han teaches the second navigation mode (i.e., changes an orientation of the virtual camera) recited in claim 39. (Final Act. 16-17.) We also agree with the Examiner's reasoning that it would have been obvious to combine the multi-touch functions taught by Ogawa with changing the orientation of a virtual camera according to an input as taught by Han. (See Final Act. 17.) The combined teachings of the references, therefore, teach determining whether to change a position or an orientation of a virtual camera based on the distance and direction of a second touch position relative to the first.

Accordingly, we are not persuaded the Examiner improperly relies on Ogawa as teaching determining a navigation mode from a plurality of navigation modes, as recited in independent claims 39 and 41.
Issue 2

Appellants argue the Examiner improperly relies on Han as teaching changing an orientation of a virtual camera as recited in independent claims 1, 9, 16, 28, 39, 41, and 42. (Br. 15-18.) In particular, Appellants emphasize Han's teaching that "a user controls various movement of a displayed 3D object," including orientation of the object. (Br. 16 (quoting Han ¶ 45).) Appellants contend "[c]ontrolling the position, scale and/or orientation of a three-dimensional object is not the same as, nor does it teach or suggest, 'changing an orientation of the virtual camera in the three dimensional environment.'" (Id.)

Appellants' arguments do not persuade us of error in the Examiner's rejection. Paragraph 45 of Han teaches "a user controls various movements of a displayed 3D object (or scene) using one or more inputs" (emphasis added). One of the movements that may be controlled is orientation. (Han ¶ 45.) We agree with the Examiner that Han teaches changing an orientation of the virtual camera in the three dimensional environment because changing an orientation of a three dimensional scene suggests changing the orientation of the camera used to view the scene. Indeed, Han teaches "there is no functional difference" between manipulating the camera rather than the object. (Han ¶ 40 ("This also is known as pan-zoom-rotate, or 'PZR,' a term generally used when the camera is implied to be being manipulated rather than the object (though there is no functional difference).").)

In addition, we agree with the Examiner's findings that Han explicitly teaches 3D globe view control (see Ans. 29 (citing Han ¶¶ 62-71)), and we note Appellants did not file a reply brief to rebut the Examiner's findings.

Accordingly, we are not persuaded the Examiner improperly relies on Han as teaching changing an orientation of a virtual camera as recited in independent claims 1, 9, 16, 28, 39, 41, and 42.
Issue 3

Appellants argue the Examiner improperly relies on Ogawa as teaching changing a tilt or an azimuth value of a virtual camera relative to a vector directed upwards from a target location, as recited in independent claim 38. (Br. 18-19.) In support, Appellants quote portions of paragraphs 46 and 49 of Ogawa and state that "Ogawa is completely silent with regards to changing a tilt value or an azimuth value of the virtual camera relative to a 'vector directed upwards from the target location.'" (Br. 18-19.)

We are not persuaded by Appellants' arguments. As Appellants acknowledge (Br. 18), Ogawa teaches changing "the rolling angle, the heading angle, the pitch angle, and the angel [sic] of view" (Ogawa ¶ 49). Changes in the heading angle and pitch angle of a camera with respect to a scene correspond to a change in azimuth and tilt angle with respect to all objects in the scene. Thus, although Ogawa does not describe a "vector directed upwards from the target location," we agree with the Examiner (see Final Act. 15; Ans. 27-28) that Ogawa teaches the disputed limitation because changing tilt and azimuth of a virtual camera with respect to objects in view results in changing the tilt and azimuth "relative to a vector directed upwards from the target location," as recited in claim 38.

Accordingly, we are not persuaded the Examiner improperly relies on Ogawa as teaching changing a tilt or an azimuth value of a virtual camera relative to a vector directed upwards from a target location, as recited in independent claim 38.

Issue 4

Appellants argue the Examiner improperly relies on Ogawa as teaching receiving a user input indicating that two objects have moved on the touch screen approximately the same distance in approximately the same direction, as recited in independent claim 42.
In particular, Appellants argue Ogawa's scaling operation, which detects changes in the distance between two fingers on the touch screen, is different from receiving a user input indicating that two objects have moved on the touch screen approximately the same distance in approximately the same direction. (Br. 19-20 (citing Ogawa ¶¶ 60, 62).)

In the Answer, the Examiner further finds Ogawa's viewpoint shift operations described in paragraphs 69 through 72 teach determining the distance and direction two objects have moved and generating corresponding shift information. (Ans. 28.) We agree with the Examiner that, based on Ogawa's teachings in paragraphs 69 to 72, moving two objects on the touch screen approximately the same distance in approximately the same direction provides the claimed input and results in a change in orientation of the virtual camera. We note Appellants have not filed a reply brief to rebut the Examiner's findings.

Accordingly, we are not persuaded the Examiner improperly relies on Ogawa as teaching receiving a user input indicating that two objects have moved on the touch screen approximately the same distance in approximately the same direction, as recited in independent claim 42.

CONCLUSION

In view of the foregoing, we sustain the rejection of independent claims 1, 9, 16, 28, 38, 39, 41, and 42 under 35 U.S.C. § 103(a) as unpatentable over Ogawa, Sullivan, and Han. For the same reasons, we sustain the rejections of dependent claims 2-8, 10-15, 17-27, 29-37, and 43-47, for which no additional arguments are presented (see App. Br. 10).

DECISION

We affirm the Examiner's decision to reject claims 1-39 and 41-47.

No time period for taking any subsequent action in connection with this appeal may be extended under 37 C.F.R. § 1.136(a)(1)(iv).

AFFIRMED