Ex parte Kornmann, No. 12/546,245 (P.T.A.B. Feb. 26, 2016)

UNITED STATES PATENT AND TRADEMARK OFFICE

APPLICATION NO. 12/546,245; FILING DATE 08/24/2009; FIRST NAMED INVENTOR David Kornmann; ATTORNEY DOCKET NO. GGL-346A; CONFIRMATION NO. 3041; EXAMINER GOOD JOHNSON, MOTILEWA; ART UNIT 2616; MAIL DATE 02/29/2016; DELIVERY MODE PAPER

Correspondence: Dority & Manning P.A. and Google Inc., Post Office Box 1449, Greenville, SC 29602

BEFORE THE PATENT TRIAL AND APPEAL BOARD

Ex parte DAVID KORNMANN, PETER BIRCH, and MICHAEL MORTON

Appeal 2014-000570
Application 12/546,245
Technology Center 2600

Before JOHN G. NEW, JEFFREY A. STEPHENS, and KEVIN C. TROCK, Administrative Patent Judges.

STEPHENS, Administrative Patent Judge.

DECISION ON APPEAL

STATEMENT OF THE CASE

Appellants¹ seek our review under 35 U.S.C. § 134(a) from the Examiner's final rejection of claims 1-31, 33, and 34, which are all the claims pending in the application. We have jurisdiction under 35 U.S.C. § 6(b). We affirm.

¹ The real party in interest is identified as Google Inc. (Br. 3.)

Claimed Subject Matter

The invention generally relates to user interface gestures for moving a virtual camera on a mobile device. (Title.) Claims 1 and 19, reproduced below with disputed limitations emphasized, are illustrative:

1.
A computer-implemented method for navigating a virtual camera in a three dimensional environment on a mobile device having a touch screen, comprising:
(a) receiving a user input indicating that two objects have touched a view of the mobile device and the two objects have moved relative to each other;
(b) determining a speed that the objects have moved relative to each other based on the user input;
(c) determining a speed of the virtual camera based on the speed that the objects have moved relative to each other; and
(d) moving the virtual camera within the three dimensional environment according to the speed of the virtual camera determined in (c).

19. A computer-implemented method for navigating a virtual camera in a three dimensional environment on a mobile device having a touch screen, comprising:
(a) receiving a user input indicating that a user has selected a position on a view of the mobile device;
(b) determining a target location based on the position of the view of the mobile device selected by the user;
(c) moving the virtual camera in the three dimensional environment toward the target location; and
(d) rotating the virtual camera in the three dimensional environment toward the target location.

Rejections

Claims 1-16, 18-31, and 33 stand rejected under 35 U.S.C. § 103(a) as unpatentable over Bradski (US 7,271,795 B2; issued Sept. 18, 2007) and Han et al. (US 2008/0180406 A1; published July 31, 2008) ("Han"). (Final Act. 2-10.)

Claims 17 and 18 stand rejected under 35 U.S.C. § 103(a) as unpatentable over Bradski, Han, and Schmidt et al. (US 2004/0125114 A1; published July 1, 2004) ("Schmidt"). (Final Act. 11-12.)

Claim 34 stands rejected under 35 U.S.C. § 103(a) as unpatentable over Bradski, Han, and Westerman et al. (US 2008/0042987 A1; published Feb. 21, 2008) ("Westerman"). (Final Act. 12-13.)
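For illustration only, steps (a) through (d) of claim 1 can be sketched as a short routine. No such code appears in the record; the function name, the pinch-style two-finger sample, and the linear speed mapping are all hypothetical assumptions by the editor, not Appellants' or the cited references' implementation.

```python
# Hypothetical sketch of claim 1's steps (a)-(d); not from the record.

def camera_speed_from_gesture(touch_start, touch_end, dt, sensitivity=1.0):
    """Map a two-object touch gesture to a virtual-camera speed.

    touch_start, touch_end: ((x1, y1), (x2, y2)) positions of the two
    touching objects at the start and end of one gesture sample.
    dt: elapsed time of the sample, in seconds.
    """
    def separation(points):
        (x1, y1), (x2, y2) = points
        return ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5

    # (a) two objects touched the view and moved relative to each other;
    # (b) determine the speed that the objects moved relative to each other
    object_speed = abs(separation(touch_end) - separation(touch_start)) / dt

    # (c) determine the camera speed based on the objects' relative speed
    # (a linear mapping is assumed here purely for illustration)
    camera_speed = sensitivity * object_speed

    # (d) the caller would then move the virtual camera at camera_speed
    return camera_speed
```

For example, two fingers that spread from 10 to 20 units apart over one second would yield a relative object speed of 10 units per second, which the sketch maps directly to the camera speed.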
ANALYSIS

We have reviewed the Examiner's rejections in light of Appellants' arguments that the Examiner erred (Br. 6-16). We are not persuaded by Appellants' arguments. We adopt as our own the findings and reasons set forth by the Examiner in the action from which this appeal is taken and in the Answer (see Ans. 15-19). We highlight and address specific arguments and findings for emphasis as follows.

Claims 1, 2, 4-11, 13-18, and 34

Appellants argue Bradski and Han fail to teach or suggest determining a speed that the objects have moved relative to each other, determining a speed of the virtual camera based on the speed of the objects, and moving the virtual camera within the three dimensional environment according to the speed of the virtual camera, as recited in claim 1. (Br. 7-11.) Regarding Bradski, Appellants argue the device receives input through motion or orientation of the device, rather than through objects touching the touch screen. (Br. 8.) Appellants acknowledge Bradski's teaching that "the angle of rotation determines the speed at which the data scrolls on the display screen" (Bradski 3:64-66), but argue this is not related to receiving a user input indicating that two objects have touched a view of the mobile device or determining a speed that the objects have moved relative to each other, as recited in claim 1. (Br. 8-9.) Appellants also contend "Bradski determines a speed at which data scrolls on a display screen, which is not related to, and much less teaches or suggests, 'determining a speed that the objects have moved relative to each other based on the user input.'" (Br. 9; see also Br. 11.) Regarding Han, Appellants argue paragraph 58 of Han teaches the rate of rotation of an object is based on the amount of time an input is received, such as a third finger touching a touch screen, not the speed that objects move relative to each other. (Br. 9-10.)
We are not persuaded by Appellants' arguments, which attack the references individually. In re Keller, 642 F.2d 413, 426 (CCPA 1981) ("[O]ne cannot show non-obviousness by attacking references individually where, as here, the rejections are based on combinations of references."). We agree with the Examiner's findings that Bradski teaches the speed of the virtual camera may be controlled based on the speed of the input (Ans. 15 (citing Bradski 3:56-67)),² and Han teaches receiving input for moving a displayed object or scene based on movement of objects touching the screen (id. (citing Han ¶¶ 39-40); Ans. 16 (citing Han ¶¶ 44-45)). Combining the references to include as an input in Bradski the movement of objects such as fingers on the touch screen, as taught by Han, results in determining a speed that the objects have moved relative to each other, determining a speed of the virtual camera based on the speed of the objects, and moving the virtual camera within the three dimensional environment according to the speed of the virtual camera, as recited in claim 1.

² Appellants "do not concede" that Bradski's virtual window "can be equated to a virtual camera as claimed" (Br. 11), but do not explain why the Examiner's finding is in error. We agree with the Examiner (Final Act. 2) that Bradski's virtual window is a virtual camera because it provides different views of Bradski's data depending on where it is directed.

Appellants also argue the combination of Bradski and Han relies on impermissible hindsight reasoning. (Br. 11-12.) In particular, Appellants contend the Examiner does not articulate a reason why a person of ordinary skill in the art would apply a particular finger gesture to control a virtual camera, and Appellants assert that the Examiner's reasoning is circular and conclusory. (Br. 12.)
The Examiner finds:

It would have been obvious to one of ordinary skill in the art at the time of the invention to include in the gestures disclosed by Bradski configuring the movements of the virtual camera and objects using the finger gestures disclosed by Han, to create the gestures for users to change the input for controlling the display.

(Final Act. 3.) We find the Examiner's reasoning to be sufficient under the circumstances of this case and supported by the teachings of Bradski and Han. The Examiner's findings highlight the similarities between Bradski and Han, which both receive various inputs to control the view on the display, including the speed at which the camera or objects in view may move. (See Final Act. 2-3 (citing Bradski 3:63-67; Han ¶¶ 39-40, 49, 58).) This combination of familiar elements (Han's input using movement of objects on a touch screen with Bradski's movement of the virtual camera based on the speed of the input) does no more than yield the predictable result of moving the virtual camera based on the speed of the objects that have moved relative to each other. KSR Int'l Co. v. Teleflex Inc., 550 U.S. 398, 416 (2007) ("The combination of familiar elements according to known methods is likely to be obvious when it does no more than yield predictable results.").

In view of the foregoing, we sustain the rejection of claim 1 under 35 U.S.C. § 103(a) as unpatentable over Bradski and Han. For the same reasons, we sustain the rejection of claims 2, 4-11, and 13-16, which are not separately argued. Appellants argue claims 17 and 18 are patentable in view of their dependence from claim 10, and that claim 34 is patentable in view of its dependence from claim 1. (Br. 15.) Thus, for the same reasons as claim 1, we sustain the rejection of claims 17 and 18 under 35 U.S.C. § 103(a) as unpatentable over Bradski, Han, and Schmidt, and the rejection of claim 34 under 35 U.S.C.
§ 103(a) as unpatentable over Bradski, Han, and Westerman.

Claims 3 and 12

Appellants argue Bradski and Han do not teach or suggest decelerating the virtual camera in the three dimensional environment gradually, as recited in claim 3 and similarly recited in claim 12. (Br. 12-13.) According to Appellants, Bradski at most describes detecting acceleration of a device, which is different from decelerating the virtual camera. (Br. 13 (citing Bradski 6:35-41).) We agree with the Examiner, however, that detecting absolute accelerations or absolute movement necessarily detects decelerations. (See Final Act. 4 (citing Bradski 5:47-48, 6:31-43).) Based on the combined teachings of Bradski and Han, as discussed above for claim 1, detecting decelerations of the input (i.e., movement of objects on the touch screen as taught in Han) results in deceleration of the virtual camera being controlled by the input. Accordingly, we sustain the rejection of claims 3 and 12 under 35 U.S.C. § 103(a) as unpatentable over Bradski and Han.

Claims 19-31 and 33

Appellants argue scrolling a data set based on device tilt in Bradski does not teach or suggest (c) moving the virtual camera in the three dimensional environment toward the target location or (d) rotating the virtual camera in the three dimensional environment toward the target location, as recited in claim 19. (Br. 14-15 (citing Bradski 3:35-52, 5:20-25).) We agree with the Examiner, however, that Han's "PZR (pan, zoom, rotation) control applied to 3D globe navigation" (Ans. 18) teaches moving and rotating the virtual camera toward the target location. We note Appellants did not file a reply brief to rebut the Examiner's findings. Accordingly, we sustain the rejection of claim 19 under 35 U.S.C. § 103(a) as unpatentable over Bradski and Han. For the same reasons, we sustain the rejection of claims 20-31 and 33, which are not separately argued.
DECISION

We affirm the Examiner's decision to reject claims 1-31, 33, and 34.

No time period for taking any subsequent action in connection with this appeal may be extended under 37 C.F.R. § 1.136(a)(1)(iv).

AFFIRMED