Ex Parte Kelsey et al., No. 14/079,692 (P.T.A.B. Sep. 26, 2016)

UNITED STATES PATENT AND TRADEMARK OFFICE

APPLICATION NO.: 14/079,692    FILING DATE: 11/14/2013    FIRST NAMED INVENTOR: William D. Kelsey
ATTORNEY DOCKET NO.: B86918 1400US.CP1    CONFIRMATION NO.: 3848
EXAMINER: GUPTA, PARUL H    ART UNIT: 2627
NOTIFICATION DATE: 09/28/2016    DELIVERY MODE: ELECTRONIC

Womble Carlyle Sandridge & Rice LLP, Attn: IP Docketing, P.O. Box 7037, Atlanta, GA 30357-0037

BEFORE THE PATENT TRIAL AND APPEAL BOARD

Ex parte WILLIAM D. KELSEY, BRIAN D. LAUGHLIN, and RICHARD N. BLAIR

Appeal 2015-006694
Application 14/079,692
Technology Center 2600

Before CARLA M. KRIVAK, ERIC B. CHEN, and JOHN F. HORVATH, Administrative Patent Judges.

CHEN, Administrative Patent Judge.

DECISION ON APPEAL

This is an appeal under 35 U.S.C. § 134(a) from the final rejection of claims 1-21, all the claims pending in the application. We have jurisdiction under 35 U.S.C. § 6(b). We affirm-in-part.

STATEMENT OF THE CASE

Appellants' invention relates to a system configured to provide sensed input, including measurements of motion of a user during performance of a task, where the motion may include a gesture performed in one of multiple 3D zones in an environment defined to accept respective, distinct gestures. (Abstract.)
Claims 1, 5, and 6 are exemplary, with disputed limitations in italics:

1. A ubiquitous natural user system comprising: one or more sensors configured to provide sensed input including measurements of motion of a user during performance of a task by the user, the motion including a gesture performed in a three-dimensional (3D) zone in an environment of the user, the 3D zone being one of a plurality of 3D zones in the environment that are defined to accept respective, distinct gestures of the user; and a front-end system coupled to the one or more sensors, and configured to receive and process the sensed input including the measurements to identify the gesture and from the gesture, identify operations of an electronic resource, the front-end system being configured to identify the gesture based on the one of the plurality of 3D zones in which the gesture is performed, wherein the front-end system is configured to form and communicate an input to cause the electronic resource to perform the operations and produce an output, and wherein the front-end system is configured to receive the output from the electronic resource, and communicate the output to a display device, audio output device or haptic sensor.

5.
The ubiquitous natural user system of Claim 1, wherein the gesture is a first gesture, and the motion further includes a second gesture performed in the environment of the user and that is distinct from the respective, distinct gestures that the plurality of 3D zones are defined to accept, and wherein the front-end system being configured to receive and process the sensed input includes being configured to receive and process the sensed input to further identify the second gesture, the operations of the electronic resource including operations identified from the first gesture and operations identified from the second gesture, the front-end system being configured to identify the second gesture without regard to the plurality of 3D zones.

6. The ubiquitous natural user system of Claim 1, wherein the front-end system being configured to communicate the output includes being configured to communicate the output to the display device, the output being communicated in one of a plurality of distinct desktop environments displayable in respective facets of a three-dimensional, multifaceted graphical user interface.

Claims 1-21 stand rejected under the judicially created doctrine of obviousness-type double patenting as unpatentable over claims 1-21 of commonly owned Laughlin Application (US 2014/0354529 A1; Dec. 4, 2014).

Claims 1-21 stand rejected under the judicially created doctrine of obviousness-type double patenting as unpatentable over claims 1-21 of commonly owned Laughlin (US 9,395,810 B2; July 19, 2016).

Claims 1-21 stand rejected under 35 U.S.C. § 103(a) as unpatentable over Pretlove (US 2004/0189675 A1; Sept. 30, 2004) and Izumi (US 2012/0056989 A1; Mar. 8, 2012).

Double Patenting Rejections

We are persuaded by Appellants' arguments (App. Br. 8; see also Reply Br.
1-2) that the Examiner has not shown that claims 1-21 are unpatentable under the judicially created doctrine of obviousness-type double patenting over claims 1-21 of commonly owned Laughlin Application and over claims 1-21 of commonly owned Laughlin.

The Examiner found that:

Although the claims at issue are not identical, they are not patentably distinct from each other because the instant application discusses potential use of a haptic sensor as an option while the copending application discusses a three dimensional model. As this is merely an option, the same set of main limitations is discussed in both sets of claims. The major hardware and use is similar in both applications.

(Final Act. 3-4.) We do not agree.

The key question in any obviousness double patenting analysis is: "Does any claim in the application define merely an obvious variation of an invention claimed in the patent asserted as supporting double patenting?" General Foods Corp. v. Studiengesellschaft Kohle mbH, 972 F.2d 1272, 1278 (Fed. Cir. 1992) (discussing In re Vogel, 422 F.2d 438 (CCPA 1970)). Answering this question requires that the decision maker first construe the claims in the patent and the claims under review and determine the differences between them. Eli Lilly and Co. v. Barr Labs., Inc., 251 F.3d 955, 970 (Fed. Cir. 2001). After determining the differences, the decision maker must determine whether the differences in the subject matter render the claims patentably distinct. Id. However, the Examiner has neither (i) construed the claims under review, nor (ii) construed the claims of commonly owned Laughlin Application or commonly owned Laughlin, much less determined the differences between such claims.
Instead, the Examiner merely provides a conclusory statement that the "instant application discusses potential use of a haptic sensor as an option while the copending application discusses a three-dimensional model" and "[t]he major hardware and use is similar in both applications" (Final Act. 3-4), without providing a citation to the conflicting claims and without providing any supporting rationale or explanation. Thus, on the record before us, the Examiner has not established a prima facie case that claims 1-21 would have been an obvious variation to one of ordinary skill in the art over claims 1-21 of commonly owned Laughlin Application and claims 1-21 of commonly owned Laughlin.

Accordingly, we are persuaded by Appellants' arguments that the Examiner provides "broad conclusory statements ... without any explanation to support the conclusion that the claims are patentably indistinct from either the '252 application [Laughlin Application] or the '242 application [Laughlin], and therefore cannot form the basis of a proper rejection." (App. Br. 8.) Thus, we do not sustain the rejection of claims 1-21 under the judicially created doctrine of obviousness-type double patenting over claims 1-21 of commonly owned Laughlin Application or over claims 1-21 of Laughlin.

§ 103 Rejection - Pretlove and Izumi

Claims 1-4, 8-11, and 15-18

We are unpersuaded by Appellants' arguments (App. Br. 12-13; see also Reply Br. 4-5) that the combination of Pretlove and Izumi would not have rendered obvious independent claim 1, which includes the limitation "the 3D zone being one of a plurality of 3D zones in the environment that are defined to accept respective, distinct gestures of the user."
The Examiner finds that different positions and sizes for the virtual operation screen of Izumi, as illustrated in Figure 11, and the three-layered operation region of Izumi, as illustrated in Figure 27, collectively correspond to the limitation "the 3D zone being one of a plurality of 3D zones in the environment that are defined to accept respective, distinct gestures of the user." (Ans. 2-3; see also Final Act. 6.) We agree with the Examiner.

Izumi relates to "an image recognition apparatus and an operation determining method for determining a movement of a measurement target from an image photographed by a video camera." (¶ 1.) Figure 1 of Izumi illustrates an operation input system (¶ 31), including three-dimensional display device 111 and operator 102, such that operator 102 "can perform an operation to a virtual operation screen stereoscopically displayed in a constant position between the operator 102 and the three-dimensional display device 111" (¶ 84). Figure 11 of Izumi illustrates an embodiment with multiple operation regions, including operation region 811 for adult operator 810 and operation region 821 for child operator 820, having a lower height and a shorter arm length. (¶ 114.) Because Izumi explains that the virtual operation screen includes operation region 811 and operation region 821, Izumi teaches the limitation "the 3D zone being one of a plurality of 3D zones."

Figure 27 of Izumi illustrates another embodiment, such that when "finger 601 enters in the z axis direction from the trigger face 701, it is determined that the operation is performed" in one of three layers (i.e., layer A to layer C). (¶ 129.) Izumi explains that "in layer A, at the time when the finger 601 passes the trigger face 701, the object pointed around a position shown in an icon showing the finger 601, for example, in a rotating icon 4503 is rotated in response to the movement of the finger 601." (Id.)
Similarly, Izumi explains that "in layer C, a movement icon 4505 is displayed in a position of the finger 601 on the target displayed and pointed on the three-dimensional display device 111, thereby making it possible to move the object in accordance with the movement of the finger 601." (¶ 130.) Because Izumi explains that rotating icon 4503 in layer A is rotated in response to the movement of the finger 601 (i.e., rotation motion) and movement icon 4505 is moved in response to the movement of the finger 601 (i.e., vertical or horizontal motion), Izumi teaches the limitation "in the environment that are defined to accept respective, distinct gestures of the user."

Appellants argue that "nowhere does Izumi disclose a plurality of 3D operation regions defined for respective, distinct gestures; or identifying a gesture based on the 3D operation region in which the gesture is performed." (App. Br. 13.) Similarly, Appellants argue "[e]ven if one could argue that Izumi discloses different 3D operation regions for adults and children, this still does not meet the claimed invention, which requires not only different 3D zones, but that those 3D zones be defined to accept respective, distinct gestures." (Reply Br. 5.) Contrary to Appellants' argument, Figures 11 and 27 of Izumi illustrate movement icon 4505 and rotating icon 4503, such that movement of each icon requires a different motion for finger 601.

Thus, we agree with the Examiner that the combination of Pretlove and Izumi would have rendered obvious independent claim 1, which includes the limitation "the 3D zone being one of a plurality of 3D zones in the environment that are defined to accept respective, distinct gestures of the user." Accordingly, we sustain the rejection of independent claim 1 under 35 U.S.C. § 103(a). Claims 2-4 depend from claim 1, and Appellants have not presented any substantive arguments with respect to these claims.
Therefore, we sustain the rejection of claims 2-4 under 35 U.S.C. § 103(a), for the same reasons discussed with respect to independent claim 1. Independent claims 8 and 15 recite limitations similar to those discussed with respect to independent claim 1, and Appellants have not presented any additional substantive arguments with respect to these claims. We, therefore, sustain the rejection of claims 8 and 15, as well as dependent claims 9-11 and 16-18, not separately argued, for the same reasons discussed with respect to claim 1.

Dependent Claims 5, 12, and 19

We are persuaded by Appellants' arguments (App. Br. 13-14) that the combination of Pretlove and Izumi would not have rendered obvious dependent claim 5, which includes the limitation "the front-end system being configured to identify the second gesture without regard to the plurality of 3D zones." The Examiner also found that different positions and sizes for the virtual operation screen of Izumi, as illustrated in Figure 11, and the three-layered operation region of Izumi, as illustrated in Figure 27, collectively correspond to the limitation "the front-end system being configured to identify the second gesture without regard to the plurality of 3D zones." (Ans. 2-3; Final Act. 8-9.) We do not agree.

As discussed previously, in reference to Figure 27, Izumi explains that "in layer A, at the time when the finger 601 passes the trigger face 701, the object pointed around a position shown in an icon showing the finger 601, for example, in a rotating icon 4503 is rotated in response to the movement of the finger 601." (¶ 129.) Similarly, Izumi explains that "in layer C, a movement icon 4505 is displayed in a position of the finger 601 on the target displayed and pointed on the three-dimensional display device 111, thereby making it possible to move the object in accordance with the movement of the finger 601." (¶ 130.)
Although the Examiner cited to the embodiments of Figures 11 and 27 of Izumi (Ans. 2-3; Final Act. 8-9), the Examiner has provided insufficient evidence to support a finding that Izumi teaches the limitation "the front-end system being configured to identify the second gesture without regard to the plurality of 3D zones." In particular, Izumi explains that "rotating icon 4503 is rotated in response to the movement of the finger 601" (i.e., rotation motion) in layer A (¶ 129) and "movement icon 4505 ... making it possible to move the object in accordance with the movement of the finger 601" (i.e., vertical or horizontal motion) in layer C (¶ 130). However, Izumi is silent with respect to the use of the rotating finger motion being operable, for example, in a different operation region other than in layer A. In other words, Izumi does not state that the rotating finger motion is operable in layer C of Izumi. On this record, the Examiner has not demonstrated that Izumi teaches the limitation "the front-end system being configured to identify the second gesture without regard to the plurality of 3D zones."

Thus, we are persuaded by Appellants' arguments that "nowhere does Izumi here disclose a gesture distinct from those that a plurality of 3D zones are defined to accept, or identification of a gesture without regard to a plurality of 3D zones (or even without regard to its 3D operation region)." (App. Br. 13-14.) Accordingly, we do not sustain the rejection of dependent claim 5 under 35 U.S.C. § 103(a). Dependent claims 12 and 19 recite limitations similar to those discussed with respect to dependent claim 5. We do not sustain the rejection of claims 12 and 19 for the same reasons discussed with respect to claim 5.

Dependent Claims 6, 7, 13, 14, 20, and 21

We are further persuaded by Appellants' arguments (App. Br. 14-15; see also Reply Br.
6-7) that the combination of Pretlove and Izumi would not have rendered obvious dependent claims 6 and 7, which include the limitation "three-dimensional, multifaceted graphical user interface." The Examiner found that the 3D graphical representation of the visual information of Pretlove, or alternatively, the three-dimensional display device of Izumi, corresponds to the limitation "three-dimensional, multifaceted graphical user interface." (Final Act. 9; Ans. 3.) We do not agree.

Pretlove relates to an augmented reality system (¶ 1) in which an operator can be physically separated from the location where the tasks or processes are performed (¶ 2). Figure 2 of Pretlove illustrates a block diagram of an augmented reality system, such that "[t]he remote operator 3 is able to interact with the system through the pointing and interaction device 13" (¶ 50) and that the output from pointing device 13 can be used by graphics unit 37 to generate a 3D graphical representation of visual information (¶ 52).

Izumi explains that for three-dimensional display device 111, movement of the operator 102 is photographed by video camera 201 and "computer 110 produces a stereoscopic image of the operator 102 from data obtained from the video camera 201 and calculates an optimal position of the virtual operation screen." (¶ 92.) Izumi further explains that the virtual operation screen is two-dimensional. (¶ 94; see also Fig. 5.)

Although the Examiner cited to the 3D graphical representation of Pretlove produced by graphics unit 37 or, alternatively, three-dimensional display device 111 of Izumi (Final Act. 9; Ans. 3), the Examiner has provided insufficient evidence to support a finding that Pretlove and Izumi, either individually or in combination, teach the limitation "three-dimensional, multifaceted graphical user interface."
In particular, graphics unit 37 of Pretlove produces a 3D graphical representation of visual information, but Pretlove is silent with respect to such 3D graphical representation being utilized as a graphical user interface, much less a "three-dimensional, multifaceted graphical user interface," as claimed. Moreover, the virtual operation screen of Izumi is a two-dimensional, single-faceted graphical user interface, rather than a "three-dimensional, multifaceted graphical user interface," as claimed. On this record, the Examiner has not demonstrated that Pretlove and Izumi, either individually or in combination, teach the limitation "three-dimensional, multifaceted graphical user interface."

Thus, we are persuaded by Appellants' arguments that "nowhere does Pretlove ... disclose a 3D, multifaceted GUI, much less one in which a plurality of distinct desktop environments are displayable in respective facets" (App. Br. 14) and "Izumi's ... GUI is not a 3D, multifaceted GUI, nor does it display distinct desktop environments displayable in respective facets of a 3D, multifaceted GUI" (id. at 15). Accordingly, we do not sustain the rejection of dependent claims 6 and 7 under 35 U.S.C. § 103(a). Dependent claims 13, 14, 20, and 21 recite limitations similar to those discussed with respect to dependent claims 6 and 7. We do not sustain the rejection of claims 13, 14, 20, and 21 for the same reasons discussed with respect to claims 6 and 7.

DECISION

The Examiner's decision rejecting claims 1-21 under the judicially created doctrine of obviousness-type double patenting is reversed.

The Examiner's decision rejecting claims 1-4, 8-11, and 15-18 under 35 U.S.C. § 103(a) is affirmed.

The Examiner's decision rejecting claims 5-7, 12-14, and 19-21 under 35 U.S.C. § 103(a) is reversed.

No time period for taking any subsequent action in connection with this appeal may be extended under 37 C.F.R. § 1.136(a)(1)(iv).
AFFIRMED-IN-PART