Ex parte Huston et al., Appeal 2016-002843, Application No. 13/774,710 (P.T.A.B. Dec. 19, 2016)

UNITED STATES PATENT AND TRADEMARK OFFICE
UNITED STATES DEPARTMENT OF COMMERCE
Address: COMMISSIONER FOR PATENTS
P.O. Box 1450
Alexandria, Virginia 22313-1450
www.uspto.gov

APPLICATION NO.: 13/774,710
FILING DATE: 02/22/2013
FIRST NAMED INVENTOR: Charles D. Huston
ATTORNEY DOCKET NO.: 5624-00101
CONFIRMATION NO.: 6402
EXAMINER: ROBINSON, TERRELL M
ART UNIT: 2618
MAIL DATE: 12/19/2016
DELIVERY MODE: PAPER

Correspondent:
Egan, Peterman, Enders & Huston LLP
1101 S. Capital of Texas Highway, Suite C200
Austin, TX 78746

Please find below and/or attached an Office communication concerning this application or proceeding. The time period for reply, if any, is set in the attached communication.

PTOL-90A (Rev. 04/07)

UNITED STATES PATENT AND TRADEMARK OFFICE
BEFORE THE PATENT TRIAL AND APPEAL BOARD

Ex parte CHARLES D. HUSTON and CHRIS COLEMAN

Appeal 2016-002843
Application 13/774,710
Technology Center 2600

Before THU A. DANG, JOHN P. PINKERTON, and JOHN D. HAMANN, Administrative Patent Judges.

DANG, Administrative Patent Judge.

DECISION ON APPEAL

I. STATEMENT OF THE CASE

Appellants appeal under 35 U.S.C. § 134(a) from the Examiner’s Final Rejection of claims 1–24. We have jurisdiction under 35 U.S.C. § 6(b). We affirm.

A. INVENTION

According to Appellants, the invention relates to “creating indoor and outdoor environments that include virtual models and images” wherein “the environments are created in part using crowd sourced images and metadata and the environments are applied to social media applications” (Spec. ¶ 2).

B. REPRESENTATIVE CLAIM

Claim 1 is exemplary:

1.
A system for creating and sharing an environment comprising:

a network for receiving images and metadata from a plurality of devices each having a camera employed near a point of interest to capture random images and associated metadata near said point of interest, wherein the metadata for each image includes location of the device and the orientation of the camera;

an image processing server connected to the network for receiving said images and metadata, wherein the server processes the images and metadata to build a 3D model of one or more targets proximate the point of interest based at least in part on said images;

an experience platform connected to the image processing server for storing the 3D model of one or more targets, whereby users can connect to the experience platform to view the point of interest from a user selected location and orientation and view the 3D model of one or more targets.

C. REJECTIONS

1. Claims 1–5, 7, 10, and 11 stand rejected under 35 U.S.C. § 103(a) as unpatentable over the teachings of Krupka et al. (US 2011/0211737 A1, pub. Sept. 1, 2011), Wallace et al. (US 2011/0052073 A1, pub. Mar. 3, 2011), and Vaittinen et al. (US 2011/0283223 A1, pub. Nov. 17, 2011).

2. Claim 6 stands rejected under 35 U.S.C. § 103(a) as unpatentable over the teachings of Krupka, Wallace, Vaittinen, and Wang et al. (US 2011/0072047 A1, pub. Mar. 24, 2011).

3. Claims 8, 9, 16, 17, and 23 stand rejected under 35 U.S.C. § 103(a) as unpatentable over the teachings of Krupka, Wallace, Vaittinen, and Bathiche (US 2011/0319166 A1, pub. Dec. 29, 2011).

4. Claims 12, 14, 15, 18, and 23 stand rejected under 35 U.S.C. § 103(a) as unpatentable over the teachings of Krupka and Vaittinen.

5. Claim 13 stands rejected under 35 U.S.C. § 103(a) as unpatentable over the teachings of Krupka, Vaittinen, and Wang.

6. Claims 19–22 stand rejected under 35 U.S.C.
§ 103(a) as unpatentable over the teachings of Krupka, Vaittinen, and Krill et al. (US 2009/0256904 A1, pub. Oct. 15, 2009).

7. Claim 24 stands rejected under 35 U.S.C. § 103(a) as unpatentable over the teachings of Krupka, Wallace, Vaittinen, and Tuite et al. (“PhotoCity”).

II. ISSUES

The principal issues before us are whether the Examiner erred in finding that the combination of Krupka and Vaittinen (and Wallace) teaches or would have suggested a “camera employed near a point of interest to capture random images” and an “image processing server” that “processes the images and metadata to build a 3D model of one or more targets proximate the point of interest” whereby “users can connect to the experience platform to view the point of interest from a user selected location and orientation and view the 3D model of one or more targets” (claim 1).

III. FINDINGS OF FACT

The following Findings of Fact (FF) are shown by a preponderance of the evidence.

Krupka

1. Krupka discloses cameras that capture images for part of a user’s image collection (¶ 42), wherein each image includes metadata such as timestamp, location information, image size, title and various tags (¶ 57).

Vaittinen

2. Vaittinen relates to development of mapping and navigating graphics and/or images (e.g., street-level views of various locations and points of interest) augmented with location relevant content (¶ 1), which comprises rendering of a user interface for a location-based service that simultaneously includes both a main view portion and a preview portion, one of which displays a perspective view of objects in a field of view (¶ 3).

3. A perspective view provides perspective to an object, which includes 3D modeling in virtual reality to show real or virtual depth to the object or its surroundings (¶ 20).
That is, the perspective view of the surrounding area is shown in the preview portion of the user interface in order to give the user an idea of the 3D panoramic view of the surrounding area, wherein the perspective view is generated using the camera of the user device to capture images of the surrounding area in real-time, by using pre-stored images, or a combination of real-time images and pre-stored images. The field of vision can be adjusted by adjusting the orientation of the user device to allow a user to navigate to a POI or otherwise determine their location. (¶ 29).

4. The system utilizes the augmented reality or virtuality (e.g., using 3D models and 3D mapping information) to insert rich content information relevant to the POI (¶ 35). In particular, after the user selects a POI and retrieves tagged content information of the POI, the system saves the POI and tagged content and presents updated content information to the user in the live image view and/or prerecorded panoramic view, wherein the content information includes live media, stored media, and metadata associated therewith (¶ 36).

5. Content and mapping information is then presented to the user via a user interface, wherein the user is presented with 3D or augmented reality presentations of particular locations and related objects (e.g., buildings, terrain features, POIs, etc. at the particular location) as part of a graphical user interface (¶ 60). For example, the user can touch the preview portion to switch the view shown or tilt the device a particular angle such that the angle can automatically trigger a switching of the views (¶¶ 62–64).

IV. ANALYSIS

Appellants contend “Vaittenen does not relate to building a 3D model from random images” (App. Br. 5). According to Appellants, “[t]here is no description of using randomly captured images and metadata to build a 3D model” (id.).
Appellants further contend “there is nothing in Vaittinen that teaches or suggests that a user can select a location and orientation for viewing a target in a 3D model” (id.) because Vaittinen selects an image “based on the best focus, best vantage point, and best image” (App. Br. 5–6). Although Appellants concede Vaittinen suggests that “a user may specify the location from which to retrieve content,” Appellants contend that the perspective view “is selected from a ‘real time image’ from the smart phone camera or a ‘pre-stored’ image” (id. at 6).

We have considered all of Appellants’ arguments and evidence presented. However, we disagree with Appellants’ contentions regarding the Examiner’s rejections of the claims. We agree with the Examiner’s findings, and find no error with the Examiner’s conclusion that the claims would have been obvious over the combined teachings.

As a preliminary matter of claim construction, we give the claims their broadest reasonable interpretation consistent with the Specification. See In re Morris, 127 F.3d 1048, 1054 (Fed. Cir. 1997). While we interpret claims broadly but reasonably in light of the Specification, we nonetheless must not import limitations from the Specification into the claims. See Phillips v. AWH Corp., 415 F.3d 1303, 1323 (Fed. Cir. 2005) (en banc).

Although Appellants contend, in Vaittinen, “[t]here is no description of using randomly captured images and metadata to build a 3D model” (App. Br. 5), we note claim 1 does not require “randomly” capturing images. Rather, claim 1 recites capturing “random” images, but does not define what a “random” image is. That is, nothing in the claims precludes the “random” images from being captured images in an image collection, as broadly but reasonably interpreted by the Examiner (Ans. 29).
That is, we are not persuaded of error in the Examiner’s broad but reasonable interpretation that “the capture of images for user’s collections” that “does not disclose any specific or necessary ordering of those images” can be “random” (id.).

Nevertheless, the test for obviousness is what the combined teachings of Krupka in view of Vaittinen would have suggested to one of ordinary skill in the art. See In re Merck & Co., Inc., 800 F.2d 1091, 1097 (Fed. Cir. 1986). Thus, although Appellants contend that, in Vaittinen, “[t]here is no description of using randomly captured images and metadata to build a 3D model” (App. Br. 5), as the Examiner points out, the Examiner relies on Krupka to address this feature (Ans. 29). Based on the broadest reasonable interpretation discussed above, we agree with the Examiner’s reliance on Krupka for disclosing and suggesting “a network for receiving images and metadata from a plurality of devices,” each having a “camera employed near a point of interest to capture random images and associated metadata near said point of interest” (Ans. 4, emphasis omitted; FF 1), as recited in claim 1.

Furthermore, Vaittinen discloses rendering of a user interface which displays a perspective view of objects in a field of view (FF 2). The perspective view provides perspective to an object, which includes 3D modeling in virtual reality to show real or virtual depth to the object or its surroundings, wherein the perspective view is generated using real-time captured images of the surrounding area, as well as pre-stored images, and the field of vision can be adjusted by adjusting the orientation of the user device to allow a user to navigate to a POI (FF 3).
The system utilizes the augmented reality or virtuality (e.g., using 3D models) to insert rich content information relevant to the POI, wherein the content information includes live media, stored media, and metadata associated therewith (FF 4). We agree with the Examiner’s finding that Vaittinen discloses a “perspective view concept where views can be real-time images for performing 3D modeling in virtual reality with respect to the point of interest” (Ans. 6), wherein “a virtual reality representation” is built “based on 3D modeling which is constructed with the previously taken images” (Ans. 29). Thus, we agree with the Examiner’s reliance on Vaittinen for disclosing and suggesting “the server processes the images and metadata to build a 3D model of one or more targets proximate the point of interest based at least in part on said images” as recited in claim 1 (Ans. 6).

Accordingly, contrary to Appellants’ contention that “Vaittenen does not relate to building a 3D model from random images” (App. Br. 5), we find no error in the Examiner’s reliance on the combination of Krupka and Vaittinen for teaching and suggesting the contested limitation.

Although Appellants contend that “there is nothing in Vaittinen that teaches or suggests that a user can select a location and orientation for viewing a target in a 3D model” (App. Br. 5), we find no error with the Examiner’s finding that Vaittinen does teach “the user request or selection of a specific location and orientation and the receiving of that content regarding 3D or augmented reality models of various objects such as a point of interest or building” (Ans. 31–32, emphasis omitted). As Appellants concede, Vaittinen suggests that “a user may specify the location from which to retrieve content” (App. Br. 6).
In particular, Vaittinen discloses presenting content and mapping information to the user via a user interface, wherein the user is presented with 3D or augmented reality presentations of particular locations of interest and related objects, and the user can switch the view shown or tilt the device a particular angle such that the angle can automatically trigger a switching of the views/orientation (FF 5). That is, Vaittinen discloses and suggests a user can select the location of interest as well as orientation for viewing (id.).

Although Appellants contend that Vaittinen selects an image “based on the best focus, best vantage point, and best image” (App. Br. 6), nothing in the claims precludes selecting also based on focus, vantage point, or best image. Similarly, nothing in the claims precludes the perspective view from also being “selected from a ‘real time image’ from the smart phone camera or a ‘pre-stored’ image” (App. Br. 6), as Appellants contend. Rather, claim 1 merely requires “users can connect to the experience platform to view the point of interest from a user selected location and orientation and view the 3D model of one or more targets” (claim 1). We are unpersuaded that Vaittinen does not disclose or suggest this contested limitation.

Based on this record, we find no error in the Examiner’s rejection of independent claim 1 as obvious over the combination of Krupka, Wallace, and Vaittinen.

As for claim 7, Appellants merely repeat that “Vaittinen is not concerned with building 3D models” (App. Br. 6). As discussed above, we find no error with the Examiner’s finding that Vaittinen discloses and suggests this contested limitation. On this record, we also affirm the rejection of claim 7 over Krupka, Wallace, and Vaittinen.

As for claims 8, 9, 16, and 17, Appellants contend Bathiche and Vaittinen are not in the same field of endeavor (App. Br. 7).
Further, Appellants contend “nothing in Bathiche relates to building 3D models” (id.). However, as discussed above, we find no error with the Examiner’s reliance on Vaittinen for disclosing and suggesting the contested limitation. Further, we are unpersuaded that the Examiner erred in finding Bathiche provides “an enhanced viewing experience,” which is from the same field of endeavor and analogous to the goal of “improving users interaction via a mobile device experience” (Ans. 32–33). On this record, we also affirm the rejection of claims 8, 9, 16, and 17 over Krupka, Wallace, and Vaittinen, in further view of Bathiche.

Appellants provide similar arguments for claim 12 as set forth above with respect to claim 1 (App. Br. 8–9). As set forth above and in the Answer, with which we agree, Vaittinen teaches and suggests the contested limitations (Ans. 33–38). On this record, we are also unconvinced of error in the Examiner’s rejection of claim 12, and claims 14, 15, 18, and 23 depending therefrom, over Krupka and Vaittinen.

As for claim 24, although Appellants contend “Tuite seeks to avoid random images” (App. Br. 9–10), given the broadest reasonable interpretation of the claims, we are unconvinced that the Examiner erred in finding that Tuite at least suggests the contested limitation (Ans. 38–39).

Appellants do not provide substantive arguments for the other dependent claims (App. Br. 10), and thus, we adopt the Examiner’s findings, which we incorporate herein by reference, and affirm the respective rejections of the claims.

V. CONCLUSION AND DECISION

We affirm the Examiner’s rejections of claims 1–24 under 35 U.S.C. § 103(a).

No time period for taking any subsequent action in connection with this appeal may be extended under 37 C.F.R. § 1.136(a)(1)(iv).

AFFIRMED