Ex parte Shpunt et al., Application No. 12/844,864, Patent Trial and Appeal Board (P.T.A.B. Sep. 8, 2016)

UNITED STATES PATENT AND TRADEMARK OFFICE

APPLICATION NO.: 12/844,864
FILING DATE: 07/28/2010
FIRST NAMED INVENTOR: Alexander Shpunt
ATTORNEY DOCKET NO.: 1020-1029.1
CONFIRMATION NO.: 8885
EXAMINER: RETALLICK, KAITLIN A
ART UNIT: 2482
NOTIFICATION DATE: 09/12/2016
DELIVERY MODE: ELECTRONIC

123590 7590 09/12/2016
D. KLIGLER I.P. SERVICES LTD.
P.O. BOX 57651
TEL AVIV, 61576 ISRAEL

UNITED STATES DEPARTMENT OF COMMERCE
United States Patent and Trademark Office
Address: COMMISSIONER FOR PATENTS, P.O. Box 1450, Alexandria, Virginia 22313-1450, www.uspto.gov

Please find below and/or attached an Office communication concerning this application or proceeding. The time period for reply, if any, is set in the attached communication. Notice of the Office communication was sent electronically on the above-indicated "Notification Date" to the following e-mail address(es): INFO@DKPAT.CO.IL, daniel@dkpat.co.il, alon@dkpat.co.il

PTOL-90A (Rev. 04/07)

UNITED STATES PATENT AND TRADEMARK OFFICE
BEFORE THE PATENT TRIAL AND APPEAL BOARD

Ex parte ALEXANDER SHPUNT, GERARD MEDIONI, DANIEL COHEN, EREZ SALI, and RONEN DEITCH (footnote 1)

Appeal 2014-008240
Application 12/844,864
Technology Center 2400

Before MICHAEL J. STRAUSS, DANIEL N. FISHMAN, and JAMES W. DEJMEK, Administrative Patent Judges.

FISHMAN, Administrative Patent Judge.

DECISION ON APPEAL

Appellants appeal under 35 U.S.C. § 134(a) from a rejection of claims 1, 3-13, 15-18, 20-30, 32-40, and 42-45 (footnote 2). Claims 2, 14, 19, and 31 have been cancelled (Final Act. 2) and claims 41 and 46 are indicated as

Footnote 1: Appellants identify PrimeSense Ltd. as the real party in interest. Appeal Brief 1.
Footnote 2: In this Decision, we refer to Appellants' Appeal Brief ("App.
Br.," filed March 25, 2014); Appellants' Reply Brief ("Reply Br.," filed July 17, 2014); the Final Office Action ("Final Act.," mailed September 26, 2013); the Examiner's Answer ("Ans.," mailed on May 19, 2014); and the original Specification ("Spec.," filed July 28, 2010).

allowable if rewritten in independent form (Final Act. 2). We have jurisdiction over the remaining pending claims under 35 U.S.C. § 6(b). We affirm-in-part.

THE INVENTION

Appellants' invention is directed to "depth mapping based on pattern matching and stereoscopic information." Spec. Title. Independent claims 1, 13, and 37, reproduced below, are illustrative:

1. A method for depth mapping, comprising:
projecting a pattern of optical radiation onto an object;
capturing a first image of the pattern on the object using a first image sensor, and processing the first image alone to generate pattern-based depth data with respect to the object;
capturing a second image of the object using a second image sensor, wherein the projected pattern does not appear in the second image, and processing the second image together with the first image to generate stereoscopic depth data with respect to the object; and
combining the pattern-based depth data with the stereoscopic depth data to create a depth map of the object.

13.
A method for depth mapping, comprising:
receiving at least one image of an object, captured by an image sensor, the image comprising multiple pixels;
processing the at least one image to generate depth data comprising multiple candidate depth coordinates and respective measures of confidence associated with the candidate depth coordinates for each of a plurality of the pixels;
applying a weighted voting process to the depth data, wherein votes for the candidate depth coordinates are weighted responsively to the respective measures of confidence, in order to select one of the candidate depth coordinates at each pixel; and
outputting a depth map of the object comprising the selected one of the candidate depth coordinates at each pixel.

37. A method for depth mapping, comprising:
capturing first and second images of an object using first and second image capture subassemblies, respectively;
comparing the first and second images in order to estimate a misalignment between the first and second image capture subassemblies;
processing the first and second images together while correcting for the misalignment so as to generate stereoscopic depth data with respect to the object; and
outputting a depth map comprising the stereoscopic depth data.

THE REJECTIONS

Claims 1, 3-7, 18, 20-24, and 35 stand rejected under 35 U.S.C. § 103(a) as being unpatentable over Polzin et al. (US 2010/0303289 A1; Dec. 2, 2010 (filed May 29, 2009)) ("Polzin"), Mack et al. (US 6,377,700 B1; Apr. 23, 2002) ("Mack"), and Jiejie Zhu et al., Fusion of Time-of-Flight Depth and Stereo for High Accuracy Depth Maps, IEEE Conference on Computer Vision and Pattern Recognition, June 2008, at 1-8 ("Zhu"). Final Act. 3-12.

Claims 8, 9, 11, 12, 25, 26, 28, and 29 stand rejected under 35 U.S.C. § 103(a) as being unpatentable over Polzin, Mack, Zhu, and Hager et al. (US 2004/0105580 A1; June 3, 2004) ("Hager"). Final Act. 12-17.
Claims 10 and 27 stand rejected under 35 U.S.C. § 103(a) as being unpatentable over Polzin, Mack, Zhu, Hager, and Ghanem et al. (US 2009/0060307 A1; Mar. 5, 2009) ("Ghanem"). Final Act. 17-19.

Claims 13, 16, 17, 30, 33, 34, and 36 stand rejected under 35 U.S.C. § 103(a) as being unpatentable over Polzin, Zhu, and Ghanem. Final Act. 19-26.

Claims 15 and 32 stand rejected under 35 U.S.C. § 103(a) as being unpatentable over Polzin, Zhu, Ghanem, and Hager. Final Act. 26-27.

Claims 37-40 and 42-45 stand rejected under 35 U.S.C. § 103(a) as being unpatentable over Polzin, Hager, and Zhu. Final Act. 28-34.

ANALYSIS

Claims 1, 3, 6, 11, 18, 20, 23, 28, and 35

Appellants argue the Examiner's proposed combination fails to teach or suggest "use of a pair of different images for both pattern-based and stereoscopic depth mapping, as required by claims 1, 18, and 35." App. Br. 9-11. Although Appellants acknowledge Polzin teaches using two cameras (3-D and RGB), Appellants argue Polzin does not teach "that an image of a projected pattern might be combined with another image ... in order to generate stereoscopic depth data." Id. at 10; see also Reply Br. 2.

We are not persuaded the Examiner erred. Specifically, the Examiner finds Polzin teaches capturing a first image of a projected pattern on an object using a 3-D camera, and capturing a second image of the object using an RGB camera. Final Act. 3-4 (citing Polzin ¶¶ 54 and 56); see also Ans. 34. The Examiner further finds Polzin teaches processing the second image together with the first image to generate stereoscopic depth data. Final Act. 4 (citing Polzin ¶ 56); see also Ans. 34 (citing Polzin ¶ 57). We agree with the Examiner's findings and adopt them as our own.
Polzin teaches "a deformation of the [projected] pattern may be captured by, for example, the 3-D camera 74 and/or the RGB camera 76 and may then be analyzed to determine a physical distance from the capture device to a particular location on the targets or objects." Polzin ¶ 56. Polzin further teaches the use of "two or more physically separated cameras that may view a capture area from different angles, to obtain visual stereo data that may be resolved to generate depth information." Polzin ¶ 57. Polzin, in disclosing capturing the deformed pattern on the object using the 3-D camera 74 and/or the RGB camera 76 to determine the physical distance between the cameras and the object, teaches or suggests processing the images from the 3-D camera 74 and the RGB camera 76 together. Polzin, in disclosing two cameras capturing images to obtain stereo data and generate depth information, teaches or suggests processing the images from the 3-D camera 74 and the RGB camera 76 together to generate stereoscopic depth data with respect to the object.

Appellants further argue Polzin does not teach a first image with a projected pattern may be combined with another image in which the pattern does not appear. App. Br. 10; see also Reply Br. 2. We are not persuaded the Examiner erred. "[O]ne cannot show non-obviousness by attacking references individually where, as here, the rejections are based on combinations of references." In re Keller, 642 F.2d 413, 426 (CCPA 1981). The Examiner finds Mack, in combination with Polzin, teaches capturing an image wherein a projected pattern does not appear in the image. Final Act. 4-5 (citing Mack col. 4, l. 64 to col. 5, l. 3; Figure 4). Thus, it is the combination of Polzin and Mack that teaches or suggests an image with a projected pattern combined with another image in which the pattern does not appear in order to generate stereoscopic depth data.
Appellants next argue Zhu does not teach using the same image sensor and same image to generate pattern-based depth data and stereoscopic depth data, as recited in the claim. App. Br. 10. We are not persuaded the Examiner erred because, as discussed supra, the Examiner relies on the combination of Polzin and Mack, not Zhu, for this limitation.

Appellants further argue Mack does not teach using solely a first image sensor for generating pattern-based depth data. App. Br. 10. Appellants further argue Mack's textual data is a two-dimensional surface image, and not depth data. Id. at 11 (citing Mack col. 3, ll. 63-65). We are not persuaded the Examiner erred because, as discussed supra, the Examiner relies on the proposed combination, not Mack alone.

In view of the above discussion, we are not persuaded the Examiner erred by finding the proposed combination teaches or suggests the limitations of claim 1. Appellants present similar arguments regarding independent claims 18 and 35 (App. Br. 8-11) and, for the same reasons as claim 1, we are not persuaded of Examiner error. Thus, we sustain the rejection of independent claim 1 and, for the same reasons, the rejection of independent claims 18 and 35, together with the rejections of dependent claims 3, 6, 11, 20, 23, and 28, not separately argued with particularity. App. Br. 24.

Claims 4 and 21

Claim 4 depends from claim 3, which depends from claim 1. Claim 3 recites projecting and capturing an infrared pattern on the object. Claim 4 further recites "wherein capturing the second image comprises capturing a color image of the object." Appellants contend Polzin does not teach or suggest "that an [infrared (IR)] image might be used together with the color image in generating stereoscopic depth data, as required by claims 4 and 21." App. Br. 18 (emphasis omitted). We remain unpersuaded of Examiner error.
The Examiner finds Polzin teaches the use of an infrared pattern in the rejection of claim 3 (Final Act. 6-7), from which claim 4 depends, and, thus, Appellants' arguments reduce to the same arguments as claim 1: that Polzin fails to disclose combining two images as claimed. Therefore, we are unpersuaded of error for the same reasons discussed supra with respect to claim 1. Thus, we sustain the obviousness rejection of claim 4 and, for the same reasons, claim 21.

Claims 5 and 22

Claim 5 depends from claim 4 and further recites, in relevant part, "wherein the color image comprises pixels ... and wherein the method comprises outputting the color image to a display together with the depth coordinates that are associated with the pixels." Appellants argue Polzin teaches displaying depth data using "pseudocolor," not "actual color captured by an image sensor." App. Br. 19 (citing Polzin ¶ 90) (emphases omitted).

We are not persuaded the Examiner erred. At the outset, we note Appellants' argument is not commensurate with the scope of claim 5 and, thus, does not persuade us of error in the Examiner's rejection. See In re Self, 671 F.2d 1344, 1348 (CCPA 1982) (limitations not appearing in the claims cannot be relied upon for patentability). Claim 5 does not require displaying actual color captured by an image sensor. Contrary to Appellants' argument, claim 5 recites "outputting the color image to a display." The Examiner finds Polzin's displayed depth image comprises a plurality of pixels, wherein each pixel has a depth value. Ans. 53 (citing Polzin ¶ 90). The Examiner further finds "Polzin discloses a color image that comprises pixels with a depth map comprising depth values displayed with the depth coordinates being associated with the pixels." Id. We agree with the Examiner's findings and adopt them as our own. Therefore, we sustain the Examiner's rejection of claim 5 and, for the same reasons, claim 22.
Claims 8-10 and 25-27

Claim 8 depends from claim 1 and recites "wherein combining the pattern-based depth data with the stereoscopic depth data comprises defining multiple candidate depth coordinates for each of a plurality of pixels in the depth map, and selecting one of the candidate depth coordinates at each pixel for inclusion in the depth map." Appellants argue each of Hager's pixels has a single depth coordinate, not multiple candidate depth coordinates as required by claim 8. App. Br. 20 (citing Hager ¶¶ 8 and 22). The Examiner finds Hager's pixel matching process selects one of the candidate depth coordinates at each pixel by "us[ing] multiple pixels ('candidates') and matches them through the matching process in order to result in the selection of matched pixels ('selection of one at each pixel') to be used to determine range to points in the scene ('depth values')." Ans. 55 (citing Hager ¶ 22).

We agree with Appellants. Hager's matching process identifies matching pixels across two or more images of an object, captured from different positions while the object is illuminated with a pattern, to determine the depth of a pixel. See Hager ¶ 22. The Examiner has not provided sufficient evidence or technical reasoning explaining how Hager's pixel matching process teaches or suggests "multiple candidate depth coordinates for each of a plurality of pixels," as claimed. In particular, although Hager's pixel matching process involves, for each pixel, finding its matching pixel in another image from another position, the Examiner fails to adequately explain how Hager teaches or suggests each pixel possesses multiple candidate depth coordinates. Thus, on the record before us, we are persuaded the Examiner erred in the rejection of dependent claims 8 and 25, along with the rejections of claims 9, 10, 26, and 27, which depend therefrom.

Claims 12 and 29

Claim 12 depends from claim 11, which depends from claim 1.
Claim 11 adds that the step of combining includes calibrating the first and second images to correct for relative misalignment. Claim 12 adds that the calibration procedure comprises correcting for "a change in alignment between the pattern of optical radiation and the first image sensor." Appellants argue Zhu's alignment matrix, as relied upon by the Examiner, relates to an alignment between two different sets of depth coordinates, and not an "alignment between a pattern of optical radiation and an image sensor that captures an image of the pattern." App. Br. 21-22 (citing Zhu p. 3, col. 2) (emphasis omitted).

In response, the Examiner finds "Polzin, Hager, and Mack all disclose projecting a pattern of optical radiation (i.e., structured light pattern) onto an object and imaging it." Ans. 57-58. The Examiner further finds Zhu teaches calculating an alignment matrix to calibrate the alignment of a 3-D coordinate Xst from a stereo coordinate system (CS) and a 3-D coordinate Xds from a sensor CS. Id. at 57 (citing Zhu p. 3, § 4.1).

We find Appellants' contention unpersuasive of Examiner error. Appellants' argument that Zhu is deficient does not address the Examiner's finding that it would have been obvious to one of ordinary skill in the art at the time of the invention to modify the teachings of Zhu "with the use of a projected optical radiation pattern on the object as taught by Polzin, Mack, and Hager in order to have the alignment occur between the sensor and the pattern." Ans. 58. We agree with the Examiner's findings and adopt them as our own. Thus, we sustain the rejection of claim 12 and, for the same reasons, claim 29.
Claims 7, 13, 15-17, 24, 30, 32-34, and 36

Appellants contend the Examiner erred in finding the combination of Polzin, Zhu, and Ghanem teaches or suggests processing at least one image to generate depth data comprising multiple candidate depth coordinates and "respective measures of confidence associated with the candidate depth coordinates for each of a plurality of the pixels," as recited in claim 13. App. Br. 12-15; Reply Br. 3. In particular, Appellants argue Polzin's confidence values that indicate whether a region of pixels belongs to a body part "have nothing to do with candidate depth coordinates or any other feature of specific pixels." App. Br. 13 (citing Polzin ¶¶ 97-99 and Figure 37) (emphasis omitted); see also Reply Br. 3. Appellants further assert "selecting depth coordinates from among the pattern-based and stereoscopic depth data responsively to the respective measures of confidence," as recited in claim 7, "is similar to the limitations of claims 13, 30 and 36." App. Br. 19 (emphasis omitted). Appellants make a similar argument, as discussed supra, with respect to claim 13. Id. at 19-20.

We agree with Appellants. Although the Examiner finds "each pixel in the 2-D pixel area represents a depth value such as distance or length of a target or object in the scene" (Ans. 41 (citing Polzin ¶ 99)), the Examiner has not provided sufficient evidence or technical reasoning explaining how Polzin teaches or suggests multiple "candidate depth coordinates for each of a plurality of the pixels," as claimed. Therefore, based on the record before us, and for the reasons discussed supra, we do not sustain the Examiner's rejection of independent claim 13 or the rejection of independent claims 30 and 36, which contain similar limitations. For similar reasons, we do not sustain the Examiner's rejections of dependent claims 7, 15-17, 24, and 32-34.
Claims 37, 40, 42, and 45

Appellants contend the Examiner's proposed combination fails to teach or suggest "comparing the first and second images in order to estimate a misalignment" between first and second image capture subassemblies, and "processing the first and second images together while correcting for the misalignment so as to generate stereoscopic depth data," as recited in claim 37. App. Br. 15-18. Appellants argue Hager's calibration process lacks details as to how it is carried out. Id. at 16-17 (citing Hager ¶¶ 32 and 97). In particular, Appellants assert paragraphs 30-32 of Hager teach that "prior calibration is used in correcting for misalignment," but argue Hager does not teach "that the misalignment may be estimated and corrected using the same images as are used for depth mapping, as required by claims 37 and 42." Id. at 17 (citing Hager ¶¶ 30-32) (emphasis omitted); see also Reply Br. 4.

We are not persuaded the Examiner erred. As discussed supra, we agree with the Examiner that Polzin teaches capturing a first image using a 3-D camera and a second image using an RGB camera, and processing the first image and the second image to generate stereoscopic depth data. See Ans. 34 and 48 (citing Polzin ¶ 56). Further, the Examiner finds Hager, in combination with Polzin, teaches matching pixels between two images to create a disparity map. Ans. 48 (citing Hager ¶ 26). The Examiner further finds Hager teaches the two images undergo rectification to correct for pixel misalignment. Id. (citing Hager ¶¶ 30-32). The Examiner further finds, and we agree, that "it would have been obvious to use the same images for the misalignment (calibration) and the depth image." Thus, it is the combination of Polzin and Hager that teaches or suggests the disputed limitation.

Appellants further argue Zhu does not teach misalignment between stereoscopic sensors, and thus does not teach correcting a misalignment. App. Br. 16.
We are not persuaded the Examiner erred because, as discussed supra, the Examiner relies on the combination of Polzin and Hager, not Zhu, for teaching this feature.

In view of the above discussion, we are not persuaded the Examiner erred by finding the combination of Polzin, Hager, and Zhu teaches or suggests the limitations of claim 37. Appellants present similar arguments regarding independent claim 42 (App. Br. 15-18) and, for the same reasons as claim 37, we are not persuaded of Examiner error. Thus, we sustain the rejection of independent claim 37 and, for the same reasons, the rejection of independent claim 42, together with the rejection of dependent claims 40 and 45, not separately argued with particularity. App. Br. 24.

Claims 38 and 43

Appellants contend the Examiner erred in finding the combination of Polzin, Hager, and Zhu teaches or suggests "applying [collected] statistics in updating the estimate of the misalignment for use creating a second subsequent depth map," as recited in claim 38 and as similarly recited in claim 43. In particular, Appellants argue "[n]owhere in Zhu ... is there any mention or suggestion of updating the estimated misalignment, whether using pixel statistics or any other means." App. Br. 23 (citing Zhu, Figures 8, 9, and 10) (emphasis omitted).

The Examiner finds Zhu teaches calculating a distance correction value for Xds using an alignment matrix between Xst and Xds. Ans. 59. The Examiner further finds Zhu's distance correction value is added to a distance lookup table (LUT) that is used to refine sensor depth. Ans. 59-60. The Examiner further finds "it would have been obvious to update the estimated misalignment using the [distance] LUT correction involved with the alignment matrix as taught by Zhu." Ans. 60 (citing Figure 8). We agree with Appellants.
The Examiner has not provided sufficient evidence or technical reasoning explaining why Zhu teaches or suggests collecting statistics, and applying collected statistics in updating a misalignment estimate, as claimed. Although the distance LUT of Zhu represents "a 3D table in which each cell stores the [calculated distance] correction value Xd" (Zhu p. 3, § 4.1), the Examiner fails to adequately explain how Zhu teaches or suggests applying Zhu's distance LUT "in updating the estimate of the misalignment for use creating a second, subsequent depth map," as recited in claim 38. Therefore, we do not sustain the rejection of dependent claims 38 and 43.

Claims 39 and 44

Appellants argue "neither Zhu nor any of the other cited references makes any mention or suggestion at all of estimating differences in relative magnification between [the first and second images], as required by claims 39 and 44." App. Br. 23-24 (emphasis omitted). Appellants further argue the Examiner did not explain how Zhu's time-of-flight (TOF) sensor relates to an estimation of relative magnification. Id.

We are not persuaded by Appellants' arguments. In response, the Examiner finds Zhu's 3-D points from different coordinate systems "are compared based on their coordinates (i.e., x, y, z) which can be seen as the magnification of the point in the images." Ans. 60. The Examiner further finds the different magnifications are corrected by distance correction values from each of the points. Ans. 60-61 (citing Zhu § 4.1). We agree with, and adopt as our own, the Examiner's findings that Zhu teaches or suggests estimating differences in relative magnification, as recited in claim 39. As discussed supra, we agree with the Examiner that Polzin teaches capturing a first image using a 3-D camera and a second image using an RGB camera, and processing the first image and the second image to generate stereoscopic depth data. See Ans.
34 and 48 (citing Polzin ¶ 56). Thus, it is the combination of Polzin and Zhu that teaches or suggests estimating differences in relative magnification between the first and second images. Accordingly, we sustain the rejection of claim 39 and, for the same reasons, claim 44.

DECISION

We affirm the Examiner's decision to reject claims 1, 3-6, 11, 12, 18, 20-23, 28, 29, 35, 37, 39, 40, 42, 44, and 45.

We reverse the Examiner's decision to reject claims 7-10, 13, 15-17, 24-27, 30, 32-34, 36, 38, and 43.

No time period for taking any subsequent action in connection with this appeal may be extended under 37 C.F.R. § 1.136(a)(1)(iv).

AFFIRMED-IN-PART