UNITED STATES PATENT AND TRADEMARK OFFICE
UNITED STATES DEPARTMENT OF COMMERCE
United States Patent and Trademark Office
Address: COMMISSIONER FOR PATENTS, P.O. Box 1450, Alexandria, Virginia 22313-1450, www.uspto.gov

APPLICATION NO.: 16/352,916
FILING DATE: 03/14/2019
FIRST NAMED INVENTOR: PHILIP S. NEWTON
ATTORNEY DOCKET NO.: 2009P00133US 01
CONFIRMATION NO.: 1151

24737 7590 03/14/2022
PHILIPS INTELLECTUAL PROPERTY & STANDARDS
1600 Summer Street, 5th Floor
Stamford, CT 06905

EXAMINER: ZHOU, ZHIHAN
ART UNIT: 2482
NOTIFICATION DATE: 03/14/2022
DELIVERY MODE: ELECTRONIC

Please find below and/or attached an Office communication concerning this application or proceeding. The time period for reply, if any, is set in the attached communication. Notice of the Office communication was sent electronically on the above-indicated "Notification Date" to the following e-mail address(es): marianne.fox@philips.com, patti.demichele@Philips.com

PTOL-90A (Rev. 04/07)

UNITED STATES PATENT AND TRADEMARK OFFICE
____________
BEFORE THE PATENT TRIAL AND APPEAL BOARD
____________
Ex parte PHILIP S. NEWTON and GERARDUS W.T. VAN DER HEIJDEN
____________
Appeal 2021-001308
Application 16/352,916
Technology Center 2400
____________
Before KARL D. EASTHOM, NORMAN H. BEAMER, and SCOTT B. HOWARD, Administrative Patent Judges.

EASTHOM, Administrative Patent Judge.

DECISION ON APPEAL

I. STATEMENT OF THE CASE

Appellant[1] appeals under 35 U.S.C. § 134(a) from the Examiner's decision finally rejecting claims 1, 2, and 4-21, which constitute all of the claims pending in this application. We have jurisdiction under 35 U.S.C. § 6(b). We AFFIRM.

[1] "Appellant" refers to "applicant" as defined in 37 C.F.R. § 1.42. Appellant identifies the real party in interest as Koninklijke Philips Electronics N.V. Appeal Br. 2.

II. DISCLOSED AND CLAIMED SUBJECT MATTER

The disclosed invention relates "to the field of transferring, via a high-speed digital interface, e.g. HDMI, three-dimensional image data, e.g. 3D video, for display on a 3D display device." Spec. 1. In conventional methods for displaying 3D image data, "two 3D frame types (left/right) are detected based on the existing horizontal line synchronization pulses in the traditional analog video signal." Id. at 2. "However, in particular for interlaced video signals, there are no options for manipulating the horizontal line synchronization pulses," resulting in "no absolute synchronization of the left and right video frames with the shutter glasses." Id. Consequently, conventional methods cause "the left and right images [to be] swapped so that the left eye sees the images intended for the right eye and vice versa," and "users of a stereoscopic 3D system . . . do not properly recognize when the left and right images are swapped." Id. at 2-3. The disclosed system aims to "provide a more flexible and reliable system for transferring of 3D video signals to a display device." Id. at 3.

Independent claim 1 follows:
1. A three dimensional (3D) source device for transferring of 3D image data to a 3D display device, the 3D source device comprising:

a processor circuit that: creates source image data so as to generate a 3D display signal, the 3D display signal comprising a plurality of frames constituting the 3D image data according to a 3D video transfer format,

an output interface, wherein the output interface outputs the 3D display signal, wherein each frame has a data structure for representing a sequence of digital image pixel data, wherein the data structure of at least one frame in the 3D display signal differs from at least one other frame in the 3D display signal, wherein the data structure of each frame is dependent upon a frame type of the frame,

a transmit synchronization circuit, wherein the transmit synchronization circuit provides a frame type synchronization indicator for each frame of the plurality of frames in the 3D display signal, wherein the frame type synchronization indicator of each frame comprises a frame type indicator corresponding to the frame type of the frame, and wherein the frame type synchronization indicator provides for synchronizing in time the data structure of each frame based on the frame type of the frame.

III. REFERENCES

The prior art relied upon by the Examiner as evidence in rejecting the claims on appeal follows:

Name                            Reference                                  Date
Takayama et al. ("Takayama")    US 2006/0238613 A1                         Oct. 26, 2006
Lowry et al. ("Lowry")          US 2007/0247477 A1                         Oct. 25, 2007
Suzuki                          US 2007/0296859 A1                         Dec. 27, 2007
Kim et al. ("Kim")              WO 2008/150111 A1                          Dec. 11, 2008
Okubo et al. ("Okubo")          US 2009/0324202 A1                         Dec. 31, 2009
Izzat et al. ("Izzat")          US 9,769,462 B2                            Sept. 19, 2017
Newton et al. ("Newton")        "Opportunities for 3D on Blu-ray Disc"     2008

IV. REJECTIONS

Claims 1, 2, 14, and 15 stand rejected under 35 U.S.C. § 103 as unpatentable over Takayama, Newton, and Suzuki. Final Act. 2.

Claims 10, 11, 16, and 17 stand rejected under 35 U.S.C. § 103 as unpatentable over Takayama, Newton, Suzuki, and Kim. Final Act. 8.

Claims 4, 5, 8, 9, 13, and 19-21 stand rejected under 35 U.S.C. § 103 as unpatentable over Takayama, Newton, Suzuki, and Izzat. Final Act. 9.

Claims 12 and 18 stand rejected under 35 U.S.C. § 103 as unpatentable over Takayama, Newton, Suzuki, and Lowry. Final Act. 11.

Claims 6 and 7 stand rejected under 35 U.S.C. § 103 as unpatentable over Takayama, Newton, Suzuki, Izzat, and Okubo. Final Act. 12.

V. OPINION

Claims 1, 2, 11, 12, 14, 15, 17, and 18

Claim 1 recites "wherein the data structure of at least one frame in the 3D display signal differs from at least one other frame in the 3D display signal" and "wherein the data structure of each frame is dependent upon a frame type of the frame."

The Examiner finds that Newton's stereo and image-plus-depth formats teach that at least one frame in the 3D display signal differs from another in the 3D display signal and that the data structure of each frame is dependent upon a frame type of the frame. Final Act. 3.
Specifically, the Examiner finds that Newton's "3D video transfer format comprising at least two different 3D frame types, wherein each of the two different 3D frame types comprises a corresponding different one of three 3D data structures for representing a sequence of digital image pixel data" including "a stereoscope data comprising left and right image data; a 2D image and depth map; and 3D data comprising occlusion or transparency information," in combination with "Takayama's signal processing and image display apparatus," teaches the disputed limitation. Ans. 16-17; see also id. at 24-25.

At cited page 2, Newton describes a combination approach "whereby the disc contains stereo as well as image plus depth." Newton 2, § V (emphasis added). In other words, Newton teaches using stereo display formats.

Appellant argues that Newton teaches "the existence of multiple 3D display formats," but not "a 3D display signal comprising multiple 3D formats (i.e. different frame types/data structures within the display signal)," nor "how such differing formats in the same 3D display signal could be accommodated in 3D source devices or display devices." Appeal Br. 8; see also id. at 11-12. Specifically, Appellant argues that "nowhere in Newton's three suggested 'approaches' does Newton teach or suggest combining the stereo and the image-plus-depth formats (data structure) in a single data stream," and that Newton instead teaches "recording the material in both a stereo format and an image-plus-depth format on the disc, then outputting either a stereo display signal or an image-plus-depth display signal." Reply Br. 3.

This argument does not undermine the Examiner's showing. As acknowledged by Appellant, it is "well known in the art" that "the specific data structure within each frame of a 3D display signal may differ, particularly when the 3D display signal comprises a variety of 3D formats." Appeal Br. 7 (citing Spec. 5). At cited page 5, the Specification discloses that "the different frame types in the 3D video transfer format comprise at least one of a left frame type [and] a right frame type." Spec. 5. The Specification further discloses that the 3D video formats "may range from a basic combination of left and right frame types for stereoscopic images, to complex 3D formats having a combination of left [and] right . . . frames." Id. (emphases added). The Specification also admits that "US 4,979,033 provides an example of a display device where two 3D frame types (left/right) are detected based on the existing horizontal line synchronization pulses in the traditional analog video signal." Id. at 2 (emphasis added).

The claimed "data structure" that "is dependent upon a frame type of the frame" of "at least one frame in the 3D display signal" that "differs from at least one other frame in the 3D display signal," as identified by Appellant and supported by the Specification, does not preclude stereo images, which include different frame types (i.e., left and right frames). Appellant does not persuasively explain how the claimed differing data structures, which depend on frame types and encompass stereo images with left and right frames, are not taught or otherwise suggested by Newton's stereo display formats.
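For readers less familiar with the technology, the following sketch is offered purely as an illustration of the kind of arrangement the disputed limitations describe: a display signal whose frames carry different data structures depending on each frame's type (for example, stereo left/right frames versus depth frames), with each frame accompanied by an indicator of its frame type. It is not the claimed invention, not any cited reference's disclosure, and every name and field in it is hypothetical.

```python
# Illustrative sketch only; hypothetical names, not drawn from the record.
from dataclasses import dataclass
from enum import Enum
from typing import Union

class FrameType(Enum):
    LEFT = "left"    # left-eye image of a stereo pair
    RIGHT = "right"  # right-eye image of a stereo pair
    DEPTH = "depth"  # depth map of an image-plus-depth format

@dataclass
class StereoFrameData:
    """Data structure used by LEFT and RIGHT frames (pixel data only)."""
    pixels: bytes

@dataclass
class DepthFrameData:
    """Data structure used by DEPTH frames (depth values plus occlusion data)."""
    depth_values: bytes
    occlusion: bytes

@dataclass
class Frame:
    # Per-frame indicator of the frame's type; a receiving device could use it
    # to interpret the frame's data structure and keep the sequence in sync.
    frame_type: FrameType
    payload: Union[StereoFrameData, DepthFrameData]

# A display signal in which the data structure of at least one frame differs
# from that of at least one other frame, depending on each frame's type.
signal = [
    Frame(FrameType.LEFT,  StereoFrameData(pixels=b"...")),
    Frame(FrameType.RIGHT, StereoFrameData(pixels=b"...")),
    Frame(FrameType.DEPTH, DepthFrameData(depth_values=b"...", occlusion=b"...")),
]
```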
Claim 1 further recites "wherein the frame type synchronization indicator of each frame comprises a frame type indicator corresponding to the frame type of the frame."

The Examiner finds that Takayama teaches the claimed frame type synchronization indicator corresponding to the frame type. Final Act. 4 (citing Takayama ¶¶ 444-447, 471-473, Figs. 18, 20, 21). As discussed above, the Examiner relies on Newton's support for various 3D frame types, involving 3D data structures such as stereo images, to teach different frame types. Final Act. 4.

At cited paragraphs 444-447, Takayama discloses that "a plurality of frame-by-frame storage areas 330 are assigned to the RAM 314, such that the frame-by-frame storage areas 330(L) for the left eye and the frame-by-frame storage areas 330(R) for the right eye are alternately arranged." Takayama ¶ 444. Takayama also discloses that the "separating circuit 404 first separates a composite signal received from the receiving circuit 402 into a composite signal for the left eye and a composite signal for the right eye." Id. ¶ 447.

Appellant argues that Takayama's signal S cannot be "considered to correspond to a frame type synchronization signal as defined" in the claims. Reply Br. 8. Specifically, Appellant argues that "Takayama's signal S that identifies whether the frame is a left eye image or a right eye image does not indicate the data structure of the frame." Id. According to Appellant, in Takayama, "the data structure of each frame" of a "left eye" frame and a "right eye" frame "is the same, regardless of the identifier of left and right frames." Id. at 6-7.

This argument does not undermine the Examiner's showing. Appellant's arguments are not aligned with the claim language or supported by the Specification. Specifically, Appellant's Specification discloses that "the different frame types in the 3D video transfer format comprise at least one of a left frame type [and] a right frame type." Spec. 5. The claimed "frame type indicator corresponding to the frame type of the frame," as supported by the Specification, includes indicators corresponding to left frame types and right frame types. Appellant acknowledges that "Takayama discloses a signal S in each frame that distinguishes between left and right eye images." Reply Br. 8. Appellant does not persuasively argue how the claimed indicator "corresponding to the frame type of the frame," including indicators corresponding to left frame types and right frame types, is not taught or suggested by Takayama's signal, which identifies a left eye signal stored in the storage area for the left eye or a right eye signal stored in the storage area for the right eye.

Furthermore, as discussed above, the claimed "data structure" that "is dependent upon a frame type of the frame" of "at least one frame in the 3D display signal" that "differs from at least one other frame in the 3D display signal," as identified by Appellant and supported by the Specification, does not preclude stereo images, which include different frame types (i.e., left and right frames). Appellant's arguments that Takayama and Suzuki do not teach the "different data structures" required in the claims do not undermine the Examiner's showings. See Appeal Br. 13-15. The Examiner relies on Newton, not Takayama or Suzuki, to teach the differing data structures that are dependent on frame types. See Final Act. 3-4; see also Ans. 15-19, 23-27, 29-30.
One cannot show non-obviousness by attacking references individually, where the rejections are based on combinations of references. In re Merck & Co., Inc., 800 F.2d 1091, 1097 (Fed. Cir. 1986); In re Keller, 642 F.2d 413, 425 (CCPA 1981).

Appellant also argues that the "combination of Takayama and Newton would not result in a display signal that included a mix of stereo data structures and image-plus-depth data structures, absent the (impermissible) hindsight provided by the applicant's disclosure." Reply Br. 4. Appellant also argues that the advantages purported "by the proposed combination of Takayama, Newton, and Suzuki . . . do not provide any suggestion or motivation to one of skill in the art to add an identifier of the data structure of each frame in a single-data-structure display signal." Id. at 7.

This argument does not undermine the Examiner's showing. Here, the Examiner cites to both Takayama and Suzuki for processing and displaying 3D image signals, and to Newton for supporting multiple different frame types. Ans. 17, 18-19. The Examiner cites to Suzuki's readily using an existing video data transmission standard to perform encoding and decoding for left eye video data and right eye video data, and Newton's support of multiple different frame types in 3D video (such as stereo left and right images), as motivation to combine the references. Id. at 19 (citing Suzuki ¶¶ 15, 20, Newton 1). Appellant's argument does not persuasively explain why it is unclear how combining Takayama and Suzuki with Newton would result in an improved system for 3D image display.

In summary, Appellant does not persuasively explain how the Examiner erred in finding that the combination of Takayama, Newton, and Suzuki teaches or suggests "wherein the data structure of at least one frame in the 3D display signal differs from at least one other frame in the 3D display signal, wherein the data structure of each frame is dependent upon a frame type of the frame," and "wherein the frame type synchronization indicator of each frame comprises a frame type indicator corresponding to the frame type of the frame" as recited in claim 1.

Based on the foregoing discussion, Appellant does not show error in the Examiner's findings and determination of the obviousness of claim 1. Appellant does not challenge the Examiner's findings and determination of the obviousness of independent claim 14, or dependent claims 2, 11, 12, 15, 17, and 18 separately from claim 1. See Appeal Br. 15. Accordingly, claims 2, 11, 12, 14, 15, 17, and 18 fall with claim 1.

Claims 10 and 16

Claim 10 recites "wherein the frame type synchronization indicator comprises a frame sequence indicator indicating frequency of the frame type."

The Examiner finds Kim's disclosure of the "frame rates of the two left and right images" teaches "indicat[ing] a frequency of the frame type." Ans. 31 (citing Kim ¶¶ 20-21, 121-122, 128-129). As cited in paragraph 20, Kim discloses that "the sample description container may further include information on frame rates of the left and right images which constitute the frame to be encoded and/or disparity information." Kim ¶ 20. Kim also discloses that "the information on the frame rate may include information on whether a frame rate of the left image is the same as that of the right image and information for matching the frame rates of the left and right images with each other when displaying the stereoscopic image." Id. ¶ 21.
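Again purely as an illustration, and not as a description of Kim's format or of the claimed invention (all names below are hypothetical), a per-frame indicator that also conveys the frequency of its frame type, of the kind claim 10 recites, might be represented as follows:

```python
# Illustrative sketch only; hypothetical names, not drawn from the record.
from dataclasses import dataclass

@dataclass
class FrameTypeSyncIndicator:
    frame_type: str        # e.g., "left" or "right" frame of a stereo pair
    frame_rate_hz: float   # frequency at which frames of this type occur

# One indicator could accompany each frame in the display signal, so a sink
# device can match the left and right frame rates when displaying the
# stereoscopic image.
indicators = [
    FrameTypeSyncIndicator(frame_type="left", frame_rate_hz=30.0),
    FrameTypeSyncIndicator(frame_type="right", frame_rate_hz=30.0),
]
```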
Appellant argues that Kim's teaching of "the conventional technique of identifying the characteristics of the display signal, including the frame rate(s) used throughout the duration of the display signal, in a header-file, or meta-data, associated with the recording of the display signal" does not teach the claimed "inclusion of [] data in an indicator that is provided for each frame in the display signal in order to support the display of frames having different data structures and frequencies, based on the particular type of frame." Appeal Br. 16.

This argument does not undermine the Examiner's showing. As discussed above, the claimed frame type synchronization indicators of each frame corresponding to the frame types of the frame are taught by the combination of Takayama, Newton, and Suzuki, as supported by the Specification. Specifically, the claimed "frame type indicator corresponding to the frame type of the frame," as supported by the Specification, includes indicators corresponding to left frame types and right frame types from stereo images. See Spec. 5. Appellant does not explain how the claimed indicator "indicating frequency of the frame type," which includes left frame types and right frame types for stereo images, is not taught by Kim's frame rate information for the left images and right images. Appellant also does not address the Examiner's finding that "Kim discloses one example of a frame type synchronization indicator comprising a frame sequence indicator indicating frequency of the frame type," regardless of whether the claims require the inclusion of "metadata provided for each frame in the display signal in order to support the display of frames having different data structures and frequencies." Ans. 31.

Based on the foregoing discussion, Appellant does not show error in the Examiner's findings and determination of the obviousness of claim 10. Appellant does not challenge the Examiner's findings and determination of the obviousness of dependent claim 16 separately from claim 10. See Appeal Br. 16. Accordingly, claim 16 falls with claim 10.

Claims 4-9, 13, and 19-21

Claim 4 recites "wherein the 3D video transfer format comprises a main video and at least one auxiliary video layer transferred via respective frame types" and "wherein the at least two different frame types comprise a main frame type indicator and an auxiliary layer frame type indicator."

The Examiner finds that the claimed main video is taught by Izzat's 3D video content left eye image and right eye image, and that the claimed auxiliary video layer is taught by Izzat's 3D subtitles/subtitle regions. Ans. 32 (citing Izzat ¶¶ 29, 31-36, 38-44, Figs. 3, 7). The cited sections of Izzat "cover[s] stereoscopic content," in which "the text in either the left or the right eye image is shifted to match the largest depth value in the stereoscopic image." Izzat col. 6, ll. 40-48.

Appellant argues that Izzat does not teach "using different frames and frame types for providing the 'main' video and the 'auxiliary' video (such as text or graphics)." Appeal Br. 17-18.

This argument does not undermine the Examiner's showing. As discussed above, the claimed frame types are taught by the combination of Takayama, Newton, and Suzuki, as supported by the Specification. Specifically, the claimed frame types, as supported by the Specification, include left frame types and right frame types from stereo images. See Spec. 5.
Appellant does not explain how the claimed main video with frame types is not taught by Izzat's stereoscopic content including left eye and right eye images. Furthermore, Appellant acknowledges that the claimed auxiliary video includes "text or graphics." Appeal Br. 17-18. Appellant does not explain how the claimed auxiliary video layer with frame types, including text, is not taught by Izzat's text in the left and right eye images.

Based on the foregoing discussion, Appellant does not show error in the Examiner's findings and determination of the obviousness of claim 4. Appellant does not challenge the Examiner's findings and determination of the obviousness of dependent claims 5-9, 13, and 19-21 separately from claim 4. See Appeal Br. 18-19. Accordingly, claims 5-9, 13, and 19-21 fall with claim 4.

VI. CONCLUSION

We affirm the Examiner's decision rejecting claims 1, 2, and 4-21 under § 103.

In summary:

Claim(s) Rejected        35 U.S.C. §   Reference(s)/Basis                       Affirmed                Reversed
1, 2, 14, 15             103           Takayama, Newton, Suzuki                 1, 2, 14, 15
10, 11, 16, 17           103           Takayama, Newton, Suzuki, Kim            10, 11, 16, 17
4, 5, 8, 9, 13, 19-21    103           Takayama, Newton, Suzuki, Izzat          4, 5, 8, 9, 13, 19-21
12, 18                   103           Takayama, Newton, Suzuki, Lowry          12, 18
6, 7                     103           Takayama, Newton, Suzuki, Izzat, Okubo   6, 7
Overall Outcome                                                                 1, 2, 4-21

No time period for taking any subsequent action in connection with this appeal may be extended under 37 C.F.R. § 1.136(a). See 37 C.F.R. § 1.136(a)(1)(iv).

AFFIRMED