Qiang Li et al., Appeal 2018-008709, Application No. 14/128,996 (P.T.A.B. Dec. 2, 2019)

UNITED STATES PATENT AND TRADEMARK OFFICE
UNITED STATES DEPARTMENT OF COMMERCE
Address: COMMISSIONER FOR PATENTS, P.O. Box 1450, Alexandria, Virginia 22313-1450, www.uspto.gov

Application No.: 14/128,996
Filing Date: 12/23/2013
First Named Inventor: Qiang Li
Attorney Docket No.: P47336US/45631-227951
Confirmation No.: 1083
Correspondent: Barnes & Thornburg LLP (Intel), 11 S. Meridian Street, Indianapolis, IN 46204
Examiner: KARINA J. GARCIA-CHING
Art Unit: 2449
Notification Date: 12/02/2019
Delivery Mode: Electronic, to INdocket@btlaw.com, Inteldocs_docketing@cpaglobal.com, and inteldocket@btlaw.com

____________________

BEFORE THE PATENT TRIAL AND APPEAL BOARD
____________________

Ex parte QIANG LI, YANGZHOU DU, WENLONG LI, XIAOFENG TONG, WEI HU, LIN XU, and YIMIN ZHANG
____________________

Appeal 2018-008709
Application 14/128,996¹
Technology Center 2400
____________________

Before ALLEN R. MACDONALD, MICHAEL J. ENGLE, and IFTIKHAR AHMED, Administrative Patent Judges.

AHMED, Administrative Patent Judge.

DECISION ON APPEAL

Appellant appeals under 35 U.S.C. § 134(a) from a final rejection of claims 31–41, 43–52, and 54–56, which are the only claims pending in the application. We have jurisdiction under 35 U.S.C. § 6(b).

We AFFIRM-IN-PART.

SUMMARY OF THE INVENTION

The application relates to "video and audio sharing, more specifically, to video and audio sharing using [an] Avatar." Spec. ¶ 1.
¹ We use the word Appellant to refer to "applicant" as defined in 37 C.F.R. § 1.42(a). According to Appellant, the real party in interest is Intel Corp. App. Br. 2.

Illustrative Claims

Claims 31 and 38 are illustrative and are reproduced below with certain limitations at issue under 35 U.S.C. § 103 underlined and certain limitations at issue under 35 U.S.C. § 101 italicized and bolded:

31. A communication device, comprising:
a processor;
a memory;
an audio encoding module to encode a piece of audio into an audio bit stream;
an avatar data extraction module to extract 3D avatar data from a piece of video of a user of the communication device and generate an avatar data bit stream, wherein to extract the 3D avatar data comprises to extract one or more parameters indicative of an out-of-plane rotation or a z-axis translation of the user; and
a synchronization module to generate synchronization information for synchronizing the audio bit stream with the avatar parameter stream.

38. A method, comprising:
encoding a piece of audio into an audio bit stream;
extracting 3D avatar data from a piece of video to generate an avatar data bit stream, wherein extracting the 3D avatar data from the piece of video comprises extracting one or more parameters from the piece of video indicative of an out-of-plane rotation or a z-axis translation of the user;
generating synchronization information for synchronizing the audio bit stream with the avatar parameter stream;
packing the audio bit stream, avatar data bit stream and the synchronization information into a packet; and
transmitting the packet to a server.

Rejections

Claims 31–41, 43–52, and 54–56 stand rejected under 35 U.S.C. § 101 as being directed to ineligible subject matter without significantly more. Final Act. 7. We select claim 38 as representative for this rejection.
Claims 31, 32, 34, 44, 47, 48, 50, 54, and 56 stand rejected under 35 U.S.C. § 103(a) as obvious over the combination of Dimtrva (US 2006/0290699 A1; Dec. 28, 2006), Lee (US 2010/0141611 A1; June 10, 2010), and Tsai (US 2006/0092772 A1; May 4, 2006). Final Act. 9.

Claims 33, 45, and 51 stand rejected under 35 U.S.C. § 103(a) as obvious over the combination of Dimtrva, Lee, Tsai, and Nakagawa (US 6,643,330 B1; Nov. 4, 2003). Final Act. 21.

Claims 35, 36, 38, 39, 41, 46, and 52 stand rejected under 35 U.S.C. § 103(a) as obvious over the combination of Dimtrva, Lee, Tsai, and Treadwell (US 2004/0015610 A1; Jan. 22, 2004). Final Act. 23.

Claims 37, 43, 49, and 55 stand rejected under 35 U.S.C. § 103(a) as obvious over the combination of Dimtrva, Lee, Tsai, Treadwell, and Bishop (US 2007/0188502 A1; Aug. 16, 2007). Final Act. 32.

Claim 40 stands rejected under 35 U.S.C. § 103(a) as obvious over the combination of Dimtrva, Lee, Tsai, Treadwell, and Nakagawa. Final Act. 35.

ISSUES

1. Did the Examiner err in concluding that claim 38 is directed to ineligible subject matter without significantly more under § 101?

2. Did the Examiner err in finding claims 31, 44, 50, and 56 obvious in view of Dimtrva, Lee, and Tsai?

ANALYSIS

§ 101 Rejection of Claims 31–41, 43–52, and 54–56

Section 101 defines patentable subject matter: "Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title." 35 U.S.C. § 101. The Supreme Court, however, has "long held that this provision contains an important implicit exception" that "[l]aws of nature, natural phenomena, and abstract ideas are not patentable." Mayo Collaborative Servs. v. Prometheus Labs., Inc., 566 U.S. 66, 70 (2012) (quotation omitted). "Eligibility under 35 U.S.C.
§ 101 is a question of law, based on underlying facts." SAP Am., Inc. v. InvestPic, LLC, 898 F.3d 1161, 1166 (Fed. Cir. 2018).

To determine patentable subject matter, the Supreme Court has set forth a two-part test. "First, we determine whether the claims at issue are directed to one of those patent-ineligible concepts" of "laws of nature, natural phenomena, and abstract ideas." Alice Corp. v. CLS Bank Int'l, 573 U.S. 208, 217 (2014). "The inquiry often is whether the claims are directed to 'a specific means or method' for improving technology or whether they are simply directed to an abstract end-result." RecogniCorp, LLC v. Nintendo Co., 855 F.3d 1322, 1326 (Fed. Cir. 2017). A court must be cognizant that "all inventions at some level embody, use, reflect, rest upon, or apply laws of nature, natural phenomena, or abstract ideas" (Mayo, 566 U.S. at 71), and "describing the claims at . . . a high level of abstraction and untethered from the language of the claims all but ensures that the exceptions to § 101 swallow the rule." Enfish, LLC v. Microsoft Corp., 822 F.3d 1327, 1337 (Fed. Cir. 2016). Instead, "the claims are considered in their entirety to ascertain whether their character as a whole is directed to excluded subject matter." Internet Patents Corp. v. Active Network, Inc., 790 F.3d 1343, 1346 (Fed. Cir. 2015).

If the claims are directed to an abstract idea or other ineligible concept, then we continue to the second step and "consider the elements of each claim both individually and 'as an ordered combination' to determine whether the additional elements 'transform the nature of the claim' into a patent-eligible application." Alice, 573 U.S. at 217 (quoting Mayo, 566 U.S. at 79, 78).
The Supreme Court has "described step two of this analysis as a search for an 'inventive concept'—i.e., an element or combination of elements that is sufficient to ensure that the patent in practice amounts to significantly more than a patent upon the ineligible concept itself." Id. at 217–18 (quotation omitted).

The U.S. Patent & Trademark Office has published revised guidance on the application of § 101. USPTO, 2019 Revised Patent Subject Matter Eligibility Guidance, 84 Fed. Reg. 50 (Jan. 7, 2019) ("Guidance"). Under that guidance, we look to whether the claim recites:

(1) a judicial exception, such as a law of nature or any of the following groupings of abstract ideas: (a) mathematical concepts, such as mathematical formulas; (b) certain methods of organizing human activity, such as a fundamental economic practice; or (c) mental processes, such as an observation or evaluation performed in the human mind;

(2) any additional limitations that integrate the judicial exception into a practical application (see MPEP § 2106.05(a)–(c), (e)–(h)); and

(3) any additional limitations beyond the judicial exception that, alone or in combination, were not "well-understood, routine, conventional" in the field (see MPEP § 2106.05(d)).

See Guidance 52, 55, 56. Under the Guidance, if the claim does not recite a judicial exception, then it is eligible under § 101 and no further analysis is necessary. Id. at 54. Similarly, under the Guidance, "if the claim as a whole integrates the recited judicial exception into a practical application of that exception," then no further analysis is necessary. Id. at 53, 54.

A) Claim 38

The Examiner concludes that claims 31–41, 43–52, and 54–56 are directed to patent-ineligible subject matter. Final Act. 7–9. We select independent claim 38 as representative for this rejection.
The Examiner determines that the series of steps recited in claim 38 is directed to the "abstract idea of obtaining/extracting avatar data from a piece of video in order to generate an avatar bit stream, and synchronizing the audio bit stream with the avatar parameter stream." Id. at 8. According to the Examiner, the claim limitations are "merely instructions to implement the abstract idea on a computer and require no more than a generic computer to perform generic computer functions that are well understood, routine and conventional activities previously known to the industry." Id. at 36–37. The claimed concept, the Examiner determines, "is not 'necessarily rooted in computer technology in order to overcome a problem specifically arising in the realm of computer networks.'" Id. at 37 (citing DDR Holdings, LLC v. Hotels.com, L.P., 773 F.3d 1245, 1257 (Fed. Cir. 2014)). Instead, the Examiner concludes, the claimed "concept is similar to the concepts collecting information, analyzing it, and displaying certain results of the collection and analysis as in Electric Power Group [v. Alstom S.A., 830 F.3d 1350 (Fed. Cir. 2016)], which have all been found by the courts to be abstract." Id.

The Examiner also determines that "[t]he claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because they are merely an abstract idea with additional generic computer elements," and that, at best, any improvement claimed is related to the abstract idea, not to any device or another existing technology. Id. at 37–39; see also id. at 7–9.

Appellant argues that the rejected claims, including claim 38, "as a whole are directed to low-bandwidth video communication by employing non-conventional techniques," not an abstract idea. App. Br. 9.
According to Appellant, the claimed avatar extraction method "extracts 3D avatar data from video of a user, which can then be sent to a remote communication device, facilitating, e.g., video instant messaging or real-time video chat."² Id. Appellant points out that claim 38 specifically requires "transmitting a packet including the audio bit stream, avatar bit stream, and synchronization information to a server," which, according to Appellant, "demonstrate[s] that the audio data, avatar data, and synchronization information is sent [to] a remote compute device for use in facilitating low-bandwidth video communication, which amounts to a technical improvement in a technical field." Id. at 13–14. The advantage provided by the invention, Appellant argues, is "lower[ing] bandwidth consumption while keeping reality of facial expression and/or motion of an object presenting in the video," and "sav[ing] the bandwidth resource with much less quality sacrifice, at least partly because of the offline Avatar data extraction, the Avatar animation, and the synchronization of the Avatar data bit streams and the audio bit streams." Id. (quoting Spec. ¶¶ 26, 28).

Attempting to distinguish the Federal Circuit's decision in Electric Power Group, Appellant argues that claim 38 recites the specific technical method that performs the claimed function, "such as extracting 3D avatar data indicative of an out-of-plane rotation or a z-axis translation," and "is directed to an improvement in the functionality of the computer itself as used in video communication." Id. at 10.

² Appellant's arguments on the Examiner's § 101 rejection of claim 38 incorporate by reference Appellant's arguments for patent eligibility of claim 31. App. Br. 13. We therefore address Appellant's arguments relating to both those claims in the context of claim 38.
"Even if, arguendo, the claim were directed to an abstract idea," Appellant contends, "specific additional features in the claim are not well-understood, routine, and conventional," and "extracting parameters and generating synchronization information as recited in claim [38] are additional features not directed to any abstract idea." Reply Br. 5–6 (citing Berkheimer v. HP Inc., 881 F.3d 1360 (Fed. Cir. 2018)).

We are persuaded by Appellant's arguments that the Examiner has not satisfied the proper burden for making a prima facie case for patent ineligibility under 35 U.S.C. § 101.

B) USPTO Step 2A, Prong 1

Claim 38 recites, in part, (A) "encoding a piece of audio into an audio bit stream"; (B) "extracting 3D avatar data from a piece of video [and] generat[ing] an avatar data bit stream"; and (C) "generating synchronization information for synchronizing the audio bit stream with the avatar parameter stream." App. Br. 21. Additionally, claim 38 recites the steps of (D) "packing the audio bit stream, avatar data bit stream and the synchronization information into a packet" and (E) "transmitting the packet to a server." Id.

For purposes of this decision, we agree with the Examiner that claim 38 recites a mental process because step (C) listed above is recited at a high level of generality such that it could practically be performed in the human mind or by a human with a pen and paper. See Elec. Power Grp., 830 F.3d at 1354 ("we have treated analyzing information by steps people go through in their minds, or by mathematical algorithms, without more, as essentially mental processes within the abstract-idea category"). Step (C) recites synchronizing the two bit streams obtained by performing steps (A) and (B), i.e., encoding audio data and extracting avatar data from video data.
At a high level, step (C) simply recites a process of taking two data sets and synchronizing them, i.e., aligning them in a way to generate synchronization information, which is a process that can be performed within the human mind (e.g., synchronizing what the eyes see with what the ears hear) or with a pen and paper. See CyberSource Corp. v. Retail Decisions, Inc., 654 F.3d 1366, 1372 (Fed. Cir. 2011) (reasoning that when a person may implement claimed steps by simply writing down the claimed data elements, those steps can all be performed in the human mind); see also Univ. of Utah Research Found. v. Ambry Genetics Corp., 774 F.3d 755, 763 (Fed. Cir. 2014) (finding claims to comparing BRCA sequences, where such comparison can practically be performed in the human mind, to be directed to an abstract idea); Guidance 52 & n.14 (listing cases).

Therefore, on the record before us, we determine that the Examiner's articulated reasoning is sufficient on USPTO Step 2A, Prong 1, and that synchronizing two bit streams comprises a mental process, which is an abstract idea.

C) USPTO Step 2A, Prong 2

Although we agree with the Examiner that claim 38 recites the mental process of generating synchronization information for two data sets, the Examiner has not shown that the claim, as a whole, fails to "integrate[] the recited judicial exception into a practical application of the exception." Guidance 54 (emphasis added). Put another way, the Examiner has not sufficiently addressed whether the claims "apply, rely on, or use the judicial exception in a manner that imposes a meaningful limit on the judicial exception, such that the claim is more than a drafting effort designed to monopolize the judicial exception." Id. (emphasis added).
Further, the analysis under Prong 2 considers the claim as a whole, i.e., "the limitations containing the judicial exception as well as the additional elements in the claim besides the judicial exception need to be evaluated together to determine whether the claim integrates the judicial exception into a practical application." October 2019 Patent Eligibility Guidance Update, at 12, available at http://www.uspto.gov/PatentEligibility.

Here, Appellant argues that "the claims as a whole are directed to low-bandwidth video communication by employing non-conventional techniques," such as "extract[ing] 3D avatar data from video of a user," and "facilitating, e.g., video instant messaging or real-time video chat" using that avatar data. App. Br. 9. Appellant points to the Specification as elaborating on the advantages provided by the claimed invention, including "lower[ing] bandwidth consumption while keeping reality of facial expression and/or motion of an object presenting in the video," and "sav[ing] the bandwidth resource with much less quality sacrifice, at least partly because of the offline Avatar data extraction, the Avatar animation, and the synchronization of the Avatar data bit streams and the audio bit streams." Id. (quoting Spec. ¶¶ 26, 28). Therefore, Appellant concludes:

The claims read in light of at least those statements of the specification clearly establish that the claims are directed to a technical improvement in the technical field of video communication by lowering the bandwidth requirement without much sacrifice of video quality by extracting parameters as described in the claims in a similar manner as the claims in Enfish were directed to an improvement in storing information on a computer.

Id. at 9–10. We are persuaded that the Examiner has erred.
In the context of revised Step 2A, claim limitations "that reflect an improvement in the functioning of a computer, or an improvement to other technology or technical field" are indicative of a recited judicial exception being integrated into a practical application. Guidance 55 (citing DDR Holdings, LLC v. Hotels.com, L.P., 773 F.3d 1245, 1257 (Fed. Cir. 2014)); see also MPEP § 2106.05(a). A limitation that "applies or uses the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment" similarly integrates the exception into a practical application. Guidance 55 (citing Diamond v. Diehr, 450 U.S. 175, 184 (1981)); see also MPEP § 2106.05(e).

Here, claim 38 is specifically directed to extracting 3D avatar data from a piece of video to generate an avatar data bit stream, synchronizing it with an audio bit stream, packing that data into a packet, and transmitting the packet to a server. Other claims (e.g., claim 50) recite receiving the two bit streams and synchronization information, and utilizing that information to synchronize and render an animated avatar model. The Specification explains how the claimed use of synchronized avatar video and audio streams, combined offline with the corresponding synchronization information into a single network packet, results in an improvement, i.e., lowering bandwidth utilization while preserving facial expressions or other movements of the person or object in the video. See Spec. ¶ 26. The Specification explains the improvement in the context of instant messaging:

Compared with transmitting the video through the instant message, the above-stated scheme may be useful to save the bandwidth resource with much less quality sacrifice, at least partly because of the offline Avatar data extraction, the Avatar animation, and the synchronization of the Avatar data bit streams and the audio bit streams.
Id. ¶ 28 (emphasis added). As noted in the Background of the invention, at the time of the invention, "high bandwidth consumption . . . significantly hinder[ed] a widespread use of . . . off-line video and audio sharing," including, e.g., instant message communications over wireless networks. Id. ¶ 2. The claimed invention purports to provide a solution to that problem. Besides lowering bandwidth usage, the claimed invention also allows the sender of the message to use an avatar in place of an actual video "to keep secrecy if a message sender doesn't want to reveal his/her real image." Id. ¶ 28.

Moreover, claim 38 recites that the extraction of 3D avatar data "comprises to extract one or more parameters indicative of an out-of-plane rotation or a z-axis translation of the user" (App. Br. 20), thereby requiring extraction of specific avatar parameters that are synchronized with the audio bit stream, which itself is encoded from a separate piece of audio. Claim 38 further recites packing the avatar parameter stream along with the audio stream and synchronization information into a packet that is transmitted to a server. Id. The claim language, when read in view of the Specification, thus supports that claim 38 is not only limited to the technical field of video communications using avatars and encoded audio, but also improves the efficiency with which such communications can occur. Claim 38 improves the technical functioning of the computer by reciting a specific technique for improving video communications that replaces video with 3D avatar images while preserving, within the avatar images, certain characteristics of the actual video. See SRI Int'l, Inc. v. Cisco Sys., Inc., 930 F.3d 1295, 1303 (Fed. Cir.
2019) (concluding that a claim that recites using a plurality of network monitors to analyze specific network traffic data and integrate generated reports from the monitors to identify hackers and intruders on the network constitutes an improvement in computer network technology); see also BASCOM Glob. Internet Servs., Inc. v. AT&T Mobility LLC, 827 F.3d 1341, 1350 (Fed. Cir. 2016) (holding that even though the claim at issue recites the abstract idea of filtering, the claimed invention improves technology when the filtering limitations are considered in combination with the remaining limitations).

Because claim 38 as a whole integrates the recited abstract idea into a practical application of that idea under the Guidance, it is not "directed to" the recited abstract idea and thus qualifies as eligible subject matter under § 101. The Examiner thus erred in rejecting independent claim 38. Because the Examiner rejects claims 31–37, 39–41, 43–52, and 54–56 (Final Act. 7–9) for the same reasons as claim 38, we do not sustain the rejection of claims 31–41, 43–52, and 54–56 under § 101.³

§ 103 Rejection of Claims 31–37

Independent claim 31 recites "an avatar data extraction module to extract 3D avatar data from a piece of video of a user of the communication device . . . wherein to extract the 3D avatar data comprises to extract one or more parameters indicative of an out-of-plane rotation or a z-axis translation of the user."⁴ App. Br. 20 (emphasis added). The Examiner finds that the combination of Lee and Dimtrva teaches or suggests this limitation. The Examiner finds that "Dimtrva teaches extraction from a video, but is silent on extraction of particular parameters, and Lee teaches the extraction of those parameters, thus the combination teaches extraction of the particular parameters in Lee from a video as in Dimtrva." Ans. 11 (citing Dimtrva ¶¶ 29, 72).
Specifically, the Examiner determines that "Dimtrva does not explicitly mention wherein to extract the 3D avatar data comprises to extract one or more parameters indicative of an out-of-plane rotation or a z-axis translation of the user," which, the Examiner determines, Lee teaches. Final Act. 11 (citing Lee ¶ 66). Therefore, the Examiner concludes, "the modified Dimtrva teaches the argued limitation and as such meets the scope of the claimed subject matter." Id. at 41.

Appellant disputes the Examiner's findings, arguing that "Lee fails to disclose extracting 3D avatar data from a video as claimed in claim 31 because Lee is not directed to extracting data from a video." App. Br. 12. According to Appellant, "Lee is directed to techniques for 3D control of an object that is being displayed on a display device," and "discloses determining motion information of an avatar (such as 'movement along the Z axis') based on input provided by the user in input sensors, and not by extracting 3D avatar data from a piece of video, as required by the claim." Id. at 12–13 (citing Lee ¶¶ 60, 63, 66). Appellant contends "[a]ccepting an input from a user is simply not the same technique as extracting parameters from a video." Reply Br. 7.

³ We note, however, that the term "the avatar parameter stream" is used in claims 31 and 38 without any antecedent basis. App. Br. 20–21. For the purposes of this Appeal, we presume that the claim limitation refers to the "avatar data bit stream" recited in a prior limitation in both claims.

⁴ Appellant presents arguments only as to independent claim 31, asserting that those arguments "are equally applicable to dependent claims 32-37." App. Br. 5.
Appellant further argues that Dimtrva's teaching is limited to "extraction of 2D avatar data from a video," and therefore, combining Lee with Dimtrva simply yields "the result of extracting the 2D parameters from a video as taught by Dimtrva and accepting an input from a user indicative of a z-axis translation as taught by Lee," not "extracting a parameter indicative of a z-axis translation," as claimed. Id.

We are not persuaded of error. Appellant's argument fails to address the Examiner's rationale for combining the references to yield the data extraction limitation. By arguing that neither Lee nor Dimtrva alone teaches all aspects of the limitation at issue, Appellant does not address the rejection as articulated, in which the Examiner relies on the combined teachings of Dimtrva and Lee. See Final Act. 10–11; see also In re Keller, 642 F.2d 413, 425 (CCPA 1981) ("[T]he test [for obviousness] is what the combined teachings of the references would have suggested to those of ordinary skill in the art.").

Appellant agrees that Dimtrva teaches extraction of avatar data from a video. Reply Br. 7. In fact, Dimtrva discloses:

Content synthesis application processor 190 extracts audio features and visual features from the audiovisual input signals from source 130 and uses the audio features and visual features to create a computer generated animated version of the face of the speaker and synchronizes the animated version of the face of the speaker with the speaker's speech.

Dimtrva ¶ 29. Further, Dimtrva discloses a "three dimensional (3D) facial model module 540" that provides inputs to a "facial animation for selected parameters module 370," which "synthesizes the speaker's face (i.e., creates a computer generated animated version of the speaker's face) using facial animation parameters." Id. ¶ 72 (emphasis added), Fig. 5. Thus, contrary to Appellant's contentions (Reply Br.
7), Dimtrva's disclosure is not limited to "extraction of 2D avatar data from a video."⁵

⁵ Figure 5 of Dimtrva "illustrates how content synthesis application processor 190 uses speaking face movement components (SFMC) and other parameters to synthesize and synchronize a speaking face animation with a speaker's speech." See Dimtrva ¶ 69. Dimtrva further discloses that facial audio-visual feature matching and classification module 360, the module that classifies audio-visual features, receives various visual parameters from speaking face visual parameters module 510. Dimtrva ¶ 70.

The Examiner relies on Lee for its disclosure of "generating parameters including a z-axis translation and using those parameters for a display for the benefit of the displayed item." Ans. 11. Appellant agrees that "Lee is directed to techniques for 3D control of an object," and that Lee discloses "estimat[ing] the motion of [an] avatar by tracking at least one location of a location of the first contact and a location of the second contact," thereby "clearly disclos[ing] determining motion information of an avatar (such as 'movement along the Z axis')." Reply Br. 12. The Examiner merely relies on that 3D parameter generation teaching of Lee, i.e., tracking of "the movement along the Z axis," in the obviousness rejection of claim 31. Ans. 11. Appellant fails to explain why a person of ordinary skill in the art would find measuring movement along the Z-axis for an avatar, as disclosed in Lee, to be meaningfully different from measuring the same for an object presented in a video.

We therefore agree with the Examiner that one of ordinary skill in the art would have understood that modifying the avatar extraction teaching of Dimtrva to further determine z-axis translation parameters as taught in Lee would have resulted in "extract[ing] 3D avatar data from a piece of video of a user . . .
wherein to extract the 3D avatar data comprises to extract one or more parameters indicative of an out-of-plane rotation or a z-axis translation of the user," as recited in claim 31. We are therefore not persuaded of error in the Examiner's findings that the cited prior art combination teaches or suggests the avatar data extraction limitation of claim 31. Thus, the § 103 rejection of claim 31 and dependent claims 32–37 is sustained.

§ 103(a) Rejections of Claims 38–41 and 43

Appellant asserts that the § 103(a) rejection of claims 38–41 and 43 should be reversed "for at least the reasons presented above in regard to claim 31." App. Br. 14. The rejections of these claims therefore turn on our decision as to claim 31 and are sustained.

§ 103(a) Rejections of Claims 44–49

With respect to claims 44–49, which are directed to a communication device that receives an audio bit stream and an avatar data bit stream, Appellant asserts that the combination of Dimtrva and Lee fails to disclose the recited "avatar data bit stream" which comprises "one or more parameters indicative of an out-of-plane rotation or a z-axis translation of the animated Avatar model." Id. at 15. Appellant contends that "[e]ven if, arguendo, the data in Lee could be construed to be a 'data bit stream' as stated in the claim, . . . the Examiner failed to explain how the data in Lee extracted from user input could be combined with the data bit stream in Dimtrva." Id.
The Examiner responds:

The rejection does not rest on the notion that the inputs could be bodily combined, the rejection is that one of skill, having read both Dimtrva and Lee, would find it obvious to extract z-axis translation parameters from a video because 1) Dimtrva teaches parameter extraction from a video, because 2) Lee teaches of z-axis translation parameters, and because 3) one of skill would have been enabled and motivated to extract z-axis translation parameters from the video; none of which Appellant attacks.

Ans. 15.

For the reasons discussed above (on claim 31), we agree with the Examiner that one of ordinary skill in the art would have understood that modifying the avatar extraction disclosure of Dimtrva with Lee's teaching would have resulted in a teaching of extraction of 3D avatar data, including z-axis translation parameters, from a piece of video of a user. For similar reasons, we are not persuaded of error with respect to the Examiner's findings as to claim 44.

Although we agree with Appellant that Lee's disclosure relates to "extract[ing] information relating to an object that is being controlled by the user" (App. Br. 15), Lee does teach "extract[ing] motion information associated with the movement of [an] avatar along the Z axis," and including that information in an input signal that is transmitted to the display device. Lee ¶¶ 66, 67 (emphasis added). The parameters recited in claim 44 are "parameters indicative of . . . a z-axis translation of the animated Avatar model." Emphasis added. Dimtrva discloses that "[t]he computer generated animated version of the face of the speaker (with synchronized speech) may be displayed on display screen 115 of the display unit 110" (Dimtrva ¶ 29), thereby teaching an avatar data bit stream that is transmitted to the display unit to generate an animated avatar model.
Although we agree with Appellant that the two data streams disclosed in Lee and Dimtrva may differ because of their different sources6 (App. Br. 15), the fact that certain modifications might be required for a person of ordinary skill to integrate the teachings of multiple prior-art references does not mean that the combination of those references is unpredictable or cannot support an obviousness rejection. MCM Portfolio LLC v. Hewlett-Packard Co., 812 F.3d 1284, 1294 (Fed. Cir. 2015) ("The test for obviousness is not whether the features of a secondary reference may be bodily incorporated into the structure of the primary reference." (quoting In re Keller, 642 F.2d at 425)); In re Sneed, 710 F.2d 1544, 1550 (Fed. Cir. 1983) ("[I]t is not necessary that the inventions of the references be physically combinable to render obvious the invention under review."). We are therefore not persuaded of error in the Examiner’s findings that the cited prior art combination teaches or suggests the disputed limitation of claim 44. Because Appellant does not separately argue the Examiner’s rejection of dependent claims 45–49, we sustain the Examiner’s obviousness rejection of claims 44–49.

6 Notably, claim 44 merely recites "animat[ing] an Avatar model based on an Avatar data bit stream," with no limitation as to the source of the data bit stream. App. Br. 22. The claim further recites that the avatar model exists "prior to receipt of the Avatar data bit stream."

§ 103(a) Rejections of Claims 50–52, 54, and 55

With respect to claims 50–52, 54, and 55, which are directed to a method that receives an audio bit stream and an avatar data bit stream, Appellant asserts that the Examiner’s rejection should be reversed "for at least the reasons presented above in regard to claim 44," and further argues that the combination of Dimtrva and Lee fails to disclose "an Avatar data . . . 
bit stream generated based on a piece of video of a user that also comprises parameters indicative of an out-of-plane rotation or a z-axis translation of the Avatar model." App. Br. 16–17. Appellant argues that "[s]ince claim 50 specifies both that (i) the Avatar data bit stream comprises one or more parameters indicative of an out-of-plane rotation and a z-axis translation of the animated Avatar model and that (ii) the Avatar data bit stream has been generated based on a piece of video of a user, then the parameters indicative of an out-of-plane rotation and a z-axis translation must be generated based on the video of the user." Id. at 17. "Even assuming that Dimtrva teaches an Avatar data bit stream generated based on a piece of a video of a user," Appellant argues, "neither Dimtrva nor Lee discloses determining parameters indicative of an out-of-plane rotation and a z-axis translation must be generated based on the video of the user." Id.

In response, the Examiner "fails to see how the argument here is appreciably different" from the one Appellant makes for claim 31, and states that "the same response applies." Ans. 17. We agree with the Examiner that the cited prior art combination teaches or suggests the disputed limitation of claim 50 for the same reasons as those discussed above. Appellant agrees that Dimtrva teaches extraction of avatar data from a video. Reply Br. 7 ("The teaching of . . . Dimtrva . . . is extraction of 2D avatar data from a video.") (emphasis added). Although we agree with Appellant that Lee fails to disclose determining parameters indicative of a z-axis translation based on the video of the user, Appellant fails to explain why a person of ordinary skill in the art would find determining those parameters for an avatar model, as disclosed in Lee, to be meaningfully different from measuring the same for an object presented in a video.
We are therefore not persuaded of error in the Examiner’s findings. Because Appellant does not separately argue the Examiner’s rejection of dependent claims 51, 52, 54, and 55, we sustain the Examiner’s obviousness rejection of claims 50–52, 54, and 55.

§ 103(a) Rejection of Claim 56

Claim 56 depends from independent claim 31 and recites that "the synchronization information comprises a first time marker inserted in the audio bit stream, a second time marker inserted in the avatar data bit stream, and correlating information correlating the first time marker and the second time marker" (App. Br. 25), which the Examiner finds is also taught by Dimtrva. Ans. 18 (citing Dimtrva ¶ 7). The Examiner determines that:

Although [Dimtrva] does not explicitly use the phrase ‘time marker’ and ‘correlating information’ . . . a synchronization is a tie between at least one point in, e.g., the audio stream and a corresponding point in, e.g., the video stream, and thus is a first marker (the point in the audio), a second marker (the point in the video) and correlating information (the fact that they correspond).

Id. Synchronization, the Examiner concludes, "inherently has at least one tie between an audio point and a video point, and that fulfills the markers and correlation requirements." Id. at 19.

Appellant argues that "Dimtrva does not disclose the specific features of claim 56 of including time markers and correlating information in the synchronization information." App. Br. 18. Appellant further contends that "synchronization does not inherently require the insertion of time markers and correlating information as recited in claim 56." Reply Br. 8. Appellant explains that "Dimtrva could, for example, synchronize the video and audio by editing the video to correspond to the audio, without any insertion of time markers or other correlating information." Id.
Thus, Appellant argues, "the Examiner did not meet the threshold to establish inherency." Id. (citing Ex parte Levy, 17 U.S.P.Q.2d 1461 (BPAI 1990)).

We are persuaded of Examiner error. Claim 56 requires time markers inserted in the two bit streams and requires correlating information correlating the markers. Although we agree with the Examiner that "[t]he claim does not require the markers be structured in a particular manner" or "that the time markers be placed in particular areas within the stream" (Ans. 18), there is no teaching or suggestion in Dimtrva of any marker that is inserted in either of the bit streams for synchronization. In fact, Dimtrva teaches the use of other methods, e.g., "semantic association," to match video representations to audio features. See Dimtrva ¶ 81. We therefore agree with Appellant that the Examiner has failed to meet the threshold to establish inherency. See Honeywell Int’l Inc. v. Mexichem Amanco Holding S.A., 865 F.3d 1348, 1354 (Fed. Cir. 2017) (stating that "the use of inherency in the context of obviousness must be carefully circumscribed because ‘[t]hat which may be inherent is not necessarily known’ and that which is unknown cannot be obvious") (quoting In re Rijckaert, 9 F.3d 1531, 1534 (Fed. Cir. 1993)). Thus, we agree with Appellant that the Examiner has not shown that Dimtrva teaches or suggests "synchronization information compris[ing] a first time marker inserted in the audio bit stream, a second time marker inserted in the avatar data bit stream, and correlating information correlating the first time marker and the second time marker," as recited in claim 56. The Examiner also does not rely on Lee or Tsai as teaching this limitation. Final Act. 20. Accordingly, we do not sustain the Examiner’s obviousness rejection of claim 56.
DECISION

For the reasons above, we affirm the Examiner’s decision rejecting claims 31–41, 43–52, and 54–55 under § 103(a), but we reverse the Examiner’s decision rejecting claims 31–41, 43–52, and 54–56 under § 101 and rejecting claim 56 under § 103(a).

Claims Rejected                 35 U.S.C. §  Basis/Reference(s)                     Affirmed                        Reversed
31–41, 43–52, 54–56             101          Ineligible subject matter                                              31–41, 43–52, 54–56
31, 32, 34, 44, 47, 48,         103(a)       Dimtrva, Lee, Tsai                     31, 32, 34, 44, 47, 48, 50, 54  56
50, 54, 56
33, 45, 51                      103(a)       Dimtrva, Lee, Tsai, Nakagawa           33, 45, 51
35, 36, 38, 39, 41, 46, 52      103(a)       Dimtrva, Lee, Tsai, Treadwell          35, 36, 38, 39, 41, 46, 52
37, 43, 49, 55                  103(a)       Dimtrva, Lee, Tsai, Treadwell, Bishop  37, 43, 49, 55
Overall Outcome                                                                     31–41, 43–52, 54–55             56

TIME TO RESPOND

No time for taking subsequent action in connection with this appeal may be extended under 37 C.F.R. § 1.136(a). See 37 C.F.R. § 41.50(f).

AFFIRMED-IN-PART