Ex parte Miller et al., Appeal 2012-000515, Application 10/977,271 (P.T.A.B. May 5, 2014)

UNITED STATES PATENT AND TRADEMARK OFFICE
__________
BEFORE THE PATENT TRIAL AND APPEAL BOARD
__________

Ex parte MARK S. MILLER, ALAN H. KARP, MARK YOSHIKAWA, and SUSIE WEE
__________

Appeal 2012-000515
Application 10/977,271
Technology Center 2100
__________

Before JEFFREY N. FREDMAN, ULRIKE W. JENKS, and SUSAN L. C. MITCHELL, Administrative Patent Judges.

FREDMAN, Administrative Patent Judge.

DECISION ON APPEAL

This is an appeal1 under 35 U.S.C. § 134 involving claims to a method of providing a user a non-degraded presentation experience from data. The Examiner rejected the claims as anticipated and as obvious. We have jurisdiction under 35 U.S.C. § 6(b). We affirm.

1 Appellants identify the Real Party in Interest as Hewlett-Packard Development Company, LP (see App. Br. 1).

Statement of the Case

Background

“Substantial effort and costs have been invested in protecting every type of electronic data (e.g., software programs, movies, music, books, text, graphics, etc.) from unauthorized use” (Spec. 1, ll. 13-15). According to the Specification, “[e]ven if an impenetrable protection scheme is crafted, the data may still be susceptible to unauthorized copying via the ‘analog hole’” (Spec. 1, ll. 28-29). The Specification teaches that “the ‘analog hole’ is ‘plugged’ by introducing customization into the presentation experience. The customization is achieved by adapting the data using nondeterministic information (e.g., user attribute from the user, environmental attribute, presentation attribute of a presentation device)” (Spec. 6, ll. 16-19).

The Claims

Claims 1-36 and 51-68 are on appeal. Claim 1 is representative and reads as follows:

1.
A method of providing a user a non-degraded presentation experience from data while limiting access to said non-degraded presentation experience, said method comprising:
gathering one or more attributes from one or more sources;
accessing said data; and
adapting said data using said one or more attributes so that said non-degraded presentation is available solely to said user and is dependent on said one or more attributes.

The issues

A. The Examiner rejected claims 1, 2, 4-6, 8-11, 16-19, and 24 under 35 U.S.C. § 102(b) as anticipated by Andreyko2,3 (Ans. 5-14).

B. The Examiner rejected claims 3, 7, 25-27, 30-33, 36, 51, 59, 60, and 68 under 35 U.S.C. § 103(a) as obvious over Andreyko and Shuttleworth4 (Ans. 14-26).

C. The Examiner rejected claims 12, 13, 20, and 21 under 35 U.S.C. § 103(a) as obvious over Andreyko and Pellegrini5,6 (Ans. 26-29).

D. The Examiner rejected claims 14, 15, 22, and 23 under 35 U.S.C. § 103(a) as obvious over Andreyko and Kake7 (Ans. 29-31).

E. The Examiner rejected claims 28, 29, 34, 35, 56-58, and 65-67 under 35 U.S.C. § 103(a) as obvious over Andreyko, Shuttleworth, and Barth8 (Ans. 31-34).

F. The Examiner rejected claims 52-55 and 61-64 under 35 U.S.C. § 103(a) as obvious over Andreyko, Shuttleworth, Pellegrini, and Kake (Ans. 34-36).

2 Andreyko et al., WO 03/063470 A1, published Jul. 31, 2003.
3 The Examiner relies upon Andreyko et al., US 2005/0073576 A1, published Apr. 7, 2005, for translation of the Andreyko ’470 WO document. All further citations to “Andreyko” will reference pages and paragraphs in the US published application.
4 Shuttleworth et al., US 2005/0063552 A1, published Mar. 24, 2005.
5 Pellegrini et al., WO 03/101150 A1, published Dec. 4, 2003.
6 The Examiner relies upon Pellegrini et al., US 2005/0226437 A1, published Oct. 13, 2005, for translation of the Pellegrini ’150 WO document.
All further citations to “Pellegrini” will reference pages and paragraphs in the US published application.
7 Kake et al., US 2003/0126035 A1, published Jul. 3, 2003.
8 Barth et al., US 6,229,577 B1, issued May 8, 2001.

A. 35 U.S.C. § 102(b) over Andreyko

The Examiner finds that Andreyko discloses “A method of interactive television providing for generating, transforming, or displaying video information taking into account individual peculiarities of the user’s eye.” Andreyko, Abstract. Further, Andreyko discloses that the method results in “protection of video information against non-authorized access, distribution and use.” . . . Andreyko discloses that “sensor or sensors are used for the dynamic determination of the eye characteristics with respect to the video image perceived by the user with formation of data interrogation signals coding eye characteristics.” . . . Andreyko discloses that “Video signals are transmitted through data channels from one source of video signals to one consumer of video signals.” . . . Andreyko discloses that “. . . the task of the minimum video data redundancy is solved by reduction of the video signal quality level down to the minimum level, when the user perceives video image as the real image within the stated eye sectors.” . . . Accordingly, the computing facility degrades the video outside of the determined sector that the authorized user perceives. (Ans. 6-8.)

The issue with respect to this rejection is: Does the evidence of record support the Examiner’s conclusion that Andreyko anticipates claims 1 and 9?

Findings of Fact

1. The Specification teaches that the “user attributes can be static or dynamic. Examples of static user attributes are user’s audio acuity and user’s visual acuity. Examples of dynamic user attributes include eye movement, head movement, and virtual movement in a virtual environment” (Spec. 9, ll. 6-9).

2.
The Specification teaches that

[T]he user attribute unit 30 may utilize one or more eye tracking techniques. Examples of eye tracking techniques include reflected light tracking techniques, electro-oculography tracking techniques, and contact lens tracking techniques. . . . [O]ther eye tracking techniques are also well-suited for the user attribute unit 30.

(Spec. 9, ll. 22-28.)

3. The Specification teaches that “adaptation of the data using static or dynamic user attributes ensures that a non-degraded presentation experience produced by the presentation device 40 is available solely to the user 50” (Spec. 10, ll. 12-15).

4. Andreyko teaches a “method of interactive television using eye foveal faculties of individual and grouped users, which protects video information against non-authorized access and distribution . . . and for, at least, one individual user perceiving a video item in one information display facility” (Andreyko 6 ¶ 0087).

5. Andreyko teaches that

[D]ynamically changed characteristics such as coordinates and direction of eye optical axes, eye accommodation depth is dynamically transmitted (O1-1), whereas slowly changed characteristics, such as eye apple diameter, function of eye resolution dependence with respect to the eye optical axis are transmitted to the computing facility by periodical or initial entering into the memory of the computing facility.

(Andreyko 8 ¶ 0131.)

6. Andreyko teaches providing coding information (i.e. data) regarding the “quality level of video image (K2), within which the requirements of the user’s or the users’ eye perceiving video image are generated by the signals coding user’s eye dynamical characteristics taking into account the function of the eye resolution dependence in the computing facility” (Andreyko 8 ¶ 0133).

7.
Andreyko teaches that:

When boundaries and quality levels of video image sectors are determined, the task of the minimum video data redundancy is solved by reduction of the video signal quality level down to the minimum level, when the user perceives video image as the real image within the stated eye sectors. Quality level of video image sectors is minimized and dimensions of video image sectors of high level are reduced at the earliest possible stages of video signal formation, conversion, transmission or data information display.

(Andreyko 8 ¶ 0134.)

8. Andreyko teaches that:

The video signal sequentially reduces its redundancy down to the level corresponding to the eye requirement of one individual user taking into account individual faculties of its eye and its demand with the display of video image covering sectors with different quality level on the screen of the data information display facility.

(Andreyko 9 ¶ 0140.)

9. Andreyko teaches that “eye characteristics may be used dynamically measured by means of a sensor or sensors: eye orientation, eye coordinates with respect to the video image and other characteristics” (Andreyko 8 ¶ 0130).

Principles of Law

“A single prior art reference that discloses, either expressly or inherently, each limitation of a claim invalidates that claim by anticipation.” Perricone v. Medicis Pharm. Corp., 432 F.3d 1368, 1375 (Fed. Cir. 2005).

Analysis

We adopt the Examiner’s findings regarding the scope and content of the prior art (Ans. 4-53; FF 1-9). Andreyko teaches a method “of interactive television using eye foveal faculties of individual and grouped users, which protects video information against non-authorized access and distribution” (Andreyko 6 ¶ 0087; FF 4). Andreyko teaches gathering attributes, consistent with the Specification, which defines static user attributes as encompassing visual acuity and dynamic user attributes as encompassing eye movement (FF 1).
Specifically, Andreyko teaches static attributes such as eye resolution (i.e., visual acuity) as well as “dynamically changed characteristics such as coordinates and direction of eye optical axes” which represent eye movement (Andreyko 8 ¶ 0131; FF 5). Andreyko teaches accessing this data where “the requirements of the user’s or the users’ eye perceiving video image are generated by the signals coding user’s eye dynamical characteristics taking into account the function of the eye resolution dependence in the computing facility” (Andreyko 8 ¶ 0133; FF 6).

Andreyko teaches to adapt the data based on the attributes, to present a non-degraded presentation to a user, where the “video signal sequentially reduces its redundancy down to the level corresponding to the eye requirement of one individual user taking into account individual faculties of its eye and its demand with the display of video image covering sectors with different quality level on the screen” (Andreyko 9 ¶ 0140; FF 8). We therefore agree with the Examiner that Andreyko teaches each element of claims 1 and 9 as arranged in the claim (see Ans. 5-8). We address Appellants’ arguments below.

Appellants contend “that Andreyko does not teach ‘adapting said data using said one or more attributes so that non-degraded presentation is available solely to said user. . . ,’ (emphasis added) as recited by Claim 1” (App. Br. 12).

We are not persuaded. Andreyko teaches that the method may be applied to a single user, teaching application of a method which “protects video information against non-authorized access and distribution . . . for, at least, one individual user perceiving a video item in one information display facility” (Andreyko 6 ¶ 0087; FF 4).
Andreyko further teaches that

The video signal sequentially reduces its redundancy down to the level corresponding to the eye requirement of one individual user taking into account individual faculties of its eye and its demand with the display of video image covering sectors with different quality level on the screen of the data information display facility.

(Andreyko 9 ¶ 0140; FF 8.)

Andreyko is clearly teaching adapting the data to obtain a non-degraded presentation that is available to a single user based on the attributes of the user (FF 4-8).

To the extent that Appellants rely upon the word “solely,” in the situation where the method is being provided to a single user as expressly taught by Andreyko (FF 4, 8), the method necessarily, and thus inherently, provides the non-degraded presentation solely to that single user as required by claim 1. See MEHL/Biophile Int’l Corp. v. Milgraum, 192 F.3d 1362, 1366 (Fed. Cir. 1999) (“Where … the result is a necessary consequence of what was deliberately intended, it is of no import that the article’s authors did not appreciate the results.”). There is no requirement in claim 1 that other users be present, from whom the non-degraded presentation is shielded.

Appellants also contend that they

[U]nderstand Andreyko to teach using sectors to limit the area for collecting information for generating interrogation signals (eye or eye characteristics) to the boundaries of the sectors (see Andreyko first sentence of 0133 quoted herein). Therefore, Appellants do not understand Andreyko’s sector to be the result of determining Andreyko’s eye characteristics

(App. Br. 13).

We are not persuaded. Claim 1 simply requires the use of “attributes.” The Specification discusses “attributes” as either static or dynamic, where visual acuity is identified by the Specification as a static attribute, and eye movement is identified by the Specification as a dynamic attribute (FF 1).
As discussed above, Andreyko teaches a sensor to measure eye characteristics (FF 9) where dynamic characteristics include direction of the eye optical axes, i.e., eye movement, as well as characteristics such as eye resolution, i.e., visual acuity (FF 5).

Andreyko then specifically teaches the “quality level of video image (K2), within which the requirements of the user’s or the users’ eye perceiving video image are generated by the signals coding user’s eye dynamical characteristics taking into account the function of the eye resolution dependence in the computing facility” (Andreyko 8 ¶ 0133; FF 6). Thus, Andreyko is adapting the image quality or data, using the eye attributes, so that the quality of the image is dependent upon the attributes (FF 6).

Conclusion of Law

The evidence of record supports the Examiner’s conclusion that Andreyko anticipates claims 1 and 9.

B. 35 U.S.C. § 103(a) over Andreyko and Shuttleworth

Appellants separately argue claims 25 and 51, which were not rejected in the anticipation rejection over Andreyko, but rather were subject to the Examiner’s obviousness rejection over Andreyko and Shuttleworth. Appellants contend that “none of Shuttleworth, Pellegrini, and Kake remedy the deficiency in Andreyko in that none of Andreyko, Shuttleworth, Pellegrini and Kake describe[s], teach[es,] or suggest[s] ‘adapting said data using said one or more user attributes and said one or more environmental attributes to generate adapted data that is presented to said user,’ as recited by Claim 51” (App. Br. 25).

We are not persuaded. The Examiner finds that:

Shuttleworth illustrates a system 500 that compensates for ambient noise levels in figure 5. The ambient noise compensator 514 receives the ambient noise level signal 560 and calculates a compensation signal 564. see Id., ¶ 0070.
Shuttleworth discloses that “the ambient noise compensator 514 may generate the compensation signal 564 in such a way as to ensure that the audio output signal 554 has a sound level that is higher than the ambient noise.” Id., ¶ 0071, for the purpose of adapting data “so that the desired sound is audible over undesirable noise, whose sound level may vary over time in the listening area.” see Shuttleworth, ¶ 0014.

(Ans. 25.)

The Examiner finds that “[a]ccordingly, a user in listening area 101 (as illustrated in FIG. 1 [of Shuttleworth]) is presented a presentation that is not degraded by background and ambient noise. Users outside of listening area 101 would be presented a presentation without the compensation for background and ambient noise, i.e. they would hear additional noise to the presentation” (Ans. 43-44). We again note that where a single user is present in the listening area 101, which is not excluded by the claims, the data will be adapted solely to that user, and will not be available to users outside that area in a non-degraded state.

Appellants respectfully submit that Shuttleworth

[T]eaches away from “adapting said data using said one or more environmental attributes so that said nondegraded presentation is available solely in said environment when said adapted data is presented to said user,” (emphasis added) as recited by independent Claim 25 since for example people outside of an auditorium can hear what is going on inside of an auditorium

(App. Br. 19).

Appellants “submit that Shuttleworth teaches away from ‘adapting said data using said one or more user attributes and said one or more environmental attributes to generate adapted data that is presented to said user,’ as recited by independent Claim 51” (App. Br. 25).

We are not persuaded. A teaching away requires a reference to actually criticize, discredit, or otherwise discourage the claimed solution.
See In re Fulton, 391 F.3d 1195, 1201 (Fed. Cir. 2004) (“The prior art’s mere disclosure of more than one alternative does not constitute a teaching away from any of these alternatives because such disclosure does not criticize, discredit, or otherwise discourage the solution claimed”). Appellants do not identify, and we do not find, any teaching in either Andreyko or Shuttleworth that criticizes, discredits, or discourages the use of Shuttleworth’s sound compensation system in a video presentation, such as that of Andreyko.

Further, while Appellants speculate that people outside an auditorium may hear some sound “data” being presented within the auditorium, Appellants have not established that the sound “data” is not degraded when heard outside the auditorium. As the Examiner points out, “[u]sers outside of listening area 101 would be presented a presentation without the compensation for background and ambient noise, i.e. they would hear additional noise to the presentation” (Ans. 44).

C-F. 35 U.S.C. § 103(a)

Appellants do not argue separately the claims in these obviousness rejections, instead repeating arguments regarding claims 1, 9, 25, and 51. Having affirmed the anticipation rejection of claims 1 and 9 over Andreyko, and the obviousness rejection of claims 25 and 51 over Andreyko and Shuttleworth, we also find that the further combinations with Pellegrini, Kake, and Barth render the remaining claims obvious for the reasons given by the Examiner (see Ans. 14-34).

SUMMARY

In summary, we affirm the rejection of claims 1 and 9 under 35 U.S.C. § 102(b) as anticipated by Andreyko. Pursuant to 37 C.F.R. § 41.37(c)(1), we also affirm the rejection of claims 2, 4-6, 8, 10, 11, 16-19, and 24, as these claims were not argued separately.

We affirm the rejection of claims 25 and 51 under 35 U.S.C. § 103(a) as obvious over Andreyko and Shuttleworth. Pursuant to 37 C.F.R.
§ 41.37(c)(1), we also affirm the rejection of claims 3, 7, 26, 27, 30-33, 36, 59, 60, and 68, as these claims were not argued separately.

We affirm the rejection of claims 12, 13, 20, and 21 under 35 U.S.C. § 103(a) as obvious over Andreyko and Pellegrini (37 C.F.R. § 41.37(c)(1)).

We affirm the rejection of claims 14, 15, 22, and 23 under 35 U.S.C. § 103(a) as obvious over Andreyko and Kake (37 C.F.R. § 41.37(c)(1)).

We affirm the rejection of claims 28, 29, 34, 35, 56-58, and 65-67 under 35 U.S.C. § 103(a) as obvious over Andreyko, Shuttleworth, and Barth (37 C.F.R. § 41.37(c)(1)).

We affirm the rejection of claims 52-55 and 61-64 under 35 U.S.C. § 103(a) as obvious over Andreyko, Shuttleworth, Pellegrini, and Kake (37 C.F.R. § 41.37(c)(1)).

No time period for taking any subsequent action in connection with this appeal may be extended under 37 C.F.R. § 1.136(a).

AFFIRMED

cdc