Ex parte Morris, Appeal No. 2011-003451, Application 11/704,703 (P.T.A.B. Aug. 9, 2013)

UNITED STATES PATENT AND TRADEMARK OFFICE
____________

BEFORE THE PATENT TRIAL AND APPEAL BOARD
____________

Ex parte MARGARET MORRIS
____________

Appeal 2011-003451
Application 11/704,703
Technology Center 2100
____________

Before DONALD E. ADAMS, JEFFREY N. FREDMAN, and ULRIKE W. JENKS, Administrative Patent Judges.

ADAMS, Administrative Patent Judge.

DECISION ON APPEAL1

This appeal under 35 U.S.C. § 134 involves claims 1-3, 5-8, 10-13, 15, 17, and 18 (App. Br. 6; Reply Br. 3; Ans. 2). Examiner entered rejections under 35 U.S.C. § 103(a). We have jurisdiction under 35 U.S.C. § 6(b). We reverse.

STATEMENT OF THE CASE

The claims are directed to a system, method, and machine-readable medium containing instructions. Claims 1, 6, and 11 are representative and are reproduced in the Claims Appendix of Appellant’s Brief.

1 The Real Party in Interest is Intel Corporation (App. Br. 4).

Claims 1-3, 6-8, 11-13, 17, and 18 stand rejected under 35 U.S.C. § 103(a) as unpatentable over the combination of Hayes-Roth2 and Hoffberg.3

Claims 5, 10, and 15 stand rejected under 35 U.S.C. § 103(a) as unpatentable over the combination of Hayes-Roth, Hoffberg, and Krichilsky.4

ISSUE

Does the preponderance of evidence relied upon by Examiner support a conclusion of obviousness?

FACTUAL FINDINGS (FF)

FF 1. Appellant discloses that a “Positive and Negative Affect Scale (‘PANAS’) questionnaire is a self-report mood scale that measures current mood” (Spec. 1: ¶ [0001]).

FF 2. Appellant discloses “a mood mapping graphical user interface is implemented on a mobile device that allows an individual to track emotional states in a manner that is more immediate and intuitive than filling out a questionnaire, such as the PANAS questionnaire” (id. at 5: ¶ [0007]).

FF 3.
Hayes-Roth suggests:

    [S]ystems, methods and devices for developing computer and other media systems that interact with people, with one another, and with other computer controlled systems. More particularly, it relates to systems, methods, and devices for developing computer agents and characters that can interact with people, with other computer agents and characters, and with other computational or media entities.

(Hayes-Roth 1: ¶ [0002]; see generally Ans. 5.)

2 Hayes-Roth, US 2002/0005865 A1, published January 17, 2002.
3 Hoffberg, US 6,850,252 B1, issued February 1, 2005.
4 Krichilsky et al., US 2002/0152200 A1, published October 17, 2002.

FF 4. Hayes-Roth suggests that “the agent responds to user questions differently when in different moods, and the agent’s moods change in response to user statements or actions the agent performs” (id. at Abstract; see Ans. 5).

FF 5. Hayes-Roth suggests that content for a computer-controlled agent can be authored “with the following steps:”

    [I]dentifying to an author a potential context of the agent; receiving from the author content for the agent in the potential context; and storing the content such that it can be accessed by a run-time system that uses the content to control the behavior of the agent in an actual context that occurs during operation of the agent and that matches the potential context. The authored content can be a persona, application, or role of the agent, or may relate to natural language conversation in which the agent engages. Specific instances of content can refer to dialogue delivered by the agent, gestures, mood changes, precondition values, interactions of the agent with external systems, and an agent’s itinerary or agenda.

(Id. at 1: ¶ [0012]; see Ans. 9.)

FF 6.
Hayes-Roth’s figure 5 is reproduced below:

[Figure 5: a mood diagram for the two-dimensional moods in the Imp Character Editor (image not reproduced here).]

Hayes-Roth suggests:

    [I]t may be helpful for you to graph out your character’s mood dimensions and interactions on their axes so you can visualize how your moods interact and overlap. FIG. 5 is a mood diagram for the two-dimensional moods in the Imp Character Editor. Note that the values 4 and -4 are entirely arbitrary; only the relative areas of each mood and their positions on the plane are important. You can think of your character’s mood as a point on the plane. When the interaction touches on topics that deal with mood, that point might move towards another area on the diagram. If the point moves into another square, there would be an actual change of mood.

(Id. at 5: ¶ [0103]; see Ans. 5-6.)

FF 7. Hayes-Roth suggests:

    Potential contexts include user inputs during operation of the agent, an internal event or state of the agent, or an input from a different computer-controlled process. All of the potential contexts can refer to an agent-mood of the agent, an assumed user-mood of the user, messages between the agent and user, other computer applications, actions performed by the agent, an itinerary of the agent, or chat topics known by the agent. Potential contexts can be identified by the user or for the user, preferably in a graphical user interface that contains menus, labeled slots, gesture tables, symbolic indicators, and icons representing functions.

(Id. at 1-2: ¶ [0013]; see also Ans. 5-6.)

FF 8. Hayes-Roth suggests “[a]n advanced use of the Mood System might be to model the user’s mood based on the words they use and then customize the interaction based on what you perceive is the user’s state of mind” (id. at 4: ¶ [0086]; see generally Ans. 6).

FF 9. Examiner finds that “[w]ithin the broadest reasonable interpretation, the ‘individual’ recited in … [Appellant’s] claims can be any type of ‘character’ having qualities to embody a mood and/or behavior” (Ans. 9).
FF 10. Hayes-Roth suggests attaching “log cues to specific lines of dialog” such that “[w]hen a character uses a particular line of dialog with log cue attached in a real interaction” a record of the log cue is made and the author can then use the record for further analysis and refinement of the system (Hayes-Roth 7: ¶¶ [0129]-[0130]).

FF 11. Examiner finds that Hayes-Roth fails to suggest a “mood mapping graphical user interface … comprised of a touch screen” (Ans. 6 (emphasis removed)).

FF 12. Examiner finds that “Hoffberg teaches an intelligent electronic appliance that includes a graphical user interface, and position sensors for determining the position of a finger or pointer on a display screen (touch-screen input) or on a touch surface,” as well as, “mood detection and receiving mood data from an individual” (id.).

FF 13. Examiner finds that the combination of Hayes-Roth and Hoffberg fails to suggest a “mood mapping graphical user interface [that] is used to assess user satisfaction with a consumer device” and relies on Krichilsky to suggest “a technique for retrieving information pertaining to a product and assessing user satisfaction with a consumer device” (Ans. 7-8 (emphasis removed)).

ANALYSIS

Based on the combination of Hayes-Roth and Hoffberg, Examiner concludes that, at the time Appellant’s invention was made, it would have been prima facie obvious to modify “the system taught by Hayes-Roth to have included Hoffberg’s touch screen graphical user interface, to have achieved a device with a more natural selection technical preferable for direct manipulation of mood detection” (Ans. 6). We are not persuaded.

Hayes-Roth refers to a human author and user, as well as, a computerized agent/character (FF 3-7). Hayes-Roth’s author defines the operating conditions of the computerized agent/character (FF 5). As Appellant explains, “the content [(i.e.
mood data)] entered by the author in Hayes-Roth corresponds to the [computerized] agent/character rather than the author” (App. Br. 13). Stated differently, the author defines a set of mood parameters for the computerized agent/character. In contrast, Appellant’s claims require the receipt of mood data from an individual, wherein the mood data corresponds to the individual from whom the mood data is received (see, e.g., Claim 1). Examiner failed to establish that the mood data the author uses to program the computerized agent/character corresponds to the mood of the author and/or user.

Similarly, we recognize that Hayes-Roth suggests modeling a user’s mood based on the words they use and then customizing the user’s interaction with the computerized agent/character based on what the author perceives is the user’s state of mind (FF 8). Examiner failed, however, to explain how this suggestion of Hayes-Roth relates to a mood mapping graphical user interface that receives mood data from an individual, wherein the mood data corresponds to the individual from whom the mood data is received, as is required by Appellant’s claimed invention (see Claim 1; Cf. Ans. 11 (“Hayes-Roth teaches a mobile device … as well as several mood shifts that can be exhibited by the agent”) (emphasis added)).

We recognize, but are not persuaded by, Examiner’s reliance on Hayes-Roth’s figure 5 (see Ans. 5-6). Hayes-Roth’s figure 5 is a graph of a computerized character’s mood dimensions and interactions (FF 6). Examiner failed to establish that such a mood data graph represents mood data from an individual, wherein the mood data corresponds to the individual from whom the mood data is received, as is required by Appellant’s claimed invention (see, e.g., Claim 1).

We also recognize, but are not persuaded by, Examiner’s assertion that an individual, within the scope of Appellant’s claimed invention, can be a computerized agent/character (FF 9).
Examiner failed to establish that the computerized agent/character interacts with a mood mapping graphical user interface comprising a touch screen that is configured to receive mood data from, in Examiner’s context, the computerized agent/character, as would be required by the claim if it were construed under Examiner’s interpretation of the term “individual” (see FF 9; Cf. claim 1; see also Reply Br. 6 (“Nowhere in Hayes-Roth is there a discussion of the mood diagram of FIG. 5 being part of a mood mapping graphical user interface that is used to receive mood data from an individual, particularly if the individual is an authored character as suggested by the Examiner”)).

In this regard, we recognize, but are not persuaded by, Examiner’s reliance on Hayes-Roth’s log cues to suggest the “predefined time intervals” required by Appellants’ claimed invention (see Ans. 5; FF 10; Cf. Claim 1). Appellants’ claimed invention requires the mood mapping graphical user interface to receive mood data from an individual in predefined time intervals (see, e.g., Claim 1). In contrast, Hayes-Roth’s log cues allow the creation of a record capturing a computerized character’s use of a particular line of dialog (FF 10). Examiner failed to explain how Hayes-Roth’s computerized-character-based log cue relates to the receipt of mood data from an individual in predefined time intervals, as is required by Appellants’ claimed invention (see, e.g., Claim 1).

Examiner’s reliance on Hoffberg and Krichilsky does not remedy these deficiencies; as Appellant notes, “Hoffberg and Krichilsky fail to cure the deficiencies of Hayes-Roth” (Reply Br. 6; Cf. FF 11-13).

CONCLUSION OF LAW

The preponderance of evidence relied upon by Examiner fails to support a conclusion of obviousness.

The rejection of claims 1-3, 6-8, 11-13, 17, and 18 under 35 U.S.C. § 103(a) as unpatentable over the combination of Hayes-Roth and Hoffberg is reversed.

The rejection of claims 5, 10, and 15 under 35 U.S.C.
§ 103(a) as unpatentable over the combination of Hayes-Roth, Hoffberg, and Krichilsky is reversed.

REVERSED

cdc