Ex parte Lazay et al., No. 10/932,341 (P.T.A.B. Sep. 27, 2013)

UNITED STATES PATENT AND TRADEMARK OFFICE
____________________

BEFORE THE PATENT TRIAL AND APPEAL BOARD
____________________

Ex parte THOMAS LAZAY, JORDAN COHEN, TRACY MATHER ZLATKOVA, and WILLIAM BARTON
____________________

Appeal 2011-005336
Application 10/932,341
Technology Center 2600
____________________

Before THU A. DANG, JAMES R. HUGHES, and JEFFREY S. SMITH, Administrative Patent Judges.

DANG, Administrative Patent Judge.

DECISION ON APPEAL

I. STATEMENT OF THE CASE

Appellants appeal under 35 U.S.C. § 134(a) from a Final Rejection of claims 1-3, 5-10, 12-25, and 27. Claims 4, 11, and 26 have been canceled. We have jurisdiction under 35 U.S.C. § 6(b).

We affirm.

A. INVENTION

Appellants' invention relates to operating wireless communication devices using a user interface having earcons as user prompts (Spec. 1, ¶ [0002]).

B. ILLUSTRATIVE CLAIM

Claim 1 is exemplary:

1.
A method for operating a communication device that includes speech recognition capabilities, the method comprising:

implementing on the device a user interface that employs a plurality of different user prompts and a plurality of different earcons, wherein each user prompt of said plurality of different user prompts has a corresponding language representation and is for either soliciting a corresponding spoken input from the user or informing the user about an action or state of the device, and wherein each earcon of said plurality of different earcons is mapped to a corresponding different one of said plurality of user prompts and is a non-verbal representation of the user prompt to which it is mapped; and

when any selected one of said plurality of user prompts is issued by the user interface on the device, audibly generating the earcon that is mapped to the selected user prompt and audibly generating the corresponding language representation for the selected prompt.

C. REJECTION

The prior art relied upon by the Examiner in rejecting the claims on appeal is:

French    US 6,012,030    Jan. 4, 2000

Claims 1-3, 5-10, 12-25, and 27 stand rejected under 35 U.S.C. § 103(a) as being unpatentable over French.

II. ISSUE

The main issue before us is whether the Examiner has erred in concluding that French would have suggested "when any selected one of said plurality of user prompts is issued" by the user interface, "audibly generating the earcon that is mapped to the selected user prompt and audibly generating the corresponding language representation for the selected prompt" (claim 1).

III. FINDINGS OF FACT

The following Findings of Fact (FF) are shown by a preponderance of the evidence.

1. French discloses a multimodal user interface including a speech interface and a non-speech interface (e.g.,
a graphical or tactile user interface), which comprises means for dynamically switching between a background state of the speech interface and a foreground state of the speech interface, wherein in the foreground state, speech prompts are fully implemented, while in the background state, speech prompts are replaced by earcons (Abstract).

2. A user selects one of several modes of interaction with a unit, wherein if the user initiates the interaction with a speech input, the speech interface is turned on in the foreground state, and if the user uses the keypad and/or soft keys on the graphical user interface to initiate interaction, the user places the speech interface in the background state (col. 6, ll. 24-43).

IV. ANALYSIS

Although Appellants concede that French teaches that "both earcons and speech prompts can be employed" (App. Br. 10), Appellants argue that "[t]here is nothing that French says in his patent that ever implies or suggests that a dual prompt (i.e., both an earcon and a speech prompt) is generated" (id.).

However, the Examiner concludes that French "suggests wherein each earcon of said plurality of different earcons is mapped to a corresponding different one of said plurality of user prompts (for otherwise the user in a background mode would not understand what the earcon is prompting for)" (Ans. 4), and explains that "earcons" are "audible sounds, which can be, and often are, used to alert a possibly distracted, waiting, user that a voice prompt is about to occur" (Ans. 7). We find no error in the Examiner's conclusion.

We give the claim its broadest reasonable interpretation consistent with the Specification. See In re Morris, 127 F.3d 1048, 1054 (Fed. Cir. 1997). We note that although Appellants argue that French does not disclose that "a dual prompt (i.e., both an earcon and a speech prompt) is generated," Appellants concede that French teaches that "both earcons and speech prompts can be employed" (App. Br.
10, emphasis added). Thus, Appellants appear to be arguing that French does not disclose employing both earcons and speech prompts at the same time, which is not commensurate in scope with the recited language of claim 1. Instead, claim 1 merely requires that when a selected user prompt is issued, an earcon that is mapped to the selected user prompt is generated as well as a corresponding language representation for the selected prompt (at a different time or at the same time).

French discloses a user selecting one of several modes of interaction with a unit, wherein if the user initiates the interaction with a speech input, the speech interface is turned on in the foreground state, and if the user uses the keypad and/or soft keys on the graphical user interface to initiate interaction, the user places the speech interface in the background state (FF 2). That is, French discloses a user selecting one of a plurality of user prompts, and in response to the selection, the speech interface is placed in a foreground or background state.

In French, the speech interface switches between the foreground and background states, wherein in the foreground state, speech prompts are fully implemented, while in the background state, speech prompts are replaced by earcons (FF 1). That is, French discloses speech prompts to be employed in the foreground state and corresponding earcons to be employed in the background state to replace the speech prompts. In other words, French discloses generating both speech prompts and the corresponding earcons, employed interchangeably between the background and foreground states.
Thus, we conclude that French at the least suggests "when any selected one of said plurality of user prompts is issued" by the user interface, "audibly generating the earcon that is mapped to the selected user prompt and audibly generating the corresponding language representation for the selected prompt," as required by claim 1.

Further, even if French requires that both earcons and the speech prompts be employed at the same time, Appellants have presented no persuasive evidence that modifying the teaching of employing both speech prompts and earcons, as taught by French, to provide dual prompts at the same time would have been "uniquely challenging or difficult for one of ordinary skill in the art." Leapfrog Enters., Inc. v. Fisher-Price, Inc., 485 F.3d 1157, 1162 (Fed. Cir. 2007). Therefore, we find no error in the Examiner's conclusion that such a modification of French would have conveyed a reasonable expectation of success to a person of ordinary skill having common sense at the time of the invention.

Accordingly, we find that Appellants have not shown that the Examiner erred in rejecting claim 1 over French. Appellants do not provide arguments for claims 2, 3, 5-10, 12-25, and 27 separate from claim 1 (App. Br. 9-10), and thus claims 2, 3, 5-10, 12-25, and 27 fall with claim 1.

V. CONCLUSION AND DECISION

The Examiner's rejection of claims 1-3, 5-10, 12-25, and 27 under 35 U.S.C. § 103(a) is affirmed.

No time period for taking any subsequent action in connection with this appeal may be extended under 37 C.F.R. § 1.136(a)(1)(iv).

AFFIRMED

llw