Ex parte Shu, Appeal 2017-000626, Application 12/396,933 (P.T.A.B. Mar. 17, 2017)

UNITED STATES PATENT AND TRADEMARK OFFICE
UNITED STATES DEPARTMENT OF COMMERCE
Address: COMMISSIONER FOR PATENTS, P.O. Box 1450, Alexandria, Virginia 22313-1450, www.uspto.gov

APPLICATION NO.: 12/396,933
FILING DATE: 03/03/2009
FIRST NAMED INVENTOR: Chang-Qing Shu
ATTORNEY DOCKET NO.: 0110243
CONFIRMATION NO.: 7214
EXAMINER: MOBIN, HASANUL
ART UNIT: 2168
NOTIFICATION DATE: 03/21/2017
DELIVERY MODE: ELECTRONIC

45191 7590
HERBERT L. ALLEN
ALLEN, DYER, DOPPELT, MILBRATH & GILCHRIST, P.A.
255 SOUTH ORANGE AVENUE, SUITE 1401
P.O. BOX 3791
ORLANDO, FL 32802-3791

Please find below and/or attached an Office communication concerning this application or proceeding. The time period for reply, if any, is set in the attached communication.

Notice of the Office communication was sent electronically on above-indicated “Notification Date” to the following e-mail address(es): Skemraj@addmg.com, jlong@addmg.com, nmacdonald@addmg.com

PTOL-90A (Rev. 04/07)

UNITED STATES PATENT AND TRADEMARK OFFICE
BEFORE THE PATENT TRIAL AND APPEAL BOARD

Ex parte CHANG-QING SHU¹

Appeal 2017-000626
Application 12/396,933
Technology Center 2100

Before HUNG H. BUI, JON M. JURGOVAN, and DAVID J. CUTITTA II, Administrative Patent Judges.

CUTITTA, Administrative Patent Judge.

DECISION ON APPEAL

This is an appeal under 35 U.S.C. § 134(a) from the Examiner’s decision rejecting claims 1–5, 8–17, 19, 22, 25, and 26, all pending claims of the application.² We have jurisdiction under 35 U.S.C. § 6(b).

We affirm.

¹ According to Appellant, the real party in interest is Adacel, Inc. Appeal Br. 2.
² Claims 6, 7, 18, 20, 21, 23, and 24 are cancelled. Final Act. 2.

STATEMENT OF THE CASE

According to Appellant, the application relates to tuning a user-dependent language model for a speech recognition engine by reviewing data files viewed or drafted by a user to determine the user’s preferred vocabulary. Spec. 4, 15, 16.³

Claims 1, 17, and 22 are independent. Claim 1 is representative and is reproduced below with disputed limitations in italics:

1. A method of making a user dependent language model for a speech recognition engine, the user dependent language model being dependent on a particular user, the method comprising:
reviewing a plurality of data files to determine whether the data files include text viewed by the particular user;
extracting the user-viewed texts from the data files;
associating weighting factors with the extracted texts;
generating a sorted text element list based on the extracted texts and the weighting factors; and
compiling the user dependent language model based on the sorted text element list;
wherein associating weighting factors with the extracted texts includes weighting user-generated extracted texts higher than other extracted texts.

Appeal Br. 23 (Claims App’x).

³ Throughout this Decision, we refer to the following documents: (1) Appellant’s Specification filed March 3, 2009 (Spec.); (2) the Final Office Action (Final Act.) mailed June 4, 2015; (3) the Appeal Brief (Appeal Br.) filed January 5, 2016; and (4) the Examiner’s Answer (Ans.) mailed July 15, 2016.
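As a purely illustrative aside, and not as a characterization of the claimed method beyond its plain recitations, the steps of claim 1 can be sketched in a few lines of Python. The record fields, the weight values, and the unigram “compile” step below are assumptions made only for the example.

from collections import Counter

# Assumed record format: each data file is a dict recording the extracted text,
# which users viewed it, and which user authored it. These field names and the
# weight values are hypothetical, chosen only to illustrate the claimed steps.
USER_GENERATED_WEIGHT = 2.0   # assumed: user-authored text weighted higher
OTHER_WEIGHT = 1.0

def build_user_dependent_model(data_files, user_id):
    """Illustrative pipeline: review, extract, weight, sort, compile."""
    weighted_counts = Counter()
    for record in data_files:
        # Review each data file to determine whether it holds text viewed by the user.
        if user_id not in record.get("viewed_by", []):
            continue
        # Extract the user-viewed text.
        text = record["text"]
        # Associate weighting factors, weighting user-generated text higher.
        weight = USER_GENERATED_WEIGHT if record.get("author") == user_id else OTHER_WEIGHT
        for word in text.lower().split():
            weighted_counts[word] += weight
    # Generate a sorted text-element list from the weighted counts.
    sorted_elements = sorted(weighted_counts.items(), key=lambda kv: kv[1], reverse=True)
    # "Compile" a toy unigram model as relative frequencies.
    total = sum(weighted_counts.values()) or 1.0
    return {word: count / total for word, count in sorted_elements}

# Hypothetical usage:
files = [
    {"text": "Request approved per flight plan", "viewed_by": ["shu"], "author": "shu"},
    {"text": "Weather advisory issued", "viewed_by": ["shu"], "author": "other"},
    {"text": "Unrelated memo", "viewed_by": ["someone_else"], "author": "other"},
]
model = build_user_dependent_model(files, "shu")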
REFERENCES

The prior art relied upon by the Examiner in rejecting the claims on appeal includes:

Smith                      US 6,308,151 B1      Oct. 23, 2001
Nguyen et al. (“Nguyen”)   US 2003/0050778 A1   March 13, 2003
Diao et al. (“Diao”)       US 8,023,974 B1      Sept. 20, 2011
Cheng et al. (“Cheng”)     US 8,495,144 B1      July 23, 2013

REJECTIONS

Claims 1, 8–11, and 16 stand rejected under 35 U.S.C. § 103(a) as unpatentable over the combination of Smith and Nguyen. Final Act. 3–9.

Claims 2, 3, 4, 5, 17, and 19 stand rejected under 35 U.S.C. § 103(a) as unpatentable over the combination of Smith, Nguyen, and Cheng. Final Act. 9–16.

Claims 12, 13, 14, and 15 stand rejected under 35 U.S.C. § 103(a) as unpatentable over the combination of Smith, Nguyen, and Diao. Final Act. 16–19.

Claims 22, 25, and 26 stand rejected under 35 U.S.C. § 103(a) as unpatentable over the combination of Smith, Cheng, Nguyen, and Diao. Final Act. 19–24.

We review the appealed rejections for error based upon the issues identified by Appellant, and in light of the arguments and evidence produced thereon. See 37 C.F.R. § 41.37(c)(1)(iv) (2014).

ISSUES

1. Does the Examiner err in finding the combination of Smith and Nguyen teaches or suggests “reviewing a plurality of data files to determine whether the data files include text viewed by the particular user,” as recited in claim 1?

2. Does the Examiner err in finding the combination of Smith and Nguyen teaches or suggests “associating weighting factors with the extracted texts . . . wherein associating weighting factors with the extracted texts includes weighting user-generated extracted texts higher than other extracted texts,” as recited in claim 1?

DISCUSSION

We disagree with Appellant’s contentions, and we adopt as our own (1) the Examiner’s findings and reasoning set forth in the Office Action from which this appeal is taken (Final Act. 4–6) and (2) the Examiner’s reasoning set forth in the Examiner’s Answer (Ans. 3–6). We highlight the following points for emphasis.

Issue 1

Appellant argues the Examiner errs in rejecting claim 1 because Smith neither teaches nor suggests reviewing a plurality of data files to determine whether the data files include text viewed by the particular user. Instead, Appellant contends, Smith teaches a system and method for the specific speech recognition application of “dictating a body of text in response to an available body of text.” Appeal Br. 12 (citing Smith, Abstract).

We are not persuaded by Appellant’s argument. Smith discloses a speech recognition system that “uses the content of [a] received E-mail message to update the language model [] for the user’s dictation session responding to the received E-mail message” to improve the recognition accuracy of the user’s dictated E-mail response. Smith, col. 4, ll. 12–17. As such, we agree with the Examiner’s finding that “the particular user’s interaction with the e-mail is the determination that the user has seen or reviewed the e-mail message.” Ans. 4. That is, in dictating a response to a particular email, we find that a user must determine whether the email includes text viewed by the user, and if not, view the text in order to dictate a response. See Smith, col. 4, ll. 12–17.

Appellant further argues “the nature of the application-specific nature of Smith renders such a determination unnecessary (i.e., if every text to be incorporated would likely have been user-viewed, why is a preliminary determination of that fact necessary or helpful?).” Appeal Br. 13.

In indicating every email would likely have been user-viewed, Appellant acknowledges that there is a possibility an email would not have been user-viewed. Appeal Br. 13.
Thus, consistent with Appellant’s acknowledgment, when a user is uncertain whether he or she has previously viewed an email, we find the user will determine whether the email includes text previously viewed by the user. Moreover, even if a user had viewed a particular email in the past, the user might still wish to review the email to prepare an appropriate response to the email based on further review.

Appellant next argues claim 1 requires that a plurality of data files be reviewed to determine whether they include user-viewed text, and that extracted texts from all such data files are used to make the language model, whereas the response dictation system and method of Smith does not inherently require text to be extracted from more than one data file (i.e., the original email, chat room or news group conversation, or other body of text). Appeal Br. 14.

We find this argument unpersuasive because we agree with the Examiner’s finding that Smith does not limit a user to viewing a single email. Ans. 4 (citing Smith, col. 6, ll. 45–55). For example, Smith discusses a user “dictating a body of text in response to another body of text already available to the user,” which we agree teaches or suggests reviewing a plurality of emails. Smith, col. 6, ll. 48–50. As another example, Smith teaches “reviewing a plurality of data files,” as claimed, by suggesting a user may review two emails in succession. See Smith Fig. 2, 102.

Accordingly, we agree with the Examiner’s finding that Smith teaches or suggests “reviewing a plurality of data files to determine whether the data files include text viewed by the particular user,” as recited in claim 1.

Issue 2

The Examiner relies on Nguyen to teach or suggest “associating weighting factors with the extracted texts,” as recited in claim 1. Final Act. 5–6; Ans. 3–4 (citing Nguyen ¶¶ 20–22). Nguyen describes automatically determining a topic from the body of a received email message by weighting the words in the body of the email. Nguyen ¶ 21.

Appellant argues that weighting words within a text based on parts of speech has absolutely nothing to do with assigning weighting factors based on whether extracted texts are user-generated or not, and that use of a given part of speech - a la Nguyen - in no way distinguishes a user-generated text from one that is not user generated. Appeal Br. 15 (emphasis added).

We find this argument unpersuasive because claim 1 does not limit the weighting to whether the extracted text is user-generated or not user-generated. Rather, the claim recites “weighting user-generated extracted texts higher than other extracted texts.” Hence, Appellant argues for patentability on the basis of limitations that are not recited in the claim. See In re Self, 671 F.2d 1344, 1348 (CCPA 1982).

The Examiner finds, and we agree, that Nguyen’s received message is user-generated text. Ans. 5. We also agree with the Examiner’s finding that Nguyen assigns a low weight to certain words of the received message, such as articles (“the,” “an,” etc.), and assigns a higher weight to other words, such as nouns. Ans. 5; see also Nguyen ¶ 22. Thus, we find that Nguyen teaches weighting user-generated extracted texts such as nouns higher than other extracted texts such as articles (which, commensurate with the scope of claim 1, may also be user-generated).
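For illustration only, the kind of weighting the panel attributes to Nguyen can be pictured as follows. This sketch is not Nguyen’s disclosed implementation; the word list, weight values, and function name are invented for the example.

# Schematic illustration: articles and similar function words receive a low
# weight, while remaining words (standing in for nouns and other content
# words) receive a higher weight. All values here are assumptions.
LOW_WEIGHT_WORDS = {"the", "a", "an", "of", "to", "and"}

def weight_words(message_body):
    """Return (word, weight) pairs for the body of a received e-mail message."""
    weights = []
    for word in message_body.lower().split():
        weight = 0.1 if word in LOW_WEIGHT_WORDS else 1.0
        weights.append((word, weight))
    return weights

print(weight_words("The meeting about the merger is moved to Friday"))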
Appellant argues “Nguyen’s teachings regarding weighting of words within a text is not even performed in connection with the inclusion of such words into a language model.” Appeal Br. 16. The Examiner, however, relies on Smith, not Nguyen, to teach a user dependent language model. See Final Act. 4. We conclude, therefore, that Appellant’s argument does not address the actual reasoning of the Examiner’s rejections. Instead, Appellant attacks the references singly for lacking teachings that the Examiner relies on a combination of references to show. It is well established that one cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. See In re Merck & Co., 800 F.2d 1091 (Fed. Cir. 1986). This form of argument is inherently unpersuasive to show Examiner error. Our reviewing court requires that references must be read, not in isolation, but for what they fairly teach in combination with the prior art as a whole. Merck, 800 F.2d at 1097.

Appellant argues “even if paragraph [0022] suggested weighting user generated texts higher than other texts, there would still be no clear reason for incorporating such a teaching into a method of making a language model.” Appeal Br. 17. Appellant’s argument that the combination lacks the required motivation is not persuasive of error because Appellant does not address the motivation identified by the Examiner. See Final Act. 6. That is, the Examiner has found actual teachings in the prior art and has additionally provided a rationale for the combination:

it would have been obvious to one ordinary skill in the art at the time of invention was made having the teachings of Smith and Nguyen before him/her, to modify Smith with the teaching of Nguyen’s focused language models for improved speech input of structured documents. One would have been motivated to do so for the benefit of providing Smith with a focused language model for generating e-mail and text message (i.e., user dependent language model) to improve speech recognition.

Final Act. 6.

Further, we find the teachings of Smith and Nguyen suggest that the combination involves the predictable use of prior art elements according to their established functions. “The combination of familiar elements according to known methods is likely to be obvious when it does no more than yield predictable results,” KSR Int’l Co. v. Teleflex, Inc., 550 U.S. 398, 416 (2007), especially if the combination would not be “uniquely challenging or difficult for one of ordinary skill in the art,” Leapfrog Enters., Inc. v. Fisher-Price, Inc., 485 F.3d 1157, 1162 (Fed. Cir. 2007) (citing KSR, 550 U.S. at 420). We consequently find the Examiner has provided sufficient motivation for combining Smith and Nguyen.

Accordingly, we sustain the Examiner’s obviousness rejection of claim 1. We also sustain the Examiner’s obviousness rejection of independent claims 17 and 22, which are nominally argued separately, and are consequently rejected with independent claim 1 for similar reasons. Appeal Br. 19–22. Dependent claims 2–5, 8–16, 19, 25, and 26 are either nominally argued separately, or are not argued separately, and thus are rejected with their respective independent claims. Appeal Br. 19–22.

DECISION

We affirm the Examiner’s decision rejecting claims 1–5, 8–17, 19, 22, 25, and 26 under 35 U.S.C. § 103(a).⁴
No time period for taking any subsequent action in connection with this appeal may be extended under 37 C.F.R. § 1.136(a)(1)(iv).

AFFIRMED

⁴ In the event of further prosecution of this application, the Examiner should review and consider rejecting claim 1 under 35 U.S.C. § 101 in light of the Supreme Court Decision in Alice Corporation Pty. Ltd. v. CLS Bank Int’l, 134 S. Ct. 2347 (2014) and subsequent agency guidance.