Ex parte John M. Boyer et al., Appeal 2018-008676, Application 14/966,802 (P.T.A.B. Aug. 2, 2019)

UNITED STATES PATENT AND TRADEMARK OFFICE
UNITED STATES DEPARTMENT OF COMMERCE
United States Patent and Trademark Office
Address: COMMISSIONER FOR PATENTS, P.O. Box 1450, Alexandria, Virginia 22313-1450, www.uspto.gov

APPLICATION NO.: 14/966,802
FILING DATE: 12/11/2015
FIRST NAMED INVENTOR: John M. Boyer
ATTORNEY DOCKET NO.: AUS920150290US2
CONFIRMATION NO.: 7512

50170 7590 08/02/2019
IBM CORP. (WIP)
c/o WALDER INTELLECTUAL PROPERTY LAW, P.C.
1701 N. COLLINS BLVD., SUITE 2100
RICHARDSON, TX 75080

EXAMINER: RIFKIN, BEN M
ART UNIT: 2123
MAIL DATE: 08/02/2019
DELIVERY MODE: PAPER

Please find below and/or attached an Office communication concerning this application or proceeding. The time period for reply, if any, is set in the attached communication. PTOL-90A (Rev. 04/07)

UNITED STATES PATENT AND TRADEMARK OFFICE
____________
BEFORE THE PATENT TRIAL AND APPEAL BOARD
____________
Ex parte JOHN M. BOYER, KSHITIJ P. FADNIS, COLLIN J. MURRAY, and JUSTIN A. ZINIEL
____________
Appeal 2018-008676
Application 14/966,802
Technology Center 2100
____________
Before JOSEPH L. DIXON, JAMES W. DEJMEK, and STEPHEN E. BELISLE, Administrative Patent Judges.

BELISLE, Administrative Patent Judge.

DECISION ON APPEAL

Appellants¹ appeal under 35 U.S.C. § 134(a) from a Final Rejection of all pending claims, namely, claims 1–22. App. Br. 5. We have jurisdiction under 35 U.S.C. § 6(b). We affirm.

¹ Appellants identify International Business Machines Corporation as the real party in interest. App. Br. 2.
STATEMENT OF THE CASE

The Claimed Invention

Appellants’ invention generally relates to “an improved data processing apparatus and method and more specifically to mechanisms for improving a ground truth answer key of a question answering [(“QA”)] cognitive system using a similar passage cognitive system trained with a ground truth answer key from the question answering cognitive system.” Spec. ¶ 1. According to the Specification, an example of a QA system is the “IBM Watson™” system, which is available from International Business Machines (IBM) Corporation, and is “an application of advanced natural language processing, information retrieval, knowledge representation and reasoning, and machine learning technologies to the field of question answering.” Spec. ¶ 3.

Claim 1, reproduced below, is representative of the claimed subject matter on appeal:

1. A method, in a data processing system comprising at least one processor and a memory comprising instructions which, when executed by the at least one processor, causes the at least one processor to improve ground truth in a question answering cognitive system, the method comprising:

training, by the data processing system, a similar passage machine learning model for a similar passage cognitive system using a question and answer key to form a trained similar passage machine learning model, wherein the question and answer key comprises a list of question and answer specification pairs forming a ground truth for a question answering cognitive system, wherein each question is a text string and each answer specification references one or more text passages from a corpus of information;

responsive to a search event, sending at least one text input to the similar passage cognitive system operating in accordance with the trained similar passage machine learning model and executing on the at least one processor of the data processing system,
wherein the text input comprises a given text passage from a given answer specification of the question and answer key, and receiving from the similar passage cognitive system configured with the trained similar passage machine learning model a response list of references to text passages from the corpus of information that are similar to the given text input;

responsive to an acceptance event for at least one text passage from the response list, supplementing, by the data processing system, the question and answer key by adding the at least one text passage to the given answer specification to form a supplemented question and answer key; and

training a question answering machine learning model of the data processing system using the supplemented question and answer key such that the question answering cognitive system operates in accordance with the trained question answering machine learning model and executes on the at least one processor of the data processing system.

App. Br. 24–25 (Claims Appendix).

References

The Examiner relied on the following references as evidence of unpatentability of the claims on appeal:

Nakazawa  US 2008/0195378 A1  Aug. 14, 2008
Heck      US 2009/0162824 A1  June 25, 2009

Rejection

The Examiner made the following rejection of the claims on appeal: Claims 1–22² stand rejected under 35 U.S.C. § 103 as being unpatentable over Heck and Nakazawa. Final Act. 2–7.

ANALYSIS³

Spread over thirty pages of briefing without any clear argument delineations, Appellants submit what we have categorized as nine arguments as to the nonobviousness of various sets of claims 1–22 over Heck and Nakazawa. See App. Br. 5–22; Reply Br. 2–13. We identify and address each of these arguments below, but find them unpersuasive. We first turn to certain teachings of Heck and Nakazawa.
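For orientation only, the loop recited in claim 1 can be sketched informally: query a similar passage (SP) model with an existing answer passage, add accepted similar passages to that answer specification, and retrain the QA model on the supplemented key. The sketch below is an editor's illustration, not part of the record; every name is hypothetical, and a toy token-overlap measure stands in for a trained machine learning model.

```python
# Hypothetical sketch of the claim 1 workflow. The Jaccard word overlap
# below is only a stand-in for a trained similar passage model.

def similar_passages(text, corpus, threshold=0.5):
    """Return corpus passages whose word overlap with `text` meets a threshold."""
    query = set(text.lower().split())
    hits = []
    for passage in corpus:
        words = set(passage.lower().split())
        score = len(query & words) / len(query | words)  # Jaccard similarity
        if score >= threshold and passage != text:
            hits.append(passage)
    return hits

def supplement_key(answer_key, corpus, accept=lambda passage: True):
    """For each answer passage, retrieve similar passages and, on an
    acceptance event, add them to that question's answer specification."""
    for question, passages in answer_key.items():
        for passage in list(passages):  # snapshot: don't re-scan additions
            for candidate in similar_passages(passage, corpus):
                if accept(candidate) and candidate not in passages:
                    passages.append(candidate)
    return answer_key  # the supplemented key would then retrain the QA model

key = {"who wrote a tale of two cities":
       ["charles dickens wrote a tale of two cities"]}
corpus = ["charles dickens wrote a tale of two cities in 1859",
          "the weather in portland is rainy"]
supplemented = supplement_key(key, corpus)
```

Under this toy measure, the first corpus passage overlaps the existing answer passage strongly enough to be added to its answer specification, while the unrelated passage is not.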
Heck, titled “Automated Learning From a Question and Answering Network of Humans,” generally is directed to:

A QA robot [i.e., automated question and answer system] learns how to answer questions by observing human interaction over online social networks. The QA robot observes the way people ask questions and how other users respond to those questions. In one embodiment, the QA robot observes which questions are most helpful and analyzes those questions to identify the characteristics of those questions that are most helpful. The QA robot then uses those observations to enhance the way questions are asked and answered in the future.

Heck ¶¶ 1, 15. Heck discloses that “[o]ne way to teach the QA robot how to answer questions is to boot it into an initial training mode,” where “the QA robot can then be populated with test questions and answers, archived questions and answers from a social network, and information from other sources.” Heck ¶ 31 (emphases added). “The QA robot uses those sources of information to learn.” Heck ¶ 31. “[T]his training may be supervised by people to ensure that the answers to a question are correct and that answers are being stored and indexed properly.” Heck ¶ 31 (emphasis added).

² In the Final Action, the Examiner inadvertently captioned this rejection as applying to claims “1, 3–9, 11–16, and 18–22,” but substantively included claims 2, 10, and 17 in this rejection. See Final Act. 5–6. We agree with Appellants that the Examiner intended this rejection to apply to claims 1–22. See App. Br. 5.

³ Throughout this Decision, we have considered Appellants’ Appeal Brief filed March 16, 2018 (“App. Br.”); Appellants’ Reply Brief filed August 30, 2018 (“Reply Br.”); the Examiner’s Answer mailed July 2, 2018 (“Ans.”); the Final Office Action mailed September 20, 2017 (“Final Act.”); and Appellants’ Specification filed December 11, 2015 (“Spec.”).
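The Heck-style flow described above (seed a knowledgebase with question and answer pairs in a training mode, then retrieve the stored answer whose question is most similar to a new question) can be sketched as follows. This is an editor's illustration under stated assumptions, not Heck's actual implementation; all names and the word-overlap similarity are hypothetical.

```python
# Illustrative-only sketch of a QA robot seeded in an initial training
# mode, answering by nearest-question lookup over its knowledgebase.

def overlap(a: str, b: str) -> float:
    """Jaccard similarity over lowercase word sets (a toy stand-in)."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb)

class QARobot:
    def __init__(self):
        self.knowledgebase = {}  # question -> answer

    def train(self, pairs):
        """Initial training mode: populate with test questions and answers."""
        self.knowledgebase.update(pairs)

    def answer(self, question, threshold=0.5):
        """Retrieve the answer stored under the most similar known question,
        or None when nothing stored is similar enough."""
        if not self.knowledgebase:
            return None
        best = max(self.knowledgebase, key=lambda q: overlap(question, q))
        return self.knowledgebase[best] if overlap(question, best) >= threshold else None

robot = QARobot()
robot.train({"who wrote a tale of two cities": "Charles Dickens"})
answer = robot.answer("who wrote the tale of two cities")
```

A slightly reworded question still retrieves the stored answer, while an unrelated question falls below the similarity threshold and returns nothing.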
Heck also discloses that “[a] QA robot generates answers to questions, but there may be times that more than one answer may be applicable to a question. In one embodiment, the QA robot analyzes candidate answers to determine the most ‘correct’ answer (e.g., the answer most likely to be correct answer).” Heck ¶ 38 (emphasis added).

Nakazawa, titled in part “Question and Answer Data Editing Device,” generally is directed to “a detecting unit that detects a part of the dialogue content similar to existing question and answer data stored,” and an “extracting unit that extracts a context in which the dialogue content is made from dialogue content in the proximity of the similar part detected and registers the context extracted as new question and answer data or as index information of the question and answer data.” Nakazawa, Abstract (emphases added).

Claims 1, 4, 5, 9, 11, 12, 16, 18, and 19

Appellants argue nonobviousness of the independent claims, namely, claims 1, 9, and 16, along with dependent claims 4, 5, 11, 12, 18, and 19, as a group. We select independent method claim 1 as the representative claim and address Appellants’ arguments thereto, and any claim in this set not argued separately will stand or fall with our analysis of the rejection of claim 1. See 37 C.F.R. § 41.37(c)(1)(iv) (2017).

Our reviewing court has held the relevant inquiry in an obviousness analysis is whether the Examiner has set forth “some articulated reasoning with some rational underpinning to support the legal conclusion of obviousness.” In re Kahn, 441 F.3d 977, 988 (Fed. Cir. 2006) (cited with approval in KSR Int’l Co. v. Teleflex, Inc., 550 U.S. 398, 418 (2007)). The test for obviousness is not whether the claimed invention is expressly suggested in any one or all of the references, but whether the claimed subject matter would have been obvious to those of ordinary skill in the art in light of the combined teachings of those references.
See In re Keller, 642 F.2d 413, 425 (CCPA 1981); In re Burckel, 592 F.2d 1175, 1179 (CCPA 1979) (“[A] reference must be considered not only for what it expressly teaches, but also for what it fairly suggests.”). One of ordinary skill can use his or her ordinary skill, creativity, and common sense to make the necessary adjustments and further modifications to result in a properly functioning device. See KSR, 550 U.S. at 418 (“[A] court can take account of the inferences and creative steps that a person of ordinary skill in the art would employ.”).

Argument 1: Appellants argue Heck does not teach “training . . . a similar passage machine learning model for a similar passage cognitive system using a question and answer key to form a trained similar passage machine learning model,” as recited in claim 1. See App. Br. 6–7; Reply Br. 2–6.

Here, the Examiner finds:

[Heck’s] QA robot learns (i.e. is trained) to generate answers to questions by observing users, particularly experts, as they respond to questions and answers on a social network. Here the questions and answers on the social network represent the question and answer key, which is used to train the QA robot. The QA Robot is the “similar passage machine learning model” which is trained based on questions and answers from a social network.

Ans. 9. We agree here with the Examiner, and also find, as discussed above, that Heck expressly discloses training the QA robot using a question and answer key (i.e., “test questions and answers”) to form a trained QA robot. See Heck ¶ 31; see also Heck ¶¶ 32–34.

Nevertheless, Appellants argue “[t]he features [of claim 1] being addressed train a machine learning model for a similar passage cognitive system that is completely separate from the claimed question answering cognitive system. Conflating the two cognitive systems is a clear error.” App. Br. 10; see id.
at 9 (“The Examiner makes no distinction between a similar passage machine learning model and a question answering machine learning model.”); Reply Br. 4 (“[A] similar passage machine learning model is not trained to answer questions; it is trained to provide a response list of references to text passages from a corpus of information that are similar to a given text input.”); see also Reply Br. 2–4, 9–10.

The Examiner responds that “the QA robot providing [the] answers learns from a set of question and answers observed overtime,” and “responds to and provides responses from questions and answers that are similar to the questions asked,” which “clearly denotes training . . . and provid[ing] similar passages” and “meets the broadest reasonable interpretation of the claim . . . .” Ans. 12–13. We again agree with the Examiner.

Appellants’ Specification states that (1) “a Question Answering cognitive system (QA system) is an artificial intelligence application executing on data processing hardware that answers questions pertaining to a given subject-matter domain presented in natural language” (Spec. ¶ 27 (emphasis added)); and (2) “a Similar Passage cognitive system (SP system) is an artificial intelligence application executing on data processing hardware that provides similar passages pertaining to a given subject-matter domain presented in natural language” (Spec. ¶ 28 (emphasis added)). We find that, under the broadest reasonable interpretation of claim 1, an application or system that both (1) answers questions and (2) provides similar passages as recited in claim 1 would teach or at least fairly suggest to the skilled artisan separately articulated applications performing (1) and (2). See KSR, 550 U.S. at 417 (A claim is obvious where it “‘simply arranges old elements with each performing the same function it had been known to perform’ and yields no more than one would expect from such an arrangement.”); In re Am. Acad.
of Sci. Tech Ctr., 367 F.3d 1359, 1364 (Fed. Cir. 2004) (during prosecution, an application’s claims are given their broadest reasonable scope consistent with the specification).

Based on the foregoing and our review of the Briefs, we find Appellants do not show persuasively that the Examiner erred in finding that Heck at least fairly suggests “training . . . a similar passage machine learning model for a similar passage cognitive system using a question and answer key to form a trained similar passage machine learning model,” as recited in claim 1.

Argument 2: Appellants argue neither Heck nor Nakazawa teaches “sending at least one text input to the similar passage cognitive system . . . , wherein the text input comprises a given text passage from a given answer specification of the question and answer key, and receiving . . . a response list of references to text passages from the corpus of information that are similar to the given text input,” as recited in claim 1. See App. Br. 11 (“Heck does not teach submitting a text passage of answers in the question and answer key to receive other text passages in the corpus that are similar to the text passage.”); id. at 7–11, 13; Reply Br. 2–6, 10. In particular, Appellants argue “Nakazawa teaches matching text from a first database to a second database to supplement the second database. Neither database is used to train a machine learning model.” App. Br. 13–14; see Reply Br. 7–10.

The Examiner finds:

The Heck reference explicitly denotes allowing the system to search through its confines for similar questions and answers in order to answer the user.
In fact, Heck specifically states “In one embodiment, the QA robot stores the questions and their associated answers directly into its knowledgebase and retrieves that information when similar questions are subsequently asked.” This clearly denotes that the system seeks to find similar answers, and shows that it is not constrained to questions or answers which are worded exactly the same.

Ans. 11–12 (emphases in original). The Examiner also finds:

[T]he applicant appears to be attempting to view the Heck reference in a vacuum. The applicant is correct that the Heck reference does not disclose “sending a given text passage from a given answer specification of the question and answer key to the similar passage cognitive system operating in accordance with the trained similar passage machine learning model.” The Examiner stated this within the rejection. Heck discloses receiving questions from the user via a search event, and processing that search event. However, the remaining portions of these limitations are not cited in the Heck reference, but are met by the Nakazawa reference. The Nakazawa reference clearly denotes taking in answer information and searching to find similar answers . . . . When combined with the Heck reference, this denotes allowing the system to take in answers in order to find similar answers, which meets the broadest reasonable interpretation of the claims.

Ans. 13–14; see id. at 15–16, 18 (“The Nakazawa reference has been brought in . . . to show it is known in the art of question and answer systems . . . to use both questions and answers to identify new answers for questions.”). The Examiner also explains:

The Heck reference makes clear that it can take in information, and search for similar information within its system. The Heck [r]eference makes use of questions in order to perform this action.
It does not, as the claim requires, make use of answer information to perform this search, as the examiner has explicitly stated in the rejection. The Nakazawa reference, however, clearly shows that it would be obvious to [the skilled artisan] to make use of answer information to search for new answers to add to the system . . . . The combined references meet the claimed limitations.

Ans. 19. We agree with and adopt as our own the Examiner’s explanation of obviousness reproduced above, and find Appellants do not persuasively explain why the combined teachings of Heck and Nakazawa do not at least fairly suggest to the skilled artisan the broadly claimed features at issue in representative claim 1. We also agree with the Examiner that Appellants, despite protestations to the contrary (see, e.g., App. Br. 6; Reply Br. 7), improperly argue nonobviousness here by attacking Heck and Nakazawa individually, rather than addressing their combined teachings. See In re Keller, 642 F.2d at 426 (One cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references.); In re Merck & Co., 800 F.2d 1091, 1097 (Fed. Cir. 1986).

Based on the foregoing and our review of the Briefs, we find Appellants do not show persuasively that the Examiner erred in finding that Heck and Nakazawa at least fairly suggest:

sending at least one text input to the similar passage cognitive system . . . , wherein the text input comprises a given text passage from a given answer specification of the question and answer key, and receiving . . . a response list of references to text passages from the corpus of information that are similar to the given text input,

as recited in claim 1.

Argument 3: Appellants argue neither Heck nor Nakazawa teaches “supplementing . . .
the question and answer key by adding the at least one text passage to the given answer specification to form a supplemented question and answer key,” as recited in claim 1. See App. Br. 7; Reply Br. 3–4, 10. In particular, Appellants argue, for example:

Heck teaches a specific type of question, which is an opinion question. There is no correct answer, only a better answer. Heck gives the example of “where can I buy good Indian food in Portland, Oreg.?” There are multiple answers, and each answer is only subjectively better than others. This is not the same as improving the ground truth by supplementing the question and answer key.

App. Br. 7 (emphasis added); see Reply Br. 3. The Examiner finds:

First, the claim at no time requires any particular type of “question” to be asked. The claim at no time excludes opinion based questions. Opinion questions can have answers like any other question can. The claim at no time limits a question to a single answer. The applicant then goes on to state that “this is
In one embodiment, the QA robot can find the information relatively easily in a database that stores informational data. Heck ¶ 38 (emphases added). Also, Appellants concede that Heck teaches accumulating multiple answers to a question and providing such answers in response to a question. See App. Br. 9 (“Appellants agree that Heck teaches answering a question with a list of answers.”). In addition, the Examiner finds: As [for] the argument that Heck fails to teach supplementing the question and answer key with the at least one text passage, the Heck system clearly denotes taking in new question and answer responses over time. However, the Examiner was very clear, Heck does not disclose, “by adding the at least one text passage to the given answer specification.” Once again, the Nakazawa [reference] meets these limitations, clearly denoting looking for similar answers, and adding those similar answers to the system if they meet a matching threshold . . . . Appeal 2018-008676 Application 14/966,802 13 Ans. 15. We again agree with and adopt as our own the Examiner’s explanation of obviousness reproduced above, and find Appellants do not persuasively explain why the combined teachings of Heck and Nakazawa do not at least fairly suggest to the skilled artisan the broadly claimed features at issue in representative claim 1. Based on the foregoing and our review of the Briefs, we find Appellants do not show persuasively that the Examiner erred in finding that Heck and Nakazawa at least fairly suggest “supplementing . . . the question and answer key by adding the at least one text passage to the given answer specification to form a supplemented question and answer key,” as recited in claim 1. Argument 4: Appellants argue neither Heck nor Nakazawa teaches “training a question answering machine learning model . . . using the supplemented question and answer key,” as recited in claim 1. See App. Br. 14; Reply Br. 10. 
In particular, Appellants argue:

The Final Office Action states that Nakazawa “is never called for training a machine learning model for a question and answer system.” Applicants agree. No reference is cited for using similar passages to train a question and answer machine learning model. The Examiner only concludes that Heck and Nakazawa could be combined to result in these features. The Examiner does not explain how the references can be combined or how such a combination would result in the claim features.

App. Br. 14–15 (emphases omitted); Reply Br. 9. The Examiner finds:

Each and every limitation is met by the combined references of Heck and Nakazawa, and each limitation has been clearly cited in the rejection. Furthermore, the examiner has shown that Heck and Nakazawa are analogous art, as both involve Q&A systems, and they have a motivation to combine, to make use of answers similar to answers you already have in order to improve the Q&A system.

Ans. 19–20; see Final Act. 4–5 (“The motivation for [combining Heck and Nakazawa] would be to ‘detect . . . contents of a dialogue similar to question and answer data from data of a history of dialogues made in the past . . .’” or “in the case of Heck, allow the system to also search for similar answers, not just based on the questions.”). The Examiner repeatedly explains that Heck’s QA robot “learns” (i.e., is trained) to generate answers to questions, which is based on “supplementing” as discussed above in Argument 3. See, e.g., Ans. 9. We again agree with the Examiner, and find Appellants’ arguments unpersuasive.

To the extent that Appellants argue the Examiner did not show how Nakazawa could be bodily incorporated into Heck (see, e.g., App. Br. 14–15; Reply Br. 7 (“[T]he combination would not result in the features of claims 1, 9, and 16”)), obviousness does not require such a showing.
It is well-established that a determination of obviousness based on teachings from multiple references does not require an actual, physical substitution of elements. In re Etter, 756 F.2d 852, 859 (Fed. Cir. 1985) (en banc) (“Etter’s assertions that Azure cannot be incorporated in Ambrosio are basically irrelevant, the criterion being not whether the references could be physically combined but whether the claimed inventions are rendered obvious by the teachings of the prior art as a whole.”); In re Sneed, 710 F.2d 1544, 1550 (Fed. Cir. 1983) (citation omitted) (“[I]t is not necessary that the inventions of the references be physically combinable to render obvious the invention under review.”); In re Keller, 642 F.2d at 425 (“The test for obviousness is not whether the features of a secondary reference may be bodily incorporated into the structure of the primary reference . . . .”).

Based on the foregoing and our review of the Briefs, we find Appellants do not show persuasively that the Examiner erred in finding that Heck and Nakazawa at least fairly suggest “training a question answering machine learning model . . . using the supplemented question and answer key,” as recited in claim 1.

Accordingly, we sustain the Examiner’s rejection of representative method claim 1 under 35 U.S.C. § 103. As noted above, Appellants do not separately argue patentability of independent claims 9 and 16 and dependent claims 4, 5, 11, 12, 18, and 19, but instead rely only on their arguments for patentability of method claim 1. Accordingly, for the same reasons set forth above for independent claim 1, we sustain the Examiner’s rejection under 35 U.S.C. § 103 of claims 4, 5, 9, 11, 12, 16, 18, and 19. See 37 C.F.R. § 41.37(c)(1)(iv).

Claims 2, 10, and 17

Argument 5: Appellants argue Heck does not teach “prompting a user to provide a distinguishing question,” as recited, for example, in claim 2. See App. Br. 15–16; Reply Br. 11.
The Examiner finds:

Heck clearly discloses the system encountering questions it feels are not acceptable and removing them. (See Heck, paragraphs 0032–0034). In fact, the Heck reference clearly denotes making new question and answer pairs during these actions, as Heck discloses removing an answer in regards to reviews that a restaurant is bad, and instead adding that answer to a question of which restaurants to avoid (see Heck, Paragraph 0034). Furthermore, the Heck reference denotes asking for additional information (i.e. distinguishing questions) when the question is unanswerable (i.e. rejected). (See Heck, paragraph 0055). This paragraph clearly denotes the Heck reference asking the user to clarify the question (i.e. provide a distinguishing question).

Ans. 21. Appellants respond essentially by asserting that the Examiner’s cited passages in Heck do not equate to the claim language. See Reply Br. 11. In particular, Appellants argue “Heck teaches prompting the user that submitted a bad question to clarify the question or provide additional information. This is not equivalent to prompting a user to provide a distinguishing question responsive to an answer rejection event for a given text passage from the response list.” App. Br. 15 (emphasis added); id. at 16 (“[A]sking the user for more information to understand the user’s question is not equivalent to prompting the user to provide a distinguishing question . . . .” (Emphasis added)).

But the issue here is not whether Heck discloses the “same” or “equivalent” features as in claim 2; rather, the issue is whether Heck at least fairly suggests to the skilled artisan “prompting a user to provide a distinguishing question” and the other limitations as recited in claim 2. We agree with and adopt as our own the Examiner’s explanation of obviousness reproduced above, and find Appellants do not persuasively explain why Heck does not at least fairly suggest such broad features.
Accordingly, we sustain the Examiner’s rejection of claim 2 under 35 U.S.C. § 103. Appellants do not separately argue patentability of claims 10 and 17, but instead rely only on their arguments for patentability of claim 2. Accordingly, for the same reasons set forth above for claim 2, we sustain the Examiner’s rejection under 35 U.S.C. § 103 of claims 10 and 17. See 37 C.F.R. § 41.37(c)(1)(iv).

Claim 3

Argument 6: Appellants argue Heck does not teach the limitations recited in claim 3. App. Br. 16–18. In particular, Appellants argue:

Heck teaches sending questions that cannot be answered by the QA robot system to human experts to be answered. This portion of Heck does not teach a question and answer specification pair in the question and answer key, which forms a ground truth for a question answering cognitive system, for which the number of text passages in the answer specification is less than a threshold value. The cited portion of Heck makes no mention of a question and answer key, an answer specification, a number of text passages, or a threshold value. The cited portion of Heck is concerned with a question from the user, not a question and answer key, for which an answer cannot be found.

App. Br. 17. The Examiner finds:

[T]he Heck reference denotes a threshold for not having enough answers to answer a question, in this case, that threshold is not having an answer at all. (See Heck, Paragraph 0055). In response to this issue, the Heck reference responds by requesting that an expert answer the question (i.e. provide at least one additional text passage reference) and then making use of that answer within the system. (See Heck, Paragraph 0055). The applicant at no time required the Threshold to be more than one, nor do they provide any detail at all as to what this threshold might encompass. In this case, the threshold is having at least a single answer.
This meets the broadest reasonable interpretation of the claims . . . .

Ans. 22. In response, Appellants again argue that Heck’s disclosed features are “not equivalent” to the limitations recited in claim 3, and argue that claim 3 requires that “every question has one or more answers.” See Reply Br. 11. Again, the issue here is not whether Heck discloses the “same” or “equivalent” features as in claim 3; rather, the issue is whether Heck at least fairly suggests to the skilled artisan “determining a question and answer specification pair for which the number of text passage references in the answer specification is less than a threshold value,” “prompting a user to provide at least one additional text passage reference,” and “amending the answer specification to include the at least one additional text passage reference.” We agree with and adopt as our own the Examiner’s explanation of obviousness reproduced above, and find Appellants do not persuasively explain why Heck does not at least fairly suggest such broad features.

Accordingly, we sustain the Examiner’s rejection of claim 3 under 35 U.S.C. § 103.

Claims 6, 13, and 20

Argument 7: Appellants argue Heck does not teach “automatically generating at least one search event upon completion of an ingestion operation that updates the corpus of information,” as recited, for example, in claim 6. App. Br. 18–19; Reply Br. 11–12. The Examiner finds: “The Heck reference clearly denotes a system that actively learns and seeks to improve by monitoring social networks for question and answers. . . . Furthermore, the reference goes on to state that it not only brings in new information, but actively updates and manages information it already contains . . . .” Ans. 24 (citing Heck ¶¶ 32–34).

In response, Appellants argue merely that “the cited portion of Heck makes no mention of [the recited claim language]” (App. Br.
18), and that patentability of claims 6, 13, and 20 derives from their dependency on independent claims 1, 9, and 16, respectively (see App. Br. 18–19; Reply Br. 11–12), which are not substantive arguments as to the independent patentability of dependent claims 6, 13, and 20. Thus, we find Appellants’ arguments unpersuasive of Examiner error. See 37 C.F.R. § 41.37(c)(1)(iv) (Any claim not argued separately will stand or fall with our analysis of the rejection of the underlying claims.); In re Lovin, 652 F.3d 1349, 1357 (Fed. Cir. 2011) (“[T]he Board reasonably interpreted Rule 41.37 to require more substantive arguments in an appeal brief than a mere recitation of the claim elements and a naked assertion that the corresponding elements were not found in the prior art.”); In re Geisler, 116 F.3d 1465, 1470 (Fed. Cir. 1997) (It is well settled that mere attorney arguments and conclusory statements, which are unsupported by factual evidence, are entitled to little probative value.); In re Pearson, 494 F.2d 1399, 1405 (CCPA 1974) (attorney argument is not evidence). Accordingly, we sustain the Examiner’s rejection of claims 6, 13, and 20 under 35 U.S.C. § 103.

Claims 7, 14, and 21

Argument 8: Appellants argue Heck does not teach “invoking at least one search event for at least one text passage of the question and answer key,” as recited, for example, in claim 7. App. Br. 19–20; Reply Br. 12. The Examiner finds:

The Heck reference clearly denotes searching for answers in response to questions (see Heck, Paragraph 0023). The Heck reference even clearly denotes that questions can be search queries (see Heck, Paragraph 0023). There is no constraint on the user that this information can’t be a question that the system has already seen before. Furthermore, the system clearly denotes comparing questions to other questions within its database in order to find similar ones (see Heck, paragraph 0040).
The Heck reference even explicitly denotes uses using the same questions over and over with the system (see Heck, Paragraph 0064).

Ans. 25. Appellants also argue Heck does not teach “presenting a list of answers that are passages similar to a text passage from an answer specification of the question and answer key.” App. Br. 20–21; Reply Br. 12. The Examiner finds:

First, this claim does not call for “presenting a list of answers that are passages similar to a text passage from an answer specification of the question and answer key.” Claims 7, 14, and 21 call for “populating a user interface with at least one response list received [from] the at least one search events.” All the portions of the dependency have been met by the previous claims, and there is no need for these rejections to be repeated in these claims. Heck clearly denotes providing a user interface list of responses to the search event (see Heck, paragraphs 0078–0079). All the rest of the limitations have been dealt with and responded to in the claims in which they reside . . . .

Ans. 26. Appellants respond “the references do not teach these features in either wording.” Reply Br. 12. Here again, Appellants argue merely that the portions of Heck cited by the Examiner do not teach Appellants’ particular claim language (without explaining why that is), and that patentability of claims 7, 14, and 21 derives from their dependency on independent claims 1, 9, and 16, respectively, which are not substantive arguments as to the independent patentability of claims 7, 14, and 21. See App. Br. 19–21; Reply Br. 12. Thus, we find Appellants’ arguments unpersuasive of Examiner error. See 37 C.F.R. § 41.37(c)(1)(iv); In re Lovin, 652 F.3d at 1357; In re Geisler, 116 F.3d at 1470; In re Pearson, 494 F.2d at 1405. Accordingly, we sustain the Examiner’s rejection of claims 7, 14, and 21 under 35 U.S.C. § 103.
Claims 8, 15, and 22

Argument 9: Appellants argue “Heck makes no mention of reordering the response list according to a text analytical proximity measure for each text passage in the response list relative to the text input sent to the search event that returned the response list,” as generally recited, for example, in claim 8. App. Br. 21–22; Reply Br. 12–13. The Examiner finds:

The Heck reference clearly denotes judging potential answers based on a confidence score related to the question asked (see Heck, paragraph 0076). This confidence threshold is dependent on the material being asked and can change based upon that information (see Heck, paragraphs 0073–0074). The examiner is interpreting this to be a [ ] “text analytical proximity measure” as it seeks to find the textual answer most likely to answer the user’s question. The applicant at no time quantifies just what a “text analytical proximity measure[”] is.

Ans. 27. Appellants respond that “[t]here is no teaching or suggestion that the confidence score [in Heck] is a text analytical proximity measure,” but do so without defining a “text analytical proximity measure” and without explaining why there is no such teaching. Reply Br. 12–13. Again, Appellants argue merely that the portions of Heck cited by the Examiner do not teach Appellants’ particular claim language (without explaining why that is), and that patentability of claims 8, 15, and 22 derives from their dependency on independent claims 1, 9, and 16, respectively, which are not substantive arguments as to the independent patentability of claims 8, 15, and 22. See App. Br. 21–22; Reply Br. 12–13. Thus, we find Appellants’ arguments unpersuasive of Examiner error. See 37 C.F.R. § 41.37(c)(1)(iv); In re Lovin, 652 F.3d at 1357; In re Geisler, 116 F.3d at 1470; In re Pearson, 494 F.2d at 1405. Accordingly, we sustain the Examiner’s rejection of claims 8, 15, and 22 under 35 U.S.C. § 103.
DECISION

We affirm the Examiner’s obviousness rejections of claims 1–22. No time period for taking any subsequent action in connection with this appeal may be extended under 37 C.F.R. § 1.136(a). See 37 C.F.R. § 41.50(f).

AFFIRMED