Ex parte Kim et al., Appeal 2016-005698, Application 13/766,019 (P.T.A.B. Mar. 30, 2017)

UNITED STATES DEPARTMENT OF COMMERCE
United States Patent and Trademark Office
Address: COMMISSIONER FOR PATENTS, P.O. Box 1450, Alexandria, Virginia 22313-1450
www.uspto.gov

APPLICATION NO.: 13/766,019
FILING DATE: 02/13/2013
FIRST NAMED INVENTOR: Hyun Duk Kim
ATTORNEY DOCKET NO.: 83106283
CONFIRMATION NO.: 4873

Correspondence (Customer No. 56436): Hewlett Packard Enterprise, 3404 E. Harmony Road, Mail Stop 79, Fort Collins, CO 80528

EXAMINER: MISHRA, RICHA
ART UNIT: 2673
NOTIFICATION DATE: 04/03/2017
DELIVERY MODE: ELECTRONIC

Please find below and/or attached an Office communication concerning this application or proceeding. The time period for reply, if any, is set in the attached communication. Notice of the Office communication was sent electronically on the above-indicated "Notification Date" to the following e-mail address(es): hpe.ip.mail@hpe.com; chris.mania@hpe.com

UNITED STATES PATENT AND TRADEMARK OFFICE
BEFORE THE PATENT TRIAL AND APPEAL BOARD

Ex parte HYUN DUK KIM, MARIA G. CASTELLANOS, MEICHUN HSU, and CHENG XIANG ZHAI

Appeal 2016-005698
Application 13/766,019 [1]
Technology Center 2600

Before JASON V. MORGAN, NABEEL U. KHAN, and KAMRAN JIVANI, Administrative Patent Judges.

KHAN, Administrative Patent Judge.

DECISION ON APPEAL

Appellants seek our review, under 35 U.S.C. § 134(a), of the Examiner's final decision rejecting claims 1-20. App. Br., Claims App'x. We have jurisdiction under 35 U.S.C. § 6(b). We AFFIRM.

[1] Appellants identify Hewlett-Packard Development Company, LP, as real party in interest. App. Br. 1.

THE INVENTION

The present invention relates to "[a] technique [that] may include generating a plurality of segments from sentences in a data set . . . [and] further include determining the explanatoriness of each segment." Abst. Independent claims 1, 12, and 14 are reproduced below.

1. A method, comprising:
generating, using at least one processor, a plurality of segments from sentences in a first data set related to an opinion, the plurality of segments including at least some segments that are smaller than a sentence from which it was generated;
determining, using the at least one processor, an explanatoriness score of each segment, wherein determining the explanatoriness of each segment includes at least evaluating the discriminativeness of features of the respective segment by comparing the features to a second data set, wherein the explanatoriness score of each segment indicates a likelihood that the segment describes a reason for the opinion; and
ranking, using the at least one processor, the plurality of segments according to their explanatoriness scores.
12. A system, comprising:
a segment generator executed by at least one processor to generate a parse tree for each sentence in a first data set and generate a plurality of segments from the parse trees, wherein the first data set is associated with an opinion of a product or service;
an explanatoriness scorer executed by the at least one processor to generate an explanatoriness score of each segment based on an explanatoriness evaluation, the explanatoriness evaluation including comparing words in each segment to words in a second data set, wherein the explanatoriness score of each segment indicates a likelihood that the segment describes a reason for the opinion; and
a summary generator executed by the at least one processor to generate a summary of the first data set based on the explanatoriness scores, the summary including a subset of the plurality of segments.

14. A non-transitory computer readable storage medium storing instructions that, when executed by a processor, cause a computer to:
generate a parse tree for each sentence in a data set, the data set related to an opinion;
generate a plurality of segments from the parse trees, wherein at least some of the segments are shorter than a sentence from which they were generated;
determine an explanatoriness score for each segment, the explanatoriness score indicating a likelihood that the respective segment describes a reason for the opinion; and
rank the plurality of segments according to their explanatoriness scores.

THE REJECTIONS

Claims 1-20 stand rejected, under 35 U.S.C. § 101, as being directed to patent-ineligible subject matter. Final Act. 3-4 (May 12, 2015).

Claims 1-9 stand rejected, under 35 U.S.C. § 103(a), as being obvious over Sharp et al. (US 2011/0093467 A1; Apr. 21, 2011) ("Sharp") and further in view of Reisman et al. (US 2009/0265307 A1; Oct. 22, 2009) ("Reisman"). Final Act. 4-6.

Claim 10 stands rejected, under 35 U.S.C. § 103(a), as being obvious over Sharp, Reisman, and Huang et al. (US 2008/0215571 A1; Sept. 4, 2008) ("Huang"). Final Act. 6.

Claim 11 stands rejected, under 35 U.S.C. § 103(a), as being obvious over Sharp, Reisman, Huang, and Reis et al. (US 2009/0193328 A1; July 30, 2009) ("Reis"). Final Act. 7.

Claims 12 and 13 stand rejected, under 35 U.S.C. § 103(a), as being obvious over Reis, Knoll et al. (US 2003/0216904; Nov. 20, 2003) ("Knoll"), and Reisman. Final Act. 7-9.

Claim 19 stands rejected, under 35 U.S.C. § 103(a), as being obvious over Reis, Knoll, Reisman, and Huang. Final Act. 9.

Claim 20 stands rejected, under 35 U.S.C. § 103(a), as being obvious over Reis, Knoll, Reisman, and Sharp. Final Act. 9-10.

Claims 14 and 18 stand rejected, under 35 U.S.C. § 103(a), as being obvious over Huang and Knoll. Final Act. 10-11.

Claim 15 stands rejected, under 35 U.S.C. § 103(a), as being obvious over Huang, Knoll, and Reis. Final Act. 11.

Claims 16 and 17 stand rejected, under 35 U.S.C. § 103(a), as being obvious over Huang, Knoll, and Sharp. Final Act. 11-12.

35 U.S.C. § 101

Claims 1-20 stand rejected as drawn to patent-ineligible subject matter. Appellants address the claims collectively. We select claim 14 as representative. 37 C.F.R. § 41.37(c)(1)(iv) (representative claims). For the reasons below, Appellants fail to show error in the rejection of claim 14.
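For orientation before turning to the eligibility analysis, the following is a minimal sketch of the segment-score-rank workflow recited in representative claims 1 and 14: split opinion text into segments smaller than a sentence, score each segment by how discriminative its words are relative to a second (background) data set, and rank the segments by score. The punctuation-based segmenter, the smoothed log-ratio scoring heuristic, and all data and names below are illustrative assumptions; the application itself describes parse-tree segmentation and a Bayes-rule scoring function, which this sketch does not reproduce.

```python
import math
import re
from collections import Counter


def tokenize(text):
    """Lowercase a string and keep only word-like tokens (toy tokenizer)."""
    return re.findall(r"[a-z']+", text.lower())


def generate_segments(sentences):
    """Split each sentence into sub-sentence segments.

    Toy stand-in for the parse-tree segmentation described in the
    application: here we simply split on commas and semicolons.
    """
    segments = []
    for sentence in sentences:
        parts = [part.strip() for part in re.split(r"[;,]", sentence) if part.strip()]
        segments.extend(parts or [sentence])
    return segments


def explanatoriness_score(segment, opinion_counts, background_counts, smoothing=1.0):
    """Score a segment by how discriminative its words are for the opinion
    data set relative to a second (background) data set, using a smoothed
    log-ratio of unigram frequencies. This heuristic is an assumption for
    illustration, not the scoring function described in the application.
    """
    opinion_total = sum(opinion_counts.values())
    background_total = sum(background_counts.values())
    score = 0.0
    for word in tokenize(segment):
        p_opinion = (opinion_counts[word] + smoothing) / (opinion_total + smoothing)
        p_background = (background_counts[word] + smoothing) / (background_total + smoothing)
        score += math.log(p_opinion / p_background)
    return score


# Hypothetical data: a first data set of opinionated review sentences and a
# second, non-explanatory background data set.
opinion_sentences = [
    "The battery dies quickly, and the screen scratches easily",
    "I returned it because the battery dies within an hour",
]
background_sentences = [
    "The product ships in a cardboard box",
    "The box contains a charger and a manual",
]

opinion_counts = Counter(w for s in opinion_sentences for w in tokenize(s))
background_counts = Counter(w for s in background_sentences for w in tokenize(s))

# Generate segments, score each one, and rank by score (highest first).
segments = generate_segments(opinion_sentences)
ranked = sorted(
    segments,
    key=lambda seg: explanatoriness_score(seg, opinion_counts, background_counts),
    reverse=True,
)
for seg in ranked:
    score = explanatoriness_score(seg, opinion_counts, background_counts)
    print(f"{score:6.2f}  {seg}")
```

Because the sketch applies no length normalization, longer segments tend to accumulate higher scores; that is one of several respects in which this toy rendering differs from the claimed technique, and nothing in it should be read as describing Appellants' actual implementation.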
The Examiner analyzed the claims under the Supreme Court's two-step framework for determining whether claimed subject matter is judicially excepted from patent eligibility under § 101. Ans. 23; Final Act. 3; see also Alice Corp. Pty. Ltd. v. CLS Bank Int'l, 134 S. Ct. 2347, 2355 (2014) (discussing the framework per its introduction by Mayo Collaborative Servs. v. Prometheus Labs., Inc., 132 S. Ct. 1289 (2012)). Performing the first step, the Examiner finds the claims recite an abstract idea of determining a sentence segment's likelihood of describing a reason for an opinion and accordant ranking of the segment. Id. Performing the second step, the Examiner finds the additionally claimed elements "amount to no more than recitation of generic software code to perform generic functions" for such determining and ranking. Id.

Appellants argue the Examiner fails to support the above findings. App. Br. 18-20; Reply Br. 11-12. The arguments, below, are also directed to the two steps of the analysis under Alice.

As to the first step, Appellants contend the Examiner fails to establish the claims are directed to an abstract idea because the findings have "failed to establish that the asserted abstract idea . . . is not similar to any abstract idea previously identified by the courts." App. Br. 18; see also Reply Br. 11. In support, Appellants further contend "the asserted abstract idea clearly fails to conform to the U.S. Supreme Court's descriptions of abstract ideas as 'the basic tools of scientific and technological work' and the 'building blocks of human ingenuity.'" App. Br. 18 (quoting Alice, 134 S. Ct. at 2354); see also Reply Br. 12.

As to the second step, Appellants contend the Examiner erred in finding the claims lack an inventive concept because the granularity of the claimed segments ("smaller than a sentence") and the measurement of their claimed score ("likelihood . . . segment describes a reason for the opinion") constitute significantly more than the alleged abstract idea and generic software. App. Br. 19. In support, Appellants present two further contentions. Id. at 19-20. First, Appellants contend the measurement is undisclosed by the applied art and, thus, not shown to be generic. Id. at 19. Second, Appellants contend the granularity prevents the claims from "tying up" use of the measurement and, thus, prevents preemption of the alleged abstract idea. Id. at 19-20 (emphasis omitted). We address the above contentions seriatim, below.

Contention 1: Examiner has not shown the alleged abstract idea is similar to an abstract idea previously identified by the courts.

Claim 1 recites three steps: generating segments from sentences; determining the explanatoriness score of each segment; and ranking the segments via the scores. All three steps generate information sets (namely segments, respective scores, and accordant rankings) from a prior information set. Such generating of information, from prior generated information, is plainly an abstract idea category of judicially excepted subject matter. See, e.g., Bascom Glob. Internet Servs., Inc. v. AT&T Mobility LLC, 827 F.3d 1341, 1350 (Fed. Cir. 2016) ("filtering content"); Enfish, LLC v. Microsoft Corp., 822 F.3d 1327, 1337-38 (Fed. Cir. 2016) ("organizing information using tabular formats"); Digitech Image Techs., LLC v. Elecs. for Imaging, Inc., 758 F.3d 1344, 1350 (Fed. Cir. 2014) ("organizing information through mathematical correlations");
Content Extraction & Transmission LLC v. Wells Fargo Bank, Nat. Ass'n, 776 F.3d 1343, 1347 (Fed. Cir. 2014) ("1) collecting data, 2) recognizing certain data within the collected data set, and 3) storing that recognized data in a memory").

Moreover, the Specification elaborates that text "[s]egments may be evaluated for explanatoriness in a variety of ways" (id. ¶ 23), including "based on the conditional probability" equation of Bayes Rule (id. ¶¶ 27-28) and steps of "using a unigram language model and taking logarithm of both [Bayes Rule equation] sides to obtain [an] explanatoriness scoring function" (id. ¶ 30). As noted above, "organizing information through mathematical correlations" constitutes an abstract idea. Digitech, 758 F.3d at 1350. Thus, "[w]ithout additional limitations, a process that employs mathematical algorithms to manipulate existing information to generate additional information is not patent eligible." Id. at 1351. And, "'[i]f a claim is directed essentially to a method of calculating, using a mathematical formula, even if the solution is for a specific purpose, the claimed method is nonstatutory.'" Id. The claimed invention plainly falls within the above categories of abstract ideas.

Contention 2: Examiner has not shown the alleged abstract idea is a basic tool of scientific/technological work or building block of human ingenuity.

As discussed above, claim 1 recites generating information from prior generated information, and performing calculations on that information, which is a fundamental building block of research and human ingenuity. See supra.

Contention 3: Claimed measurement is not generic and thus adds "significantly more" to the alleged abstract idea.

As reflected supra, "there may be close calls about how to characterize what the claims are directed to." Bascom, 827 F.3d at 1348 (quoting Enfish, 822 F.3d at 1339-40). We thus consider whether the recited granularity and measurement add "specific improvements in the recited computer technology [that] go beyond 'well-understood, routine, conventional activit[ies]' and render the invention patent-eligible." Bascom, 827 F.3d at 1348. That is, because "the claims and their specific limitations do not readily lend themselves to a step-one finding that they are directed to a nonabstract idea[, we] therefore defer our consideration of the specific claim limitations' narrowing effect for step two" of the Alice two-step framework. Id. at 1349.

There is no evidence, before us, that the recited granularity and measurement provide a "specific technical solution" in computer technology (id. at 1352), much less an unconventional, technical solution (id. at 1348). For example, Appellants present no evidence that parsing of text into sentences and sentence segments (e.g., separating semi-colon phrases) improved computer technology or was unconventional itself or in combination with the recited measurement. The Specification also lacks such evidence, merely asserting "[t]he inventors have discovered that using a parse tree to identify segment boundaries may be beneficial because explanatory phrase boundary is likely to match with syntax boundary." Spec. ¶ 12; see also id. ¶ 19 ("[A] single sentence may have both relevant and irrelevant information.").
Such a mere allegation of "discovery" cannot alone persuade us the matched combination of granularity (explanatory phrase) and measurement (explanatory likelihood) constitutes an unconventional, technical solution. See Bascom, 827 F.3d at 1348; see also In re Lindner, 457 F.2d 506, 508 (CCPA 1972) ("The affidavit and specification do contain allegations that synergistic results are obtained with all the claimed compositions, but those statements are not supported by any factual evidence.").[2]

Further, the parsing of sentences into segments, the subsequent determination of an explanatoriness score for those segments, and the ranking of the segments according to the explanatoriness scores are forms of mental steps or calculations. Thus, these steps fail to transform the claim into something more than an abstract idea.

Contention 4: Claimed granularity prevents preemption of the alleged abstract idea.

"While preemption may signal patent ineligible subject matter, the absence of complete preemption does not demonstrate patent eligibility." Ariosa Diagnostics, Inc. v. Sequenom, Inc., 788 F.3d 1371, 1379 (Fed. Cir. 2015); see also OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1362-63 (Fed. Cir. 2015), cert. denied, 136 S. Ct. 701 (2015) ("[T]hat the claims do not preempt all price optimization or may be limited to price optimization in the e-commerce setting do not make them any less abstract."). And, "[w]here a patent's claims are deemed only to disclose patent ineligible subject matter under the Mayo framework, as they are in this case, preemption concerns are fully addressed and made moot." Ariosa, 788 F.3d at 1379.

Conclusion

Accordingly, for the foregoing reasons, we sustain the rejection of claims 1-20 under 35 U.S.C. § 101.

[2] The assertion of such a "discovery" is also dubious because artisans, and even laypersons, would know syntax boundaries (e.g., semicolons) signal changes in idea, tone, opinion, etc.

35 U.S.C. § 103(a)

Claims 1-11

Independent claim 1 and dependent claims 2-11 stand rejected as obvious over Sharp and Reisman (and further references for claims 10 and 11). Sharp's cited disclosure (see infra) generally teaches parsing of sentences into segments and associating of some segments with respective objects (e.g., with "cup"). See, e.g., Sharp ¶¶ 104-106. Reisman's cited disclosure (see infra) generally teaches gathering of opinions on a topic (e.g., a retail product) and generating of both a numerical score and a textual summary of sentiment for the topic. See, e.g., Reisman ¶¶ 3, 38, 81.

The Examiner finds Sharp teaches parsing and scoring of text segments smaller than a sentence. Final Act. 4 (citing Sharp ¶¶ 88, 104-105, 156, 161). The Examiner finds Reisman teaches scoring of a text segment's likelihood of describing a reason for an opinion. Id. at 5 (citing Reisman ¶¶ 3, 28, 48, 81). Combining these alleged teachings, the Examiner finds it would have been obvious to add Reisman's scoring to Sharp's invention so as to additionally "generate [Reisman's] fluent textual summary which takes multiple feature into account." Id. (citing Reisman ¶¶ 3, 28); see also Ans. 14 (reiterating the rationale without elaboration or revision).

Appellants argue:

[I]t is abundantly clear that the cited portions of Reisman say nothing whatsoever about any score of each segment.
Further, the cited portions of Reisman say nothing whatsoever about a score that "indicates a likelihood that the segment describes a reason for the opinion." Rather, the cited material describes something substantially different, namely generating a "summary" that summarizes multiple opinions related to the same topic. Note that, because the "summary" is generated from multiple opinions, it is clear that the "summary" is not a "explanatoriness score of each segment," and clearly does not indicate "a likelihood that the segment describes a reason for the opinion."

App. Br. 8 (emphasis omitted); see also Reply Br. 2-4.

We are unpersuaded by Appellants' arguments. The Examiner relies upon Sharp as teaching or suggesting generating sentence segments and determining an explanatoriness score for the segments. Final Act. 4. The Examiner relies upon Reisman as teaching that the explanatoriness score indicates a likelihood that the segment describes a reason for the opinion. Id. at 5. Appellants do not dispute the Examiner's findings with respect to Sharp. See App. Br. 7-9. Instead, Appellants argue that Reisman is directed at summarizing multiple opinions related to a topic rather than determining an explanatoriness score of each segment, where the explanatoriness score indicates a likelihood that the segment describes a reason for the opinion. Such arguments attack Reisman individually and fail to account for the Examiner's findings as a whole.

As stated above, the Examiner relies upon Sharp for teaching generating sentence segments and determining an explanatoriness score for the segments. Final Act. 4. Reisman teaches that textual opinions can be analyzed to determine "what people think about topic X; how much people liked or disliked X; why they liked or disliked about X . . . ." Reisman ¶ 28 (emphasis added). We agree with the Examiner that by teaching a determination of why people liked or disliked a topic, Reisman teaches indicating a likelihood that the text describes a reason for the opinion (i.e., a reason why the topic was liked or disliked). When combined with the Examiner's findings regarding Sharp, we agree that the combination teaches "determining . . . an explanatoriness score of each segment, . . . wherein the explanatoriness score of each segment indicates a likelihood that the segment describes a reason for the opinion," as recited in claim 1.

Characterizing the Examiner's rationale as asserting "that it would have been obvious to modify Sharp to include the missing subject matter (i.e. the 'summary' of Reisman) because this would provide the missing subject matter (i.e. 'to generate fluent textual summary')," Appellants further argue that the Examiner's rationale for combining Sharp with Reisman "is merely circular logic." App. Br. 9 (emphasis omitted). The Examiner finds "[i]t would have been obvious having the concept of Sharp to further include the concept of Reisman to generate a fluent texual [sic] summary which takes multiple feature[s] into account." Final Act. 5 (emphasis omitted) (citing Reisman ¶¶ 3, 28). We disagree that this is circular logic. Instead, the Examiner merely states that it would have been obvious to add a feature from Reisman (the textual summaries) to Sharp to take multiple features of the opinions into account. Rather than circular logic, this is a simple addition of a feature from one reference to another.
Accordingly, for the foregoing reasons, we sustain the rejection of claims 1-11 under 35 U.S.C. § 103(a).

Claims 12, 13, 19, and 20

Independent claim 12 and dependent claims 13, 19, and 20 stand rejected as obvious over Reis, Knoll, and Reisman (and further references for claims 19 and 20). Appellants present patentability arguments for only claim 12 and reference those arguments for dependent claims 13, 19, and 20 (App. Br. 13-15).

Appellants again argue, now for claim 12, "Reisman says nothing whatsoever about an 'explanatoriness score of each segment[.]' . . . Rather, at best, [Reisman] describes . . . a 'sentiment score' that corresponds to each attribute of a topic." App. Br. 13 (emphasis omitted). Appellants also argue, for claim 1 and now for claim 12, Reisman's scoring does not indicate the likelihood that a segment describes a reason for an opinion. App. Br. 13-14 (claim 12); see also id. at 8-9 (claim 1).

We disagree with Appellants' arguments for the same reasons stated with respect to claim 1. In particular, Appellants again attack Reisman individually rather than address the Examiner's findings as a whole. The Examiner finds Reis and Knoll teach or suggest generating segments from sentences, and finds Reis teaches or suggests an explanatoriness scorer to generate an explanatoriness score for each segment. Final Act. 7. Thus, Appellants' argument that "Reisman says nothing whatsoever about an 'explanatoriness score of each segment'" is unpersuasive because such a finding is addressed by Reis. As to Appellants' argument that Reisman does not indicate the likelihood that a segment describes a reason for an opinion, we disagree as explained above, namely, Reisman's disclosure of analyzing an opinion to determine why a person liked or disliked a topic teaches indicating a likelihood that a segment describes a reason for an opinion.

Accordingly, for the foregoing reasons, we sustain the rejection of claims 12, 13, 19, and 20 under 35 U.S.C. § 103(a).

Claims 14-18

Independent claim 14 and dependent claims 15-18 stand rejected as obvious over Huang and Knoll (and further references for claims 15-17). Appellants present patentability arguments for only claim 14 and reference those arguments for dependent claims 15-18 (App. Br. 17-18). The Examiner finds Huang teaches all but the claimed generating of a parse tree for each sentence, relying on Knoll therefor. Final Act. 10.

Appellants argue:

Huang says nothing whatsoever about the cited "affinity rank" being a score for each segment, or being a "score indicating a likelihood that the respective segment describes a reason for the opinion." Rather, at best, Huang describes something different, namely that the "affinity rank" refers to measures of the "richness" and "diversity" of topics included in a set of search results.

App. Br. 16 (emphasis omitted).

Addressing the Examiner's focus on a "richness" aspect of Huang's affinity rank (Ans. 20-21), Appellants add in the Reply Brief:

[A]s best understood, the Answer apparently argued that the "richness in opinion" discussed in Huang discloses a "likelihood of opinion," and thus somehow discloses the aforementioned subject matter of claim 14.
First, it is noted that claim 14 actually recites "the explanatoriness score indicating a likelihood that the respective segment describes a reason for the opinion." However, the asserted phrase "likelihood of opinion" in the Answer clearly misinterprets this claim limitation. . . .

Second, the conclusory assertion that "richness in opinion" discloses a "likelihood of opinion" is not supported . . . [and] appears to be erroneous. . . . [T]he term "richness" appears to refer to the number of opinions or topics included in a single document. . . . However, a person of ordinary skill in the art will readily appreciate that the number of opinions in a single document does not disclose or suggest the asserted "likelihood of opinion" [or claimed] . . . "likelihood that the respective segment describes a reason for the opinion[.]"

Third, . . . [the] number of opinions included in an entire document . . . is not a "score for each segment," and does not indicate "a likelihood that the respective segment describes a reason for the opinion."

Reply Br. 9-10 (emphasis omitted).

We are not persuaded by Appellants' arguments. Huang's affinity ranking and included richness measure are part of several operations for presenting opinion snippets. See Huang ¶¶ 40-69. First, the system filters for subjective statements (i.e., removing objective statements) from returned "documents" such as Internet product reviews. Huang ¶¶ 6, 41. Second, the system ranks, weights, etc., each document based in part upon "how many different topics [the] single document contains[.]" Id. ¶ 51; see also id. ¶ 48. Third, the system weights the snippets of each document. Id. ¶ 59. Fourth, the system displays the documents by rank, accompanied by the respective best snippets. Id. ¶ 71. These steps are expressly disclosed as yielding snippets more "helpful . . . to understand the actual reviews or ratings of the target product" (id. ¶ 5) and better "directed towards the product review[.]" Id. ¶ 6. Thus, Huang's affinity ranking (i.e., score) identifies snippets with a higher likelihood of indicating an opinion and a reason therefor.

Accordingly, for the foregoing reasons, we sustain the rejection of claims 14-18 under 35 U.S.C. § 103(a).

DECISION

The Examiner's rejections of claims 1-20 under 35 U.S.C. § 101 are affirmed.

The Examiner's rejections of claims 1-20 under 35 U.S.C. § 103(a) are affirmed.

No time period for taking any subsequent action in connection with this appeal may be extended under 37 C.F.R. § 1.136(a)(1)(iv).

AFFIRMED