HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP
Patent Trial and Appeal Board
Appeal 2020-006011 (P.T.A.B. Mar. 2, 2022)

UNITED STATES PATENT AND TRADEMARK OFFICE
UNITED STATES DEPARTMENT OF COMMERCE
United States Patent and Trademark Office
Address: COMMISSIONER FOR PATENTS, P.O. Box 1450, Alexandria, Virginia 22313-1450, www.uspto.gov

APPLICATION NO.: 15/543,745 | FILING DATE: 07/14/2017 | FIRST NAMED INVENTOR: Manish Marwah | ATTORNEY DOCKET NO.: 90445980 | CONFIRMATION NO.: 1023

146568 7590 03/02/2022
MICRO FOCUS LLC
500 Westover Drive #12603
Sanford, NC 27330

EXAMINER: ZAIDI, SYED A | ART UNIT: 2432 | NOTIFICATION DATE: 03/02/2022 | DELIVERY MODE: ELECTRONIC

Please find below and/or attached an Office communication concerning this application or proceeding. The time period for reply, if any, is set in the attached communication. Notice of the Office communication was sent electronically on the above-indicated “Notification Date” to the following e-mail address(es): software.ip.mail@microfocus.com

UNITED STATES PATENT AND TRADEMARK OFFICE
BEFORE THE PATENT TRIAL AND APPEAL BOARD

Ex parte MANISH MARWAH, ANIKET CHAKRABARTI, and MARTIN ARLITT

Appeal 2020-006011
Application 15/543,745
Technology Center 2400

Before JOSEPH L. DIXON, DAVID M. KOHUT, and JON M. JURGOVAN, Administrative Patent Judges.

KOHUT, Administrative Patent Judge.

DECISION ON APPEAL

Pursuant to 35 U.S.C. § 134(a), Appellant[1] appeals from the Examiner’s decision to reject claims 1, 3-5, and 8-19.[2][3] We have jurisdiction under 35 U.S.C. § 6(b). We AFFIRM.

[1] We use the word “Appellant” to refer to “applicant” as defined in 37 C.F.R. § 1.42. Appellant identifies the real party in interest as MICRO FOCUS LLC. Appeal Br. 3.
[2] Throughout this Decision we refer to the Specification filed July 14, 2017 (“Spec.”), the Final Rejection mailed November 22, 2019 (“Final Act.”), the Appeal Brief filed February 20, 2020 (“Appeal Br.”), and the Examiner’s Answer mailed June 19, 2020 (“Ans.”).
[3] Claims 2, 6, and 7 have been cancelled. See Final Act. 1-2.

INVENTION

The present invention relates to a method and apparatus for “detecting anomalous sensor data” using information obtained by “predicting data acquired by a network of sensors based at least in part on a graphical model of the network, where the graphical model includes true value nodes, observed value nodes and edge factors based at least in part on historical pairwise dependencies for the observed value nodes.” Spec., Title, Abstract. Claim 1 is representative of the invention and is reproduced below.

1. A method comprising:
accessing, by a computer, historical data for a network of a plurality of sensors;
constructing, by the computer, a graphical model of the network, wherein the graphical model comprises, for each sensor of the plurality of sensors, a true value node associated with a non-observed true value for the sensor and an observed value node associated with an observed value for the sensor, and constructing the graphical model comprises determining relationships among the true value nodes and the observed value nodes based on the historical data;
applying, by the computer, graphical model inference using the graphical model to predict the non-observed true values for the sensors; and
detecting, by the computer, anomalous sensor data provided by the network based at least in part on the predicted non-observed true values and the observed values.

Appeal Br. 23 (Claims App.).
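For illustration only, a minimal sketch of the recited sequence of steps appears below; it is not the Appellant’s disclosed implementation. The sketch assumes Gaussian (linear) pairwise edge factors fitted to historical data and substitutes a robust one-pass neighbour prediction for full graphical-model inference; all function names, parameters, and the synthetic data are hypothetical.

import numpy as np

def fit_pairwise_factors(historical, edges):
    """Fit a linear factor (slope, intercept, residual std) for each directed
    edge (i, j), meaning 'predict sensor i from sensor j', from historical
    readings; these stand in for edge factors learned from historical
    pairwise dependencies."""
    factors = {}
    for i, j in edges:
        slope, intercept = np.polyfit(historical[:, j], historical[:, i], 1)
        residuals = historical[:, i] - (slope * historical[:, j] + intercept)
        factors[(i, j)] = (slope, intercept, residuals.std())
    return factors

def predict_true_values(observed, factors):
    """Predict each sensor's non-observed 'true value' node from the observed
    value nodes of its neighbours; the median over incoming predictions is a
    crude, robust stand-in for full graphical-model inference."""
    incoming = {}
    for (i, j), (slope, intercept, _) in factors.items():
        incoming.setdefault(i, []).append(slope * observed[j] + intercept)
    return {i: float(np.median(preds)) for i, preds in incoming.items()}

def detect_anomalous_sensors(observed, predicted, factors, k=4.0):
    """Flag a sensor when its observed value departs from its predicted true
    value by more than k typical residual standard deviations."""
    flags = {}
    for i, pred in predicted.items():
        sigma = float(np.median([f[2] for (t, _), f in factors.items() if t == i]))
        flags[i] = bool(abs(observed[i] - pred) > k * sigma)
    return flags

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    shared = rng.normal(20.0, 2.0, size=(500, 1))              # common driver
    historical = shared + rng.normal(0.0, 0.3, size=(500, 4))  # 4 correlated sensors
    edges = [(i, j) for i in range(4) for j in range(4) if i != j]

    factors = fit_pairwise_factors(historical, edges)
    observed = np.array([20.8, 21.1, 20.5, 35.0])              # sensor 3 looks implausible
    predicted = predict_true_values(observed, factors)
    print(detect_anomalous_sensors(observed, predicted, factors))  # only sensor 3 flagged

Running the sketch flags only the sensor whose observed value departs from what its neighbours’ observed values would predict, which is the general effect the claim language describes.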
REFERENCES

The prior art relied upon by the Examiner is:

Name: A. Farruggia et al.
Reference: Detecting Faulty Wireless Sensor Nodes Through Stochastic Classification, Proceedings of the 2011 IEEE International Conference on Pervasive Computing and Communications Workshops 148-153
Date: May 2011

Name: Premkumar et al.
Reference: US 2016/0261468 A1
Date: Sept. 8, 2016

REJECTIONS[4]

Claims 1, 3-5, and 8-19 stand rejected under 35 U.S.C. § 101 as being directed to patent-ineligible subject matter. Final Act. 7-17.

Claims 1, 3-5, 8-13, 16, and 19 are rejected under 35 U.S.C. § 102(a)(1) as anticipated by Premkumar. Final Act. 18-23.

Claims 17 and 18 are rejected under 35 U.S.C. § 103 as being unpatentable over Premkumar and Farruggia. Final Act. 25-27.

[4] Claims 14 and 15 were rejected under 35 U.S.C. § 102(a)(1) as anticipated by Premkumar. See Final Act. 18, 22-23. However, the anticipation rejection of claims 14 and 15 was withdrawn in the Examiner’s Answer, and is no longer pending on appeal. See Ans. 3.

OPINION

Rejection under 35 U.S.C. § 101

Patent eligibility under § 101 is a question of law that may contain underlying issues of fact. “We review the [Examiner’s] ultimate conclusion on patent eligibility de novo.” Interval Licensing LLC v. AOL, Inc., 896 F.3d 1335, 1342 (Fed. Cir. 2018) (citing Berkheimer v. HP Inc., 881 F.3d 1360, 1365 (Fed. Cir. 2018)); see also SiRF Tech., Inc. v. Int’l Trade Comm’n, 601 F.3d 1319, 1331 (Fed. Cir. 2010) (“Whether a claim is drawn to patent-eligible subject matter is an issue of law that we review de novo.”); Dealertrack, Inc. v. Huber, 674 F.3d 1315, 1333 (Fed. Cir. 2012). Accordingly, we review the Examiner’s § 101 determinations concerning patent eligibility under this standard.

An invention is patent-eligible if it claims a “new and useful process, machine, manufacture, or composition of matter.” 35 U.S.C. § 101. However, the Supreme Court has long interpreted 35 U.S.C. § 101 to include implicit exceptions: “[l]aws of nature, natural phenomena, and abstract ideas” are not patentable. See, e.g., Alice Corp. v. CLS Bank Int’l, 573 U.S. 208, 216 (2014).

In determining whether a claim falls within an excluded category, we are guided by the Supreme Court’s two-step framework, described in Mayo and Alice. Alice, 573 U.S. at 217-18 (citing Mayo Collaborative Servs. v. Prometheus Labs., Inc., 566 U.S. 66, 75-77 (2012)). In accordance with that framework, we first determine what concept the claim is “directed to.” See Alice, 573 U.S. at 219 (“On their face, the claims before us are drawn to the concept of intermediated settlement, i.e., the use of a third party to mitigate settlement risk.”); see also Bilski v. Kappos, 561 U.S. 593, 611 (2010) (“Claims 1 and 4 in petitioners’ application explain the basic concept of hedging, or protecting against risk.”).

Concepts determined to be abstract ideas, and thus patent ineligible, include certain methods of organizing human activity, such as fundamental economic practices (Alice, 573 U.S. at 219-20; Bilski, 561 U.S. at 611); mathematical formulas (Parker v. Flook, 437 U.S. 584, 594-95 (1978)); and mental processes (Gottschalk v. Benson, 409 U.S. 63, 69 (1972)). Concepts determined to be patent eligible include physical and chemical processes, such as “molding rubber products” (Diamond v. Diehr, 450 U.S. 175, 191 (1981)); “tanning, dyeing, making water-proof cloth, vulcanizing India rubber, smelting ores” (id. at 182 n.7 (quoting Corning v. Burden, 56 U.S. 252, 267-68 (1853))); and manufacturing flour (Benson, 409 U.S. at 69 (citing Cochrane v. Deener, 94 U.S. 780, 785 (1876))).
If the claim is “directed to” an abstract idea, we turn to the second step of the Alice and Mayo framework, where “we must examine the elements of the claim to determine whether it contains an ‘inventive concept’ sufficient to ‘transform’ the claimed abstract idea into a patent-eligible application.” Alice, 573 U.S. at 221 (quotation marks omitted). “A claim that recites an abstract idea must include ‘additional features’ to ensure ‘that the [claim] is more than a drafting effort designed to monopolize the [abstract idea].’” Id. (quoting Mayo, 566 U.S. at 77). “[M]erely requir[ing] generic computer implementation[] fail[s] to transform that abstract idea into a patent-eligible invention.” Id.

USPTO Guidance

In January 2019, the U.S. Patent and Trademark Office (USPTO) published revised guidance on the application of § 101. 2019 Revised Patent Subject Matter Eligibility Guidance, 84 Fed. Reg. 50 (“Guidance”).[5] Under the Guidance, we first look to whether the claim recites: (1) (see Guidance at 54, Step 2A-Prong 1) any judicial exceptions, including certain groupings of abstract ideas (i.e., mathematical concepts, certain methods of organizing human activity such as a fundamental economic practice, or mental processes); and (2) (see Guidance at 54-55, Step 2A-Prong 2) additional elements that integrate the judicial exception into a practical application (see MPEP § 2106.05(a)-(c), (e)-(h)). Only if a claim (1) recites a judicial exception and (2) does not integrate that exception into a practical application, do we then look to whether the claim: (3) adds a specific limitation beyond the judicial exception that is not “well-understood, routine, conventional” in the field (see MPEP § 2106.05(d)); or (4) simply appends well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception. See Guidance at 56, Step 2B.

[5] The Office issued a further memorandum on October 17, 2019 (“October 2019 Memorandum”) clarifying guidance of the January 2019 Memorandum in response to received public comments. See https://www.uspto.gov/sites/default/files/documents/peg_oct_2019_update.pdf. Moreover, “[a]ll USPTO personnel are, as a matter of internal agency management, expected to follow the guidance.” Guidance at 51; see also October 2019 Memorandum at 1. The MANUAL OF PATENT EXAMINING PROCEDURE (“MPEP”) now incorporates the Guidance and the subsequent updates at Section 2106 (9th ed. Rev. 10.2019, rev. June 2020).

Analysis

At the outset, we determine that the claims are directed to statutory categories. See Guidance at 53. Claims 1, 3-5, 16, and 17 are directed to methods (processes), claims 8-11 and 18 are directed to apparatuses (machines), and claims 12-15 and 19 are directed to storage media (articles of manufacture). See Appeal Br. 23-26 (Claims App.). Thus, the pending claims are directed to recognized statutory categories of § 101. We next turn to Step 2A, Prong 1, of the Guidance to determine whether the claims recite a judicial exception. See Guidance at 54.

Step 2A, Prong 1: “recites a judicial exception”

The Examiner determines that claims 1, 3-5, and 8-19 are not patent eligible as they are directed to a judicial exception without reciting significantly more. Ans. 3-4, 7-11; Final Act. 10-14.
More particularly, the Examiner determines the claims recite

the abstract idea of mental process of analyzing and making observation (i.e., prediction) based on the collected data (“applying, by the computer, graphical model inference using the graphical model to predict the non-observed true values for the sensors; and detecting, by the computer, anomalous sensor data provided by the network based at least in part on the predicted non-observed true values and the observed values”). . . . Looking at the steps of the claims, for each of the claims, data is simply being collected and analyzed using mathematical operations/correlations being performed on them. This is simply collecting, organizing information, and making an observation (i.e., abstract idea of mental process) based on the collected information. . . . each and every step can be performed mentally and with pen and paper.

Ans. 7-8, 10; see also Ans. 4, Final Act. 14.

Appellant argues claims 1, 3-5, and 8-19 together, referencing claim 1 as exemplary. See Appeal Br. 8-14. As a result, we select independent claim 1 as the representative claim and address Appellant’s arguments thereto. See 37 C.F.R. § 41.37(c)(1)(iv). Claims 3-5 and 8-19 stand or fall with claim 1. Id.

Appellant alleges the claims do not recite a mental process because:

[t]he human mind is not equipped to apply graphical model inference, apply graphical model inference using a graphical model, or apply graphical model inference to predict non-observed true values for sensors, as recited in claim 1 (as an example) and recited in the other claims. . . . the instant claims set forth a specific, concrete way to detect anomalous sensor data.

Appeal Br. 10-11.

Appellant’s arguments do not persuade us that claim 1 does not recite an abstract idea, and we concur with the Examiner’s conclusion that the claim recites an abstract idea. Ans. 4, 7-8, 10; Final Act. 10-14. In particular, we agree with the Examiner and find that particular portions of claim 1 recite elements that fall within the abstract idea grouping of mental processes. See id. Specifically, claim 1 sets forth a process performable in the human mind or with pen and paper, the process collecting and accessing information (the claimed “accessing . . . historical data for a network of a plurality of sensors”), analyzing the information (the claimed “constructing . . . a graphical model of the network, wherein the graphical model comprises, for each sensor of the plurality of sensors, a true value node associated with a non-observed true value for the sensor and an observed value node associated with an observed value for the sensor” where “constructing the graphical model comprises determining relationships among the true value nodes and the observed value nodes based on the historical data,” and “applying . . . graphical model inference using the graphical model to predict the non-observed true values for the sensors”), and providing results of the collection and analysis (the claimed “detecting . . . anomalous sensor data provided by the network based at least in part on the predicted non-observed true values and the observed values”). See generally Ans. 3-4, 7-10; Final Act. 10-14. Our reviewing court has concluded that mental processes include similar concepts of collecting, manipulating, and providing data.
See Intellectual Ventures I LLC v. Capital One Fin. Corp., 850 F.3d 1332, 1340 (Fed. Cir. 2017) (the Federal Circuit held “the concept of . . . collecting data, . . . recognizing certain data within the collected data set, and . . . storing that recognized data in a memory” ineligible); Electric Power Grp., LLC v. Alstom S.A., 830 F.3d 1350, 1353 (Fed. Cir. 2016) (merely selecting information, by content or source, for collection, analysis, and display does nothing significant to differentiate a process from ordinary mental processes); CyberSource Corp. v. Retail Decisions, Inc., 654 F.3d 1366, 1375 (Fed. Cir. 2011) (purely mental processes can be unpatentable, even when performed by a computer).

Although Appellant argues that “[t]he human mind is not equipped to apply graphical model inference, apply graphical model inference using a graphical model, or apply graphical model inference to predict non-observed true values for sensors,” Appellant does not explain why that might be so. See Appeal Br. 10. Claim 1 broadly recites a “graphical model inference” that is applied to a graphical model to “predict the non-observed true values for the sensors,” without providing any details regarding the complexity of the “graphical model inference” or of the performed prediction. The steps of claim 1 also do not require constructing a graphical model of “large sensor networks (e.g., a network that has ‘hundreds or even tens of million’ of sensors)” (as Appellant argues, see Appeal Br. 11). Rather, claim terms are to be given their broadest reasonable interpretation, as understood by those of ordinary skill in the art and taking into account whatever enlightenment may be had from the Specification. In re Morris, 127 F.3d 1048, 1054 (Fed. Cir. 1997). “In the patentability context, claims are to be given their broadest reasonable interpretations . . . limitations are not to be read into the claims from the specification.” In re Van Geuns, 988 F.2d 1181, 1184 (Fed. Cir. 1993) (citations omitted). Under its broadest reasonable interpretation, claim 1 requires a “graphical model” (constructed for “a network” of “a plurality of sensors,” which may include as few as two sensors) to which a generically claimed “graphical model inference” is applied to predict non-observed true values for, e.g., two sensors for which observed values (e.g., one for each sensor) are available. See Appeal Br. 23 (claim 1). As the Examiner finds, these claimed network modeling and analysis steps are manually performable by, e.g., pen and paper. Ans. 7-8, 10; Final Act. 3, 11-14. That is, although claim 1 provides that steps are performed “by a computer,” the underlying operations recited in the claim are acts that could be performed mentally and by pen and paper (without the use of a computing device) on existing data (historical data previously collected and made available). See Ans. 4, 7-8, 10-11; Final Act. 3, 11-14.

Thus, we agree with the Examiner that claim 1, and grouped claims 3-5 and 8-19, recite an abstract idea of a mental process. Having determined the claims recite an abstract idea (a mental process) identified in the Guidance, we turn to Step 2A, Prong 2, of the Guidance to determine whether the abstract idea is integrated into a practical application. See Guidance at 54-55.

Step 2A, Prong 2: “does not integrate that exception into a practical application”

The Examiner determines claim 1 does not recite additional elements that integrate the judicial exception into a practical application. Ans. 4-5, 11-12; Final Act. 3, 8, 13-15.
In particular, the Examiner determines

Claims do not integrate a practical application of the abstract idea in the claims. Claims merely recite generic and conventional computing elements of sensors, network, processor, and storage medium used as tools to implement the recited abstract idea. . . . the claimed invention is directed to a judicial exception (i.e. an abstract idea of mental process). . . . No post-solution activity is claimed, significant or insignificant, and thus there is no improvement whatsoever to any technology or computing device (i.e. no computer device realizes any improvements, if any were to be alleged, such as using improved accuracy of anomalous sensor readings to effectuate anything).

Ans. 4, 11-12; see also Final Act. 3, 13-14. We agree.

Under Revised Step 2A, Prong 2 of the Guidance, we recognize that claim 1 references additional elements such as “a computer” and “a network of a plurality of sensors.” The other independent claims (8 and 12) include additional elements such as “at least one processor,” “a memory,” and a “non-transitory machine readable storage medium.” Furthermore, our review of Appellant’s Specification finds that the terms “computer,” “processor,” “memory,” “non-transitory machine readable storage medium,” and “network of a plurality of sensors” are nominal. Appellant’s Specification indicates that the “computer,” “processor,” “memory,” “non-transitory machine readable storage medium,” and “network of a plurality of sensors” (see Spec. ¶¶ 11-13, 41-43) of claims 1, 8, and 12 do not recite specific types of additional elements or their operations. As a result, these additional elements are not enough to distinguish the steps of claim 1 (and the operations of claims 8 and 12) from the description of a mental or manually performable process. See SiRF Tech., 601 F.3d at 1319, 1333 (“In order for the addition of a machine to impose a meaningful limit on the scope of a claim, it must play a significant part in permitting the claimed method to be performed, rather than function solely as an obvious mechanism for permitting a solution to be achieved more quickly, i.e., through the utilization of a computer for performing calculations.”).

Appellant argues, however, that “the claims as a whole integrate[] the purported recited judicial exception into a practical application” providing “an improvement in the functioning of a computer, or an improvement to other technology or technical field” because “the technology [of the claims] pertains to sensor networks, and the claims recite improvements in the functioning in sensor networks in a meaningful way (e.g., ways to identifies[sic] anomalous sensors in network containing hundreds or even tens of millions of sensors).” Appeal Br. 11-12. Appellant further argues the claims are patent-eligible because “[they] are necessarily rooted in (and inextricably tied to) technology to monitor and detect problems with physical components, i.e., sensors,” and the use of Appellant’s described model “allows detection of anomalous sensor data, which may not be achievable using a threshold-based outlier detection method. . . . [and] allows a ‘global dependency structure . . . to predict sensor values for the network.’” Id. at 11 (citing Spec. ¶¶ 9-10).
Appellant’s arguments are not persuasive because claim 1 does not recite or require identifying “anomalous sensors in [a] network containing hundreds or even tens of millions of sensors,” as Appellant argues. See Appeal Br. 12. Claim 1 merely recites “detecting, by the computer, anomalous sensor data provided by” a “network of a plurality of sensors,” which does not require resolving complexities of data processing that may be needed in connection with “large sensor networks . . . [such as] a network that has ‘hundreds or even tens of million’ of sensors.” See id. at 11 (citing Spec. ¶ 9). Appellant submits the described model “allows detection of anomalous sensor data, which may not be achievable using a threshold-based outlier detection method” (see id. at 11 (citing Spec. ¶¶ 9-10)), but claim 1 does not recite the particular use of a global dependency structure-based network model for outlier sensor data detection as described in paragraphs 9 and 10 of the Specification. Instead, claim 1 recites “detecting, by the computer, anomalous sensor data provided by the network based at least in part on the predicted non-observed true values and the observed values,” which fails to capture how the claim would provide “an improvement in the functioning of a computer, or an improvement to other technology or technical field” (see id. at 11). Ans. 11-12; Final Act. 16. For example, it is not clear (and Appellant does not sufficiently explain) how the claimed detection of anomalous sensor data might “improve[] . . . the functioning in sensor networks” (see Appeal Br. 11-12) beyond a generic detection of sensor malfunction.

Although Appellant’s claimed method applies to (and employs) technology (i.e., “a computer” and “a network of a plurality of sensors”), claim 1 does not demonstrate a use of computing and technological elements that in combination perform functions that are not merely generic, such as, e.g., the claims in DDR. See DDR Holdings, LLC v. Hotels.com, L.P., 773 F.3d 1245, 1257-1258 (Fed. Cir. 2014) (“[T]he claimed solution is necessarily rooted in computer technology in order to overcome a problem specifically arising in the realm of computer networks” as “the claims . . . specify how interactions with the Internet are manipulated to yield a desired result - a result that overrides the routine and conventional sequence of events ordinarily triggered by the click of a hyperlink.”).

Appellant also analogizes the claims to those involved in SRI International, Inc. v. Cisco Systems, Inc., 930 F.3d 1295 (Fed. Cir. 2019), where the court concluded that a claim reciting the use of a plurality of network monitors to analyze specific network traffic data and to identify suspicious activity on the network constituted an improvement in computer network technology. See Appeal Br. 10. The Federal Circuit stated that “the claims here are not directed to using a computer as a tool - that is, automating a conventional idea on a computer. Rather, the representative claim improves the technical functioning of the computer and computer networks by reciting a specific technique for improving computer network security.” SRI International, 930 F.3d at 1304. We are not persuaded by Appellant’s argument. SRI International involved complex claims necessarily rooted in computer technology, which constituted an improvement rather than performance of known business practices on the Internet.
The claims of the instant application merely use the claimed computer as a tool for “predicting data acquired by a network of sensors based at least in part on a graphical model of the network.” Spec., Abstract; SRI International, 930 F.3d at 1303 (indicating that “[t]he specification bolsters our conclusion that the claims are directed to a technological solution to a technological problem,” particularly as “[t]he claims are directed to using a specific technique - using a plurality of network monitors that each analyze specific types of data on the network and integrating reports from the monitors - to solve a technological problem arising in computer networks: identifying hackers or potential intruders into the network.”). In contrast to SRI International, Appellant’s claim 1 does not preclude using the computer to draw a picture of two pairs of true and observed value nodes with a connection between them, add two traffic sensors on the connection between the nodes, and then use the reading on one sensor to predict the traffic on the other sensor.

Thus, we determine that claim 1, and grouped claims 3-5 and 8-19, do not recite “additional elements that integrate the judicial exception into a practical application,” and are directed to an abstract idea in the form of a mental process. Guidance at 52, 54; see also MPEP § 2106.05(a)-(c), (e)-(h). Therefore, we proceed to Step 2B, The Inventive Concept.

Alice/Mayo - Step 2 (Inventive Concept)
Step 2B identified in the Revised Guidance

Step 2B of the Alice two-step framework requires us to determine whether any element, or combination of elements, in the claim is sufficient to ensure that the claim amounts to significantly more than the judicial exception. Alice, 573 U.S. at 221; see also Guidance at 56. Appellant challenges the Examiner’s findings as to the second step of the Alice analysis on the basis that: (i) the “claims are replete with innovative elements pertaining to a way to construct a graphical model of a network of sensors in an innovative way that models each sensor as being a true value node and an observed value node” and “applying graphical model inference to predict the non-observed true values for the sensors”; and (ii) the rejection “fails to set forth any of the factual determinations that are set forth in the Berkheimer Memo” and “fails to show that these [claimed] computer functions are well-understood, routine or conventional.” Appeal Br. 12-14 (citing Berkheimer v. HP Inc., 881 F.3d 1360 (Fed. Cir. 2018); USPTO Memorandum, “Changes in Examination Procedure Pertaining to Subject Matter Eligibility, Recent Subject Matter Eligibility Decision (Berkheimer v. HP, Inc.),” published on April 19, 2018 (“Berkheimer Memo”)).

As recognized by the Guidance, an “inventive concept” under Alice step 2 can be evaluated based on whether an additional element or combination of elements: (1) “[a]dds a specific limitation or combination of limitations that are not well-understood, routine, conventional activity in the field, which is indicative that an inventive concept may be present;” or (2) “simply appends well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception, which is indicative that an inventive concept may not be present.” See Guidance at 56.
Following the Guidance, the Examiner finds that the claims do not contain any additional elements, individual or in combination, which amount to significantly more than the abstract idea. Final Act. 15-17; Ans. 12-14. In particular, the Examiner finds the claims

do not recite an improvement to another technology or technical field, an improvement to the functioning of any computer itself, or provide meaningful limitations beyond generally linking an abstract idea (collecting data from sensors) to a particular technological environment (a general purpose computer and/or environment of the user). . . . the additional elements are applied merely to carry out data processing . . . receiving and analyzing which fall under well-understood, routine, and conventional functions of generic computers - in our common day-to-day interactions.

Ans. 13-14.

We agree with the Examiner that the additional elements of claim 1 (and those of claims 8 and 12, reciting a “processor,” “memory,” and “non-transitory machine readable storage medium”), when considered individually and in an ordered combination, correspond to nothing more than generic and well-known components used to implement the abstract idea. See Ans. 12-14; Final Act. 15-17. We are also not persuaded that the network data processing solution recited in claim 1 is rooted in computer technology, or that the operations recited in claim 1 produce an improvement to the functioning of the computer. Appellant’s abstract idea of processing historical sensor data to detect anomalous data by operations performable by a person mentally or with pen and paper (without necessarily analyzing a large amount of data) is not rooted in computer technology; rather, claim 1 employs a computer to automate manually performable steps that analyze previously provided sensor data. Appellant also has not demonstrated the claimed generic computer is able to perform functions that are not merely generic, as, e.g., the claims in DDR. See Ans. 13-14; Final Act. 16-17; DDR Holdings, 773 F.3d at 1258.

The presence of “innovative elements pertaining to a way to construct a graphical model of a network of sensors in an innovative way that models each sensor as being a true value node and an observed value node and applying graphical model inference to predict the non-observed true values for the sensors” in the claims (see Appeal Br. 12) does not, by itself, confer patent eligibility under § 101. That is because even a novel and nonobvious claim directed to a purely-abstract idea is patent-ineligible. See Mayo Collaborative Servs., 566 U.S. at 89-91; see also Diehr, 450 U.S. at 188-89 (“[t]he ‘novelty’ of any element or steps in a process, or even of the process itself, is of no relevance in determining whether the subject matter of a claim falls within the § 101 categories of possibly patentable subject matter.”). As noted supra, Appellant also argues there is no evidence that “any of the purported additional elements are well-understood, routine or conventional” or that “applying a computer to construct a graphical model, as defined by the claims, and apply[ing] graphical model inference to predict non-observed true values for sensors is well-understood, routine or conventional activity” (see Appeal Br. 13-14).
However, Appellant’s Specification describes generic computing elements performing generic data processing, and Appellant’s claim 1 uses generic technology (“a computer”) to access and analyze historical network data. Final Act. 3, 16-17; Ans. 13-14; see Spec. ¶¶ 11-13, 41-43 (describing the employed hardware and computing elements in purely generic terms). Additionally, Appellant has not provided evidence that claim 1’s manually performable steps (of, e.g., accessing historical data, constructing a graphical network model, applying graphical model inference, and detecting anomalous sensor data based on the historical data) are unconventional. As noted supra, claim 1 does not require processing historical data from “large sensor networks” with “hundreds or even tens of million” of sensors (as argued at Appeal Br. 11), the claimed steps of constructing and analyzing the graphical model can be manually performed by, e.g., pen and paper, and Appellant has not explained why such manually performable operations would be non-conventional. “[T]he use of generic computer elements like a microprocessor or user interface” to perform conventional computer functions “do not alone transform an otherwise abstract idea into patent-eligible subject matter.” FairWarning IP, LLC v. Iatric Sys., Inc., 839 F.3d 1089, 1096 (Fed. Cir. 2016) (citing DDR Holdings, 773 F.3d at 1256); see also OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363 (Fed. Cir. 2015) (“relying on a computer to perform routine tasks more quickly or more accurately is insufficient to render a claim patent eligible”). Thus, implementing the abstract idea with the generic and well-known components recited in claims 1, 8, and 12 “fail[s] to transform that abstract idea into a patent-eligible invention.” Alice, 573 U.S. at 221. Therefore, we agree with the Examiner that claim 1, and grouped claims 8 and 12, do not provide significantly more than the abstract idea itself.

Accordingly, because claim 1, and grouped claims 3-5 and 8-19, are directed to the abstract idea of a mental process, and do not provide significantly more than the abstract idea itself, we agree with the Examiner that claims 1, 3-5, and 8-19 are ineligible for patenting. We therefore affirm the Examiner’s § 101 rejection of claims 1, 3-5, and 8-19.

Rejections under 35 U.S.C. § 102 and § 103

The Examiner, among other things, finds Premkumar discloses “constructing, by the computer, a graphical model of the network, wherein the graphical model comprises, for each sensor of the plurality of sensors, a true value node associated with a non-observed true value for the sensor and an observed value node associated with an observed value for the sensor” with the graphical model construction comprising “determining relationships among the true value nodes and the observed value nodes based on the historical data,” and “applying, by the computer, graphical model inference” to the graphical model to detect anomalous sensor data, as recited in claim 1. Final Act. 18-20; Ans. 15-19.
Particularly, the Examiner finds Premkumar’s description of “a telecommunication network (e.g. a base station subsystem, BSS) topology [that] may be modelled as a graphical model which can capture both the interactions/relationships between neighbouring NEs [(Network Elements)] and KPI [(key performance indicator)] level interactions” (see Premkumar ¶ 26) discloses the claimed construction of “a graphical model of the network.” Final Act. 18 (citing Premkumar ¶¶ 26-38, 42-54); Ans. 15-17. The Examiner finds Premkumar’s network topology model “comprises, for each sensor of the plurality of sensors, a true value node associated with a non-observed true value for the sensor and an observed value node associated with an observed value for the sensor” (as recited in claim 1) because Prem[6]

discloses modeling the network topology including NEs as a MRF [(Markov Random Field)] model wherein the nodes of the MRF model represent the variables, which may be hidden or observed represented by KPIs. In order to learn the network conditions and to predict alarms i.e., anomaly, wherein the values are based on the observed KPI data based on the nodes data (i.e., “observed value”), the route (i.e., edges[] data) and the expected KPI data (i.e., “true value”).

Ans. 16 (citing Premkumar ¶¶ 13-14, 26-38, 41-54, Abstract); see also Final Act. 18-19.

[6] The Examiner refers to Premkumar as “Prem.” See Ans. 14.

Because the Examiner’s rejection is an anticipation rejection, our analysis must take into account that “[a] claim is anticipated only if each and every element as set forth in the claim is found, either expressly or inherently described, in a single prior art reference.” Verdegaal Bros. v. Union Oil Co. of California, 814 F.2d 628, 631 (Fed. Cir. 1987). Applying this principle, we find that the Examiner has not adequately explained how Premkumar discloses constructing a graphical network model comprising “for each sensor . . . a true value node associated with a non-observed true value for the sensor and an observed value node associated with an observed value for the sensor . . . [and] determin[ed] relationships among the true value nodes and the observed value nodes,” as recited in claim 1. Accordingly, we are persuaded that the Examiner erred in finding that Premkumar discloses the claimed “constructing” and “applying” steps recited in claim 1. In particular, we are persuaded by Appellant’s argument that the Examiner’s rejection

fails to show . . . where Premkumar discloses a graphical model containing the alleged two nodes for each sensor, as set forth in claim 1. Instead, Premkumar merely mentions a “graphical model”, such as an MRF model . . . but fails to disclose or render obvious the expressly-defined two node model per sensor that is set forth in claim 1. The topology graph that is depicted in Fig. 1 of Premkumar cannot be considered the graphical model of claim 1, as the nodes are network elements, not sensor values. Moreover, the graph does not include true value nodes and observed value nodes; and a sensor is not associated with a pair of nodes of the graph. Fig. 5 of Premkumar cannot be considered the graphical model of claim 1 for similar reasons, as Fig. 5 merely depicts routes between NE network elements. Fig. 6 fails to disclose the claimed graphical model. Fig. 6 of Premkumar is “a schematic illustration of how values of several KPIs can be monitored for predicting a future alarm” (Premkumar, para. no. [0023]). The nodes of Fig. 6 pertain to different NEs (see, for example, para. no. [0048] of Premkumar).
Neither Fig. 6 nor the corresponding description disclose a graph with a true value node and an observed value node for each sensor, as set forth in claim 1.

Appeal Br. 16-17 (emphases added).

Appellant’s arguments are supported by the description in Premkumar, which provides that: (i) the network topology in Figure 1 assigns one node to one sensor (e.g., a node labeled 2 is assigned to a Mobile Switching Centre, a node labeled 3 assigned to a Media Gateway, a node labeled 4 assigned to a Base Station Controller, and a node labeled 7 assigned to a Network Operations Centre); (ii) the network topology in Figure 5 assigns one node to one sensor (e.g., a node labeled 2X to Network Element NE 2X, a node labeled 3A to Network Element NE 3A, etc.); and (iii) the diagram in Figure 6 is an illustration of known relationships between KPIs or various NEs. See Premkumar ¶¶ 27-28, 30, 43-44, 46-48, Figs. 1, 5, 6. None of the graphical models in Premkumar are a graphical network model comprising two nodes for each sensor, of which an observed value node is associated with an observed value for the sensor, and a true value node is associated with a non-observed true value that is to be predicted, as recited in claim 1. Appeal Br. 16-18. And, although Premkumar predicts sensor alarms (as noted by the Examiner, see Ans. 16), Premkumar does not perform this prediction by constructing a graphical network model comprising two nodes for each sensor/NE, with one node corresponding to an alarm that is to be predicted. Rather, Premkumar discloses “predicting one or more alarms at an NE 2, 3 or 4, based on the obtained KPI values from other NE(s) 2, 3 and 4 and route(s) 5 and 6” and “based on previous training, e.g. based on previous alarm triggering in view of the monitored KPIs of related NEs 2, 3 and 4 and routes 5 and 6.” See Premkumar ¶¶ 30, 46-48, 51-55.

We additionally note that the Examiner’s reference to “MRF and its inherent features” and an alleged “inherent” disclosure in Premkumar still does not explain how Premkumar discloses each element in the claimed “constructing” and “applying” steps recited in claim 1. To anticipate, a prior art reference must “disclose all elements of the claim within the four corners of the document, and it must disclose those elements arranged as in the claim.” Microsoft Corp. v. Biscotti, Inc., 878 F.3d 1052, 1068 (Fed. Cir. 2017) (internal quotation marks omitted); see also Richardson v. Suzuki Motor Co., 868 F.2d 1226, 1236 (Fed. Cir. 1989); Verdegaal Bros., 814 F.2d at 631. As discussed supra, although Premkumar outputs predictions, its alarm prediction technique differs from the claimed prediction of non-observed true sensor values for nodes in a graphical network model.

Accordingly, for essentially the same reasons argued by Appellant in the Brief, as further discussed above, we reverse the Examiner’s anticipation rejection of independent claim 1. We also do not sustain the Examiner’s anticipation rejection of independent claims 8 and 12 reciting limitations similar to (and rejected for the same reasons as) claim 1. We additionally do not sustain the Examiner’s anticipation rejection of claims 3-5, 9-11, 13, 16, and 19 depending from one of claims 1, 8, and 12.
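The structural distinction on which the anticipation analysis turns can be stated compactly. The following fragment is offered only as an illustration, not as a rendering of Premkumar or of the Specification; it contrasts a one-node-per-element topology graph, as the Board characterizes Premkumar’s Figures 1 and 5, with the two-nodes-per-sensor structure the Board reads claim 1 to require. The element labels follow the Board’s description of Premkumar’s figures, and the edge choices are hypothetical.

elements = ["NE 2", "NE 3", "NE 4"]

# Topology-style graph: a single node per network element.
topology_nodes = list(elements)
topology_edges = [("NE 2", "NE 3"), ("NE 3", "NE 4")]  # hypothetical routes

# Claim 1-style model: each sensor contributes a non-observed true value node
# and an observed value node, with relationships determined from historical data.
true_nodes = ["true(" + e + ")" for e in elements]
obs_nodes = ["obs(" + e + ")" for e in elements]
claim1_nodes = true_nodes + obs_nodes
claim1_edges = [("true(" + e + ")", "obs(" + e + ")") for e in elements]  # per-sensor pair
claim1_edges += [("true(NE 2)", "true(NE 3)"), ("true(NE 3)", "true(NE 4)")]  # learned links

print(len(topology_nodes), "nodes in the topology-style graph")   # 3
print(len(claim1_nodes), "nodes in the claim 1-style model")      # 6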
The Examiner does not identify how the teachings of Farruggia remedy the noted deficiencies of Premkumar, and we cannot sustain the obviousness rejection of dependent claims 17 and 18 for the same reasons.

CONCLUSION

The Examiner’s rejection of claims 1, 3-5, and 8-19 under 35 U.S.C. § 101 is AFFIRMED. The Examiner’s rejection of claims 1, 3-5, 8-13, 16, and 19 under 35 U.S.C. § 102(a)(1) is REVERSED. The Examiner’s rejection of claims 17 and 18 under 35 U.S.C. § 103 is REVERSED.

DECISION SUMMARY

In summary:

Claim(s) Rejected | 35 U.S.C. § | Reference(s)/Basis | Affirmed | Reversed
1, 3-5, 8-19 | 101 | Eligibility | 1, 3-5, 8-19 |
1, 3-5, 8-13, 16, 19 | 102(a)(1) | Premkumar | | 1, 3-5, 8-13, 16, 19
17, 18 | 103 | Premkumar, Farruggia | | 17, 18
Overall Outcome | | | 1, 3-5, 8-19 |

Because we have affirmed at least one ground of rejection with respect to each claim on appeal, the Examiner’s decision is affirmed. See 37 C.F.R. § 41.50(a)(1).

No time period for taking any subsequent action in connection with this appeal may be extended under 37 C.F.R. § 1.136(a). See 37 C.F.R. § 1.136(a)(1)(iv).

AFFIRMED