Klarna AB, Appeal No. 2020-005406 (P.T.A.B. Feb. 18, 2021)

UNITED STATES PATENT AND TRADEMARK OFFICE
UNITED STATES DEPARTMENT OF COMMERCE
United States Patent and Trademark Office
Address: COMMISSIONER FOR PATENTS, P.O. Box 1450, Alexandria, Virginia 22313-1450, www.uspto.gov

APPLICATION NO.: 14/918,169
FILING DATE: 10/20/2015
FIRST NAMED INVENTOR: Mikael Hussain
ATTORNEY DOCKET NO.: 0101560-001US0
CONFIRMATION NO.: 4881

22504 7590 02/18/2021
DAVIS WRIGHT TREMAINE, LLP/SEATTLE
IP DOCKETING DEPT. DAVIS WRIGHT TREMAINE LLP
920 FIFTH AVE., SUITE 3300
SEATTLE, WA 98104-1610

EXAMINER: KHATTAR, RAJESH
ART UNIT: 3693
NOTIFICATION DATE: 02/18/2021
DELIVERY MODE: ELECTRONIC

Please find below and/or attached an Office communication concerning this application or proceeding. The time period for reply, if any, is set in the attached communication. Notice of the Office communication was sent electronically on the above-indicated "Notification Date" to the following e-mail address(es): patentdocket@dwt.com

PTOL-90A (Rev. 04/07)

UNITED STATES PATENT AND TRADEMARK OFFICE
____________________
BEFORE THE PATENT TRIAL AND APPEAL BOARD
____________________
Ex parte MIKAEL HUSSAIN
__________________
Appeal 2020-005406
Application 14/918,169
Technology Center 3600
____________________
Before JOSEPH A. FISCHETTI, BIBHU R. MOHANTY, and JAMES P. CALVE, Administrative Patent Judges.
CALVE, Administrative Patent Judge.

DECISION ON APPEAL

STATEMENT OF THE CASE

Pursuant to 35 U.S.C. § 134(a), Appellant1 appeals from the decision of the Examiner to reject claims 1–3 and 5–21, which are all of the pending claims.2 We have jurisdiction under 35 U.S.C. § 6(b). We AFFIRM.

1 "Appellant" refers to "applicant" as defined in 37 C.F.R. § 1.42. Appellant identifies Klarna AB as the real party in interest. Appeal Br. 3.
2 Claim 4 is cancelled. Appeal Br. 18 (Claims App.).

CLAIMED SUBJECT MATTER

The claims relate to mitigating the risk of default by a consumer on a transaction conducted over the Internet by generating a fidelity score based on information collected about a consumer to reflect a projected likelihood of a consumer defaulting on payment. See Spec. ¶¶ 1, 13, 16–20. Claims 1, 5, and 13 are independent. Representative claim 1 recites:

1.
A computer-implemented method, comprising:
  under the control of one or more computer systems that execute instructions,
  providing executable instructions to a client computing device associated with a user that, as a result of being executed by the client computing device, causes the client computing device to:
    collect client data that includes:
      personally identifiable information about the user;
      an identifier associated with the client computing device; and
      a measurement captured by the client computing device associated the user interacting with the client computing device, the measurement including:
        an action performed by the user to an object displayed in a user interface of the client computing device;
        an identity of the object; and
        a time value indicating a time at which the action was performed to the object; and
    provide the client data to the one or more computer systems;
  obtaining stored transactional data associated with one or more previous transactions involving the user;
  obtaining verification data verifying that the personally identifiable information is accurate;
  transforming the stored transactional data, the verification data, and the client data that includes the measurement into a set of variable values usable as input into a machine learning algorithm that is trained to infer characteristics about the user from the set of variable values;
  obtaining, as a result of inputting the set of variable values into the machine learning algorithm, a fidelity score output by the machine learning algorithm; and
  based at least in part on the fidelity score and without obtaining additional information about the user from a third party:
    determining a payment or credit option to display in the user interface for a current transaction; and
    updating, contents of the user interface to provide the payment or credit option.

REJECTION

Claims 1–3 and 5–21 are rejected under 35 U.S.C. § 101 as directed to a judicial exception without significantly more.

ANALYSIS

Patent Eligibility of Claims 1–3 and 5–21

Appellant argues the claims as a group. See Appeal Br. 9–14. We select claim 1 as representative. See 37 C.F.R. § 41.37(c)(1)(iv).

The Examiner determines that claim 1 recites certain methods of organizing human activity of commercial or legal interactions by providing payment/credit options for a current transaction and that the determining step can be carried out using pen and paper as a mental process. Final Act. 3; Ans. 3–4. The Examiner determines that the abstract idea is applied on a generic computing device and machine learning algorithm recited at high levels of generality and used as tools that do not integrate the abstract idea into a practical application or provide significantly more. Final Act. 3–5; Ans. 5.

Principles of Law

Section 101 of the Patent Act states:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

35 U.S.C. § 101. This provision contains an implicit exception: "Laws of nature, natural phenomena, and abstract ideas are not patentable." Alice Corp. v. CLS Bank Int'l, 573 U.S. 208, 216 (2014). To distinguish patents that claim laws of nature, natural phenomena, and abstract ideas from those that claim patent-eligible applications, we first determine whether the claims are directed to a patent-ineligible concept.
Id. at 217. If they are, we consider the elements of each claim, individually and “as an ordered combination,” to determine if additional elements “‘transform the nature of the claim’ into a patent-eligible application” as an “inventive concept” sufficient to ensure the claims in practice amount to significantly more than a patent on the ineligible concept itself. Id. at 217–18. The USPTO has issued guidance about this framework. 2019 Revised Patent Subject Matter Eligibility Guidance, 84 Fed. Reg. 50 (Jan. 7, 2019) (“Revised Guidance”). Under the Revised Guidance, to determine whether a claim is “directed to” an abstract idea, we evaluate whether the claim recites: (1) any judicial exceptions, including certain groupings of abstract ideas listed in the Revised Guidance (i.e., mathematical concepts, certain methods of organizing human activities such as a fundamental economic practice, or mental processes); and (2) additional elements that integrate the judicial exception into a practical application (see MPEP §§ 2106.05(a)–(c), (e)–(h) (9th ed. Rev. 10.2019 June 2020) (“MPEP”)). Id. at 52–55. Appeal 2020-005406 Application 14/918,169 5 Only if a claim (1) recites a judicial exception and also (2) does not integrate that exception into a practical application, do we then consider whether the claim either (3) adds a specific limitation beyond a judicial exception that is not “well-understood, routine, conventional” in the field (see MPEP § 2106.05(d)) or (4) simply appends well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception. Id. at 56. Step 1: Are the Claims Within a Statutory Category? Claim 1 recites a method, which is a statutory category of invention under 35 U.S.C. § 101, namely, a process. Final Act. 2; see Appeal Br. 8. Step 2A, Prong One: Do the Claims Recite a Judicial Exception? We agree with the Examiner that claim 1 recites certain methods of organizing human activity to facilitate commercial or legal interactions by providing payment/credit options. Final Act. 3; Ans. 3–4. The Revised Guidance enumerates this concept as certain methods of organizing human activity––fundamental economic principles or practices (mitigating risk) and commercial interactions (legal interactions and sales activities). See Revised Guidance, 84 Fed. Reg. at 52. Some limitations recite mental processes. Id. The method addresses the needs of merchants and payment services to mitigate the risk of consumers defaulting on a transaction conducted over the Internet from a consumer’s computing device. Spec. ¶ 1. A merchant may contact a private credit bureau to obtain a consumer’s credit score during an online transaction, but this process is costly in terms of time and money. Id. So, the method collects data about a consumer to generate a fidelity score to indicate a likelihood of default on a transaction and to select a payment or a credit option to offer the consumer. See id. ¶¶ 11–17, 25–27, 48–50, 59–61. Appeal 2020-005406 Application 14/918,169 6 The first limitations recite data collection as (1) collecting a user’s personally identifiable information (PII), an identifier of a client computing device, and a measurement of the user interacting with the client computing device, (2) obtaining transaction data of previous user transactions; and (3) obtaining verification data of a user’s PII. Appeal Br. 16 (Claims App.). See Elec. Power Grp., LLC v. Alstom S.A., 830 F.3d 1350, 1355 (Fed. Cir. 
2016) (“[M]erely selecting information, by content or source, for collection, analysis, and display does nothing significant to differentiate a process from ordinary mental processes, whose implicit exclusion from § 101 undergirds the information-based category of abstract ideas.”); In re TLI Commc’ns LLC Patent Litig., 823 F.3d 607, 613 (Fed. Cir. 2016) (“[T]he claims . . . are simply directed to the abstract idea of classifying and storing digital images in an organized manner. . . . [W]e have applied the ‘abstract idea’ exception to encompass inventions pertaining to methods of organizing human activity.”); Content Extraction and Transmission LLC v. Wells Fargo Bank, N.A., 776 F.3d 1343, 1348 (Fed. Cir. 2014) (collecting data, recognizing certain data in the collected data, and storing recognized data recite mental steps humans perform such as banks reviewing checks, recognizing data in checks (e.g., amount, account number, account holder), and storing data); Berkheimer v. HP Inc., 881 F.3d 1360, 1367 (Fed. Cir. 2018) (“The parsing and comparing of [certain] claims . . . are similar to the collecting and recognizing of Content Extraction . . . and the classifying in an organized manner of TLI . . . .”); CyberSource Corp. v. Retail Decisions, Inc., 654 F.3d 1366, 1372–73 (Fed. Cir. 2011) (obtaining information about credit card transactions made from an Internet address and creating a map of credit card numbers to compare to transactions can be performed in the human mind). Appeal 2020-005406 Application 14/918,169 7 The Specification describes an embodiment in which a merchant’s checkout page 122 causes a user’s computing device to collect PII entered by a user (e.g., name, address, telephone number, email address, date of birth, etc.) and interactions of the user with web pages 120 of a merchant’s website. Spec. ¶ 20, Fig. 1. User interactions include mouse clicks, mouse and cursor movements on a page, and user inputs via input/output devices. Id. ¶ 22. User interactions also may be measured by image sensors for gaze detection, accelerometers, gyroscopes, microphones, global positioning system receivers, and other sensors of behavioral data. Id. ¶¶ 22, 23. The Specification’s high-level description of user measurement features using generic input/output devices and generic sensors to perform basic data capture confirms the abstract nature of these steps as recited in claim 1 at a high level of generality without any technical details of the collection steps. In Electric Power, the claims similarly collected measurements from data streams in real time, time stamped and synchronized from a power grid, including voltages, power, phase angles, oscillation modes, damping, and other measured power system data. See Elec. Power, 830 F.3d at 1351–52. Nonetheless, the court held that collecting such measured information, even when limited to particular content, did not take the claims out of the abstract realm of mental processes. Id. at 1353, 1355. Similarly, claims to collecting a user’s physiological heart rate data relating to arrhythmia events for analysis and display required only generic data analysis and therefore were abstract like those in Electric Power. See CardioNet, LLC v. InfoBionic, Inc., 816 F. App’x 471, 475 (Fed. Cir. 2020) (citing Elec. Power and Content Extraction). Here, claim 1 recites similar generic data collection steps of user PII and measured interaction data. 
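For illustration only, the kind of generic, high-level client-side measurement collection the Board describes, an action, the identity of the interface object acted upon, and a timestamp gathered through ordinary event handlers, can be sketched as follows. The ClientMeasurement shape, the recordMeasurement handler, and the event list are hypothetical editorial examples, not code from the Specification or the claimed implementation.

```typescript
// Hypothetical sketch of generic client-side interaction measurement.
// Not the claimed implementation; names and fields are illustrative only.
interface ClientMeasurement {
  action: string;    // detected action, e.g., "click" or "mouseover"
  objectId: string;  // identity of the displayed object acted upon
  timestamp: number; // time at which the action was performed
}

const measurements: ClientMeasurement[] = [];

function recordMeasurement(event: Event): void {
  const target = event.target as HTMLElement | null;
  measurements.push({
    action: event.type,
    objectId: target && target.id ? target.id : "unknown",
    timestamp: Date.now(),
  });
}

// Generic event hooks of the sort the Specification lists (e.g., click,
// double-click, mouse-over, submit), registered at the document level.
for (const type of ["click", "dblclick", "mouseover", "mouseout", "submit"]) {
  document.addEventListener(type, recordMeasurement, true);
}
```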
Appeal 2020-005406 Application 14/918,169 8 The “stored transactional data” is described as previous transactions by the user such as types and amounts of purchases and payment history. Spec. ¶¶ 13, 29. Verification data can be obtained from a third party credit bureau to verify a user’s PII such as name, address, and the like. Id. ¶ 39. Again, the Specification’s broad description of these limitations confirms the claim’s focus on these data collection steps as abstract mental processes. The final limitations analyze the collected data to generate “variable values” (scores) for the collected transactional data, verification data, and client data (PII and measured interaction data), and output a “fidelity score” by a machine learning algorithm without technical details of this process. The fidelity score is used to determine and provide payment or credit options customized to the user on a user interface. Appeal Br. 16–17 (Claims App.). The method mitigates the risk in a sales transaction by calculating a “fidelity score” for a user and recommending appropriate loan/credit options based on the user’s fidelity score. The Specification indicates the method replaces credit scores provided by a private credit bureau and mitigates the risk of default by the consumer. Spec. ¶ 1. Mitigating risk is a fundamental economic practice and an abstract idea when recited at such a high level of generality. Alice, 573 U.S. at 219 (claims to using a third party to mitigate settlement risk recite a fundamental economic practice long prevalent in our system of commerce); Credit Acceptance Corp. v. Westlake Servs., 859 F.3d 1044, 1054 (Fed. Cir. 2017) (claims to processing an application to finance a purchase of a product by calculating a user credit score recite a fundamental economic practice long prevalent in our system of commerce). The claimed credit scoring method also saves the cost of obtaining a credit assessment of a user from a third party credit service bureau. See Spec. ¶¶ 35, 46. Appeal 2020-005406 Application 14/918,169 9 In Clarilogic, similar claims to a method of certifying financial data recited an abstract idea. Clarilogic, Inc. v. FormFree Holdings Corp., 681 F. App’x 950, 954 (Fed. Cir. 2017). The method received a request for certified financial data, collected financial data, transformed the data into a desired format, and validated the data by applying an algorithm engine to generate a credit report. Id. at 952–54. The steps of gathering financial information of potential borrowers for analysis and displaying certain results of the collection and analysis recited an abstract idea as in Electric Power. Id. at 954. The algorithm engine was applied to the collected data in some unclaimed manner that did not make the claim patent eligible. Id. The court explained. Peculiar to this case is that the algorithm engine mentioned in the claim is not claimed, identified, or explained. To be sure, claiming an algorithm does not alone render subject matter patent eligible . . . . But a method for collection, analysis, and generation of information reports, where the claims are not limited to how the collected information is analyzed or reformed, is the height of abstraction. Id.; see Mortg. Grader, Inc. v. First Choice Loan Servs. Inc., 811 F.3d 1314, 1324 (Fed. Cir. 2016) (claims to the collection of information to generate a “credit grading” to facilitate anonymous loan shopping is an abstract idea). 
In Mortgage Grader, a borrower’s credit grading was calculated so lenders could provide loan pricing information based on the credit grading. Mortg. Grader, 811 F.3d at 1324. The steps of collecting data, calculating a credit grading, and providing loan pricing all could be performed by humans without a computer. Id. (citing CyberSource, 654 F.3d at 1377 (computation methods that can be performed in the human mind embody the basic tools of scientific and technological work free to all men and reserved to none)).

Here, claim 1 similarly uses a generic machine learning algorithm to output a fidelity score using “variable values” in some unspecified manner. The Specification indicates that a fidelity score may be generated by logistic regression, random forest, supervised learning algorithm, risk algorithm, regression model, or other classification algorithm. See Spec. ¶¶ 16, 47, 68. This description confirms the broad scope of claim 1. See Ericsson Inc. v. TCL Commc’n Tech. Holdings Ltd., 955 F.3d 1317, 1325 (Fed. Cir. 2020) (“[T]he specification may be ‘helpful in illuminating what a claim is directed to . . . [but] the specification must always yield to the claim language’ when identifying the ‘true focus of a claim.’”) (quoting ChargePoint, Inc. v. SemaConnect, Inc., 920 F.3d 759, 766 (Fed. Cir. 2019)); Synopsys, Inc. v. Mentor Graphics Corp., 839 F.3d 1138, 1149 (Fed. Cir. 2016) (“The § 101 inquiry must focus on the language of the Asserted Claims themselves.”). Using a fidelity score based on user information to offer customized payment or credit options recites a fundamental practice. See Intellectual Ventures I LLC v. Capital One Bank (USA), 792 F.3d 1363, 1369–70 (Fed. Cir. 2015) (citing Alice, 573 U.S. at 219); Bridge & Post, Inc. v. Verizon Commc’ns, Inc., 778 F. App’x 882, 887 (Fed. Cir. 2019) (providing tailored information to a user based on data provided by a user is a fundamental practice); see also Bozeman Fin. LLC v. Fed. Reserve Bank of Atlanta, 955 F.3d 971, 978 (Fed. Cir. 2020) (“Verifying financial documents to reduce transactional fraud is a fundamental business practice that, without more, is not eligible for patent protection.”); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1354–55 (Fed. Cir. 2014) (underwriting a transaction by creating a transaction performance guaranty for online transactions is a long-familiar commercial transaction for contractual relationships and an abstract idea).

Combining one abstract idea (certain methods of organizing human activity for a fundamental economic practice) with another abstract idea (mental processes) does not make an abstract idea non-abstract. See RecogniCorp, LLC v. Nintendo Co., 855 F.3d 1322, 1327 (Fed. Cir. 2017) (“Adding one abstract idea (math) to another abstract idea (encoding and decoding) does not render the claim non-abstract.”); Apple, Inc. v. Ameranth, Inc., 842 F.3d 1229, 1240 (Fed. Cir. 2016) (“An abstract idea can generally be described at different levels of abstraction.”); Synopsys, Inc. v. Mentor Graphics Corp., 839 F.3d 1138, 1151 (Fed. Cir. 2016) (“[A] claim for a new abstract idea is still an abstract idea.”). Outputting the results of the abstract data collection and analysis is part of the same abstract idea. See Elec.
Power, 830 F.3d at 1354 (“And we have recognized that merely presenting the results of abstract processes of collecting and analyzing information, without more (such as identifying a particular tool for presentation), is abstract as an ancillary part of such collection and analysis.”). Thus, presenting payment or credit options in a generic user interface does not make claim 1 non-abstract. See Appeal Br. 9. The Examiner’s citation of limitations reciting transactional data and payment and credit options for a current transaction (Final Act. 3; Ans. 3–4) together with the Specification’s description of these and other features also supports the Examiner’s determination that the focus of the claim involves a commercial or legal interaction (Final Act. 3). See Appeal Br. 10. Appellant asserts the claims are rooted in computer technology similar to claim 2 of Example 37 of the 2019 Subject Matter Eligibility Examples at https://www.uspto.gov/sites/default/files/ documents/101_examples_37to42 20190107.pdf (January 7, 2019) (2019 PEG Examples). Appeal Br. 10. Appeal 2020-005406 Application 14/918,169 12 Example 37 of the 2019 PEG Examples illustrates why claim 1 here is not patent eligible. Claim 2 of Example 37 is a method of rearranging icons on a graphical user interface (GUI) by receiving a user selection to organize each icon based on specific criteria of the amount of use of each icon and automatically moving the most used icons to a position on the GUI closest to the start icon based on the determined use. 2019 PEG Examples 3. Claim 1 here does not recite features of a GUI such as icons. Instead, it just displays credit and payment options without any technical details of that process. The steps of measuring an action performed by the user to an object displayed in a user interface, an identity of the object, and a time value at which the action is performed simply collect data of user interactions with a user interface at a high level of generality that the Specification confirms to involve measurements of generic activities using generic components with time stamps of the activities. See Spec. ¶¶ 22, 23. Measuring/collecting data from sensors in real time with time stamps, without more, recites the abstract idea identified above. Elec. Power, 830 F.3d at 1351–53. Collecting a persistent device identifier and historic and location-centric information to create a user profile to deliver customized information to the user recites a fundamental practice as discussed above. See Bridge and Post, 778 F. App’x at 886–87. Tracking a user’s location, navigation data, and the time of day the user navigates to a website to serve customized information is a fundamental practice. See Intellectual Ventures, 792 F.3d at 1369–70; see also In re Morsa, 809 F. App’x 913, 917 (Fed. Cir. 2020) (requesting user demographic and psychographic information to match to a specific advertiser is a fundamental economic practice of organizing human activity through customizing information). Appeal 2020-005406 Application 14/918,169 13 The Specification describes this process as follows: Executable code may be embedded in one or more of the set of web pages 120 to collect details (also referred to as a set of measurements) about user interactions with the webpages. For example, each of the set of web pages 120 may include JavaScript or other client-side executable code that keeps a record of certain actions performed by a user (also referred to herein as a set of measurements). 
The code may be provided by the merchant or may be embedded in the merchant web pages in an HTML inline frame (iframe), with the iframe source being provided by a payment service provider (which may be the same provider as the provider of system of the present disclosure). The record may be a set of measurements of actions performed by the user, and each measurement of the set of measurements may include information such as a user or session identifier for the particular user/session, a detected action (e.g., onClick, onDblClick, onMouseOver, onMouseOut, onMouseDown, onMouseUp, onMouseMove, and onSubmit event triggers, etc.) performed by the user using a user input device (e.g., mouse, trackball, touch screen, keyboard, light pen, game controller, fingerprint scanner, etc.), and a timestamp for the detected action. Spec. ¶ 22. As claimed, the measurement collects data based on user actions with an “object” displayed in a user interface. See Accenture Global Servs. GmbH v. Guidewire Software, Inc., 728 F.3d 1336, 1345 (Fed. Cir. 2013) (the complexity of implementing software or detail in the specification does not make a claim reciting only an abstract idea patent-eligible). In Internet Patents, furnishing icons on a webpage and a browser with Back and Forward navigation functions to display an online application that users filled with data recited the abstract idea of retaining information in the navigation of online forms with no details. Internet Patents Corp. v. Active Network, Inc., 790 F.3d 1343, 1348 (Fed. Cir. 2015) (noting maintaining the state is not described with any mechanism for accomplishing such a result). Appeal 2020-005406 Application 14/918,169 14 Similar claims to collecting, transmitting, analyzing, and storing data to detect fraudulent and/or invalid clicks for pay-per-click advertising based on the time between two requests by the same device recited an abstract idea. See Zuili v. Google LLC, 722 F. App’x 1027, 1030 (Fed. Cir. 2018). Appellant cites Example 39 of the 2019 PEG Examples as an example of why claim 1 here is not an abstract idea. Appeal Br. 10–11. Claim 1 of Example 39 recites a method of training a neural network for facial detection by collecting digital facial images and transforming the images to create a first training set used to train the neural network in a first stage followed by a second stage that trains the network with a second training set of images that were detected incorrectly as facial images during the first stage. 2019 PEG Examples 8–9. Here, claim 1 recites a machine learning algorithm that is trained to infer characteristics about a user from variable values generated from transactional data, verification data, and client data in an unspecified way without any technical details. The algorithm outputs a fidelity score. The Specification describes the algorithm as including a logistic regression, random forest similar to supervised models described in various applications incorporated in the Specification, a supervised learning algorithm, a neural network, a vector machine, or another classification algorithm. Spec. ¶¶ 16, 35, 47, 68. The Specification’s description confirms the abstract nature of this feature. A “machine learning algorithm” as such is not described in the Specification. The description of other concepts used to calculate a fidelity score confirms the abstract nature of the claimed machine learning algorithm and fidelity score. 
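For illustration only, a fidelity score of the generic, logistic-regression-style kind the Specification names can be sketched as follows. The feature names, weights, bias, and the scoreFidelity function are hypothetical editorial assumptions, not the claimed machine learning algorithm, its training, or values drawn from the Specification; a higher output would simply weigh toward more favorable payment or credit options, consistent with the risk weighing described above.

```typescript
// Hypothetical sketch of a logistic-regression-style fidelity score.
// The features, weights, and bias are illustrative assumptions, not the
// claimed machine learning algorithm or values from the Specification.
type VariableValues = Record<string, number>;

const weights: VariableValues = {
  priorFaithfulPayments: 1.2,  // prior transactions paid faithfully
  verificationMismatch: -2.0,  // PII could not be verified
  submitHoverSeconds: 0.3,     // time spent hovering over the submit button
};
const bias = -0.5;

// Map a weighted sum of variable values to a score in (0, 1).
function scoreFidelity(variables: VariableValues): number {
  let z = bias;
  for (const [name, value] of Object.entries(variables)) {
    z += (weights[name] ?? 0) * value;
  }
  return 1 / (1 + Math.exp(-z));
}

const fidelityScore = scoreFidelity({
  priorFaithfulPayments: 3,
  verificationMismatch: 0,
  submitHoverSeconds: 4,
});
console.log(fidelityScore.toFixed(3)); // prints roughly 0.987 for this input
```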
Even if the Specification provided more technical details, such details would not make claim 1 patent eligible where it recites only an abstract idea as discussed above. See Accenture, 728 F.3d at 1345.

The fidelity score can be a numeric value or range, but its calculation is not described in any technical detail, thus confirming its broad scope in claim 1 as a mental process. Spec. ¶¶ 56–63. A scoring engine is broadly described as collecting and processing various client data at a high level of generality. See id. ¶¶ 13, 20, 25–29. The broad description of the scoring function confirms that transforming transactional, verification, and client data into variable values encompasses mental processes as recited in claim 1. See Mortg. Grader, 811 F.3d at 1324 (claimed steps of applying for a loan and calculating a borrower’s credit grading to provide loan pricing can be performed by humans); Spec. ¶ 29 (prior transactions with faithful payments may weigh toward more favorable risk assessment; no prior transactions or unreliable transactions may weigh towards a less favorable risk assessment). Descriptions of analysis of other client data illustrate similar mental processes. Using the Internet browser that was preinstalled on a computer may suggest the user is less fickle than a user who uses the Google Chrome browser. Spec. ¶ 15. Hovering a cursor over a submit button for a threshold amount of time may indicate a user applies careful consideration before completing a transaction, suggesting that the user may be more reliable in making credit commitments than a user who clicks quickly on the submit button. Id. Gaze detection may indicate how long a user spent reading certain portions of the terms and conditions. Id. ¶ 23. Detecting background noise may indicate a user is transacting business in a noisy area (public bus or sporting event) and may not be giving a transaction sufficient consideration. Id. Similar mental processes are described for verification and behavioral data. Id. ¶¶ 39–41. Accordingly, we determine claim 1 recites the abstract idea identified above as do claims 2, 3, and 5–21, which fall with claim 1.

Step 2A, Prong Two: Integration into a Practical Application

We next consider whether claim 1 recites additional elements that integrate the abstract idea into a practical application. Revised Guidance, 84 Fed. Reg. at 54. We determine claim 1 lacks additional elements that improve a computer or other technology or implement the abstract idea in conjunction with a particular machine or manufacture that is integral to the claim. Nor does it include an additional element that transforms or reduces a particular article to a different state or thing or applies the abstract idea in a meaningful way beyond linking it to a particular technological environment. See Revised Guidance, 84 Fed. Reg. at 55; Final Act. 3–4; Ans. 4–5. Appellant argues that claim 1 recites a practical application by making a risk assessment of a user. Appeal Br. 11. Appellant argues that integration results from the ordered combination of collecting user measurements by a client computing device, transforming the measurements into variable values input into a machine learning algorithm trained to infer user characteristics, and outputting a fidelity score without additional information from a third party to determine a payment/credit option. Appeal Br. 11.
These features recite the abstract idea identified under Prong One and therefore cannot be additional elements that integrate that abstract idea into a practical application. Revised Guidance, 84 Fed. Reg. at 55 n.24 (“USPTO guidance uses the term ‘additional elements’ to refer to claim features, limitations, and/or steps that are recited in the claim beyond the identified judicial exception.”); Alice, 573 U.S. at 221 (a claim to an abstract idea must include “additional features” to ensure it does not monopolize the abstract idea) (citing Mayo Collaborative Servs. v. Prometheus Labs., Inc., 566 U.S. 66, 77 (2012)).

Outputting results of data collection and analysis is an ancillary part of the abstract idea. Elec. Power, 830 F.3d at 1354. Claim 1 does not recite features of a user interface as does Example 37 of the 2019 PEG Examples. See Appeal Br. 12. Example 37’s claims organize icons automatically on a GUI. 2019 PEG Examples 2–3. Such features are not claimed here. Core Wireless illustrates why claim 1 is not integrated into a practical application. Core Wireless Licensing S.A.R.L. v. LG Elecs., Inc., 880 F.3d 1356 (Fed. Cir. 2018). The claimed user interface displayed a summary window of un-launched applications reached from a menu without launching the applications. Id. at 1362–63. This display of application data improved prior art interfaces that displayed an index and required users to navigate to an application and open it to access data. Id. at 1363. Here, claim 1 recites no similar features. It displays the results of abstract processes on a generic interface that is not a particular machine that is integral to the claim. The Specification describes algorithms for generating fidelity scores without details of training them to infer user characteristics. Spec. ¶¶ 16, 35, 47, 68. As claimed, this feature is a mental process that humans perform to score credit/loan applications. See Mortg. Grader, 811 F.3d at 1324. The method “may save 1.3 seconds or more of transaction processing time.” Spec. ¶ 46; Appeal Br. 11–12. Any such improvements in speed or network latency by eliminating the need to obtain a credit score from a credit bureau result from generic computer automation rather than a technological improvement to computers or other technology. See Credit Acceptance, 859 F.3d at 1055; see Spec. ¶¶ 1, 35, 46, 59. Accordingly, we determine that claim 1 lacks additional elements that are sufficient to integrate the abstract idea into a practical application.

Step 2B: Do the Claims Include an Inventive Concept?

We next consider whether claim 1 recites any additional elements, individually or as an ordered combination, that provide an inventive concept. Alice, 573 U.S. at 217–18. This step is satisfied when claim limitations involve more than well-understood, routine, and conventional activities known in industry. Berkheimer, 881 F.3d at 1367; see Revised Guidance, 84 Fed. Reg. at 56 (Step 2B looks for a limitation beyond a judicial exception that is not “well-understood, routine, conventional” activity in the field). The “client computing device” is any type of computing device with processors and memory and capable of receiving input from a user and communicating with components of the risk engine 234 through a network. Spec. ¶¶ 33, 73. The machine learning algorithm is a generic algorithm. Id. ¶¶ 16, 35, 47, 68. These elements do not provide an inventive concept.
See BSG Tech LLC v. BuySeasons, Inc., 899 F.3d 1281, 1290–91 (Fed. Cir. 2018) (“If a claim’s only ‘inventive concept’ is the application of an abstract idea using conventional and well-understood techniques, the claim has not been transformed into a patent-eligible application of an abstract idea.”); SAP Am., Inc. v. InvestPic, LLC, 898 F.3d 1161, 1169 (Fed. Cir. 2018) (“[T]his court has ruled many times that ‘such invocations of computers . . . that are not even arguably inventive are insufficient to pass the test of an inventive concept in the application of an abstract idea.’”) (citation omitted); Bancorp Servs., LLC v. Sun Life Assurance Co. of Canada (U.S.), 687 F.3d 1266, 1280 (Fed. Cir. 2012) (“The district court correctly held that without the computer limitations nothing remains in the claims but the abstract idea of managing a stable value protected life insurance policy by performing calculations and manipulating the results.”); Final Act. 4; Ans. 5–6.

As an ordered combination, the limitations recite no more than they do individually. See Elec. Power, 830 F.3d at 1355–56 (claims requiring nothing besides off-the-shelf, conventional computer, network, and display technology for gathering, sending, and presenting desired information do not state an inventive concept for application of the information-based abstract idea); Ans. 6 (machine learning algorithm and computer send and receive data as is known in the art). The generic description of these elements in the Specification indicates they are well-known enough in the art that further details are not required to describe them and their generic functions. Accordingly, we determine claim 1 lacks an inventive concept to transform the abstract idea into patent-eligible subject matter. We sustain the rejection of claim 1 and claims 2, 3, and 5–21, which fall therewith.

CONCLUSION

In summary:

Claims Rejected    35 U.S.C. §    Reference(s)/Basis    Affirmed     Reversed
1–3, 5–21          101            Eligibility           1–3, 5–21

No time period for taking any subsequent action in connection with this appeal may be extended under 37 C.F.R. § 1.136(a). See 37 C.F.R. § 1.136(a)(1)(iv).

AFFIRMED