Ex parte Tuffs et al., Application No. 13/796,923 (P.T.A.B. May 19, 2017)

UNITED STATES PATENT AND TRADEMARK OFFICE
UNITED STATES DEPARTMENT OF COMMERCE
Address: COMMISSIONER FOR PATENTS, P.O. Box 1450, Alexandria, Virginia 22313-1450
www.uspto.gov

APPLICATION NO.: 13/796,923
FILING DATE: 03/12/2013
FIRST NAMED INVENTOR: Philip Simon Tuffs
ATTORNEY DOCKET NO.: NETF/0079US
CONFIRMATION NO.: 3211

108911 7590 05/23/2017
Artegis Law Group, LLP / Netflix
7710 Cherry Park Drive, Suite T #104
Houston, TX 77095

EXAMINER: VU, TUAN A.
ART UNIT: 2193
NOTIFICATION DATE: 05/23/2017
DELIVERY MODE: ELECTRONIC

Please find below and/or attached an Office communication concerning this application or proceeding. The time period for reply, if any, is set in the attached communication. Notice of the Office communication was sent electronically on the above-indicated "Notification Date" to the following e-mail address(es): algdocketing@artegislaw.com, kcruz@artegislaw.com, rsmith@artegislaw.com.

PTOL-90A (Rev. 04/07)

UNITED STATES PATENT AND TRADEMARK OFFICE
BEFORE THE PATENT TRIAL AND APPEAL BOARD

Ex parte PHILIP SIMON TUFFS, ROY RAPOPORT, and ARIEL TSEITLIN

Appeal 2016-007703
Application 13/796,923
Technology Center 2100

Before DEBRA K. STEPHENS, NABEEL U. KHAN, and MICHAEL J. ENGLE, Administrative Patent Judges.

ENGLE, Administrative Patent Judge.

DECISION ON APPEAL

Appellants appeal under 35 U.S.C. § 134(a) from a final rejection of claims 1–24. We have jurisdiction under 35 U.S.C. § 6(b). We AFFIRM.

Technology

The application relates to "evaluating a software application relative to another software application that performs the identical function(s)." Spec. ¶ 11.

Illustrative Claim

Claim 1 is illustrative and reproduced below with the limitations at issue emphasized:

1. A method for evaluating a second version of software, comprising:
selectively routing incoming requests to a plurality of baseline instances executing on a plurality of hardware platforms and a plurality of canary instances also executing on the plurality of hardware platforms, wherein each of the plurality of baseline instances comprises a first version of the software, and each of the plurality of canary instances comprises the second version of the software;
collecting performance data for a plurality of performance metrics associated with the plurality of baseline instances executing on the plurality of hardware platforms;
collecting performance data for the plurality of performance metrics associated with the plurality of canary instances executing on the plurality of hardware platforms;
for each of the plurality of performance metrics:
calculating an aggregate baseline performance metric for the performance metric based on the performance data associated with the plurality of baseline instances; and
calculating a plurality of performance values by comparing the aggregate baseline performance metric for the performance metric to the performance data associated with each canary instance included in the plurality of canary instances; and
calculating a final measure of performance for the second version of software based on the plurality of performance values.

1 Appellants state the real party in interest is Netflix, Inc. App. Br. 3.
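Purely as an editorial illustration, the method of claim 1 maps onto the short sketch below. Nothing in the claim or the record prescribes any particular arithmetic; the use of means and ratios, like every identifier in the sketch, is an assumed placeholder rather than the claimed implementation.

    # Illustrative sketch only; the claim does not specify this arithmetic.
    from statistics import mean

    def evaluate_second_version(baseline_data, canary_data):
        """baseline_data / canary_data: {metric: {instance_id: [samples]}}."""
        performance_values = []
        for metric, baseline_instances in baseline_data.items():
            # "calculating an aggregate baseline performance metric ... based
            # on the performance data associated with the plurality of
            # baseline instances" (assumed here: a mean of per-instance means)
            aggregate_baseline = mean(
                mean(samples) for samples in baseline_instances.values())
            # "calculating a plurality of performance values by comparing the
            # aggregate baseline performance metric ... to the performance
            # data associated with each canary instance" (assumed: a ratio)
            for samples in canary_data[metric].values():
                performance_values.append(mean(samples) / aggregate_baseline)
        # "calculating a final measure of performance for the second version
        # of software based on the plurality of performance values"
        return mean(performance_values)

On this reading, each metric yields one performance value per canary instance, and the final measure summarizes those values across all metrics and canary instances.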
Rejections

Claims 1–24 stand rejected under 35 U.S.C. § 103(a) as obvious over one or more of Ravi et al. (US 2014/0068053 A1; Mar. 6, 2014); Gardner et al. (US 2012/0017165 A1; Jan. 19, 2012); David Mytton, Canary concept for system updates (Nov. 15, 2012), https://blog.serverdensity.com/canary-concept-for-system-updates; Todd Hoff, Netflix: Developing, Deploying, and Supporting Software According to the Way of the Cloud (Dec. 12, 2011, 9:05 AM), http://highscalability.com/blog/2011/12/12/netflix-developing-deploying-and-supporting-software-accordi.html; Maurer (US 2012/0060146 A1; Mar. 8, 2012); Dupont et al. (US 2012/0137367 A1; May 31, 2012); Lientz (US 2013/0339519 A1; Dec. 19, 2013); Maiocco et al. (US 2010/0229096 A1; Sept. 9, 2010); Corbett (US 2011/0208808 A1; Aug. 25, 2011); Erasmus et al. (US 2012/0221407 A1; Aug. 30, 2012); and Kurapati et al. (US 2003/0237094 A1; Dec. 25, 2003). Final Act. 2–24.

Claims 3, 11, and 19 stand rejected on the grounds of non-statutory obviousness-type double patenting over claims 2, 10, and 18 of copending Application No. 13/926,797 ("the '797 application") in view of Ravi, Dupont, and Hoff. Final Act. 24–28.

ISSUES

Did the Examiner err in finding the combination of cited references teaches or suggests, "for each of the plurality of performance metrics," both "calculating an aggregate baseline performance metric for the performance metric based on the performance data associated with the plurality of baseline instances" and "calculating a plurality of performance values by comparing the aggregate baseline performance metric for the performance metric to the performance data associated with each canary instance included in the plurality of canary instances," as recited in claim 1?

ANALYSIS

Obviousness

Claim 1 recites "for each of the plurality of performance metrics: calculating an aggregate baseline performance metric for the performance metric based on the performance data associated with the plurality of baseline instances." Appellants contend none of the cited references teaches or suggests this limitation because (A) Ravi teaches collecting results "for each instance" rather than a single "aggregate" result (e.g., an average or sum of the individual results) and (B) Gardner, Dupont, Erasmus, and Kurapati teach aggregating results but fail to teach aggregating a performance metric "across multiple instances of the same application." App. Br. 11–14.

We agree with the Examiner (Ans. 34–37), however, that "one cannot show non-obviousness by attacking references individually where, as here, the rejections are based on combinations of references." In re Keller, 642 F.2d 413, 426 (CCPA 1981). Here, Appellants concede Ravi teaches "to determine baseline performance metrics associated with a first computing architecture and then comparing the baseline performance metrics to the performance metrics of other proposed architectures." App. Br. 11; Ans. 36; Ravi ¶¶ 8, 41, Fig. 3. Even if Ravi's baseline was a single result rather than an aggregate (e.g., an average),2 we agree with the Examiner that at least Kurapati or Dupont teaches calculating an aggregated value such as a normalized average and comparing individual values to that aggregated value. Ans. 31–32, 14–15; Kurapati ¶ 19, Abstract; Dupont ¶¶ 972, 164.

2 Although not relied upon for our decision, we note Ravi does teach "using a comparison against norms." Ravi ¶¶ 71, 47 ("can be compared against expectations or norms"); see also Ans. 14 (citing Ravi ¶ 71).
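As an editorial illustration of what comparing individual values against an aggregated value such as a normalized average can look like, consider the generic sketch below. It is not drawn from Kurapati, Dupont, or any other cited reference; the z-score formulation and all names are assumptions.

    # Generic illustration of judging individual values against an aggregate
    # norm; not taken from any cited reference.
    from statistics import mean, pstdev

    def deviations_from_norm(values):
        """Express each value as its deviation from the group average,
        normalized by the group's standard deviation (a z-score)."""
        avg = mean(values)
        spread = pstdev(values) or 1.0  # guard against zero spread
        return [(v - avg) / spread for v in values]

    # Example: individual latency results (ms) compared to their aggregate
    # norm; the 140 ms result stands out with the largest deviation.
    print(deviations_from_norm([102, 98, 100, 140]))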
Appellants have not sufficiently addressed the Examiner's proposed combination or explained why an ordinarily skilled artisan would not have found it obvious to use Kurapati's or Dupont's aggregated value as the baseline for Ravi's comparisons. See Ans. 15–17.

Claim 1 also recites "for each of the plurality of performance metrics: . . . calculating a plurality of performance values by comparing the aggregate baseline performance metric for the performance metric to the performance data associated with each canary instance included in the plurality of canary instances." Appellants dispute this limitation based on the same arguments above, specifically "because none of the cited references discloses the generation of aggregate baseline performance metrics, . . . none of the references can logically disclose calculating performance values by comparing performance data for each canary instance to an aggregate baseline performance metric for any given performance metric." App. Br. 14. However, as discussed above, we are not persuaded by Appellants' argument regarding the obviousness of an aggregate baseline.

Accordingly, we sustain the Examiner's obviousness rejection of claim 1, and of claims 2–24, which Appellants argue are patentable for similar reasons. See App. Br. 14–15; 37 C.F.R. § 41.37(c)(1)(iv).3

3 Although not relied upon for our decision, we note, contrary to Appellants' assertion that Ravi only tests the same application on different hardware (App. Br. 11), Ravi also teaches comparing "different applications" or "similar applications." Ravi ¶¶ 45 ("applications performing similar functions that are embodied in different applications (e.g., possibly from different vendors) can be compared"), 62 ("compared against empirical data that it has collected for similar applications"), 67 (comparing an application "ported" to a different operating system).

Double Patenting

Appellants have not presented any arguments addressing the Examiner's provisional double patenting rejection. Arguments that Appellants could have made but chose not to are deemed waived. 37 C.F.R. § 41.37(c)(1)(iv). Accordingly, we sustain the Examiner's double patenting rejection of claims 3, 11, and 19 pro forma.4

4 We note that the '797 application, on which the double patenting rejection was based, subsequently issued as U.S. Patent No. 9,225,621 on December 29, 2015, which was after the Final Rejection but before the Appeal Brief was filed.

DECISION

For the reasons above, we affirm the Examiner's decision rejecting claims 1–24 as obvious and claims 3, 11, and 19 for double patenting.

No time period for taking any subsequent action in connection with this appeal may be extended under 37 C.F.R. § 1.136(a). See 37 C.F.R. § 41.50(f).

AFFIRMED