T-Mobile USA, Inc., Appeal No. 2020-002317 (P.T.A.B. Jul. 27, 2021)

UNITED STATES PATENT AND TRADEMARK OFFICE
UNITED STATES DEPARTMENT OF COMMERCE
United States Patent and Trademark Office
Address: COMMISSIONER FOR PATENTS, P.O. Box 1450, Alexandria, Virginia 22313-1450, www.uspto.gov

APPLICATION NO.: 15/149,052
FILING DATE: 05/06/2016
FIRST NAMED INVENTOR: Michael John Prochniak
ATTORNEY DOCKET NO.: TMBLE.095A
CONFIRMATION NO.: 2995

118591 7590 07/27/2021
KNOBBE, MARTENS, OLSON & BEAR, LLP
T-Mobile USA, Inc. (TMBLE)
2040 Main Street, Fourteenth Floor
Irvine, CA 92614

EXAMINER: DU, HUNG K
ART UNIT: 2647
NOTIFICATION DATE: 07/27/2021
DELIVERY MODE: ELECTRONIC

Please find below and/or attached an Office communication concerning this application or proceeding. The time period for reply, if any, is set in the attached communication. Notice of the Office communication was sent electronically on the above-indicated "Notification Date" to the following e-mail address(es): efiling@knobbe.com; jayna.cartee@knobbe.com

PTOL-90A (Rev. 04/07)

UNITED STATES PATENT AND TRADEMARK OFFICE
____________________
BEFORE THE PATENT TRIAL AND APPEAL BOARD
____________________
Ex parte MICHAEL JOHN PROCHNIAK, BRANDON KANNIER, MARTIN JAMES TALL, and MATTHEW BRYAN TAUNTON
____________________
Appeal 2020-002317
Application 15/149,052
Technology Center 2600
____________________

Before JOHN A. EVANS, JUSTIN BUSCH, and JOHN P. PINKERTON, Administrative Patent Judges.

PINKERTON, Administrative Patent Judge.

DECISION ON APPEAL

Appellant1 appeals under 35 U.S.C. § 134(a) from the Examiner's Non-Final Rejection of claims 1–23. Claims 24 and 25 are canceled. Because the claims on appeal have been twice rejected, we have jurisdiction pursuant to 35 U.S.C. §§ 6 and 134(a). Ex parte Lemoine, 46 USPQ2d 1420, 1423 (BPAI 1994) (precedential). We AFFIRM.

1 We use the word "Appellant" to refer to "applicant" as defined in 37 C.F.R. § 1.42.
Appellant identifies T-Mobile USA, Inc. as the real party in interest. Appeal Br. 3.

STATEMENT OF THE CASE

Introduction

Appellant generally describes the disclosed and claimed invention as relating "to the management of network infrastructure equipment . . . utilizing intermediate performance thresholds and trend performance thresholds to characterize functionality of the infrastructure equipment according to collected performance information." Spec. ¶ 9.2

Claims 1, 6, and 17 are independent. Claim 1, which is reproduced below, is representative of the subject matter on appeal:

1. A method for managing infrastructure equipment comprising:
obtaining hardware performance information for an individual infrastructure component, wherein the hardware performance information corresponds to data collected during operation of the individual infrastructure component in a wireless network;
storing the hardware performance information;
identifying one or more intermediate hardware performance thresholds corresponding to performance information collected from the individual infrastructure component;
characterizing a functionality of the individual infrastructure component based on applying the identified one or more intermediate hardware performance thresholds;
generating a notification based on a characterization of the functionality as likely to fail;

2 Our Decision refers to the Non-Final Office Action mailed July 17, 2018 ("Non-Final Act."); Appellant's Appeal Brief filed June 28, 2019 ("Appeal Br.") and Reply Brief filed Jan. 30, 2020 ("Reply Br."); the Examiner's Answer mailed Dec. 2, 2019 ("Ans."); and the Specification filed May 6, 2016, as amended on September 20, 2017 ("Spec.").
identifying one or more trend hardware performance thresholds corresponding to a set of performance information collected from the individual infrastructure component, wherein the set of performance information includes at least one historical performance information;
characterizing a second functionality of the individual infrastructure component based on applying the identified one or more trend hardware performance thresholds; and
generating a notification based on a characterization of the second functionality as likely to fail.

Appeal Br. 18 (Claims App.).

Rejections on Appeal

The Examiner rejects claims 1–23 under the following grounds:

Claims | 35 U.S.C. § | Reference(s) | Citation
1, 2, 4, 6, 7, 9, 12–15, 17–21, 23–25 | 103 | Tektumanidze,3 Henderson4 | Non-Final Act. 3–13
5, 16, 22 | 103 | Tektumanidze, Henderson, Foster5 | Non-Final Act. 13–15
8, 10 | 103 | Tektumanidze, Henderson, Xing6 | Non-Final Act. 15–16
3, 11 | 103 | Tektumanidze, Henderson, Lee7 | Non-Final Act. 16–17

3 US 9,585,036 B1 (issued Feb. 28, 2017, "Tektumanidze").
4 US 2014/0073303 A1 (published Mar. 13, 2014, "Henderson").
5 US 2014/0355484 A1 (published Dec. 4, 2014, "Foster").
6 US 2013/0090126 A1 (published Apr. 11, 2013, "Xing").
7 US 2015/0028816 A1 (published Jan. 29, 2015, "Lee").

ANALYSIS

We have reviewed the Examiner's rejections of claims 1–23 in light of Appellant's arguments in the Appeal Brief and the Reply Brief. See Appeal Br. 8–17; Reply Br. 1–5. Any other arguments Appellant could have made, but chose not to make, are waived. See 37 C.F.R. § 41.37(c)(1)(iv) (2018). For the reasons discussed below, Appellant's arguments are not persuasive of error by the Examiner. Unless otherwise indicated, we agree with, and adopt as our own, the Examiner's findings of fact and conclusions as set forth in the Non-Final Office Action from which this appeal is taken and in the Answer. Non-Final Act. 2–17; Ans. 3–13.
We provide the following explanation for emphasis.

Appellant argues independent claim 1 and states that the rejections of claims 2–23 are improper for at least the reasons set forth for claim 1. See Appeal Br. 8–17. Accordingly, we select claim 1 as representative, and the remaining claims stand or fall with claim 1. See 37 C.F.R. § 41.37(c)(1)(iv) (2018).

The Examiner rejects claim 1 under 35 U.S.C. § 103 for obviousness based on Tektumanidze and Henderson. Non-Final Act. 3–6; see also Ans. 3–6. Appellant argues that the cited prior art references, individually or in combination, fail to teach or suggest "characterizing a functionality of the individual infrastructure component based on applying the identified one or more intermediate hardware performance thresholds." Appeal Br. 9–13 (emphasis added); Reply Br. 2–5. We consider this argument below, but we are not persuaded that the Examiner erred.

The Examiner finds Tektumanidze teaches or suggests the "characterizing" step because it describes creating a problem identification grid by applying threshold tests to performance parameter data for a mobile wireless network component, such as a base station. See Non-Final Act. 2–3 (citing Tektumanidze 7:45–48), 4 (citing Tektumanidze 10:50–11:9); Ans. 3–5 (additionally citing Tektumanidze 5:13–19, 8:45–9:42, 9:60–10:10, 10:18–35, 10:37–49, 11:9–11:26, Figs. 2–3).

Tektumanidze relates to identifying the impact of problematic mobile wireless network components to facilitate network maintenance activities. Tektumanidze 2:45–51. Figure 3 of Tektumanidze is reproduced below. Figure 3 of Tektumanidze is a flowchart summarizing a set of operations for identifying network performance issues to be addressed by network technicians. Id. at 3:47–54, 9:44–10:49, Fig. 3; see also id. at Figs. 2, 4.
At step 300, the system acquires and stores a set of performance-parameter data points for one or more mobile wireless network components in a geographical area covering a plurality of cell sites. Id. at 9:60–66, Fig. 3 (step 300). The set of performance parameter data points can have one or more data types, including CPU usage, dropped call percentage, and key performance indicators (KPI) such as Reference Signal Received Quality (RSRQ), Reference Signal Received Power (RSRP), and radio signal strength. Id. at 9:62–63, Fig. 2 (items 200, 210, 290). And each data point can be assigned a particular geographical location, such as (1) the location of the base station associated with the one or more mobile wireless network components or (2) the location of the mobile wireless device receiving or generating the radio signal. Id. at 9:66–10:17.

At step 310, by applying one or more threshold tests to the acquired and stored data points, the system's problem impact server can create a problem impact grid for the geographical area of interest. Id. at 10:18–21, 10:29–36, 10:50–11:9, Figs. 3 (step 310), 4 (step 400), 5. The problem impact grid can identify specific areas (squares in a grid) within the geographic area of interest and the associated degree of impact of the particular problem (for example, a CPU usage or radio signal strength value). See id. at 10:29–36, Fig. 5. Additional mathematical operations can be performed on the problem impact grid (and revisions thereof) to determine the extent and severity of a problem area and to identify additional problem areas. Id. at 11:39–50, 11:57–67, 12:11–18, Figs. 4 (steps 410, 420), 6–8. And, at step 320, based upon the problem impact grid, the problem impact server can generate a listing representing a prioritized set of specific network component performance issues arranged in order of highest to lowest impact based upon the impact score values assigned to squares of the problem impact grid. Id.
at 10:37–49.

Appellant argues the Examiner erred in finding Tektumanidze teaches or suggests the "characterizing" step for two main reasons. See Reply Br. 2–5; Appeal Br. 9–13.

First, Appellant argues the Examiner's rationale—that identifying a performance parameter problem in Tektumanidze indicates an infrastructure component problem—"erroneously assumes that identifying a performance issue in a grid square necessarily identifies a problem with an infrastructure component in the grid square." Reply Br. 2; Appeal Br. 9–13. In particular, Appellant asserts that "[a]n indication of a dropped call problem (or other performance issue detected by the system taught in Tektumanidze) may have any number of root causes," and "it is not appropriate to infer, as the Examiner does, that any performance problem detected by the system taught in Tektumanidze is caused by an issue with the functionality of the infrastructure equipment in the geographic area where the problem is detected." Appeal Br. 10. Appellant also asserts that Tektumanidze does not teach or suggest applying "intermediate" hardware performance thresholds because its network performance measurements are not available immediately for testing, but instead are collected over an extended period of time before being used to determine whether there is a network problem. Id. at 11–12 (citing Tektumanidze 10:64–11:9; Spec. ¶ 12).

Second, Appellant disputes the Examiner's finding that Tektumanidze teaches or suggests the "characterizing" step because it describes user-created tests for filtering network components. Reply Br. 2–5. Appellant asserts that the cited description of Tektumanidze "does not contain any disclosures regarding user-created tests or tests that 'filter out' network components." Instead, Appellant argues that "Tektumanidze discloses . . .
that its system performs a variety of tests that evaluate to a '1' or '0' and are used to calculate a 'problem impact score' for each grid square," but "there is no teaching or suggestion that the system . . . characterizes the functionality of an individual infrastructure component based on the results of such tests." Reply Br. 3. Appellant adds that, in Tektumanidze, there is no apparent correlation between a wireless network performance issue (identified by a particular grid square value) and a particular hardware infrastructure component in the wireless network. Reply Br. 4–5; Appeal Br. 11–13.

We are not persuaded of Examiner error. Rather, for the reasons stated below, we agree with the Examiner that Tektumanidze teaches or at least suggests the "characterizing" step of claim 1 with its description of creating a problem identification grid by applying threshold tests to performance parameter data for a mobile wireless network component. See, e.g., Non-Final Act. 2–4 (citing Tektumanidze 7:10–20, 7:45–48, 10:50–11:9, Fig. 2); Ans. 3–5 (additionally citing Tektumanidze 5:13–19, 8:45–9:42, 9:60–10:10, 10:18–35, 10:37–49, 11:9–11:26, Figs. 2–3); see also Tektumanidze 8:59–64, 9:12–15, 9:26–31, 10:9–17, 12:39–53, Figs. 3–4.

Contrary to Appellant's arguments, Tektumanidze at least suggests the "characterizing" step because the value of a particular grid square may be determined on the basis of a wireless network performance parameter collected directly from or for a particular infrastructure component of the wireless network.
For example, as the Examiner finds, one of the wireless network performance parameters that may be used to create a problem identification grid (including individual grid square values) is a CPU usage parameter, which Tektumanidze describes is "for a particular network component" and "represents the percentage (e.g., maximum or average) of non-idle process CPU cycles over a time period—a potential indicator of whether additional, or more powerful, processors are needed for the component." See Ans. 4–6; Tektumanidze 9:12–15, 9:26–31, Fig. 2 (item 290). These descriptions of Tektumanidze would have suggested the "characterizing" step because applying a threshold test to a CPU usage parameter data point set would have created a problem identification grid (including one or more grid square values) indicating whether a CPU of a device in Tektumanidze's wireless network architecture (for example, a server) is consuming too many resources. See, e.g., Tektumanidze 9:60–10:36, 10:50–11:9, Figs. 2 (item 290), 3 (steps 300, 310), 4 (step 400); see also id. at 8:50–54 ("The performance parameters are potentially used to detect any type of problem for purposes of determining an impact of the problem. In that regard the performance parameters may indicate an actual failure of a network component.").

Tektumanidze's wireless network performance parameters also include key performance indicators (KPIs) that specify well-known radio-signal quality measures such as RSRQ, RSRP, and radio signal strength. Tektumanidze 8:59–64, Fig. 2 (item 210); see Non-Final Act. 3–4 (citing Tektumanidze 7:10–20, Fig. 2 (item 210)). According to Tektumanidze, raw performance data point sets for such KPIs can be collected from any of a variety of mobile wireless network components, including base stations and gateways. Id. at 7:10–15.
Tektumanidze's process of collecting a data point set from a particular mobile wireless network component, and then performing threshold testing thereon to create a problem identification grid, would have suggested grid square values that describe the performance of that particular network component—that is, "characterizing a functionality of the individual infrastructure component." See, e.g., id. at 7:10–15, 9:60–10:36, 10:50–11:9, Figs. 2, 3 (steps 300, 310), 4 (step 400).

Tektumanidze further describes that the problem impact grid square values resulting from an initial threshold test of acquired performance parameters can indicate the problem of "insufficient signal strength." Id. at 12:39–53. This description as well would have suggested the "characterizing" step because it suggests that the base station is sending a mobile wireless device a signal that does not have sufficient power or strength. See also id. at 8:50–54. Tektumanidze's teachings and suggestions are consistent with Appellant's Specification, which describes that an "intermediate performance threshold[] can correspond to a . . . measured signal strength of a radio component." Spec. ¶¶ 13, 28.

Appellant argues that the prior art requires greater detail about how the threshold testing of a particular performance parameter data point set—and the results thereof—relates to the performance of a particular hardware infrastructure component. See Reply Br. 4–5; Appeal Br. 11–13.
But this argument is not commensurate with the scope of claim 1, which is broadly recited, and Appellant's Specification does not provide a limiting definition of either "individual infrastructure component" or "intermediate hardware performance thresholds." Instead, the Specification describes that infrastructure equipment can include "a wide variety" of components, including various hardware components and associated software devices, and provides only non-limiting examples of intermediate hardware performance thresholds. Spec. ¶¶ 2, 12, 27–29, 32. And even though the Specification describes an embodiment in which "intermediate" performance thresholds are used to evaluate immediately collected performance information (see id. ¶¶ 12, 25, 30), the claim language is not so limiting. Accordingly, we see no problem with Tektumanidze's threshold testing being applied to performance parameter data that has been collected over a period of time. Appeal Br. 11–12.

It is true that Tektumanidze does not expressly disclose user-created tests that filter out network components. Nor does Tektumanidze specifically explain how each grid square value relates to the performance of a particular network component. But, as discussed above, the Examiner's overall findings nevertheless show that Tektumanidze's steps that are part of the process for creating a problem identification grid would have, at least, suggested the "characterizing" step as claimed. The test for obviousness is not that the claimed invention must be expressly disclosed in any one or all of the references but rather, what the combined teachings of the references would have suggested to those of ordinary skill in the art. In re Keller, 642 F.2d 413, 426 (CCPA 1981).
As applied here, we are not persuaded of error because the Examiner has set forth sufficient evidence that the proposed combination of Tektumanidze and Henderson would have suggested the disputed claim step to one of ordinary skill in the art.

Thus, for these reasons, we are not persuaded the Examiner erred in: (1) finding that Tektumanidze teaches or at least suggests the "characterizing" step of claim 1; and (2) concluding that Tektumanidze and Henderson render obvious the subject matter of claim 1 under 35 U.S.C. § 103. Accordingly, we sustain the Examiner's rejection of claim 1 under 35 U.S.C. § 103 for obviousness based on Tektumanidze and Henderson. For the same reasons, we sustain the Examiner's rejections of claims 2–23 under 35 U.S.C. § 103 for obviousness based at least on Tektumanidze and Henderson. As discussed above, Appellant does not argue the rejections of these claims beyond the arguments advanced for claim 1.

DECISION

We affirm the Examiner's rejections of claims 1–23.

SUMMARY

In summary:

Claim(s) Rejected | 35 U.S.C. § | Reference(s)/Basis | Affirmed | Reversed
1, 2, 4, 6, 7, 9, 12–15, 17–21, 23–25 | 103 | Tektumanidze, Henderson | 1, 2, 4, 6, 7, 9, 12–15, 17–21, 23–25 |
5, 16, 22 | 103 | Tektumanidze, Henderson, Foster | 5, 16, 22 |
8, 10 | 103 | Tektumanidze, Henderson, Xing | 8, 10 |
3, 11 | 103 | Tektumanidze, Henderson, Lee | 3, 11 |
Overall Outcome | | | 1–25 |

No time period for taking any subsequent action in connection with this appeal may be extended under 37 C.F.R. § 1.136(a)(1)(iv). See 37 C.F.R. § 41.50(f).

AFFIRMED