Samsung Electronics Co., Ltd., Appeal 2020-006335 (P.T.A.B. Jan. 19, 2022)

UNITED STATES PATENT AND TRADEMARK OFFICE
UNITED STATES DEPARTMENT OF COMMERCE
United States Patent and Trademark Office
Address: COMMISSIONER FOR PATENTS, P.O. Box 1450, Alexandria, Virginia 22313-1450, www.uspto.gov

APPLICATION NO.: 15/154,650
FILING DATE: 05/13/2016
FIRST NAMED INVENTOR: John W. Brothers
ATTORNEY DOCKET NO.: 1535-315
CONFIRMATION NO.: 7901

124044 7590 01/19/2022
Renaissance IP Law Group LLP (SSI)
17933 NW Evergreen Place, Suite 121
Beaverton, OR 97006

EXAMINER: GIROUX, GEORGE
ART UNIT: 2128
NOTIFICATION DATE: 01/19/2022
DELIVERY MODE: ELECTRONIC

Please find below and/or attached an Office communication concerning this application or proceeding. The time period for reply, if any, is set in the attached communication. Notice of the Office communication was sent electronically on the above-indicated "Notification Date" to the following e-mail address(es): dkt@renaissanceiplaw.com, eofficeaction@appcoll.com, pto@renaissanceiplaw.com

PTOL-90A (Rev. 04/07)

UNITED STATES PATENT AND TRADEMARK OFFICE
BEFORE THE PATENT TRIAL AND APPEAL BOARD

Ex parte JOHN W. BROTHERS and JOOHOON LEE

Appeal 2020-006335
Application 15/154,650
Technology Center 2100

Before ROBERT E. NAPPI, RAE LYNN P. GUEST, and LARRY J. HUME, Administrative Patent Judges.

HUME, Administrative Patent Judge.

DECISION ON APPEAL

Pursuant to 35 U.S.C. § 134(a), Appellant1 appeals from the Examiner’s decision rejecting claims 1-20, which are all claims pending in the application. We have jurisdiction under 35 U.S.C. § 6(b). We AFFIRM IN PART.

1 “Appellant” refers to “applicant” as defined in 37 C.F.R. § 1.42. Appellant identifies the real party in interest as Samsung Electronics Co., Ltd. Appeal Br. 1.

STATEMENT OF THE CASE2

The claimed subject matter is directed to “Automatic Tuning of Artificial Neural Networks.” See Spec.
(Title). In particular, Appellant’s disclosed embodiments and claimed invention relate to “[t]uning a neural network [that] may include selecting a portion of a first neural network for modification to increase computational efficiency and generating, using a processor, a second neural network based upon the first neural network by modifying the selected portion of the first neural network while offline.” Spec. 41 (Abstr.).

Exemplary Claims

Claims 1 and 15, reproduced below, are representative of the subject matter on Appeal (emphases added to contested prior-art limitations):

1. A method of tuning a neural network, the method comprising:
selecting a portion of a first neural network for modification to alter one or more target performance requirements;
generating, using a processor, a second neural network based upon the first neural network by modifying the selected portion of the first neural network while executing neither the first neural network nor the second neural network; and
validating, by the processor, that an operation of the second neural network achieves at least a selected set of the target performance requirements.

2 Our decision relies upon Appellant’s Appeal Brief (“Appeal Br.,” filed June 2, 2020); Reply Brief (“Reply Br.,” filed Sept. 9, 2020); Examiner’s Answer (“Ans.,” mailed July 9, 2020); Final Office Action (“Final Act.,” mailed Jan. 17, 2020); and the original Specification (“Spec.,” filed May 13, 2016) (claiming benefit of US 62/171,172, filed June 4, 2015, US 62/174,472, filed June 11, 2015, and US 62/196,699, filed July 24, 2015).

15.
An apparatus for tuning a neural network, comprising:
a memory configured to store program code;
a processor configured to execute the program code and initiate operations including:
selecting a portion of a first neural network for modification to increase computational efficiency; and
generating a second neural network based upon the first neural network by modifying the selected portion of the first neural network while offline.

REFERENCES

The Examiner relies upon the following prior art as evidence:

Adjaoute, US 2002/0133721 A1, published Sept. 19, 2002.
Packes et al. (“Packes”), US 2015/0242747 A1, published Aug. 27, 2015.
Max Jaderberg et al., Speeding up Convolutional Neural Networks with Low Rank Expansions, Visual Geometry Group, Dept. of Engineering Sci., U of Oxford, May 2014, pp. 1-12 (hereinafter “Jaderberg”).

REJECTIONS

R1. Claims 1-6, 10, and 12 stand rejected under 35 U.S.C. § 102(a)(1) as being anticipated by Packes. Final Act. 3.

R2. Claims 7-9, 11, 13, and 14 stand rejected under 35 U.S.C. § 103 as being unpatentable over the combination of Packes and Jaderberg. Final Act. 8.

R3. Claims 15-20 stand rejected under 35 U.S.C. § 103 as being unpatentable over the combination of Packes and Adjaoute. Final Act. 11.

CLAIM GROUPING

Based on Appellant’s arguments (Appeal Br. 13-37) and our discretion under 37 C.F.R. § 41.37(c)(1)(iv), we decide the appeal of anticipation Rejection R1 of claims 1-6, 10, and 12 on the basis of representative claim 1; and we decide the appeal of obviousness Rejection R3 of claims 15-20 on the basis of representative claim 15. Remaining claims 7-9, 11, 13, and 14 in obviousness Rejection R2, not argued separately, stand or fall with the independent claim 1 from which they depend.3

ISSUES AND ANALYSIS

In reaching this decision, we consider all evidence presented and all arguments actually made by Appellant.
To the extent Appellant has not advanced separate, substantive arguments for particular claims, or other issues, such arguments are waived. 37 C.F.R. § 41.37(c)(1)(iv).

Based upon our review of the record, we are persuaded by particular arguments advanced by Appellant with respect to the anticipation Rejection R1 of claims 1-6, 10, and 12 for the specific reasons discussed below. We similarly are persuaded of error in the Examiner’s Rejection R2 of claims 7-9, 11, 13, and 14. However, we disagree with Appellant’s arguments with respect to obviousness Rejection R3 of claims 15-20, and, unless otherwise noted, we incorporate by reference herein and adopt as our own: (1) the findings and reasons set forth by the Examiner in the action from which this appeal is taken, and (2) the reasons and rebuttals set forth in the Examiner’s Answer in response to Appellant’s arguments. We highlight and address specific findings and arguments regarding independent claims 1 and 15 for emphases as follows.

3 “Notwithstanding any other provision of this paragraph, the failure of appellant to separately argue claims which appellant has grouped together shall constitute a waiver of any argument that the Board must consider the patentability of any grouped claim separately.” 37 C.F.R. § 41.37(c)(1)(iv). In addition, when Appellant does not separately argue the patentability of dependent claims, the claims stand or fall with the claims from which they depend. In re King, 801 F.2d 1324, 1325 (Fed. Cir. 1986).

1. § 102(a)(1) Rejection R1 of Claims 1-6, 10, 12

Issue 1

Appellant argues (Appeal Br. 13-24; Reply Br. 2-5) the Examiner’s rejection of claim 1 under 35 U.S.C. § 102(a)(1) as being anticipated by Packes is in error.
These contentions present us with the following issue:

Did the Examiner err in finding the cited prior art discloses “[a] method of tuning a neural network” that includes, inter alia, the step of “generating, using a processor, a second neural network based upon the first neural network by modifying the selected portion of the first neural network while executing neither the first neural network nor the second neural network,” as recited in claim 1?

Principles of Law

Anticipation of a claim under 35 U.S.C. § 102 occurs when each claimed element and the claimed arrangement or combination of those elements is disclosed, inherently or expressly, by a single prior art reference. Therasense, Inc. v. Becton, Dickinson & Co., 593 F.3d 1325, 1332 (Fed. Cir. 2010). A reference inherently discloses an element of a claim “if that missing characteristic is necessarily present, or inherent, in the single anticipating reference.” Schering Corp. v. Geneva Pharms., 339 F.3d 1373, 1377 (Fed. Cir. 2003) (citation omitted) (emphasis added). “Inherency, however, may not be established by probabilities or possibilities. The mere fact that a certain thing may result from a given set of circumstances is not sufficient.” Therasense, 593 F.3d at 1332 (emphasis omitted) (citing Cont’l Can Co. USA, Inc. v. Monsanto Co., 948 F.2d 1264, 1269 (Fed. Cir. 1991)).

Analysis

The Examiner finds Packes discloses the “generating” step that is carried out “by modifying the selected portion of the first neural network while executing neither the first neural network nor the second neural network,” as recited in claim 1. Final Act. 3 (citing Packes ¶¶ 45-50) (emphasis added).

Appellant contends “[h]ere . . . the creation without ‘executing []either the first neural network []or the second neural network’ is at issue.” Appeal Br. 14.
Appellant further argues:

The PTO cites to Step 153 in ¶ [0047] in which the retrained neural network’s performance is deemed acceptable or not. Step 153 occurs after Steps 173 and 177 have retrained the selected neural network (1st neural network). Likewise, if the retraining was successful (i.e., Step 153 returns a “No” value), then the 2nd neural network is retrained in step 161. This is the point at issue. Packes is creating what the PTO asserts is a 2nd neural network by retraining the 1st neural network. It is respectfully asserted that retraining involves executing the neural network based upon a training data set (steps 157 & 165 or 177), which Assignee’s claim forbids. It is respectfully asserted that training is a specific way of executing a neural network using training data and correcting the network when it gets the output wrong. During each execution, the neural network takes as input the input data and weights and generates output data. As part of the training or retraining process, an error function is determined that calculates the difference between the values the neural network computed compared to the values it should have computed. The weights of the network are automatically altered during execution to produce correct results. This is not something that can happen when the neural network is not being executed.

Appeal Br. 15-16 (emphasis omitted) (italics added).

We are persuaded by Appellant’s arguments because, based upon our review of Packes, the disclosed retraining of the neural network is carried out while executing the neural network, contrary to the explicit requirement of the claim.4 We further find the Examiner has not adequately mapped the disputed limitations to the cited prior art, but instead merely cites to multiple paragraphs of Packes, without specific analysis.
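The distinction Appellant draws can be illustrated with a short, hypothetical sketch. Nothing below is part of the record; the one-layer network, the pruning threshold, and all numbers are invented solely for illustration. The sketch contrasts an offline structural modification, which never runs the network, with a retraining step, which necessarily executes a forward pass to compute the error that drives the weight update.

```python
def forward(weights, x):
    """Execute the network: one dense layer followed by a ReLU activation."""
    return [max(sum(w * v for w, v in zip(row, x)), 0.0) for row in weights]

def modify_offline(weights, threshold=0.1):
    """Prune small weights; note that forward() is never called, so the
    network is not executed while it is being modified."""
    return [[0.0 if abs(w) < threshold else w for w in row] for row in weights]

def retrain_step(weights, x, target, lr=0.01):
    """One gradient update: the network must be executed (a forward pass)
    to obtain the error between computed and expected outputs."""
    output = forward(weights, x)          # execution happens here
    new_weights = []
    for row, out, t in zip(weights, output, target):
        err = out - t                     # error function
        g = err if out > 0 else 0.0       # ReLU gradient gate
        new_weights.append([w - lr * g * v for w, v in zip(row, x)])
    return new_weights
```

In this toy model, `modify_offline` produces a second set of weights without any call to `forward`, while `retrain_step` cannot compute its weight correction without first running the network on an input.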
Based upon the findings above, on this record, we are persuaded of at least one error in the Examiner’s reliance on the cited prior art to disclose the disputed limitation of claim 1, such that we find error in the Examiner’s resulting finding of anticipation. Therefore, we do not sustain the Examiner’s anticipation rejection of independent claim 1, and grouped claims 2-6, 10, and 12, which fall therewith. See Claim Grouping, supra.

4 We find support for the negative limitation at issue at least at paragraph 25 of the Specification, i.e., “Neural network analyzer 104 may operate on neural network 102 to generate neural network 106 as an offline process. An ‘offline’ process, for example, is one that is performed while not executing the neural network in an application utilizing the neural network, as part of a system, for a service, for a user, or for a client device. As a further example, an offline process can correspond to a process that is not executed in a real-time environment.” Spec. ¶ 25.

2. § 103 Rejection R3 of Claims 15-20

Issue 2

Appellant argues (Appeal Br. 32-36; Reply Br. 7-8) the Examiner’s rejection of claim 15 under 35 U.S.C. § 103 as being obvious over the combination of Packes and Adjaoute is in error. These contentions present us with the following issue:

Did the Examiner err in finding the cited prior art combination teaches or suggests “[a]n apparatus for tuning a neural network” that includes “a processor configured to execute the program code and initiate operations including,” inter alia, “generating a second neural network based upon the first neural network by modifying the selected portion of the first neural network while offline,” as recited in claim 15?

Principles of Law

“[O]ne cannot show non-obviousness by attacking references individually where . . . the rejections are based on combinations of references.” In re Keller, 642 F.2d 413, 426 (CCPA 1981).
“The test for obviousness is not whether the features of a secondary reference may be bodily incorporated into the structure of the primary reference . . . . Rather, the test is what the combined teachings of those references would have suggested to those of ordinary skill in the art.” Id. at 425.

In KSR, the Court stated “[t]he combination of familiar elements according to known methods is likely to be obvious when it does no more than yield predictable results.” KSR Int’l Co. v. Teleflex, Inc., 550 U.S. 398, 416 (2007).

When a work is available in one field of endeavor, design incentives and other market forces can prompt variations of it, either in the same field or a different one. If a person of ordinary skill can implement a predictable variation, § 103 likely bars its patentability. For the same reason, if a technique has been used to improve one device, and a person of ordinary skill in the art would recognize that it would improve similar devices in the same way, using the technique is obvious unless its actual application is beyond his or her skill. . . . [A] court must ask whether the improvement is more than the predictable use of prior art elements according to their established functions.

Id. at 417. Further, the relevant inquiry is whether the Examiner has set forth “some articulated reasoning with some rational underpinning to support the legal conclusion of obviousness.” In re Kahn, 441 F.3d 977, 988 (Fed. Cir. 2006) (cited with approval in KSR, 550 U.S. at 418).

During prosecution, claims must be given their broadest reasonable interpretation when reading claim language in light of the specification as it would be interpreted by one of ordinary skill in the art. In re Am. Acad. of Sci. Tech. Ctr., 367 F.3d 1359, 1364 (Fed.
Cir. 2004). Under this standard, we interpret claim terms using “the broadest reasonable meaning of the words in their ordinary usage as they would be understood by one of ordinary skill in the art, taking into account whatever enlightenment by way of definitions or otherwise that may be afforded by the written description contained in the applicant’s specification.” In re Morris, 127 F.3d 1048, 1054 (Fed. Cir. 1997).

Analysis

The Examiner finds “[w]hile Packes teaches optimizing of neural networks (see above) it does not explicitly teach modifying the selected portion of the first neural network while offline.” Final Act. 12. The Examiner cites to Adjaoute as teaching or suggesting “modifying the selected portion of the first neural network while offline.” Id. (citing Adjaoute claim 45, ¶¶ 47, 88, 92, 113, 154). We agree with the Examiner’s finding. For example, and consistent with the Examiner’s rejection, we find “the neural network may be updated offline” (Adjaoute ¶ 113), and “[i]t should be understood by one skilled in the art that models 197 may be updated automatically or offline.” Id. ¶ 154.

Appellant contends “[i]t is respectfully asserted that Adjaoute’s usage of the term ‘offline’ has nothing to do with ‘not executing the neural network’, as defined by Assignee’s specification. Adjaoute executes its neural network, just in two modes (fast or slow).” Appeal Br. 34.5

5 We rely upon the doctrine of claim differentiation between independent claims 1 and 15. “When different words or phrases are used in separate claims, a difference in meaning is presumed.” Nystrom v. TREX Co., 424 F.3d 1136, 1143 (Fed. Cir. 2005). Claim 1 recites “modifying the selected portion of the first neural network while executing neither the first neural network nor the second neural network”; and claim 15 recites “generating a second neural network based upon the first neural network by modifying the selected portion of the first neural network while offline.” We presume a patentable difference between being “offline” and “executing neither the first neural network nor the second neural network.”

The Examiner responds:

The specification of the instant application (para. 0025) [discloses] “An ‘offline’ process, for example, is one that is performed while not executing the neural network in an application utilizing the neural network, as part of a system, for a service, for a user, or for a client device. As a further example, an offline process can correspond to a process that is not executed in a real-time environment”. Therefore the term “offline” in claims 15-20 has been interpreted under its broadest reasonable interpretation, given the plain meaning of the term. It is for this reason (for claims 15-20) that examiner relied upon Adjaoute for teaching modifying the network while “offline”, even though Packes teaches that the modification occurs after execution and not during.

Ans. 5.

We agree with the Examiner’s finding under the broadest reasonable interpretation that Adjaoute teaches or suggests modifying the selected portion of the first neural network while offline because we find the disclosed examples of “offline” in the Specification do not rise to the level of explicit definitions that would preclude the interpretation applied by the Examiner with respect to claim 15.

Based upon the findings above, on this record, we are not persuaded of error in the Examiner’s reliance on the cited prior art combination to teach or suggest the disputed limitation of claim 15, nor do we find error in the Examiner’s resulting legal conclusion of obviousness. Therefore, we sustain the Examiner’s obviousness rejection of independent claim 15, and grouped claims 16-20, which fall therewith. See Claim Grouping, supra.

3. Rejection R2 of Claims 7-9, 11, 13, 14

In light of our reversal of the rejections of independent claim 1, supra, we also reverse obviousness Rejection R2 under § 103 of claims 7-9, 11, 13, and 14, which depend from claim 1. On this record, the Examiner has not shown how the additionally cited Jaderberg reference overcomes the aforementioned deficiencies with Packes, as discussed above regarding claim 1.

REPLY BRIEF

To the extent Appellant may advance new arguments in the Reply Brief (Reply Br. 2-10) not in response to a shift in the Examiner’s position in the Answer, arguments raised in a Reply Brief that were not raised in the Appeal Brief or are not responsive to arguments raised in the Examiner’s Answer will not be considered except for good cause (see 37 C.F.R. § 41.41(b)(2)), which Appellant has not shown.

CONCLUSIONS

We AFFIRM IN PART the Examiner’s rejections. More specifically:

(1) Appellant has persuasively argued that the Examiner erred with respect to anticipation Rejection R1 of claims 1-6, 10, and 12 under 35 U.S.C. § 102(a)(1), and we do not sustain the rejection.

(2) Based upon our reversal of anticipation Rejection R1 of claims 1-6, 10, and 12, we are persuaded the Examiner erred with respect to obviousness Rejection R2 of claims 7-9, 11, 13, and 14 under 35 U.S.C. § 103 over the cited prior art combination of record, and we do not sustain the rejection.

(3) Appellant has not persuasively argued that the Examiner erred with respect to obviousness Rejection R3 of claims 15-20 under 35 U.S.C. § 103 over the cited prior art combination of record, and we sustain the rejection.
DECISION SUMMARY

In summary:

Claims Rejected    35 U.S.C. §    References/Basis     Affirmed    Reversed
1-6, 10, 12        102(a)(1)      Packes                           1-6, 10, 12
7-9, 11, 13, 14    103            Packes, Jaderberg                7-9, 11, 13, 14
15-20              103            Packes, Adjaoute     15-20
Overall Outcome                                        15-20       1-14

TIME PERIOD FOR RESPONSE

No time period for taking any subsequent action in connection with this appeal may be extended under 37 C.F.R. § 1.136(a)(1)(iv). See 37 C.F.R. § 41.50(f).

AFFIRMED IN PART