The MITRE Corporation, Appeal 2021-004846 (P.T.A.B. Oct. 20, 2021)

UNITED STATES PATENT AND TRADEMARK OFFICE
UNITED STATES DEPARTMENT OF COMMERCE
United States Patent and Trademark Office
Address: COMMISSIONER FOR PATENTS, P.O. Box 1450, Alexandria, Virginia 22313-1450, www.uspto.gov

APPLICATION NO.: 14/678,532
FILING DATE: 04/03/2015
FIRST NAMED INVENTOR: Robert M. TAYLOR Jr.
ATTORNEY DOCKET NO.: 699592000900
CONFIRMATION NO.: 1286
25227 7590 10/20/2021
MORRISON & FOERSTER LLP, 2100 L STREET, NW, SUITE 900, WASHINGTON, DC 20037
EXAMINER: LEE, TSU-CHANG
ART UNIT: 2128
NOTIFICATION DATE: 10/20/2021
DELIVERY MODE: ELECTRONIC

Please find below and/or attached an Office communication concerning this application or proceeding. The time period for reply, if any, is set in the attached communication. Notice of the Office communication was sent electronically on above-indicated "Notification Date" to the following e-mail address(es): EOfficeVA@mofo.com, PatentDocket@mofo.com, pair_mofo@firsttofile.com

PTOL-90A (Rev. 04/07)

UNITED STATES PATENT AND TRADEMARK OFFICE
BEFORE THE PATENT TRIAL AND APPEAL BOARD

Ex parte ROBERT M. TAYLOR and BURHAN NECIOGLU

Appeal 2021-004846
Application 14/678,532
Technology Center 2100

Before CAROLYN D. THOMAS, MICHAEL J. STRAUSS, and MICHAEL J. ENGLE, Administrative Patent Judges.

STRAUSS, Administrative Patent Judge.

DECISION ON APPEAL

STATEMENT OF THE CASE[1]

Pursuant to 35 U.S.C. § 134(a), Appellant[2] appeals from the Examiner's decision to reject claims 1–22. Non-Final Act. 2. We have jurisdiction under 35 U.S.C. § 6(b).

We REVERSE.

CLAIMED SUBJECT MATTER

According to Appellant:

[T]he claims of the present application are directed to image compression and decompression using a trained learning machine . . . [that] enable compression ratios unachievable by conventional methods.
As explained in the specification, image compression techniques generally utilize patterns in image data to reduce the amount of information needed to represent an image. For example, conventional image compression techniques such as JPEG are generally based on the observations that adjacent pixels in an array of pixels representing an image are likely to be similar and that the human eye and brain have difficulty detecting high frequency changes in pixel intensities over a given area. The compression ratios achievable by conventional compression techniques are limited by the patterns that their human designers were able to determine. The systems and methods according to the present claims do not rely on human-determined patterns. Rather, the inventors realized that there "are patterns in digital image data that humans have not been able to discern or explain that may be used to push image compression levels far beyond those achievable by conventional methods." The systems and methods according to the claims utilize machine learning techniques that detect these patterns, build a model that captures these patterns, and apply the model parameters to compress and decompress digital images at compression ratios unachievable by conventional methods.

Appeal Br. 8–9 (citations omitted).

[1] In this Decision, we refer to Appellant's Appeal Brief filed December 14, 2020 ("Appeal Br."); Reply Brief filed August 9, 2021 ("Reply Br."); the Non-Final Office Action mailed May 13, 2020 ("Non-Final Act."); the Examiner's Answer mailed June 8, 2021 ("Ans."); and the Specification filed April 3, 2015 ("Spec."). Rather than repeat the Examiner's findings and Appellant's contentions in their entirety, we refer to these documents.

[2] We use the word "Appellant" to refer to "applicant" as defined in 37 C.F.R. § 1.42(a). Appellant identifies the real party in interest as The MITRE Corporation. Appeal Br. 2.
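The observation the decision attributes to conventional techniques (adjacent pixels tend to be similar) can be illustrated with a minimal sketch. This is a hypothetical example, not anything in the record: simple lossless delta (predictive) coding, where encoding each pixel as a difference from its left neighbor concentrates values near zero, which downstream entropy coders exploit.

```python
# Hypothetical pixel intensities from one row of an image; values are
# illustrative only, not from the application at issue.
row = [100, 101, 103, 103, 104, 107, 108, 108]

# Delta encoding: store the first pixel, then successive differences.
deltas = [row[0]] + [b - a for a, b in zip(row, row[1:])]

# Decoding reverses the process exactly, so the scheme is lossless.
decoded = [deltas[0]]
for d in deltas[1:]:
    decoded.append(decoded[-1] + d)

assert decoded == row
print(deltas)  # prints [100, 1, 2, 0, 1, 3, 1, 0] -- mostly small magnitudes
```

Because adjacent pixels are similar, the differences cluster near zero and need fewer bits to represent than the raw intensities; the claims at issue seek patterns beyond such human-designed ones.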
Claim 1, reproduced below, is illustrative of the claimed subject matter:

1. A computer implemented method for machine learning model parameters for image compression, comprising:

partitioning a plurality of training image files stored on a first computer memory into a first set of regions, wherein each region of the first set of regions is an array of pixel values;

training a probabilistic learning machine on the first set of regions to generate a first set of machine learned model parameters, the first set of machine learned model parameters representing a first level of data patterns in the plurality of training image files;

constructing a representation of each region of the first set of regions using the first set of regions and the first set of machine learned model parameters, wherein a representation of a respective region of the first set of regions has an equal number of pixel values as the respective region;

constructing representations of the plurality of training image files by combining the representations of the regions of the first set of regions;

partitioning the representations of the plurality of image files into a second set of regions, wherein a region of the second set of regions has a greater number of pixel values than at least one region of the first set of regions;

training the probabilistic learning machine on the second set of regions to generate a second set of machine learned model parameters, the second set of machine learned model parameters representing a second level of data patterns in the plurality of image files, wherein the number of machine learned model parameters in the second set of machine learned model parameters is less than the number of machine learned model parameters in the first set of machine learned model parameters; and

building the first set of machine learned model parameters and the second set of machine learned model parameters into an executable image compression application that is configured to compress an image file that is different from the training image files by transforming the image file into a compressed image file based on the first set of machine learned model parameters generated from the training image files and the second set of machine learned model parameters generated from the training image files.

Appeal Br. 35 (Claims App.).

REFERENCES AND REJECTIONS

The prior art relied upon by the Examiner is:

Dube et al. ("Dube"), US 6,633,677 B1, Oct. 14, 2003
Chuang et al. ("Chuang"), US 2012/0169842 A1, July 5, 2012
Rodriguez et al. ("Rodriguez"), US 2015/0055855 A1, Feb. 26, 2015
Turkan et al. ("Turkan"), US 2015/0093045 A1, Apr. 2, 2015
Kainen, P., Kůrková, V., Vogt, A. (2001). Continuity of Approximation by Neural Networks in Lp Spaces. Annals of Operations Research, 101, pp. 143–147 ("Kainen")
Robinson, J., Kecman, V. (2003). Combining support vector machine learning with the discrete cosine transform in image compression. IEEE Transactions on Neural Networks, 14(4), pp. 950–958 ("Robinson")
Bryt, O., Elad, M. (2008). Compression of facial images using the K-SVD algorithm. Journal of Visual Communication and Image Representation, 19(4), pp. 270–282 ("Bryt")
Chen, M. et al. (2010). Compressive Sensing on Manifolds Using a Nonparametric Mixture of Factor Analyzers: Algorithm and Performance Bounds. IEEE Transactions on Signal Processing, 58(12), pp. 6140–6155 ("Chen")

The Examiner rejects:

a. claims 1–3, 7, 11, 13, 15–17, and 19–22 under 35 U.S.C. § 103 as obvious over the combined teachings of Dube and Bryt (Non-Final Act. 3–15);
b. claims 4–6 and 12 under 35 U.S.C. § 103 as obvious over the combined teachings of Dube, Bryt, and Turkan (Non-Final Act. 16–18);
c. claims 8–10 and 18 under 35 U.S.C. § 103 as obvious over the combined teachings of Dube, Bryt, and Chen (Non-Final Act. 18–21); and
d. claim 14 under 35 U.S.C. § 103 as obvious over the combined teachings of Dube, Bryt, Turkan, and Chen (Non-Final Act. 21–22).

ISSUE

Has the Examiner erred in finding the combination of references teaches or suggests the step of "training a probabilistic learning machine on [a] first set of regions [partitioned from a plurality of training image files] to generate a first set of machine learned model parameters, the first set of machine learned model parameters representing a first level of data patterns in the plurality of training image files," as recited in claim 1?

ANALYSIS

The Examiner rejects claim 1 as obvious over the combined teachings of Dube and Bryt. Non-Final Act. 3–8. The Examiner relies upon Dube's disclosure of image compression and decompression processing using hierarchical coding for teaching all but one of the limitations of claim 1. Id. at 3–6 (citing Dube 4:1–3, 6:16–17, 7:62–63, Figs. 1–3B, 5–7B, 9, 12). However, the Examiner finds Dube does not explicitly disclose the limitation of "building the first set of machine learned model parameters and the second set of machine learned model parameters into an executable image compression application that is configured to compress an image file that is different from the training image files" as recited by claim 1. Id. at 6. Addressing the noted deficiency of Dube, the Examiner finds Bryt's disclosure of an off-line training process producing a set of K-singular value decomposition (K-SVD) dictionaries teaches or suggests the identified limitation. Id. at 6–7 (citing Bryt Abstract, §§ 2.3, 3.1.1, Fig. 3). According to the Examiner, it would have been obvious "to modify the predictive multiple level image compression/decompression system [and] method of Dube to include a learning machine to build sets of machine learned model parameters as taught in Bryt . . . [so as] to support a very low bit rate compression." Id. at 11.
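For context, the two-level training procedure recited in claim 1 can be sketched in rough outline. The following is a hypothetical illustration only: it is not in the record, the image sizes and region sizes are invented, and it substitutes a simple SVD-derived basis for the claimed "probabilistic learning machine." It shows only the claim's general shape: model parameters learned from a plurality of training images at two region scales, with the second set smaller than the first.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for "a plurality of training image files":
# four small 8x8 grayscale arrays of random intensities.
images = [rng.integers(0, 256, size=(8, 8)).astype(float) for _ in range(4)]

def partition(img, size):
    """Split an image into non-overlapping size x size regions (flattened)."""
    h, w = img.shape
    return [img[i:i + size, j:j + size].ravel()
            for i in range(0, h, size) for j in range(0, w, size)]

def train(regions, k):
    """Stand-in learning step: learn k basis vectors (the 'model parameters')
    from the pooled regions of ALL training images via SVD."""
    _, _, vt = np.linalg.svd(np.stack(regions), full_matrices=False)
    return vt[:k]

def represent(region, params):
    """Project a region onto the learned parameters and reconstruct it; the
    representation has the same number of pixel values as the input region."""
    return params.T @ (params @ region)

# Level 1: pool 4x4 regions across all training images; learn 8 parameters.
params1 = train([r for img in images for r in partition(img, 4)], k=8)

# Build a representation of each training image from its region representations.
reps = []
for img in images:
    rep = np.zeros_like(img)
    blocks = iter(partition(img, 4))
    for i in range(0, 8, 4):
        for j in range(0, 8, 4):
            rep[i:i + 4, j:j + 4] = represent(next(blocks), params1).reshape(4, 4)
    reps.append(rep)

# Level 2: larger (8x8) regions of the representations; fewer parameters,
# as the claim requires of the second set.
params2 = train([r for rep in reps for r in partition(rep, 8)], k=1)

assert params2.size < params1.size  # second set smaller than the first
```

The point of contention in this appeal maps onto the pooling step: here the parameters are derived from regions of every training image, whereas the Board finds Dube's predictors are derived from the single image being compressed.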
Appellant contends that, inter alia, Dube fails to teach either of claim 1's training steps, arguing as follows:

[T]he Examiner cites to Dube's "predicted values" as specifically teaching the claimed "machine learned model parameters" and asserts that Dube's prediction method teaches "implementing a machine to learn & predict image characteristics via the predictors." (Id. at 4 (citing Dube, FIGs. 1A, 5, 7B, and col. 7, lines 62–63).) However, the predicted values of Dube are not analogous to the claimed "machine learned model parameters" at least because the predicted values of Dube are derived from only a single image file. The predicted values of Dube are "determined by the predictors, for the object pixel." (Dube, col. 7, lines 62–64.) Thus, the predicted values of Dube are a component of Dube's predictive multiple level image compression/decompression system that are derived on data (e.g., pixels) of a single image (i.e., the image to be compressed). By contrast, the claimed "machine learned model parameters" are not derived from a single image file, but are instead derived from "the first set of regions [of a plurality of training image files]" as recited in the claim. Thus, Dube's predicted values are not analogous to the claimed "machine learned model parameters" since the predicted values are only based on data of a single image file, and not "the first set of regions [of a plurality of training image files]" as required of the "machine learned model parameters" recited in claim 1.

Appeal Br. 11–12.

The Examiner responds, finding that "[t]he method shown by Dube can be used to a single or multiple images, with single image as a special case for multiple images. There is no restriction that the method shown in Dube can only be used in a single image file." Ans. 17–18.
Appellant replies, arguing as follows:

Applicant does not argue here that Dube's multi-level predictive image compression process cannot be used on a plurality of images, as alleged by the Examiner. . . . Applicants are simply demonstrating that Dube's predicted values are not the same as the claimed "first set of machine learned model parameters" because they are not derived from a plurality of training image files. Specifically, Dube's predicted values are "determined by the predictors, for the object pixel." (Dube, col. 7, lines 62–64.) Further, as explained in Dube, "predictor module 100 uses a plurality of predictors to generate predicted values for an object pixel, e.g., X, in an image." (Dube, col. 6, lines 58–60.) Thus, the predicted values of Dube are a component of Dube's predictive algorithm that are derived on data (e.g., a pixel) of a single image (i.e., the image to be compressed). As required by claim 1, the "first set of machine learned model parameters" are generated by training a probabilistic learning machine on a first set of regions of a plurality of training image files. Further, the "first set of machine learned model parameters" represent "a first level of data patterns in the plurality of training image files." Thus, the claimed "first set of machine learned model parameters" include data from a plurality of training image files, whereas the predicted values of Dube only include data from a single image.

Reply Br. 4–5.

We are persuaded the rejection is improper because the Examiner fails to provide sufficient evidence or reasoning to show that Dube teaches or suggests claim 1's training step that generates machine learned model parameters representing data patterns found in a plurality of training image files. Although Dube's image processing might be used to process multiple images, each image is processed separately to generate corresponding predictors.
That is, Dube discloses a one-to-one relationship between each image file and its predictor values, not the use of plural training images to generate a set of predictors that are used to build an executable image compression application to compress a different image file, as required by the disputed claim language. Because we agree with at least one of the arguments advanced by Appellant, we do not reach the merits of Appellant's other arguments.

Accordingly, we do not sustain the rejection of independent claim 1, or the rejection of independent claims 11, 16, 19, 20, and 22, which include language similar to the argued limitation of claim 1. Nor do we sustain the rejections of dependent claims 2–10, 12–15, 17, 18, and 21, which stand with their respective base claims.

CONCLUSION

We reverse the Examiner's rejections of:

a. claims 1–3, 7, 11, 13, 15–17, and 19–22 under 35 U.S.C. § 103 as obvious over the combined teachings of Dube and Bryt;
b. claims 4–6 and 12 under 35 U.S.C. § 103 as obvious over the combined teachings of Dube, Bryt, and Turkan;
c. claims 8–10 and 18 under 35 U.S.C. § 103 as obvious over the combined teachings of Dube, Bryt, and Chen; and
d. claim 14 under 35 U.S.C. § 103 as obvious over the combined teachings of Dube, Bryt, Turkan, and Chen.

DECISION SUMMARY

In summary:

Claims Rejected                35 U.S.C. §   Reference(s)/Basis         Affirmed   Reversed
1–3, 7, 11, 13, 15–17, 19–22   103           Dube, Bryt                            1–3, 7, 11, 13, 15–17, 19–22
4–6, 12                        103           Dube, Bryt, Turkan                    4–6, 12
8–10, 18                       103           Dube, Bryt, Chen                      8–10, 18
14                             103           Dube, Bryt, Turkan, Chen              14
Overall Outcome                                                                    1–22

REVERSED