Trials@uspto.gov                                              Paper 23
571-272-7822                                   Date: December 8, 2020

UNITED STATES PATENT AND TRADEMARK OFFICE
BEFORE THE PATENT TRIAL AND APPEAL BOARD

UNIFIED PATENTS INC., Petitioner,
v.
UNILOC 2017 LLC, Patent Owner.
____________
IPR2019-01126
Patent 6,519,005 B2
____________

Before JAMESON LEE, KEVIN F. TURNER, and THOMAS L. GIANNETTI, Administrative Patent Judges.

TURNER, Administrative Patent Judge.

JUDGMENT
Final Written Decision
Determining All Challenged Claims Unpatentable
35 U.S.C. § 318(a)

We have jurisdiction to conduct this inter partes review under 35 U.S.C. § 6. This Final Written Decision is issued pursuant to 35 U.S.C. § 318(a) and 37 C.F.R. § 42.73. For the reasons discussed herein, we determine that a preponderance of the evidence shows that claims 1−16 and 39−42 (“the challenged claims”) of U.S. Patent No. 6,519,005 B2 (Ex. 1001, “the ’005 Patent”) are unpatentable.

I. INTRODUCTION

A. Summary of Procedural History

Unified Patents Inc. (“Petitioner”) filed a Petition (Paper 1, “Pet.”) requesting inter partes review of the challenged claims. Pet. 1. We instituted an inter partes review of the Challenged Claims on all grounds of unpatentability asserted in the Petition. Paper 8 (“Inst. Dec.”). Thereafter, Patent Owner filed a Patent Owner Response (Paper 11, “PO Resp.”). Petitioner filed a Reply to the Patent Owner Response (Paper 14, “Pet. Reply”), to which Patent Owner filed a Sur-reply (Paper 15, “PO Sur-reply”). Oral argument was held on September 8, 2020, and a transcript of the hearing appears in the record. Paper 22 (Tr.).

We have jurisdiction under 35 U.S.C. § 6. This Final Written Decision is issued pursuant to 35 U.S.C. § 318(a) and 37 C.F.R. § 42.73 (2017). Petitioner bears the burden of proving unpatentability of the Challenged Claims by a preponderance of the evidence, and the burden of persuasion never shifts to Patent Owner. See 35 U.S.C. § 316(e) (2012); 37 C.F.R. § 42.1(d) (2017); Dynamic Drinkware, LLC v. Nat’l Graphics, Inc., 800 F.3d 1375, 1378 (Fed. Cir. 2015).

B. Related Proceedings

The parties indicate that the ’005 Patent is the subject of 23 district court cases, 12 of which are indicated as still pending. See Pet. 7−9; Paper 3, 2; Paper 7, 1−3. The following post-grant proceedings also involve the ’005 Patent: (1) IPR2019-01270, filed June 25, 2019 by petitioner Sling TV LLC (instituted: January 9, 2020) and (2) IPR2019-01584, filed September 9, 2019 by petitioner Google LLC (denied: March 24, 2020; rehearing denied: October 16, 2020).

C. The ’005 Patent

The ’005 Patent is generally related to digital video compression, and more particularly, to a motion estimation method and search engine for a digital video encoder. Ex. 1001, 1:6−8. The ’005 Patent is titled “Method of Concurrent Multiple-Mode Motion Estimation for Digital Video” and discloses improved methods of digital video encoding. Id. at code (54), Abs. With respect to the prior art Moving Pictures Expert Group (MPEG) processes, the ’005 Patent explains that audio and video data in the form of a multimedia data stream are encoded/compressed prior to transmission “in order to minimize the bandwidth required to transmit this digital video data stream for a given picture quality.” Id. at 1:12−17.
The ’005 Patent explains that, in accordance with the MPEG standards (MPEG-2 and its predecessor, MPEG-1), “the audio and video data comprising a multimedia data stream (or ‘bit stream’) are encoded/compressed in an intelligent manner using a compression technique generally known as ‘motion coding,’” to avoid transmitting each video frame in its entirety. Id. at 1:40−47. The ’005 Patent discusses “predictive” video frames, which use reference or anchor frames and look at macroblocks of 16x16 pixel regions. Id. at 1:61−65, Figs. 1A, 1B. For each macroblock, the system searches for a similar portion of the reference picture, and the search results in a motion vector that identifies the position of the “best match” within the reference frame relative to the position of the macroblock in the frame being encoded. Id. at 2:18−47.

The ’005 Patent details six standard ways to perform the motion estimation process, called “prediction modes,” with different modes being applied based on the “picture structure.” Id. at 9:36−40. For “frame” picture structures, the entire frame is considered a single picture, while for “field” picture structures, the frame is divided into a top field and a bottom field. Id. at 7:27−61. Two of these different prediction modes are illustrated in Figures 2A and 2B, reproduced below:

Figures 2A and 2B of the ’005 Patent are diagrams that illustrate motion estimation for frame pictures using frame and field predictions.

In Figure 2A, “frame prediction” treats the macroblock as a single 16x16 block, and the motion estimation search system searches for the 16x16 pixel portion of the reference picture that is the best match (e.g., minimizes the difference between the macroblock and a given portion of the reference picture). Figure 2B shows “field prediction” for a “frame” picture, with the macroblock and the anchor picture divided into odd- and even-numbered rows of pixels (referred to as a “top” field and a “bottom” field), and the system tries to find the best match for each of the two fields. Id. at 7:27−60. The ’005 Patent discloses at least six prediction modes. Id. at 9:36−38, 15:38−46.

The ’005 Patent provides that conventional systems required a single prediction mode to be specified, without knowing which might be optimum, but the disclosed processes allow for the determination of the optimum prediction mode while performing motion estimation and generating motion vectors, without the need for pre-selection of a mode. Id. at 3:7−15, 40−49. The ’005 Patent describes a “scheme for concurrently searching . . . each of a plurality of motion prediction modes” using a single search, which allows for optimum selection between the different options, i.e., different prediction modes. Id. at 3:53−61, 9:28−35. The ’005 Patent also provides that because the error metrics for different prediction modes are mathematically related to one another, a single search can “concurrently” calculate the error metrics for multiple prediction modes. Id. at 3:54−67, 8:14−23.

D. Illustrative Claim

Of the challenged claims, claims 1, 39, and 41 are independent. Claims 2−16 ultimately depend from claim 1; claim 40 depends from claim 39; and claim 42 depends from claim 41. Claim 1 is illustrative:
1. A method for motion coding an uncompressed digital video data stream, including the steps of:
[1A] comparing pixels of a first pixel array in a picture currently being coded with pixels of a plurality of second pixel arrays in at least one reference picture and
[1B] concurrently performing motion estimation for each of a plurality of different prediction modes in order to determine which of the prediction modes is an optimum prediction mode;
[1C] determining which of the second pixel arrays constitutes a best match with respect to the first pixel array for the optimum prediction mode; and,
[1D] generating a motion vector for the first pixel array in response to the determining step.

Ex. 1001, 15:9–22 (bracketed matter added).

E. The Asserted Grounds of Unpatentability

Petitioner asserts the following grounds of unpatentability (Pet. 16)[1]:

Claim(s) Challenged          35 U.S.C. §    References
1−16 and 39−42               103            Ishihara[2] and AAPA[3]
1−5, 7−16, 41, and 42        103            Mombers[4], Nakajima[5], and Senda[6]
39 and 40                    103            Mombers and Nakajima
6                            103            Mombers, Nakajima, Senda, and AAPA

Petitioner also relies on multiple declarants in addition to the material found in the Petition. Specifically, Petitioner relies on the testimony of Dr. Immanuel Freedman (Ex. 1003) to support the contentions in the Petition. In addition, Petitioner cites to declarations of Gerard P. Grenier (Ex. 1009) and Dr. Sylvia D. Hall-Ellis (Ex. 1011) with respect to the Institute of Electrical and Electronics Engineers (IEEE) publications, to assert that those publications were published, cataloged, and indexed for public availability. Pet. 13−15 (citing Ex. 1009 ¶¶ 8−13; Ex. 1011 ¶¶ 6−8, 11−24, 34, 38, 42−46, 50−57). Neither Petitioner nor Patent Owner filed any additional declaratory evidence subsequent to the Institution Decision.

[1] The claims at issue have an effective filing date of April 30, 1999, which is prior to March 16, 2013, the effective date of the Leahy-Smith America Invents Act, Pub. L. No. 112-29, 125 Stat. 284 (2011) (“AIA”), and, thus, we apply the pre-AIA version of 35 U.S.C. § 103.
[2] K. Ishihara et al., A Half-pel Precision MPEG2 Motion-Estimation Processor with Concurrent Three-Vector Search, 30(12) IEEE J. of Solid-State Circuits 1502−09 (Dec. 1995) (Ex. 1004, “Ishihara”).
[3] Petitioner cites to sections of the ’005 Patent that involve the discussion of the prior art as “Applicant’s Admitted Prior Art” (“AAPA”). See Ex. 1001, 1:21−34, 1:41−2:5, 2:18−27, 49−54, 2:64−3:14, 3:25−38, 7:28−30, Figs. 1A and 1B.
[4] F. Mombers et al., A Video Signal Processor Core for Motion Estimation in MPEG2 Encoding, 1997 IEEE Int’l Symposium on Circuits and Systems 2805−08 (June 9−12, 1997) (Ex. 1005, “Mombers”).
[5] U.S. Patent No. 5,412,435, issued May 2, 1995 (Ex. 1006, “Nakajima”).
[6] Y. Senda et al., A Simplified Motion Estimation Using an Approximation for the MPEG-2 Real-Time Encoder, 1995 Int’l Conference on Acoustics, Speech, and Signal Processing 2273−76 (May 12, 1995) (Ex. 1007, “Senda”).
II. ANALYSIS OF PETITION

In our analysis of Petitioner’s unpatentability contentions with respect to the Challenged Claims, we next address the applicable principles of law; the level of ordinary skill in the art; the construction of claim terms; and the scope and content of the asserted prior art; and then analyze Petitioner’s contentions with respect to each alleged ground of unpatentability for purposes of determining whether Petitioner shows by a preponderance of the evidence the unpatentability of the Challenged Claims.

A. Principles of Law

A patent claim is unpatentable as obvious “if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains.” 35 U.S.C. § 103.[7] An invention “composed of several elements is not proved obvious merely by demonstrating that each of its elements was, independently, known in the prior art.” KSR Int’l Co. v. Teleflex Inc., 550 U.S. 398, 418 (2007). The question of obviousness is resolved on the basis of underlying factual determinations, including: (1) the scope and content of the prior art; (2) any differences between the claimed subject matter and the prior art; (3) the level of skill in the art; and (4) objective evidence of nonobviousness. Graham v. John Deere Co. of Kansas City, 383 U.S. 1, 17–18 (1966). An obviousness determination “cannot be sustained by mere conclusory statements; instead, there must be some articulated reasoning with some rational underpinning to support the legal conclusion of obviousness.” KSR, 550 U.S. at 418 (quoting In re Kahn, 441 F.3d 977, 988 (Fed. Cir. 2006)); see In re Magnum Oil Tools Int’l, Ltd., 829 F.3d 1364, 1380 (Fed. Cir. 2016). Rather, “it can be important to identify a reason that would have prompted a person of ordinary skill in the relevant field to combine the elements in the way the claimed new invention does.” KSR, 550 U.S. at 418.

[7] The Leahy-Smith America Invents Act (“AIA”), Pub. L. No. 112-29, 125 Stat. 284, 287–88 (2011), amended 35 U.S.C. § 103 effective March 16, 2013. We quote the AIA version of 35 U.S.C. § 103, which applies to applications with an effective filing date after March 16, 2013; however, any differences do not affect our analysis here.

B. Level of Ordinary Skill in the Art

In determining the level of ordinary skill in the art, various factors may be considered, including the “type of problems encountered in the art; prior art solutions to those problems; rapidity with which innovations are made; sophistication of the technology; and educational level of active workers in the field.” In re GPAC, Inc., 57 F.3d 1573, 1579 (Fed. Cir. 1995) (quotation omitted). Petitioner’s declarant, Dr. Freedman, testifies that a person of ordinary skill in the art in the context of the ’005 Patent would have been a person having “at least a bachelor’s degree in electrical engineering, computer engineering, computer science, or a related field[,] together with at least two years of experience in the field of video coding.” Ex. 1003 ¶¶ 33−34. Patent Owner does not challenge this assessment, nor does Patent Owner offer “a competing definition for POSITA.” PO Resp. 8.
As we indicated in our Institution Decision, to the extent that Petitioner’s assessment uses the open-ended “at least” modifiers, we remove those modifiers to obtain definite amounts of education and experience. Inst. Dec. 10. We continue to note that Petitioner’s assessment appears consistent with the level of ordinary skill in the art at the time of the invention as reflected in the prior art in the instant proceeding. See Okajima v. Bourdeau, 261 F.3d 1350, 1355 (Fed. Cir. 2001). For purposes of this Decision, we continue to apply Dr. Freedman’s assessment of the level of ordinary skill in the art.

C. Claim Construction

We apply the same claim construction standard articulated in Phillips v. AWH Corp., 415 F.3d 1303 (Fed. Cir. 2005) (en banc). See Changes to the Claim Construction Standard for Interpreting Claims in Trial Proceedings Before the Patent Trial and Appeal Board, 83 Fed. Reg. 51,340, 51,358 (Oct. 11, 2018) (amending 37 C.F.R. § 42.100(b) effective November 13, 2018) (now codified at 37 C.F.R. § 42.100(b) (2019)). Petitioner acknowledges this standard. Pet. 26. Under Phillips, claim terms are generally afforded “their ordinary and customary meaning.” Phillips, 415 F.3d at 1312. “[T]he ordinary and customary meaning of a claim term is the meaning that the term would have to a person of ordinary skill in the art in question at the time of the invention.” Id. at 1313. “[T]he person of ordinary skill in the art is deemed to read the claim term not only in the context of the particular claim in which the disputed term appears, but in the context of the entire patent, including the specification.” Id. Only terms that are in controversy need to be construed, and then only to the extent necessary to resolve the controversy. Vivid Techs., Inc. v. Am. Sci. & Eng’g, Inc., 200 F.3d 795, 803 (Fed. Cir. 1999).

Petitioner seeks explicit claim construction of two terms: “optimum prediction mode” and “best match.” Pet. 26–28 (citing Ex. 1003 ¶¶ 60–61). With respect to the former, Petitioner asserts that the “optimum prediction mode” should be construed as “the prediction mode that yields the smallest value of an error metric,” and with respect to the latter, Petitioner asserts that the “best match” should be construed to include “the array included in the search ‘having the smallest value of an error metric.’” Id. Patent Owner indicates that “the Board need not construe any claim term in a particular manner in order to arrive at the conclusion that the Petition is substantively deficient,” while reserving the right to object to the proposed constructions and provide its own proposed constructions. PO Resp. 9 (citing Wellman, Inc. v. Eastman Chem. Co., 642 F.3d 1355, 1361 (Fed. Cir. 2011)). We note that neither Patent Owner’s Response nor its Sur-reply explicitly objects to Petitioner’s proposed claim constructions, nor supplies alternative constructions. See generally PO Resp.; PO Sur-reply. We adopt Petitioner’s proposed claim constructions of “optimum prediction mode” and “best match,” as discussed above, for purposes of this decision as we determine that these are consistent with the Specification. See Ex. 1001, 2:58–3:1, 14:47–54, 15:15–17, 20:3–7, 20:17–20.

D. Scope and Content of the Prior Art

Petitioner relies on Ishihara, Mombers, Nakajima, Senda, and AAPA to show the unpatentability of the Challenged Claims. Pet. 29–97.
Each of the first four of these references is summarized briefly below. 1. Overview of Ishihara (Ex. 1004) Ishihara is directed to a processor for encoding in accordance with the MPEG-2 video compression standard. Ex. 1004, 1502. Ishihara indicates that it supports all prediction modes in MPEG-2 and estimates three vectors concurrently. Id. at Abs. A block diagram of the hardware disclosed in Ishihara is shown in Figure 5 and is reproduced below: Fig. 5 of Ishihara illustrates a block diagram of motion estimation processor In Ishihara, the “Integer-pel Unit (IU)” performs “full search block matching” and “realizes a better efficiency for finding the optimum [motion] vectors than the hierarchical search which has been employed in some other processors.” Id. at 1504. Thereafter, the system performs a half-pixel precision search to refine the result. Id. IU “consists of 256 processing elements (PE’s), summation circuits and a minimum-value detector,” with IPR2019-01126 Patent 6,519,005 B2 13 each PE corresponding to a pixel on the template block. Id. at 1504, Fig. 6. Each PE compares pixel data from the template block and a search window, representing pixel data from a reference frame, and computes the absolute difference between the two. Id. The error information is passed into the summation circuit that sums the errors together, which outputs the results to a minimum value detector that keeps track of the vectors that minimize overall error and selects the best vectors for each template block. Id. at 1505. Ishihara also discloses that “[t]he absolute differences computed in each PE are classified in four groups[:] upper-top, upper-bottom, lower-top and lower-bottom.” Id. at 1504, Fig. 8. As shown in TABLE II, reproduced below, the summation circuit will sum together errors for different subsets of pixels and simultaneously calculate the mean absolute difference for different prediction modes. Id. TABLE II of Ishihara provides the distortion summation scheme for three vectors for different picture structures. IPR2019-01126 Patent 6,519,005 B2 14 2. Overview of Mombers (Ex. 1005) Mombers is directed to “the core of a video signal processor dedicated to perform MPEG-2 motion estimation and prediction selection” using a “mixed Hardware/Software approach based on a small RISC controller.” Ex. 1005, Abs. A block diagram of the processor disclosed in Mombers is reproduced below: Figure 1 of Mombers illustrates its motion estimation and prediction selection module (color annotations added by Petitioner, Pet. 71) In Mombers, a vector generation unit (VGU) is “programmed to generate candidate motion vectors” based on pixel information. Id. § 3.1. This pixel information is converted into the equivalent pixel information, which is used to calculate the matching errors associated with each vector IPR2019-01126 Patent 6,519,005 B2 15 (e.g., the mean absolute difference between the macroblock and the portion of the search window that the vector corresponds to). Id. The resulting matching errors can be used to select a prediction mode to be used. Also illustrated in Figure 1 of Mombers is a pixel processor (PP) that calculates the errors for a plurality of different prediction modes. The PP “accumulates matching costs for the entire macroblock (16x16) and for its two halves (16x8).” Id. § 3.2. 
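The arithmetic relationship on which this kind of concurrent accumulation rests is that the error metric for a 16x16 frame-based comparison equals the sum of the error metrics for the corresponding 16x8 field-based comparisons over the same pixels. The following short sketch illustrates that relationship using hypothetical pixel values; it is offered for orientation only and is not drawn from the record.

    # A single pass over a 16x16 macroblock accumulates the sum of absolute
    # differences (SAD) separately for the top field (even-numbered rows) and the
    # bottom field (odd-numbered rows); the frame-prediction error then falls out
    # of the same pass as the sum of the two field errors.
    import random

    MB = 16  # macroblock dimension in pixels

    def field_and_frame_sad(template, candidate):
        sad_top = sad_bottom = 0
        for row in range(MB):
            for col in range(MB):
                diff = abs(template[row][col] - candidate[row][col])
                if row % 2 == 0:
                    sad_top += diff      # top field: even-numbered rows
                else:
                    sad_bottom += diff   # bottom field: odd-numbered rows
        return sad_top, sad_bottom, sad_top + sad_bottom

    # Hypothetical stand-ins for the template block and one candidate position
    # in the search window.
    template = [[random.randint(0, 255) for _ in range(MB)] for _ in range(MB)]
    candidate = [[random.randint(0, 255) for _ in range(MB)] for _ in range(MB)]

    top_sad, bottom_sad, frame_sad = field_and_frame_sad(template, candidate)
    print(top_sad, bottom_sad, frame_sad)

Because both field sums are available from the same pass, no separate search is needed to obtain the frame-prediction error metric.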
The structure of the pixel processor is illustrated in Figure 3 of Mombers and is reproduced below:

Figure 3 of Mombers illustrates the structure of the pixel processor.

The candidate motion vectors, generated by the vector generation unit, are evaluated by the PP, which makes use of a “reference cache (2 banks of 16 * 16 pixels) and the searching window (2 bank[s] of 48 * 64 pixels) cache” and is configured such that “[f]or each matched vector, the PP accumulates matching costs for the entire macroblock (16x16) and for its two halves (16x8). In this way, it is possible to evaluate costs for all types of predictions allowed by the MPEG-2 video standard.” Id. §§ 3.1, 3.2.

3. Overview of Nakajima (Ex. 1006)

Nakajima is directed to a video signal motion compensation prediction system with a coder that includes a comparator for selecting the smallest one of the prediction errors output from the motion estimators. Ex. 1006, Abs., Fig. 3. The comparator chooses an optimum mode by comparing “the prediction error signals E1 to En and output[ting] the selection flag ZM of the motion estimator providing the smallest prediction error signal to a selector 102.” Id. at 2:31−44, Fig. 3.

4. Overview of Senda (Ex. 1007)

Senda discloses a simplified motion estimation process using an approximation for the MPEG-2 real-time encoder that reduces the number of necessary computations in certain steps. Ex. 1007, Abs. The approximation is applied to the MPEG-2 Test Model whereby full-pixel motion vectors are searched exhaustively and thereafter each is refined to half-pixel accuracy. Ex. 1007, 2274, Fig. 1.

E. Obviousness over Ishihara and AAPA

Petitioner contends claims 1−16 and 39−42 are unpatentable under § 103(a) as obvious over Ishihara and AAPA. Pet. 29–70. Patent Owner argues that Ishihara fails to teach or suggest all elements of the independent claims, where Ishihara is relied upon to teach those elements. PO Resp. 9–19; PO Sur-reply 1–13. We address Petitioner’s and Patent Owner’s arguments below and determine, for the reasons provided below, that Petitioner shows by a preponderance of the evidence that Ishihara and AAPA render claims 1−16 and 39−42 obvious.

1. Analysis of Cited Art as Applied to Independent Claim 1

a) Preamble

Petitioner asserts that the preamble of independent claim 1 is taught or suggested by Ishihara. Pet. 33−35. Petitioner points out that Ishihara is directed to MPEG-2 motion estimation that involves coding and compression. Id. at 33 (citing Ex. 1004, Abs., 1502−1503, 1508, Fig. 1). Petitioner also points out that the ’005 Patent discusses the MPEG-2 standard, and describes it as prior art. The ’005 Patent explains that MPEG-2 is a compression technique for encoding/compressing a multimedia data stream of audio and video. Id. at 34 (citing Ex. 1001, 1:41−45). Petitioner further asserts that “[t]o the extent that the preamble’s stated intended use of coding an ‘uncompressed’ video st[r]eam is not explicitly disclosed in Ishihara, a POSITA would have understood Ishihara as teaching that encoding/compressing according to the MPEG-2 standard involves encoding/compressing an uncompressed video stream, as taught by AAPA.” Id. at 34−35 (citing Ex. 1013). Patent Owner does not challenge this aspect of Petitioner’s analysis. See generally PO Resp.; PO Sur-reply. We determine that Petitioner has demonstrated that the preamble of claim 1 is disclosed by Ishihara.
b) Element [1A]

With respect to this element, Petitioner asserts that Ishihara discloses comparing the pixels of a template block, from a picture being encoded, with pixels from a search window in past and/or future pictures. Pet. 35−38 (citing Ex. 1003 ¶¶ 71−73). Petitioner asserts that Ishihara discloses that “[m]otion estimation is a procedure to find the best motion vector representing a displacement to the predictor from the position of the template block.” Id. at 36 (citing Ex. 1004, 1503, Fig. 4). Patent Owner does not challenge this aspect of Petitioner’s analysis. See generally PO Resp.; PO Sur-reply. We determine that Petitioner has demonstrated that element [1A] of claim 1 is taught by Ishihara.

c) Element [1B]

Petitioner asserts that Ishihara discloses this element. Pet. 38−43 (citing Ex. 1003 ¶¶ 74−78). Petitioner argues that Ishihara concurrently performs motion estimation for a plurality of prediction modes, citing that “[t]hree different distortions for three vectors are summed up concurrently under a data flow control in the summation circuit.” Ex. 1004, 1504. Petitioner points to Figure 8 and Table II of Ishihara as providing summation circuits and schemes that estimate the three vectors concurrently for the field and frame structure pictures. Pet. 39−40 (citing Ex. 1004, 1504−1505, Table II, Fig. 8). Petitioner further asserts that the output of the summation circuit goes to a minimum-value detector which finds the minimum distortions and selects the best vectors for each template block. Id. at 42 (citing Ex. 1004, 1504−1505). Lastly, Petitioner asserts that the determination of a best prediction mode based on a minimum distortion is equivalent to determining an optimum prediction mode. Id. at 43.

Patent Owner contends, with respect to Ishihara’s disclosure of determining the best prediction mode, that “Ishihara specifies only that the best prediction mode is selected from three prediction modes; there is no disclosure that the determination of the best prediction mode is based upon comparison of pixels in first- and second-pixel arrays.” PO Resp. 12. Petitioner responds that claim 1 does not recite that the optimum mode is based on a comparison of pixels, but rather recites that motion estimation is done to determine the optimum mode. Pet. Reply 3−4. Additionally, Petitioner points out that Ishihara’s motion estimation process uses “block matching” with pixel comparisons to find the best motion vector “whose distortion is a minimum over the search range.” Id. at 4 (citing Ex. 1004, 1503−1504, Fig. 2; Ex. 1003 ¶¶ 76−78). We agree with Petitioner that Ishihara clearly provides for the consideration of multiple prediction modes, such as:

    For the frame structure pictures, the best prediction mode and motion vectors for a macroblock are selected from three prediction modes, frame-based prediction, field-based prediction, and dual-prime prediction.

Ex. 1004, 1503 (emphases added). As discussed above, the selection is made on the basis of block matching through pixel comparisons.

Patent Owner also argues that there is no disclosure in Ishihara that “explains how the best prediction mode is used in the motion process described in Ishihara.” PO Resp. 12. Patent Owner also argues that “Ishihara is completely lacking in describing any linkage between the best prediction mode [and] the motion process.” PO Sur-reply 3−4.
Although we agree that Ishihara does not detail how the best prediction mode is used after it is selected, we disagree with Patent Owner that such an explanation is needed in the context of claim 1. Claim 1 only requires motion estimation “in order to determine” the optimum prediction mode and does not explicitly recite the setting of that optimum prediction mode. The next method step, element [1C], makes a determination based on the optimum prediction mode, and we consider that element below. In the context of element [1B], however, we are persuaded that Ishihara makes a determination of the best or optimum prediction mode, which is all that is required of that element. We determine that Petitioner has demonstrated that element [1B] of claim 1 is taught by Ishihara.

d) Element [1C]

Petitioner asserts that Ishihara renders obvious determining which of the second pixel arrays (e.g., pixels from the “search window”) constitutes a best match with respect to the first pixel array for the optimum prediction mode. Pet. 43−46 (citing Ex. 1003 ¶¶ 79−82). Ishihara discloses that “[m]otion estimation is a procedure to find the best motion vector representing a displacement to the predictor from the position of the template block.” Ex. 1004, 1503. Petitioner argues that a person of ordinary skill in the art “would have also understood this to mean that a motion vector uniquely identifies a pixel array in relation to the pixels in the template block,” because a motion vector is a way of representing the location of a set of pixels relative to another set of pixels, per the AAPA. Pet. 44−45. Because of this, Petitioner asserts that one of ordinary skill in the art would have understood that determining the best vector also determines the subset of the reference picture (e.g., the second pixel array) that constitutes a best match. Id. at 45 (citing Ex. 1003 ¶ 83).

Patent Owner argues that Ishihara has no disclosure of determining which of the second pixel arrays yields the smallest value of the error metric for the determined optimum prediction mode because Ishihara only teaches comparison of vectors, irrespective of a best prediction mode. PO Resp. 14−15. As such, Patent Owner asserts that Petitioner has not shown a disclosure in Ishihara of a method that identifies the best match based upon a determined optimum prediction mode. Id. at 15. Petitioner responds that Ishihara discloses finding the best vector for each mode, including the optimum prediction mode, and then finding the best vector across modes to identify the best mode. Pet. Reply 8 (citing Ex. 1004, 1505). Petitioner continues that this view is supported by its declarant, Dr. Freedman, and that Ishihara’s distortion calculations are actually based on the mode associated with the vector. Id. (citing Ex. 1003 ¶¶ 75−82). Petitioner also argues that Patent Owner has alleged that the claimed determination step, in element [1C], must occur after the optimum prediction mode is determined and be based on that determination, but argues that this ordering is not required by claim 1 and is contrary to the embodiments disclosed in the ’005 Patent. Pet. Reply 9−12. Petitioner continues that if a best match has been determined for each of the candidate modes, then it was necessarily also determined for the optimum prediction mode, the optimum mode being one of those candidate modes. Id. at 9.
Additionally, Petitioner asserts that all embodiments in the Specification disclose determining best matches before selecting the optimum prediction mode. Id. at 9−11 (citing Ex. 1001, 4:27−67, 5:1−6:57, 7:27−8:51, 9:36−10:65, 12:63−14:41, Figs. 3, 5, 7, 8). Patent Owner responds that the clear language of claim 1 has the element [1B] setting of the optimum prediction mode as a predicate, and therefore occurring before it is used in element [1C] to identify the best match. PO Sur-reply 6. Patent Owner also argues that Petitioner has conflated “two distinct terms” in the specification, namely “best match results,” where the results are merely candidates, and “best match produced by the selected prediction mode,” with the actual best match selected based on the optimum prediction mode. Id. at 6−7.

We agree with the analysis of Petitioner. As discussed above, element [1B] recites, in part, that motion estimation occurs “in order to determine which of the prediction modes is an optimum prediction mode.” That recitation does not provide that the optimum prediction mode is “set” or specified, in that aspect of the claim, as Patent Owner appears to allege. The recitation of “in order to determine” in that element dictates a purpose for that motion estimation, i.e., what the motion estimations can be used to show, but does not explicitly state that the optimum prediction mode is determined in that method step. In other words, a system that concurrently performs motion estimation for each of a set of different prediction modes, and which allows for, but does not indicate, an optimum prediction mode, would fall within the scope of that claim element. In contrast, in element [1C], “a best match” is specifically determined “for the optimum prediction mode.” In that step, the optimum prediction mode must be known at some point in order that the proper determination of “a best match” be made. We find nothing in claim 1, or in the supporting Specification, that would prevent the optimum prediction mode determination from being made during that same step as the “best match” determination in element [1C].

As Petitioner has argued, the embodiments described in the ’005 Patent all disclose determining multiple best matches, and then determining the “optimum prediction mode,” as discussed above. We find this disclosure does not conflict with Patent Owner’s characterization of the different embodiments as disclosing the best match results, the best match being produced by the selected “optimum prediction mode.” In each, multiple results of the motion estimation of the pixel arrays would be determined, and a best match would be determined on the basis of those results, given the different prediction modes for those results. Thus, we agree with Petitioner that Ishihara discloses or suggests determining which of the pixels from the “search window” constitutes a best match with respect to the first pixel array. We determine that Petitioner has demonstrated that element [1C] of claim 1 is disclosed by Ishihara or suggested by Ishihara and AAPA.

e) Element [1D]

Petitioner asserts that Ishihara discloses this last element of claim 1. Pet. 47−49 (citing Ex. 1003 ¶¶ 84−89). Petitioner asserts that after Ishihara’s ME2 processor determines the “best” vector, a single motion vector (MV) output is generated from the minimum-value detector. Id. at 47 (citing Ex. 1004, 1504−1505, Fig. 6).
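The operation just described can be stated compactly: if the lowest-distortion vector is tracked for each candidate prediction mode during a single search, the minimum-value detector’s output identifies, in one determination, the optimum mode and the motion vector (and hence the best-matching pixel array) for that mode. The sketch below uses hypothetical modes, vectors, and distortion values; it is offered for illustration only and is not material from the record.

    # Hypothetical lowest-distortion results tracked for each prediction mode
    # during a single search: (prediction mode, motion vector, distortion).
    per_mode_best = [
        ("frame-based", (3, -1), 412),
        ("field-based", (2, 0), 377),
        ("dual-prime",  (3, -1), 455),
    ]

    # The entry with the smallest distortion supplies the optimum prediction mode
    # and the motion vector that is output for the template block.
    optimum_mode, output_vector, distortion = min(per_mode_best, key=lambda entry: entry[2])
    print(optimum_mode, output_vector, distortion)  # field-based (2, 0) 377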
Petitioner also asserts that “[t]o the extent that Patent Owner would argue, incorrectly, that the vector must be created from scratch (e.g., new data allocated from memory to store vector information) after the determination is made, this is also disclosed or rendered obvious by Ishihara.” Id. at 47−49 (citing Ex. 1004, 1504−1506, Fig. 10). Patent Owner disputes what it describes as Petitioner argument that “‘generating a motion vector’ does not really mean generating a vector because it is understood that ‘passing a motion vector downstream’ constitutes generating a vector.” PO Resp. 16 (citing Pet. 48). Patent Owner argues that claim 1 requires that the vector be “generated in response to the best match determining step,” consistent with the language of claim 1 and that Petitioner’s interpretation “renders the language ‘in response to’ as having no meaning.” Id. Patent Owner also contests that the prosecution history supports Petitioner’s interpretation of this element, asserting that “the interpretation of this language by a single Examiner, which conflicts with the plain language of the claim, does not constitute an understanding” of that element. PO Resp. 16−17 (citing Ex. 1002, 120−121). IPR2019-01126 Patent 6,519,005 B2 24 In response, Petitioner argues that Patent Owner’s interpretation “contradicts all embodiments disclosed in the ’005 [P]atent.” Pet. Reply 15. Petitioner discusses the “dual-prime mode” disclosed in the ’005 Patent, and argues that motion vectors must be produced prior to determining if the dual-prime prediction is the optimal mode. Id. at 15−16 (citing Ex. 1001, 12:34-42). Petitioner also alleges that Patent Owner’s interpretation “would also nullify dependent claims 7, 10, and 11, as they require the selection of modes other than dual-prime.” Id. at 16 (citing Ex. 1001, 15:37-60; CytoLogix Corp. v. Ventana Med. Sys., Inc., 424 F.3d 1168, 1173 (Fed. Cir. 2005)). Additionally, Petitioner asserts that Figures 3 and 5 of the ’005 Patent, and their associated descriptions, indicate that motion vectors are generated before the best match or optimum prediction mode are determined. Id. at 16−19 (citing Ex. 1001, 7:51−53, 8:28−32, 8:60−9:27, 10:20−65, 13:4−8, 60−63). Patent Owner argues that Petitioner’s arguments regarding the meaning of “generating a motion vector” are “entirely new,” and we should deny consideration of these additional arguments. PO Sur-reply 8. However, in the same section, Patent Owner acknowledges that the “possible need to construe this term was first raised and acknowledged by Petitioner in its Petition.” Id. at 9 (citing Pet. 81). As such, we are not persuaded that the arguments regarding the interpretation of “generating a motion vector” are new, and in any event Patent Owner addressed the different interpretations at length in its Patent Owner Response. See PO Resp. 15−19. Although Petitioner provided additional analysis, we are not persuaded that Petitioner pursued a new direction with a new approach to IPR2019-01126 Patent 6,519,005 B2 25 this issue, as alleged by Patent Owner. As such, we do not deny consideration of the Reply, or the arguments provided therein. Responding to Petitioner arguments, Patent Owner argues that Petitioner fails to cite any case law support that Patent Owner’s interpretation would be improper if it did not read on all embodiments disclosed in the ’005 Patent. PO Sur-reply 10−11. 
Additionally, Patent Owner argues that dependent claims are merely an aid in interpreting the scope of claims from which they depend, and the actual recitation of the claim must control if that language is clear. Id. at 11−12.

As we noted in the Institution Decision, Petitioner relies upon testimony from Dr. Freedman that “[c]reating motion vectors and selecting one to output based on the determination is consistent with how ‘generating’ was understood during prosecution of the ’005 [P]atent. See Ex. 1002 at 129 (‘generating’ a motion vector encompasses the acts of selecting and passing a motion vector downstream).” Ex. 1003 ¶ 86. We have no contravening testimony on this point. Further, at Oral Hearing, Patent Owner’s counsel provided the following explanation of what this term means:

    JUDGE TURNER: Counsel, before you move since we’re on element 1D, basically we have unrebutted corroborated expert testimony on what generating a motion vector is. There didn’t seem to be -- there’s certainly no expert testimony on the other side, so why providing can’t be, you know, the same as generating a motion vector?

    MR. KOIDE: I think here, generating things, you know, calculating and also, you know, filling that data field -- so generating can be calculating or kind of filling a data field for it, and that’s what’s being done here. It’s generating a motion vector for the first pixel array in response to the determining step, and that’s shown in --

Tr. 44−45 (emphasis added). As such, it is clear that “generating a motion vector” need not be purely creating that vector “from scratch,” as “filling the data field” would comport with “generating” without requiring new calculation of a motion vector. We are also persuaded that the language of element [1D], namely “in response to the determining step,” acts to set when the generating occurs in the context of the claim, i.e., after the prior step, instead of requiring a recalculation of a motion vector. Further, we agree with Petitioner that the embodiments disclosed in the ’005 Patent also make clear that the process discussed there indicates that motion vectors are generated before the best match or optimum prediction mode is determined. As such, we determine, based on the preponderance of evidence before us, that Ishihara’s disclosure that its ME2 processor determines the “best” vector and that a single motion vector (MV) output is generated from the minimum-value detector meets the limitations of element [1D] of claim 1.

Both parties also discuss the “half-pel” processing that is disclosed in Ishihara as also meeting the limitation of element [1D]. Pet. 48−49; PO Resp. 17−19; Pet. Reply 20−22; PO Sur-reply 12−13. As per our discussion above, however, we agree with the Petitioner’s understanding of the scope of element [1D] and we need not reach the additional disclosure of Ishihara with respect to this element. We determine that Petitioner has demonstrated that element [1D] of claim 1 is taught by Ishihara.

Based on the evidence in this record, we are persuaded that Petitioner has shown by a preponderance of the evidence that Ishihara, in view of AAPA, teaches or suggests all of the limitations of independent claim 1 and renders that claim obvious under 35 U.S.C. § 103, for the reasons identified in the Petition, as discussed above.
2. Analysis of Cited Art as Applied to Independent Claims 39 and 41

With respect to the limitations of independent claim 39, Petitioner relies on the citations and arguments made with respect to the preamble and element [1B] of claim 1. Pet. 64−67 (citing Pet. 33−35, 38−43; Ex. 1003 ¶¶ 109−110). Similarly, with respect to the limitations of independent claim 41, Petitioner relies on the citations and arguments made with respect to the preamble and element [1B] of claim 1. Pet. 67−70 (citing Pet. 33−35, 38−43, 66−67; Ex. 1003 ¶¶ 112−118). Patent Owner states that, with respect to independent claims 39 and 41, the Petition relies on the same discussion as the Petition’s challenge to independent claim 1, emphasizing the alleged deficiencies of this ground with respect to elements of independent claim 1, discussed above. See PO Resp. 9. We do not find Patent Owner’s arguments persuasive, per our discussion of independent claim 1 above.

Based on the evidence in this record, we are persuaded that Petitioner has shown by a preponderance of the evidence that Ishihara, in view of AAPA, teaches or suggests all of the limitations of claims 39 and 41 and renders those claims obvious under 35 U.S.C. § 103, for the reasons identified in the Petition, as discussed above.

3. Analysis of Cited Art as Applied to Dependent Claims 2−16, 40, and 42

With respect to dependent claims 2−5, 40, and 42, Petitioner asserts that Ishihara discloses the particulars required by the MPEG standard. Pet. 49−52, 67, 70. Petitioner identifies that Ishihara describes a “MPEG2 motion estimation processor” and supports all prediction modes in MPEG-2. Id. (citing Ex. 1004, Abs., TABLE I, Fig. 2; Ex. 1003 ¶¶ 90−95, 111, 119). Patent Owner has not presented contrary arguments. Based on this, we determine that Petitioner has shown that Ishihara, in view of AAPA, teaches the limitations of claims 2−5, 40, and 42.

Claim 6 details providing information identifying a picture type, claim 7 lists different prediction modes for those picture types, and claims 8−12 recite different sub-sets of prediction modes. Petitioner asserts that the AAPA explains that identifying the picture type is part of the MPEG standard. Pet. 52−54 (citing Ex. 1001, 1:55−2:5; Ex. 1004, Fig. 4; Ex. 1013, 4:58−5:14, 5:28−54). Petitioner asserts that one of ordinary skill in the art would have been motivated to identify the picture type so that proper motion compensation can be performed, and Ishihara explicitly allows for the standard MPEG-2 prediction modes to be used. Id. at 54−55. Petitioner raises similar arguments with respect to claims 8−12. Id. at 57−63. Patent Owner has not presented contrary arguments. Based on this, we determine that Petitioner has shown that Ishihara, in view of AAPA, suggests the limitations of claims 6−12.

Claims 13−16 recite devices that implement different methods recited in specific claims. Petitioner asserts that Ishihara discloses a “MPEG2 motion estimation processor,” which can perform the different methods claimed, as discussed above. Pet. 63 (citing Ex. 1004, Abs.; Ex. 1003 ¶ 108). Patent Owner has not presented contrary arguments. Based on this, we determine that Petitioner has shown sufficiently that Ishihara, in view of AAPA, teaches the limitations of claims 13−16.
Patent Owner does not challenge Petitioner’s analysis of the dependent claims other than arguing alleged deficiencies of this ground with respect to elements of independent claim 1, discussed above. See PO Resp.; PO Sur-reply.

Based on the evidence in this record, we are persuaded that Petitioner has shown by a preponderance of the evidence that Ishihara, in view of AAPA, teaches or suggests all of the limitations of claims 2−16, 40, and 42 and renders those claims obvious under 35 U.S.C. § 103, for the reasons identified in the Petition, as discussed above.

4. Conclusion on Obviousness over Ishihara in view of AAPA

For the reasons provided above, we determine that Petitioner has shown by a preponderance of the evidence that Ishihara, in view of AAPA, renders claims 1−16 and 39−42 of the ’005 Patent unpatentable under 35 U.S.C. § 103.

F. Alleged Obviousness over Mombers, Nakajima, Senda

Petitioner contends that claims 1−5, 7−16, 41, and 42 are unpatentable under § 103(a) as obvious over Mombers, Nakajima, and Senda. Pet. 70–91. Patent Owner argues that Mombers fails to teach or suggest all elements of the independent claims, where Mombers is relied upon to teach those elements. PO Resp. 20–30; PO Sur-reply 13–16. We address Petitioner’s and Patent Owner’s arguments below and determine, for the reasons provided below, that Petitioner shows by a preponderance of the evidence that Mombers, Nakajima, and Senda render claims 1−5, 7−16, 41, and 42 obvious.

1. Analysis of Cited Art as Applied to Independent Claim 1

a) Preamble

Petitioner asserts that the preamble of independent claim 1 is taught by Mombers. Pet. 72−73. Petitioner points out that Mombers is directed to “the core of a video signal processor dedicated to perform MPEG-2 motion estimation and prediction selection.” Id. at 72 (citing Ex. 1005, Abs.; Ex. 1003 ¶¶ 123−124). Petitioner further asserts that “[t]o the extent that the preamble’s stated intended use of motion coding an ‘uncompressed’ video stream is not explicitly disclosed, a POSITA would have understood that this is disclosed by Mombers because the MPEG-2 algorithm’s purpose was to encode uncompressed video streams,” and that it was well-known that MPEG-2 motion coding was performed on uncompressed video data streams. Id. at 73 (citing Ex. 1013, 2:11−17, Fig. 1; Ex. 1012, 1:15−20). Patent Owner does not challenge this aspect of Petitioner’s analysis. See generally PO Resp.; PO Sur-reply. We determine that Petitioner has demonstrated that the preamble of claim 1 is taught by Mombers.

b) Element [1A]

With respect to this element, Petitioner asserts that Mombers discloses comparing pixels of a reference macroblock “Ref MB” stored in a reference cache with sets of pixels from a search window stored in a “searching window” cache. Pet. 73−74 (citing Ex. 1003 ¶¶ 125−126). Petitioner asserts that Mombers discloses that a vector generation unit “‘is programmed to generate candidate motion vectors’ to be evaluated by a ‘pixel co-processor.’” Id. at 74 (quoting Ex. 1005 § 3.1). Petitioner also asserts that Mombers details that a pixel processor (PP) makes use of a reference cache and the searching window cache and is configured such that “[f]or each matched vector, the PP accumulates matching costs for the entire macroblock (16x16) and for its two halves (16x8). In this way, it is possible to evaluate costs for all types of predictions allowed by the MPEG-2 video standard.” Id. (quoting Ex. 1005 § 3.2).
Patent Owner does not challenge this aspect of Petitioner’s analysis. See generally PO Resp.; PO Sur-reply. We determine that Petitioner has demonstrated that element [1A] of claim 1 is taught by Mombers.

c) Element [1B]

With respect to this element, Petitioner asserts that Mombers, alone or in combination with Nakajima, discloses or renders obvious this element. Pet. 75−78 (citing Ex. 1003 ¶¶ 127−132). With respect to Mombers alone, Petitioner argues that Mombers provides:

    [T]he [pixel processor] accumulates matching costs for the entire macroblock (16x16) and for its two halves (16x8). In this way, it is possible to evaluate costs for all types of predictions allowed by the MPEG-2 video standard: frame prediction for frame pictures (16x16 cost); field prediction for frame pictures (16x8 costs); field prediction for field pictures (16x16 cost); 16x8 motion compensation for field pictures (16x8 costs); dual prime prediction both for field and frame pictures (dual prime cost) by estimating both the dual prime motion vector and the differential one.

Id. at 75 (quoting Ex. 1005 § 3.2). Thus, Petitioner asserts that each time a candidate vector is created, Mombers concurrently performs motion estimation for the plurality of different prediction modes of the MPEG-2 standard so that the prediction costs can all be evaluated without a separate search for each mode. Id. at 75−76. Additionally, Mombers details that the errors can be used to select a particular prediction mode and that any algorithm can be applied to select the best prediction mode. Id. at 76 (citing Ex. 1005 §§ 3.1, 3.3, 4).

Petitioner also asserts that to the extent that it may be argued that the claimed “optimum prediction mode” must be based on the smallest error metric, and that Mombers does not explicitly disclose that relationship, a person of ordinary skill in the art would have known that choosing the best prediction mode means choosing the prediction mode with the smallest error metric, per Nakajima. Pet. 76 (see also Ex. 1003 ¶¶ 131−132 for Dr. Freedman’s testimony on the same). As discussed in the synopsis above, Nakajima teaches a comparator that chooses an optimum mode by “compar[ing] the prediction error signals E1 to En and output[ting] the selection flag ZM of the motion estimator providing the smallest prediction error signal to a selector 102.” Ex. 1006, 2:31−44, Fig. 3. Petitioner argues that one of ordinary skill in the art would have been motivated to modify Mombers to choose a prediction mode that has the lowest amount of error as taught by Nakajima because it would provide for the smallest prediction error and would allow for improved coding efficiency. Pet. 77−78 (citing Ex. 1003 ¶¶ 131−132). Patent Owner has not presented contrary arguments with respect to the combination of Mombers and Nakajima. We agree with Petitioner that one of ordinary skill in the art would have found it obvious to modify Mombers in view of Nakajima, as Petitioner has proposed. Pet. 76−78.

Patent Owner disputes Petitioner’s characterization of Mombers, asserting that “what Mombers discloses is a list of possible calculations that its ‘pixel processor’ can perform (see Ex. 1005, § 3.2), but Mombers does not disclose performing the motion estimation calculations for each of a plurality of different prediction modes concurrently.” PO Resp. 22.
Patent Owner continues that Mombers is silent regarding the timing or sequence of its calculations, especially with respect to concurrent performance of motion estimation. Id. at 23−25. Patent Owner argues that calculations could occur in a serial fashion and still permit comparisons between prediction modes. Id. at 25−26.

Petitioner responds that the Petition demonstrated that a person of ordinary skill in the art would have understood Mombers as disclosing a coherent process, with that understanding supported by its declarant. Pet. Reply 6 (citing Pet. 71; Ex. 1003 ¶¶ 121−122). Petitioner also asserts that matching costs for all modes are determined per vector that is generated during a search, so that those costs are accumulated at the same time for the different prediction modes. Id. at 6−7 (citing Ex. 1005 § 3.2). Patent Owner disputes Petitioner’s contentions. PO Sur-reply 13−14.

We agree with Petitioner. In view of all of the evidence submitted over the course of the trial, we are persuaded that ordinarily skilled artisans would have understood Mombers as disclosing a coherent and understandable process. A reference must be considered for everything that it teaches by way of technology. EWP Corp. v. Reliance Universal Inc., 755 F.2d 898 (Fed. Cir. 1985). As discussed below, specific processes of determining matching costs are disclosed, and thus we determine it is sufficiently specific to guide the construction of its pixel processor capable of performing the calculations disclosed. See Ex. 1005 § 3.2, Fig. 3. Mombers provides that “matching costs for the entire macroblock (16x16) and for its two halves (16x8)” are computed, to allow for costs to be evaluated for different prediction modes, thus providing a timing or sequence for the calculations. Ex. 1005 § 3.2. Thus, motion estimation would be performed for multiple prediction modes at the same time, which is the same as performing those estimations concurrently. As Petitioner asserts, this process is similar to that disclosed in the ’005 Patent, where different error metrics are calculated for different prediction modes. See Pet. Reply 7 (citing Ex. 1001, 8:14−48, 9:55−10:8). As such, we are not persuaded by Patent Owner’s arguments. We determine that Petitioner has demonstrated that element [1B] of claim 1 is suggested by Mombers and Nakajima.

d) Element [1C]

Petitioner asserts that Mombers discloses this element of claim 1. Pet. 78−79 (citing Ex. 1003 ¶¶ 133−135). Petitioner points out that Mombers discloses a sorting unit that “performs the sorting of the candidate vectors based on the MAE cost provided by the [pixel processor].” Id. at 79 (quoting Ex. 1005 § 3.4). Petitioner also asserts that to the extent it is not explicitly disclosed, “a POSITA would have recognized that sorting the ‘candidate vectors’ by the matching error cost would determine a ‘best match’ for the optimal prediction mode.” Id. Additionally, Petitioner asserts that determining a best candidate vector also determines which of the second pixel arrays constitutes a best match. Id. As we discuss further below, with respect to Patent Owner’s contrary arguments, we agree with Petitioner that one of ordinary skill in the art would have found it obvious to consider the sorting of vectors according to their matching error cost one way to determine a best match, as Petitioner has proposed. Id.
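The role that such sorting plays can be illustrated briefly: once candidate vectors are ranked by their accumulated matching error cost, the candidate with the smallest cost, and with it the best-matching pixel array, is simply the first entry in the ranking. The sketch below uses hypothetical vectors and costs for illustration only; it is not material from the record.

    # Hypothetical candidate motion vectors and their accumulated matching error costs.
    candidates = [
        ((1, -2), 420),
        ((0, 3), 365),
        ((2, 1), 495),
    ]

    # Sorting by cost makes the best match self-evident: the first entry after the
    # sort is the candidate with the lowest matching error cost.
    ranked = sorted(candidates, key=lambda entry: entry[1])
    best_vector, best_cost = ranked[0]
    print(best_vector, best_cost)  # (0, 3) 365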
Patent Owner alleges that the Petition “essentially admit[s] that Mombers does not teach” this element and reiterates that this element “requires the step of determining which of the second pixel arrays comprises a best match for the optimum prediction mode, which was determined in the previous step.” PO Resp. 27. As discussed above in Section II.E.1.d, we are not persuaded by Patent Owner’s characterization of element [1C], instead determining that a determination of the optimum prediction mode during the method step of element [1C] falls within the scope of that element.

Patent Owner also argues that Mombers’ disclosure of sorting of candidate vectors is different from determining a best match, as required by claim 1. PO Resp. 27−28. Patent Owner continues that any sorting of candidates would not be commensurate with determining a best match based upon error metrics. Id. at 28. Petitioner responds that sorting of candidate vectors by matching error cost would determine a best match and that best match would occur for the optimal prediction mode. Pet. Reply 13. Additionally, Petitioner argues that because Mombers determines a best match by sorting vectors across all modes, it does determine the best match for the optimum prediction mode as claimed since one of the modes will be selected as the optimum mode. Id. at 14. Patent Owner disputes Petitioner’s arguments. PO Sur-reply 15−16.

As Petitioner points out, Mombers discloses that it performs the sorting of the candidate vectors based on the MAE cost provided by the pixel processor. Each candidate vector would have an error metric associated therewith and be sorted on that basis, such that the candidate with the lowest error metric would be self-evident, because of the sorting. The resulting best match candidate would have an associated mode. This is supported by unrebutted testimony that one of ordinary skill in the art would have interpreted Mombers as Petitioner has asserted. See Ex. 1003 ¶¶ 133−135. As such, we are not persuaded by Patent Owner’s arguments. We determine that Petitioner has demonstrated that element [1C] of claim 1 is taught by Mombers.

e) Element [1D]

Petitioner asserts that Mombers, alone or in combination with Senda, discloses this last element of claim 1. Pet. 80−83 (citing Ex. 1003 ¶¶ 136−142).
Lastly, Petitioner asserts that the use of the technique of Senda in the system of Mombers and Nakajima would have involved the use of a known technique to improve a similar system to achieve predictable results, for the benefit of providing more accurate motion vectors and reducing the incidence of blocking artifacts at low bit rates. Id. at 82. Patent Owner has not presented contrary arguments with respect to the combination of Mombers and Senda. We agree with Petitioner that one of ordinary skill in the art would have found it obvious to modify Mombers in view of Senda, as Petitioner has proposed. Pet. 81−83.

Patent Owner relies upon its prior arguments, made with respect to Ishihara, that element [1D] “requires the generation of a motion vector in response to identification of the best match in the determining step,” asserting that Mombers “has no such teaching.” PO Resp. 28. As we have discussed above (Section II.E.1.e), we do not find Patent Owner’s interpretation of element [1D] to be valid in view of all of the evidence of record. Patent Owner’s arguments, with respect to the ground applying Mombers, are the same as those discussed above (PO Resp. 28−30; PO Sur-reply 16), and we determine that they are equally unavailing with respect to this ground of unpatentability. We determine that Petitioner has demonstrated that element [1D] of claim 1 is taught by Mombers, as well as suggested by Mombers and Senda.

Based on the evidence in this record, we are persuaded that Petitioner has shown by a preponderance of the evidence that Mombers, Nakajima, and Senda collectively teach all of the limitations of independent claim 1 and render that claim obvious under 35 U.S.C. § 103, for the reasons identified in the Petition, as discussed above.

2. Analysis of Cited Art as Applied to Independent Claim 41

With respect to the limitations of independent claim 41, Petitioner relies on the citations and arguments made with respect to the preamble and element [1B] of claim 1. Pet. 89−91 (citing Pet. 72−73, 75−83; Ex. 1003 ¶¶ 155−162). Patent Owner does not challenge Petitioner’s application of its analysis to this independent claim; instead, Patent Owner emphasizes the alleged deficiencies of this ground with respect to elements of independent claim 1, discussed above. See PO Resp.; PO Sur-reply. We do not find Patent Owner’s arguments to be persuasive.

Based on the evidence in this record, we are persuaded that Petitioner has shown by a preponderance of the evidence that Mombers, Nakajima, and Senda collectively teach all of the limitations of claim 41 and render that claim obvious under 35 U.S.C. § 103, for the reasons identified in the Petition, as discussed above.

3. Analysis of Cited Art as Applied to Dependent Claims 2−5, 7−16, and 42

With respect to dependent claims 2−5 and 42, Petitioner asserts that Mombers discloses or renders obvious the particulars required by the MPEG standard. Pet. 83−85, 91. Petitioner identifies that Mombers states that “[f]or each matched vector, the [pixel processor] accumulates matching costs for the entire macroblock (16x16) and for its two halves (16x8). In this way, it is possible to evaluate costs for all types of predictions allowed by the MPEG-2 video standard.” Id. (quoting Ex. 1005 § 3.2; citing Ex. 1005, Abs.; Ex. 1003 ¶¶ 143−147, 163). Based on this, we determine that Petitioner has shown that Mombers, in view of Nakajima and Senda, teaches the limitations of claims 2−5 and 42.
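For illustration of the passage quoted above, the sketch below shows how a single pass over one candidate position can accumulate a matching cost for the whole 16x16 macroblock and, at the same time, separate costs for its two 16x8 halves. The use of sum-of-absolute-differences as the cost, the even/odd-row split into halves, and the random pixel data are assumptions made for this example and are not taken from Mombers.

```python
# Hypothetical sketch: one pass over a 16x16 block accumulates a cost for
# the full macroblock and for its two 16x8 halves.  Sum of absolute
# differences (SAD) stands in for the MAE cost; splitting the halves by
# even/odd rows (top/bottom field lines) is an assumption made for this
# example, as is the random pixel data.

import random

def accumulate_costs(block, reference):
    """block and reference are 16x16 lists of integer pixel values."""
    full = top = bottom = 0
    for row in range(16):
        for col in range(16):
            diff = abs(block[row][col] - reference[row][col])
            full += diff           # cost for the entire 16x16 macroblock
            if row % 2 == 0:
                top += diff        # cost for one 16x8 half (even rows)
            else:
                bottom += diff     # cost for the other 16x8 half (odd rows)
    return full, top, bottom

random.seed(0)
block = [[random.randint(0, 255) for _ in range(16)] for _ in range(16)]
ref = [[random.randint(0, 255) for _ in range(16)] for _ in range(16)]
print(accumulate_costs(block, ref))
```

Because all three costs in this illustration come from the same pass over the same pixels, evaluating them for different prediction types does not require separate searches.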
Claim 7 lists different prediction modes that can form the plurality of prediction modes recited in claim 1, and claims 8−12 recite different subsets of those prediction modes. Petitioner asserts that Mombers allows for costs to be evaluated “for all types of predictions allowed by the MPEG-2 video standard,” and delineates the different prediction modes. Pet. 85−88 (citing Ex. 1005 § 3.2; quoting Ex. 1003 ¶¶ 148−153). Based on this, we determine that Petitioner has shown that Mombers, in view of Nakajima and Senda, teaches the limitations of claims 7−12.

Claims 13−16 recite devices that implement different methods recited in specific claims. Petitioner asserts that Mombers discloses that “the core of a video signal processor [is] dedicated to perform MPEG-2 motion estimation and prediction selection,” and thus renders obvious devices implementing the methods recited in claims 1 and 7−9, as discussed above. Pet. 88−89 (quoting Ex. 1005, Abs.; citing Ex. 1003 ¶ 154). Based on this, we determine that Petitioner has shown that Mombers, in view of Nakajima and Senda, teaches the limitations of claims 13−16.

Patent Owner does not challenge Petitioner’s analysis of the dependent claims other than arguing alleged deficiencies of this ground with respect to elements of independent claim 1, discussed above. See PO Resp. 20. Based on the evidence in this record, we are persuaded that Petitioner has shown by a preponderance of the evidence that Mombers, Nakajima, and Senda collectively teach all of the limitations of claims 2−16 and 42 and render those claims obvious under 35 U.S.C. § 103, for the reasons identified in the Petition, as discussed above.

4. Conclusion on Obviousness over Mombers, Nakajima, and Senda

For the reasons provided above, we determine that Petitioner has shown by a preponderance of the evidence that Mombers, Nakajima, and Senda render claims 1−5, 7−16, 41, and 42 of the ’005 Patent unpatentable under 35 U.S.C. § 103.

G. Alleged Obviousness over Mombers and Nakajima

Petitioner contends claims 39 and 40 are unpatentable under § 103(a) as obvious over Mombers and Nakajima. Pet. 92–96. As discussed above with respect to the prior grounds of unpatentability, independent claim 39 recites many aspects of claim 1, but is broader in that it requires only the concurrent motion estimation using each of a plurality of different motion estimation prediction modes and the selection that produces the optimum result. Petitioner continues to rely on the teachings of Mombers and Nakajima, citing to the discussion of element [1B] of claim 1. Id. at 92 (citing Ex. 1003 ¶¶ 164−166). For dependent claim 40, Petitioner cites to Mombers as disclosing a “core of a video signal processor dedicated to perform MPEG-2 motion estimation and prediction selection.” Id. at 95−96 (quoting Ex. 1005, Abs.).

Patent Owner does not separately challenge Petitioner’s analysis with respect to claims 39 and 40 other than arguing alleged deficiencies of Mombers with respect to elements of independent claim 1, discussed above. See PO Resp.; PO Sur-reply. As discussed above, we do not find Petitioner’s analysis with respect to claim 1 to be deficient. Based on the evidence in this record, we are persuaded that Petitioner has shown by a preponderance of the evidence that Mombers and Nakajima collectively teach all of the limitations of claims 39 and 40 and render those claims obvious under 35 U.S.C.
§ 103, for the reasons identified in the Petition, as discussed above.

H. Alleged Obviousness over Mombers, Nakajima, Senda, and AAPA

Petitioner contends claim 6 is unpatentable under § 103(a) as obvious over Mombers, Nakajima, Senda, and AAPA. Pet. 96–97. Claim 6 depends from claim 5 and additionally recites an initial step of providing information identifying a picture type of the first pixel array and using this information in the comparing step. As discussed above with respect to the first ground of unpatentability, Petitioner asserts that picture type is provided for in the MPEG standard, which is part of the AAPA, and can be used to determine the coding/compression. Id. at 96 (citing Ex. 1001, 1:55–2:5). Based on this, Petitioner asserts that one of ordinary skill in the art would have been motivated to identify and use the picture type for the motion coding process and would have had a reasonable expectation of success in doing so.

Patent Owner does not separately challenge Petitioner’s analysis with respect to claim 6 other than arguing alleged deficiencies of Mombers with respect to elements of independent claim 1, discussed above. See PO Resp.; PO Sur-reply. As discussed above, we do not find Petitioner’s analysis with respect to claim 1 to be deficient. Based on the evidence in this record, we are persuaded that Petitioner has shown by a preponderance of the evidence that Mombers, Nakajima, Senda, and AAPA collectively teach all of the limitations of claim 6 and thus render that claim obvious under 35 U.S.C. § 103, for the reasons identified in the Petition, as discussed above.

III. CONCLUSION

Our final determination in this case is summarized below:

Claims              35 U.S.C. §   Reference(s)                      Claim(s) Shown       Claims Not Shown
                                                                    Unpatentable         Unpatentable
1−16, 39−42         103           Ishihara, AAPA                    1−16, 39−42
1−5, 7−16, 41, 42   103           Mombers, Nakajima, Senda          1−5, 7−16, 41, 42
39, 40              103           Mombers, Nakajima                 39, 40
6                   103           Mombers, Nakajima, Senda, AAPA    6
Overall Outcome                                                     1−16, 39−42

IV. ORDER

In consideration of the foregoing, it is hereby:

ORDERED that claims 1−16 and 39−42 of the ’005 Patent have been proven to be unpatentable;

FURTHER ORDERED that, because this is a Final Written Decision, the parties to the proceeding seeking judicial review of the Decision must comply with the notice and service requirements of 37 C.F.R. § 90.2.

For PETITIONER:
Jordan Rossen
Roshan Mansinghani
UNIFIED PATENTS INC.
jordan@unifiedpatents.com
roshan@unifiedpatents.com

Scott McKeown
Victor Cheung
ROPES & GRAY LLP
scott.mckeown@ropesgray.com
victor.cheung@ropesgray.com

For PATENT OWNER:
Ryan Loveless
Brett Mangrum
James Etheridge
Jeffrey Huang
ETHERIDGE LAW GROUP
ryan@etheridgelaw.com
brett@etheridgelaw.com
jim@etheridgelaw.com
jeff@etheridgelaw.com