UNITED STATES PATENT AND TRADEMARK OFFICE
UNITED STATES DEPARTMENT OF COMMERCE
United States Patent and Trademark Office
Address: COMMISSIONER FOR PATENTS, P.O. Box 1450, Alexandria, Virginia 22313-1450, www.uspto.gov

APPLICATION NO.: 14/277,172
FILING DATE: 05/14/2014
FIRST NAMED INVENTOR: Dake HE
ATTORNEY DOCKET NO.: 101-0216USP1
CONFIRMATION NO.: 6959

14551 7590 12/22/2020
Rowand LLP (BlackBerry)
3080 Yonge Street, Suite 6060
Toronto, ONTARIO M4N 3N1
CANADA

EXAMINER: MAHMUD, FARHAN
ART UNIT: 2483
NOTIFICATION DATE: 12/22/2020
DELIVERY MODE: ELECTRONIC

Please find below and/or attached an Office communication concerning this application or proceeding. The time period for reply, if any, is set in the attached communication. Notice of the Office communication was sent electronically on above-indicated "Notification Date" to the following e-mail address(es): mailbox@rowandlaw.com

UNITED STATES PATENT AND TRADEMARK OFFICE
BEFORE THE PATENT TRIAL AND APPEAL BOARD

Ex parte DAKE HE, XIAOFENG WANG, JIM WANG, TIANYING JI, and DAVID FLYNN

Appeal 2019-004665
Application 14/277,172
Technology Center 2400

Before JAMES B. ARPIN, DAVID J. CUTITTA II, and MICHAEL J. ENGLE, Administrative Patent Judges.

CUTITTA, Administrative Patent Judge.

DECISION ON APPEAL

STATEMENT OF THE CASE

Pursuant to 35 U.S.C. § 134(a), Appellant1 appeals from the Examiner's decision to reject claims 1, 2, 4–13, and 15–27, all of the claims under consideration.2 We have jurisdiction under 35 U.S.C. § 6(b). We AFFIRM.

1 We use the word "Appellant" to refer to "applicant" as defined in 37 C.F.R. § 1.42(a). Appellant identifies the real party in interest as BlackBerry Limited. Appeal Br. 2.

2 Appellant cancelled claims 3 and 14. Appeal Br. 27, 29.

CLAIMED SUBJECT MATTER

Invention

Appellant's claimed subject matter relates to "context-adaptive video coding." Spec. ¶ 1.3 In particular, Appellant explains that a "context initialization buffer 108 [is used to store] two or more context model states 120 corresponding to [a] context model state following context-adaptive entropy encoding of a respective two or more previous slices or pictures." Id. ¶ 48. An "entropy encoder [then] selects one of the stored context model states from the buffer" and initializes the context model "using the context model state selected from the buffer." Id. ¶¶ 62, 63. This may provide sufficient "values to adapt the probabilities associated with a particular set of contexts quickly enough," even if data is sparse. Id. ¶ 3.

3 We refer to: (1) the originally filed Specification filed May 14, 2014 ("Spec."); (2) the Final Office Action mailed July 12, 2018 ("Final Act."); (3) the Appeal Brief filed December 3, 2018 ("Appeal Br."); and (4) the Examiner's Answer mailed March 22, 2019 ("Ans.").

Exemplary Claim

Claims 1, 12, 23, 24, 25, 26, and 27 are independent. Claim 1, reproduced below with limitations at issue italicized, exemplifies the claimed subject matter:
1. A method of encoding video using a video encoder, the video encoder employing context-adaptive entropy encoding using a context model having a plurality of contexts, the context model having a context model state defining the respective probability associated with each context defined in the context model, the video encoder storing a predefined context model state for initialization of the probabilities of the context model, and the video encoder including a buffer, the method comprising:

context-adaptively encoding a first picture, including progressively updating the contexts of a context model state as bins are coded, and storing, in the buffer, a first context model state associated with the first picture after encoding of the first picture;

context-adaptively encoding a second picture, including progressively updating the context model state as bins are coded and storing, in the buffer, a second context model state associated with the second picture after encoding of the second picture;

for encoding a current picture of the video, wherein the first and second pictures are previously-encoded pictures of the video and wherein the first context model state and the second context model state are at least two different stored context model states in the buffer for the context model, selecting one of the at least two stored context model states from the buffer;

initializing the context model for context-adaptively encoding the current picture using the selected one of the at least two stored context model states; and

context-adaptively entropy encoding the current picture to produce a bitstream of encoded data; and

outputting the bitstream of encoded data.

Appeal Br. 26 (Claims Appendix).
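As a purely illustrative aside, the buffering, selection, and initialization steps recited in claim 1 might be sketched in simplified form as follows. Every name in the sketch (ContextModelState, ContextStateBuffer, initializeContextModel, and so on) is hypothetical and is not drawn from the claims, the Specification, or the cited references; the sketch is not a statement of how Appellant's encoder is actually implemented.

```cpp
// Illustrative sketch only (hypothetical names): a "context model state" is
// treated as the full set of per-context probabilities, and the encoder keeps
// a buffer of states captured after previously encoded pictures.
#include <cstddef>
#include <vector>

struct ContextModelState {
    std::vector<double> probabilities;  // one probability per context
};

class ContextStateBuffer {
public:
    // Store the state reached after encoding a picture ("storing, in the
    // buffer, a first [or second] context model state").
    void store(const ContextModelState& state) { states_.push_back(state); }

    // Select one of the at least two stored context model states; the index
    // used here is only a placeholder for whatever selection rule is applied.
    const ContextModelState& select(std::size_t index) const {
        return states_.at(index);
    }

    std::size_t size() const { return states_.size(); }

private:
    std::vector<ContextModelState> states_;
};

// Initialize the context model for the current picture from the selected
// stored state rather than from a predefined default state.
void initializeContextModel(ContextModelState& model,
                            const ContextModelState& selected) {
    model = selected;
}
```

In such a sketch, the encoder would call store() after encoding the first and second pictures and initializeContextModel() before encoding the current picture, mirroring the ordering of the steps in the claim.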
REFERENCES AND REJECTION

The Examiner rejects claims 1, 2, 4–13, and 15–27 under 35 U.S.C. § 103 as obvious over the combined teachings of He et al. (US 2012/0014457 A1, published Jan. 19, 2012) ("He") and Cote et al. (US 2015/0092834 A1, published Apr. 2, 2015) ("Cote"). Final Act. 10–17.4

4 The Examiner rejected claims 1, 2, 4–13, and 15–27 under 35 U.S.C. § 101, as directed to patent-ineligible subject matter without significantly more. Final Act. 2–4, 7–8. The Examiner has withdrawn that rejection. Ans. 3.

OPINION

We review the appealed rejection for error based upon the issues identified by Appellant and in light of Appellant's arguments and evidence. Ex parte Frye, 94 USPQ2d 1072, 1075 (BPAI 2010) (precedential). Arguments not made are waived. See 37 C.F.R. § 41.37(c)(1)(iv) (2018). Appellant does not persuade us that the Examiner errs, and we adopt as our own the findings and reasons set forth by the Examiner to the extent consistent with our analysis herein. Final Act. 10–12; Ans. 4–10. We add the following primarily for emphasis.

"selecting one of the at least two stored context model states"

The Examiner finds He teaches or suggests "selecting one of the at least two stored context model states from the buffer," as recited in claim 1. Final Act. 10 (citing He ¶¶ 82, 83); Ans. 4–5 (citing He ¶¶ 43, 82, and 83). Of particular relevance, the Examiner relies on He's discussion of H.264/AVC and, specifically, of encoding a bit in the jth position based on the probability of any selected bit in the jth position in previous iterations (i–1, i–2, etc.) of a given sequence i. Final Act. 10; see He ¶ 82 ("for a bit in the jth position in a given sequence i, its probability state is selected from amongst the 64 possible probabilities based on the history of bits in the jth position in previous sequences (i-1, etc.).").

Appellant argues that "He's reference to selecting one of the probabilities for a (singular) context cannot be equated with selecting a full stored context model state (i.e.[,] the probabilities associated with every context in the model)" because "He is simply describing the act of updating the probability value associated with a specific context in the course of coding." Appeal Br. 20.

The Examiner responds:

Appellant further argues that the probabilities described are not equated to context model states. However, Paragraph 82, very clearly states "[i]n the H.264/AVC example the probability (sometimes termed "state", "probability state" or "context state") for a bit is determined based on its context." Thus it is clearly taught that reference frames are held in a buffer, and that context model states, are referred to and selected from among the 64 possible probabilities based on the history of bits in that particular position in previous sequences.

Ans. 4–5.

Appellant's arguments are unpersuasive. Claim 1 recites that "a context model state defin[es] the respective probability associated with each context defined in the context model." Appeal Br. 26 (emphasis added). Thus, the claim itself indicates that a context model state includes a probability. In addition, the Examiner finds that "context model states, as commonly understood in the relevant arts, are referred to and selected from among the 64 possible probabilities based on the history of bits in that particular position in previous sequences." Final Act. 5.

Appellant also fails to persuade us that the Examiner's interpretation of "context model state" is inconsistent with Appellant's Specification. In re Am. Acad. of Sci. Tech Ctr., 367 F.3d 1359, 1364 (Fed. Cir. 2004). Appellant has not shown that the Specification defines expressly the term "context model state" as recited in claim 1. Rather, similar to the claim itself, the Specification describes a "context model state" as including a probability. Spec. ¶ 41.

He, in turn, defines a probability associated with each context for bits in various positions in a given sequence i. For example, He discloses that "for a bit in the jth position in a given sequence i, its probability state is selected from amongst the 64 possible probabilities based on the history of bits in the jth position in previous sequences (i-1, etc.)." He ¶ 82. Similarly, He teaches selecting a probability state for a bit in the jth position in the previous sequence i–2. We agree with the Examiner that He's discussion of defining a probability for a bit position based on a bit position in a previous iteration of the sequence (e.g., i–2, i–1, etc.) teaches "selecting one of the at least two stored context model states," as recited in claim 1.
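For orientation only, the kind of per-position probability-state selection that He ¶ 82 describes, one of 64 candidate probability states chosen from the history of the same bit position in earlier sequences, might be sketched as below. The history-to-state mapping and the probability table here are hypothetical placeholders, not He's or H.264/AVC's actual tables.

```cpp
// Hypothetical illustration: select one of 64 probability states for the bit
// in position j of sequence i, based on the bits previously observed in
// position j of earlier sequences (i-1, i-2, ...). The history-to-state rule
// and the probability table are placeholders, not standardized values.
#include <cstddef>
#include <vector>

constexpr std::size_t kNumStates = 64;  // 64 possible probability states

// Placeholder table: map a state index to a probability of the symbol "1".
double stateToProbability(std::size_t state) {
    return static_cast<double>(state + 1) / (kNumStates + 1);
}

// historyAtPositionJ holds the bits seen at position j in previously coded
// sequences, most recent first.
std::size_t selectState(const std::vector<int>& historyAtPositionJ) {
    if (historyAtPositionJ.empty()) {
        return kNumStates / 2;  // neutral starting state
    }
    std::size_t ones = 0;
    for (int bit : historyAtPositionJ) {
        ones += (bit != 0);
    }
    // Placeholder rule: more ones in the history -> higher state index.
    return (ones * (kNumStates - 1)) / historyAtPositionJ.size();
}

double probabilityForBitAtPositionJ(const std::vector<int>& historyAtPositionJ) {
    return stateToProbability(selectState(historyAtPositionJ));
}
```

The only point the sketch is meant to make concrete is that the selected probability state depends on bits from earlier sequences (i–1, i–2, etc.), which is the aspect of He ¶ 82 discussed above.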
Accordingly, Appellant's argument that He's probabilities fail to teach the claimed "context model states" is unpersuasive because Appellant fails to establish that the Examiner's interpretation of "context model state," as recited in claim 1, is unreasonably broad. See Ans. 5.

Appellant further argues that "He's description of there being 64 fixed prescribed probabilities values cannot be equated with storing a snapshot of all the values of the contexts at a point in time and later selecting one of the stored snapshots from among the stored snapshots (context model states)." Appeal Br. 20. This argument is unpersuasive because the argued feature "storing a snapshot of all the values of the contexts at a point in time" (id.) is not recited in the claim and, therefore, is not commensurate with the scope of claim 1.

In addition, the Examiner, in the Answer, finds for the first time that Cote "very clearly teaches the caching of and selecting from multiple context states in a context cache, and also the possibility of transitioning to multiple other states from each context state that is stored." Ans. 7 (citing Cote ¶¶ 115, 116, 146, Figs. 13, 16). Appellant does not rebut the Examiner's additional reasoning and findings. Consequently, Appellant does not persuade us of error in these additional factual findings or in the Examiner's conclusion of obviousness.

Accordingly, Appellant fails to show reversible error in the Examiner's finding that the cited references teach or suggest "selecting one of the at least two stored context model states from the buffer," as recited in claim 1.

"initializing the context model"

The Examiner finds He teaches or suggests "initializing the context model for context-adaptively encoding the current picture using the selected one of the at least two stored context model states," as recited in claim 1. Final Act. 10 (citing He ¶¶ 82, 83, 129–133). Of particular relevance, the Examiner relies on He's initialization to zero of indexes i and j. He ¶ 130.

Appellant argues that:

Even if the position j is equated with a "context", there is no suggestion in this portion of He that the context state (i.e.[,] associated probability) is being initialized to a starting value. Rather, He is simply saying his algorithmic process will start with sequence 0 and position 0 by initializing the indices to zero.

Appeal Br. 22–23.

Appellant's argument is unpersuasive because it is not commensurate with the scope of the claim. Appellant argues that He fails to suggest "that the context state (i.e.[,] associated probability) is being initialized to a starting value." Appeal Br. 23. Claim 1, however, does not recite initializing the context state, but rather recites "initializing the context model." Id. at 26 (emphasis added). Appellant, therefore, does not persuade us of error in the Examiner's finding that He's "initialization of the indices to zero is considered an initialization of the context model." Ans. 22–23.

In addition, the Examiner finds that "Cote very clearly and unambiguously teaches initializing context models in the context adaptive encoding." Ans. 10 (citing Cote ¶¶ 115, 116, 146, Figs. 13, 16). Of particular relevance, the Examiner finds that Cote's "method may include initializing the context model states in a context memory lookup table (LUT)." Ans. 9 (emphasis omitted) (citing Cote Fig. 16, 1610, and ¶ 146).
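As another purely illustrative aside, initializing a context-memory lookup table from a previously stored context model state, rather than resetting it to a predefined default, might be sketched as follows. The entry layout, the 64-state bound, and the placeholder update rule are hypothetical and are not taken from Cote, He, or the Specification.

```cpp
// Hypothetical sketch (names and layout illustrative only): a context-memory
// lookup table (LUT) with one entry per context, initialized either to a
// predefined default or from a state stored after an earlier picture.
#include <cstdint>
#include <vector>

struct ContextEntry {
    std::uint8_t probabilityState;    // index into a probability-state table
    std::uint8_t mostProbableSymbol;  // 0 or 1
};

using ContextMemoryLut = std::vector<ContextEntry>;

// Reset every context to a predefined default state.
void initializeToDefault(ContextMemoryLut& lut, ContextEntry defaultEntry) {
    for (ContextEntry& entry : lut) {
        entry = defaultEntry;
    }
}

// Initialize the LUT from a snapshot stored after encoding an earlier
// picture, so adaptation starts from already-adapted probabilities.
void initializeFromStoredState(ContextMemoryLut& lut,
                               const ContextMemoryLut& storedState) {
    lut = storedState;
}

// Placeholder per-bin update illustrating a finite-state-machine style
// transition between context states: step toward more confidence when the
// coded bin matches the most probable symbol, and away otherwise.
void updateOnBin(ContextEntry& entry, int bin) {
    if (bin == entry.mostProbableSymbol) {
        if (entry.probabilityState < 63) {
            ++entry.probabilityState;
        }
    } else if (entry.probabilityState > 0) {
        --entry.probabilityState;
    } else {
        entry.mostProbableSymbol ^= 1;  // flip the most probable symbol
    }
}
```

The transition logic is included only to show, in schematic form, what a state machine that transitions between a finite number of context model states might look like; the actual transition tables in Cote or in standardized coders are not reproduced here.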
The Examiner finds that in Cote's context model, "logic for calculating an updated probability . . . may implement a state machine that transitions between a finite number of context model states." Ans. 8 (emphasis omitted) (citing Cote ¶ 115). Appellant, in turn, does not rebut the Examiner's additional reasoning and findings from Cote articulated in the Answer. Consequently, Appellant does not persuade us of error in these additional factual findings or in the Examiner's conclusion of obviousness.

For the reasons discussed, Appellant does not persuade us of error in the Examiner's obviousness rejection of independent claim 1. We, therefore, sustain the Examiner's rejection of that claim, as well as the rejection of dependent claims 2, 4–13, and 15–27, which Appellant does not argue separately with particularity. See generally Appeal Br. 11–15 (nominally arguing claims 2, 4–13, and 15–27 without providing additional specific arguments).

CONCLUSION

We affirm the Examiner's decision to reject claims 1, 2, 4–13, and 15–27 under 35 U.S.C. § 103.

DECISION SUMMARY

In summary:

Claims Rejected      35 U.S.C. §   Reference(s)/Basis   Affirmed             Reversed
1, 2, 4–13, 15–27    103           He, Cote             1, 2, 4–13, 15–27

TIME PERIOD FOR RESPONSE

No time period for taking any subsequent action in connection with this appeal may be extended under 37 C.F.R. § 1.136(a). See 37 C.F.R. § 1.136(a)(1)(iv).

AFFIRMED