Ex parte Kisel et al., Application 12/261,640 (P.T.A.B. Mar. 24, 2016)

UNITED STATES PATENT AND TRADEMARK OFFICE
UNITED STATES DEPARTMENT OF COMMERCE
Address: COMMISSIONER FOR PATENTS, P.O. Box 1450, Alexandria, Virginia 22313-1450, www.uspto.gov

APPLICATION NO.: 12/261,640
FILING DATE: 10/30/2008
FIRST NAMED INVENTOR: Andrey Kisel
ATTORNEY DOCKET NO.: LUTZ 200900US01
CONFIRMATION NO.: 4753
EXAMINER: BOUTAH, ALINA A
ART UNIT: 2443
NOTIFICATION DATE: 03/28/2016
DELIVERY MODE: ELECTRONIC

Correspondence address (48116 7590): FAY SHARPE/LUCENT, 1228 Euclid Avenue, 5th Floor, The Halle Building, Cleveland, OH 44115-1843

Please find below and/or attached an Office communication concerning this application or proceeding. The time period for reply, if any, is set in the attached communication. Notice of the Office communication was sent electronically on the above-indicated "Notification Date" to the following e-mail address(es): docketing@faysharpe.com

UNITED STATES PATENT AND TRADEMARK OFFICE
BEFORE THE PATENT TRIAL AND APPEAL BOARD

Ex parte ANDREY KISEL, DAVE CECIL ROBINSON, and TIAAN SCHUTTE [1]

Appeal 2014-003079
Application 12/261,640
Technology Center 2400

Before ALLEN R. MacDONALD, CARLA M. KRIVAK, and MICHAEL M. BARRY, Administrative Patent Judges.

BARRY, Administrative Patent Judge.

DECISION ON APPEAL

Appellants appeal under 35 U.S.C. § 134(a) from a Final Rejection of claims 1-11 and 13-18, which constitute all pending claims. We have jurisdiction under 35 U.S.C. § 6(b). We affirm-in-part.

[1] Appellants identify Alcatel Lucent as the real party in interest. App. Br. 1.

Introduction

Appellants' Specification states "[t]he proposed invention relates to methods and systems for providing resilient service quality in a managed multimedia delivery network also known as [a] super peer-to-peer (sP2P) network or caching overlay network." Spec. 1. Claim 1 is representative:

1. A managed multimedia delivery network for providing a multimedia service with resilient service quality, comprising:
a plurality of caching nodes for caching multimedia data segments;
an edge caching node for collecting requested multimedia data segments from the caching nodes and for serving a user equipment with the collected multimedia data; and
a service gateway for providing cache information to the edge caching node indicating how to obtain the requested multimedia data segments from the caching nodes, the cache information indicating recommended caching nodes for maintaining the service quality,
wherein the edge caching node comprises a service quality monitoring unit for monitoring the collection of the data segments from the caching nodes and for requesting cache information from the service gateway when the collection of data segments impacts the service quality.

App. Br. 22 (Claims App'x).

Rejections

Claims 1-11 and 13-18 stand rejected under 35 U.S.C. § 103(a) over the combination of Lai et al. (US 2004/0193648 A1; Sept. 30, 2004), Lemoff et al. (US 2002/0181402 A1; Dec. 5, 2002), and Yang et al. (US 2007/0294333 A1; Dec. 20, 2007).

ANALYSIS

We have reviewed the Examiner's rejections in light of Appellants' arguments. Except as discussed otherwise below, we adopt as our own the Examiner's findings and reasons, as clarified in the Answer. See Final Act. 2-23 and Ans. 2-14.
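For orientation only, the following minimal Python sketch illustrates the roles claim 1 recites for the caching nodes, the edge caching node (with its service quality monitoring unit), and the service gateway. It is an editorial illustration, not part of the prosecution record; every class, method, and threshold is a hypothetical assumption introduced solely to make the claimed interaction concrete.

    # Hypothetical sketch of the claim-1 architecture; all names and values assumed.
    from dataclasses import dataclass, field

    @dataclass
    class CachingNode:
        """A node that caches multimedia data segments."""
        node_id: str
        segments: dict = field(default_factory=dict)  # segment_id -> bytes
        latency_ms: float = 10.0                      # assumed per-segment fetch cost

        def fetch(self, segment_id):
            return self.segments.get(segment_id)

    class ServiceGateway:
        """Provides cache information: which caching node to use for each segment."""
        def __init__(self, caching_nodes):
            self.caching_nodes = caching_nodes

        def cache_info(self, segment_ids):
            # Recommend, per requested segment, the holding node with the lowest latency.
            info = {}
            for sid in segment_ids:
                holders = [n for n in self.caching_nodes if sid in n.segments]
                if holders:
                    info[sid] = min(holders, key=lambda n: n.latency_ms)
            return info

    class EdgeCachingNode:
        """Collects requested segments and monitors whether collection meets a quality bound."""
        def __init__(self, gateway, max_latency_ms=50.0):
            self.gateway = gateway
            self.max_latency_ms = max_latency_ms  # assumed service-quality bound

        def serve(self, segment_ids):
            info = self.gateway.cache_info(segment_ids)
            collected = {}
            for sid, node in info.items():
                if node.latency_ms > self.max_latency_ms:
                    # Quality impacted: request fresh cache information from the gateway.
                    node = self.gateway.cache_info([sid]).get(sid, node)
                collected[sid] = node.fetch(sid)
            return collected

    # Example usage with two hypothetical caching nodes.
    gateway = ServiceGateway([CachingNode("c1", {"s1": b"..."}, latency_ms=80.0),
                              CachingNode("c2", {"s1": b"..."}, latency_ms=12.0)])
    edge = EdgeCachingNode(gateway)
    print(edge.serve(["s1"]))  # segment collected from c2, the recommended low-latency node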
Claims 1, 7, 11, and 15 - Recommended Caching Nodes

For claim 1, the Examiner relies on Lemoff to teach "recommended caching nodes for maintaining the service quality." Final Act. 6. Appellants argue that because Lemoff "only mention[s] a cache of routes to destination nodes," "[n]owhere does Lemoff teach or suggest the claimed cache information, which recommends caching nodes, which in turn store multimedia data segments that are collected by an edge caching node." App. Br. 8. [2] The Examiner answers that Lemoff's "path discovery process" teaches strategies for selecting a path to a node for improving service quality, such as by "fastest path" or "fewest hops." Ans. 24. We find unpersuasive Appellants' argument in reply that "contrary to the Examiner's assertion, nothing in the cited portions of Lemoff teaches or suggests that the destination nodes of Lemoff are caching nodes as set forth in the subject claims." Reply Br. 3.

[2] As an initial observation on the "which in turn" claim requirements of Appellants' argument, we note, as does the Examiner (Ans. 23), that the rejection relies on Lai, not Lemoff, for the storing of multimedia data segments that are collected by an edge caching node. See Final Act. 5.

Under a broad but reasonable interpretation, in view of the Specification, claim 1's "recommended caching nodes" correspond to nodes that the service gateway recommends from the "plurality of caching nodes." [3] We agree with the Examiner that, in combination with Lai and its teaching of "caching nodes," [4] Lemoff's disclosure of determining (and caching) recommended nodes for maintaining service quality, based on the network path to such nodes, teaches claim 1's "recommended caching nodes." We accordingly sustain the rejection of claim 1.

[3] This interpretation of "recommended caching nodes" is consistent with Appellants' arguments.

[4] Appellants do not contest the Examiner's findings that Lai discloses claim 1's "caching nodes." See App. Br. 7-8 and Reply Br. 3-4.

Appellants present essentially the same arguments for the other independent claims (7, 11, and 15) as for claim 1. See App. Br. 7-8, 12-14, and 17-18. Appellants present no argument for patentability of claims 5, 8-10, and 18 separate from the independent claims. Thus, for the reasons discussed supra, we also sustain the rejection of claims 5, 7-11, 15, and 18.

Claim 2 - Service Gateway that Recommends Caching Nodes

Appellants argue the Examiner errs in rejecting claim 2, which depends from claim 1, because "[n]owhere in the cited passage(s) [of Lai] is there mention of a service gateway that recommends cache nodes to an edge node." App. Br. 9. The Examiner answers that Lai's task manager 206, in conjunction with streaming server 222, performs the recited functions of the service gateway. Ans. 25 (citing Lai ¶¶ 97, 103, 107, and Fig. 5A). Appellants in reply discuss the Examiner's citations to Lai and argue it discloses "a web server interface that directs a viewer client to an appropriate server from which to receive requested media content, but is silent with regard to a service gateway that recommends caching nodes to an edge node." Reply Br. 5.

Appellants do not persuade us the Examiner errs in the identification of components in Lai that map to the claim requirements, including the service gateway and edge caching node. See Final Act. 9 and Ans. 25. We sustain the Examiner's rejection of claim 2.
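To make the disputed gateway limitation concrete, the following short Python sketch shows a gateway-side routine that recommends caching nodes (rather than merely routes) to an edge node. It is an editorial illustration only; it is not drawn from Lai, the Answer, or the application, and every name, field, and selection criterion in it is a hypothetical assumption.

    # Hypothetical illustration; not a finding about Lai or the claimed invention.
    def recommend_caching_nodes(nodes, segment_id, max_load=0.8):
        """Recommend caching nodes for one requested segment.

        nodes: list of dicts such as
            {"id": "c1", "segments": {"s1", "s2"}, "load": 0.3, "latency_ms": 12.0}
        """
        candidates = [
            n for n in nodes
            if segment_id in n["segments"]   # the node actually holds the segment
            and n["load"] <= max_load        # assumed load cutoff for service quality
        ]
        # Prefer lightly loaded, low-latency nodes; an edge node would collect the
        # segment from the first recommendation in the returned list.
        return sorted(candidates, key=lambda n: (n["load"], n["latency_ms"]))

    # Example: the gateway recommends c2 over c1 because c1 is overloaded.
    nodes = [
        {"id": "c1", "segments": {"s1"}, "load": 0.9, "latency_ms": 5.0},
        {"id": "c2", "segments": {"s1"}, "load": 0.2, "latency_ms": 20.0},
    ]
    print([n["id"] for n in recommend_caching_nodes(nodes, "s1")])  # ['c2']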
Claim 3 - Availability of Requested Data Segments

Claim 3 adds to claim 2 the requirement for the service gateway to determine recommended caching nodes "based upon at least one of [1] availability of the requested data segments on the caching nodes, [2] ... load of the caching nodes, [3] available bandwidth for transmitting the requested data segments, and [4] transmission network congestion" (numbers added). In rejecting claim 3, the Examiner identifies that Lai teaches the first alternative ("requested data segment availability"). Final Act. 9.

Appellants argue the Examiner errs in rejecting claim 3 because Lai "is silent with regard to the claimed aspect of recommending cache nodes as a function of availability of requested data segments." App. Br. 10. The Examiner answers with an explanation of findings for how Lai requests media content, determines available resources for delivering the requested content, and then executes the delivery. Ans. 25-26. Appellants in reply reiterate that none of the Examiner's citations to Lai mention "recommending caching nodes as a function of requested data segment availability." Reply Br. 6.

We are unpersuaded of error in the Examiner's finding that Lai's resource manager (208; see Fig. 2) determines available resources within the media transcoding engine (106) to deliver requested media content to accomplish the claim requirement. One of ordinary skill appreciates from Lai's disclosure that available resources for content delivery obviously include whether the data (segments) are available. See In re Keller ("The test for obviousness is not whether ... the claimed invention must be expressly suggested in any one or all of the references. Rather, the test is what the combined teachings of the references would have suggested to those of ordinary skill in the art."); see also KSR Int'l Co. v. Teleflex Inc., 550 U.S. 398, 420-21 (concluding that "[a] person of ordinary skill is also a person of ordinary creativity, not an automaton"). We accordingly sustain the rejection of claim 3. Appellants make essentially the same argument for claims 13 and 16, see Reply Br. 15-16 and 18-19; we accordingly sustain the rejection of those claims on the same basis.

Claim 4 - Denying Recommendation of Alternative Caching Nodes

Claim 4, depending from claim 2, adds a requirement for "denying recommending alternative caching nodes if the speed of collecting data segments ... is going to improve." Appellants argue Lai's cited passages discuss only that its configuration is adjustable and may be updated if network conditions change, and that it arranges for pipelining of steps, but nothing in Lai describes the claimed requirement of denying recommendation of alternative nodes if speed will improve. App. Br. 11-12. The Examiner's answer is that Lai's resource manager uses an algorithm in which a predetermined threshold determines the speed based on the number of accesses in a given time, and the media content is retained, discarded, or denied; also, the resource manager 208 arranges for the pipelining of these steps through resource allocation within the media transcoding engine 206, and the pipelining of these steps results in a faster delivery time, or the speed associated, for requested media by the media transcoding engine (106). Ans. 26.
We are unpersuaded by the Examiner's findings regarding Lai's teaching an algorithm that determines speed from incoming access requests, pipelining, and techniques for cache management. We agree with Appellants that Lai neither teaches nor suggests claim 4's requirement to deny recommending alternative caching nodes if speed will improve. We accordingly do not sustain the Examiner's rejection of claim 4. For the same reason, we do not sustain the rejection of claims 14 and 17, which include the same requirements and stand rejected on the same ground.

Claim 6 - Determining Whether the Speed of Arriving Data Segments Is Sufficient

Claim 6, depending from claim 1, adds the requirement that "the service quality monitoring unit determines whether the speed of arriving data segments is sufficient to pull the end of each segment before it has to be delivered to the user equipment." The Examiner finds Lai teaches this requirement. Final Act. 11. Appellants argue Lai's disclosure of pipelining and "resource allocation within the media transcoding engine" does not teach the claimed requirement to make a determination, before delivery of each segment, whether the speed of arriving segments is sufficient. App. Br. 11-12. The Examiner answers that in Lai "the configuration is modified or updated depending on the number of users. The resource manager 208 arranges for the pipelining of delivering the requested media content through resource allocation within the media transcoding engine 106. The pipelining of these steps results in a faster delivery time for requested media by the media transcoding engine 106." Ans. 27.

Appellants do not persuasively rebut the Examiner's findings. See Reply Br. 7 (concluding that "Lai fails to describe each and every aspect"). The obviousness analysis need not seek out precise teachings directed to the specific subject matter of the challenged claim, as the analysis can take account of the inferences and creative steps that a person of ordinary skill in the art would employ. See KSR at 418-21. We are persuaded by the Examiner's findings that the added requirements of claim 6 are obvious in view of the cited teachings in Lai regarding an "adjustable configuration" (¶ 101) and pipeline management that "results in a faster delivery time" (¶ 154). We accordingly sustain the Examiner's rejection of claim 6.

DECISION

For the above reasons, we affirm the Examiner's rejection of claims 1-3, 5-11, 13, 15, 16, and 18, and we reverse the rejection of claims 4, 14, and 17.

No time period for taking any subsequent action in connection with this appeal may be extended under 37 C.F.R. § 1.136(a)(1)(iv).

AFFIRMED-IN-PART