Ex parte BRUENING et al., No. 13/686,009 (P.T.A.B. Dec. 14, 2018)

UNITED STATES PATENT AND TRADEMARK OFFICE

    APPLICATION NO.:      13/686,009
    FILING DATE:          11/27/2012
    FIRST NAMED INVENTOR: Derek BRUENING
    ATTORNEY DOCKET NO.:  A180.C1
    CONFIRMATION NO.:     1049
    EXAMINER:             BOURZIK, BRAHIM
    ART UNIT:             2191
    NOTIFICATION DATE:    12/18/2018
    DELIVERY MODE:        ELECTRONIC

    Correspondence (Customer No. 152569): Patterson & Sheridan, LLP - VMware,
    24 Greenway Plaza, Suite 1600, Houston, TX 77046

UNITED STATES PATENT AND TRADEMARK OFFICE
BEFORE THE PATENT TRIAL AND APPEAL BOARD

Ex parte DEREK BRUENING and VLADIMIR L. KIRIANSKY

Appeal 2018-004274
Application 13/686,009
Technology Center 2100

Before JOHN A. JEFFERY, JUSTIN BUSCH, and JAMES W. DEJMEK, Administrative Patent Judges.

DEJMEK, Administrative Patent Judge.

DECISION ON APPEAL

Appellants appeal under 35 U.S.C. § 134(a) from a Final Rejection of claims 1-14, 16, and 17. The Examiner has indicated that claim 15 is allowable. Final Act. 42. We have jurisdiction over the remaining pending claims under 35 U.S.C. § 6(b). We reverse.

1 Appellants identify VMware, Inc. as the real party in interest. App. Br. 3. Additionally, Appellants state the United States Government has certain rights in the invention. Spec. ¶ 2.

STATEMENT OF THE CASE

Introduction

Appellants' disclosed and claimed invention generally relates to the use of software code caches in computer systems. Spec. ¶ 3. According to the Specification, software code caches are used to store frequently executed sequences of code to avoid repeated re-translation of the frequently used code. Spec. ¶ 4. Additionally, code caches may be shared amongst a plurality of different processes, as appropriate. Spec. ¶ 36.
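To make the mechanism concrete for the discussion that follows, the minimal C sketch below illustrates the concept the Specification describes: a software-managed code cache keyed by the address of the original code, so that each block is translated once and then reused on later executions. All names and the slot heuristic are hypothetical; this is an illustration only, not Appellants' implementation.

    /* A direct-mapped, software-managed code cache: translate a
     * block once, reuse the cached translation thereafter. */
    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    #define CACHE_SLOTS 256

    struct cache_entry {
        void *orig_pc;        /* address of the original code block */
        void *translated_pc;  /* address of the cached translation  */
    };

    static struct cache_entry code_cache[CACHE_SLOTS];

    /* Stand-in for a real binary translator; the identity mapping
     * keeps the sketch self-contained. */
    static void *translate_block(void *orig_pc)
    {
        return orig_pc;
    }

    static size_t slot_for(void *pc)
    {
        return ((uintptr_t)pc >> 4) % CACHE_SLOTS;
    }

    /* Return the cached translation if present; otherwise translate
     * the block once, cache it, and return the new copy. */
    void *lookup_or_translate(void *orig_pc)
    {
        struct cache_entry *e = &code_cache[slot_for(orig_pc)];
        if (e->orig_pc == orig_pc)
            return e->translated_pc;  /* hit: no re-translation   */
        e->orig_pc = orig_pc;         /* miss: translate and fill */
        e->translated_pc = translate_block(orig_pc);
        return e->translated_pc;
    }

    int main(void)
    {
        int block;  /* stands in for a block of real code */
        void *t1 = lookup_or_translate(&block);  /* miss: translated */
        void *t2 = lookup_or_translate(&block);  /* hit: same copy   */
        printf("%s\n", t1 == t2 ? "reused" : "retranslated");
        return 0;
    }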
Claim 1 is illustrative of the subject matter on appeal and is reproduced below with the disputed limitations emphasized in italics:

    1. An apparatus for caching computer code from an application program comprising a plurality of modules that each comprise a separately loadable file, the apparatus comprising:
    (a) a volatile memory;
    (b) a non-volatile memory coupled to the volatile memory via a first bus;
    (c) a processor coupled to the non-volatile memory via a second bus;
    (d) an address bus connecting the processor and the non-volatile memory for delivering code request signals from the processor to the non-volatile memory;
    (e) a non-volatile memory controller responsive to the code request signals for transferring requested code from the non-volatile memory to the processor if the requested code is stored in cache code files in the non-volatile memory;
    (f) a volatile memory controller responsive to the code request signal for transferring the requested code from the volatile memory to the processor via the non-volatile memory if the requested code is not stored in cache code files in the non-volatile memory; and
    (g) a shared code caching engine coupled to receive executed native code output from the volatile memory via the first bus, the executed native code comprising at least a portion of a module of the application program, and the shared code caching engine comprising code instruction sets for:
        (i) storing data corresponding to the executed native code in a plurality of cache data files at different locations in the non-volatile memory, wherein a cache data file contains runtime data corresponding to the executed native code, and
        (ii) using the plurality of separate cache data files to enable pre-loading of a runtime code cache in the volatile memory, the runtime code cache being a software-managed cache.

The Examiner's Rejections

1. Claim 1 stands rejected under pre-AIA 35 U.S.C. § 103(a) as being unpatentable over Bamford et al. (US 2004/0215883 A1; Oct. 28, 2004) ("Bamford") and Vijay Janapa Reddi et al., Persistent Code Caching: Exploiting Code Reuse Across Executions and Applications, 1-13 (2007) ("Reddi"). Ans. 3-8.

2. Claims 2, 4-11, 13, 14, and 16 stand rejected under pre-AIA 35 U.S.C. § 103(a) as being unpatentable over Reddi and Derek L. Bruening, Efficient, Transparent, and Comprehensive Runtime Code Manipulation (Sept. 2004) (Ph.D. thesis, Massachusetts Institute of Technology). Ans. 8-30.

3. Claims 3, 12, and 17 stand rejected under pre-AIA 35 U.S.C. § 103(a) as being unpatentable over Reddi, Bamford, and Bruening. Ans. 30-42.

2 In the Answer, the Examiner appears to also rely on Bamford to teach converting the cache data file into a code cache file, as recited in independent claims 2 and 16. Ans. 49-50. Appellants respond substantively to the Examiner's reliance on Bamford. See Reply Br. 5-7. In the event of further prosecution, we invite the Examiner to correct the statement of rejection for these claims to include Bamford.

ANALYSIS

3 Throughout this Decision, we have considered the Appeal Brief, filed November 8, 2017 ("App. Br."); the Reply Brief, filed February 21, 2018 ("Reply Br."); the Examiner's Answer, mailed December 21, 2017 ("Ans."); and the Final Office Action, mailed April 5, 2017 ("Final Act."), from which this Appeal is taken.

Claim 1

Appellants argue the Examiner erred in relying on Bamford to teach various aspects of claim 1. App. Br. 13-15; Reply Br. 2-4. In particular, Appellants argue the Examiner erred in finding Bamford teaches a processor issuing "code request signals" to the non-volatile memory controller, as recited in claim 1. App. Br. 13; Reply Br. 2-3. Specifically, Appellants contend Bamford describes database queries to retrieve data, not code. App. Br. 13.

Bamford is generally directed to "improving the performance of a multiple node system by allocating, in two or more nodes of the system, partitions of a shared cache." Bamford, Abstract. In a disclosed example (cited by the Examiner, see Final Act. 4), Bamford describes a node requesting a data item from the shared cache partition of another node. Bamford ¶ 26. The node containing the shared cache partition associated with the data item searches the partition for the requested data and, if the data is not located, the node loads a copy of the data from disk into the shared cache partition. Bamford ¶¶ 22, 26.
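The partitioned shared cache Bamford describes can be pictured with the following single-process C toy: a hash on the data item identifies the partition (and thus the node) responsible for it, and a miss in that partition is filled by loading a copy from disk. The names and the modulo hash are hypothetical; Bamford's actual system is a multi-node database cache, not this simplification.

    /* Toy model of a shared cache split into per-node partitions. */
    #include <stdio.h>

    #define NUM_NODES  4
    #define PART_SLOTS 8

    struct data_item { int key; int value; int valid; };

    /* Each node is responsible for one partition of the shared cache. */
    static struct data_item partition[NUM_NODES][PART_SLOTS];

    /* A hash on the data item identifies the responsible partition. */
    static int owning_node(int key) { return key % NUM_NODES; }

    /* Stand-in for reading the item from disk. */
    static struct data_item load_from_disk(int key)
    {
        struct data_item d = { key, key * 10, 1 };
        return d;
    }

    /* A requesting node asks the owning node for the item; the owning
     * node searches its partition and, if the data is not located,
     * loads a copy from disk into the partition. */
    static int request_item(int key)
    {
        int node = owning_node(key);
        struct data_item *slot =
            &partition[node][(key / NUM_NODES) % PART_SLOTS];
        if (!slot->valid || slot->key != key)
            *slot = load_from_disk(key);  /* miss: fill from disk   */
        return slot->value;               /* hit: served from cache */
    }

    int main(void)
    {
        printf("%d\n", request_item(42));  /* first request: disk load */
        printf("%d\n", request_item(42));  /* repeat request: cache hit */
        return 0;
    }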
Regarding the code request signals for transferring requested code, the Examiner finds "the data items are code needed by the requesting node and retrieved for execution." Ans. 44. The Examiner further explains that, during the execution of code, Bamford describes the nodes can request data from shared memory in order to fulfill the code execution. Ans. 43. Thus, the Examiner finds the requested data is a portion of code. Ans. 43-44.

Although the Specification states a code cache "can be used to store data or instructions that a program accesses each time during startup or frequently during operation of the program," the claim language is directed to "caching computer code." As further recited in the claim, a memory controller transfers "requested code" in response to the code request signals and a shared code caching engine "receive[s] executed native code," the native code comprising "at least a portion of a module of the application program." We disagree with the Examiner that the data items of Bamford are the requested code, as recited in the claims.

For the reasons discussed supra, we are persuaded of Examiner error. Accordingly, we do not sustain the Examiner's rejection of independent claim 1.

Claims 2, 3, 16, and 17

Independent claim 2 recites, in relevant part,

    converting the cache data file into a code cache file, wherein converting includes processing the runtime data in the cache data file to determine contents of the code cache file, and the contents of the code cache file include the native code from at least a portion of a module of the application program.

Independent claims 3, 16, and 17 each recite a commensurate limitation.

In rejecting claim 2, the Examiner finds Reddi teaches converting a cache data file into a code cache file because Reddi teaches a persistent code cache manager invokes a cache lookup function at the beginning of execution. Final Act. 9-10 (citing Reddi § 3.2.3). Further, the Examiner finds Reddi's code caches contain traces and their associated data structures. Ans. 51 (citing Reddi § 3.2.1). The Examiner explains this teaching as "the data structure and the code cache are persisted in a disk separately, [and] during runtime the data structure is consulted in order to find the right code to avoid translation between execution[s]." Ans. 51 (citing Reddi § 3.2.3).
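The persistence mechanism the Examiner cites can be pictured with the C sketch below: the code cache and its lookup data structure are saved to disk at the end of one execution and reloaded and consulted at the start of the next, so previously translated traces are reused rather than re-translated. The file layout and all names are hypothetical, not Reddi's implementation; note that the sketch merely saves and reloads a file, and whether such reuse involves any "converting" is the crux of the parties' dispute.

    /* Persisting a code cache and its lookup data across executions. */
    #include <stdio.h>
    #include <string.h>

    #define MAX_TRACES  128
    #define TRACE_BYTES 64

    /* The persisted cache holds traces plus the associated lookup
     * data (here, a parallel array of tags). */
    struct persistent_cache {
        unsigned long tags[MAX_TRACES];              /* original PCs  */
        unsigned char code[MAX_TRACES][TRACE_BYTES]; /* cached traces */
        int count;
    };

    /* Write the cache and its lookup structure to disk at exit. */
    static int cache_save(const struct persistent_cache *pc, const char *path)
    {
        FILE *f = fopen(path, "wb");
        if (!f) return -1;
        size_t n = fwrite(pc, sizeof *pc, 1, f);
        fclose(f);
        return n == 1 ? 0 : -1;
    }

    /* Reload the cache at the start of the next execution. */
    static int cache_load(struct persistent_cache *pc, const char *path)
    {
        memset(pc, 0, sizeof *pc);  /* empty cache on any failure */
        FILE *f = fopen(path, "rb");
        if (!f) return -1;
        size_t n = fread(pc, sizeof *pc, 1, f);
        fclose(f);
        if (n != 1) { memset(pc, 0, sizeof *pc); return -1; }
        return 0;
    }

    /* Consult the persisted lookup data before translating anew. */
    static unsigned char *cache_find(struct persistent_cache *pc,
                                     unsigned long tag)
    {
        for (int i = 0; i < pc->count; i++)
            if (pc->tags[i] == tag)
                return pc->code[i];  /* reuse across executions */
        return NULL;
    }

    int main(void)
    {
        static struct persistent_cache pc;
        cache_load(&pc, "app.pcache");          /* start of execution */
        if (pc.count < MAX_TRACES && !cache_find(&pc, 0x401000UL)) {
            pc.tags[pc.count] = 0x401000UL;     /* newly seen block:  */
            memset(pc.code[pc.count], 0x90,     /* stand-in "trace"   */
                   TRACE_BYTES);
            pc.count++;
        }
        return cache_save(&pc, "app.pcache") == 0 ? 0 : 1;
    }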
Additionally, the Examiner finds Bamford teaches using a hash algorithm for data mapping (i.e., identifying which partition is associated with a given piece of data). Ans. 50 (citing Bamford ¶ 23). The Examiner finds this partition-to-data mapping teaches the claimed converting by "processing during runtime of the partition-map in order to find the exact code." Ans. 50.

Appellants argue neither Reddi nor Bamford, alone or in combination, teaches converting a first file (i.e., the cache data file) to a second file (i.e., the code cache file) or processing runtime data in the first file to determine the contents of the second file. App. Br. 15-16; Reply Br. 5-7. In particular, Appellants assert merely loading a saved code cache from disk (as the Examiner finds in Reddi) does not perform any conversion, as required by the claim language. App. Br. 15-16. Similarly, Appellants contend finding and loading a data item from a cache, as in Bamford, does not convert the data item. Reply Br. 5.

We find the Examiner's reliance on the cited sections of Reddi and Bamford misplaced for teaching the converting of a cache data file into a code cache file by processing the runtime data in the cache data file to determine the contents of the code cache file, as recited in claim 2. For example, although Reddi discloses a compilation unit translates application code into code units called traces and that, once compiled, a trace is placed in a code cache (see Reddi § 2.1), the sections of Reddi identified by the Examiner relate to reuse of persistent code caches. The Examiner's explanations do not provide sufficient evidence or technical reasoning to support a finding that Reddi teaches the claimed converting of a cache data file into a code cache file. Additionally, we disagree with the Examiner that Bamford's partition-to-data mapping teaches the claimed converting. Rather, Bamford describes the partition-to-data mapping as necessary for the partitioned shared cache implementation "to know which partition is responsible for holding which data items." Bamford ¶ 22.

For the reasons discussed supra, we are persuaded of Examiner error. Accordingly, we do not sustain the Examiner's rejection of independent claim 2. For similar reasons, we do not sustain the Examiner's rejections of independent claims 3, 16, and 17, which recite commensurate limitations.

Claims 4-14

Independent claim 4 recites a method for caching computer code from an application program comprising:

    [s]electing for each block of native code, a code caching scheme for storing runtime data in a cache data file corresponding to the block of native code, from at least two different code caching schemes that each comprise a different demarcation of the runtime data into separable divisions of runtime data that can each be individually removed, replaced, or have their entrances or exits modified.

Independent claims 12 and 13 each recite a commensurate limitation.

In rejecting claim 4, the Examiner relies on Bruening to teach selecting a code caching scheme from at least two different code caching schemes for storing runtime data in a cache data file. Final Act. 14-15 (citing Bruening §§ 1.3, 6.3.1, 6.3.3).

Appellants contend Bruening, as cited by the Examiner, describes resizing a code cache based on a ratio of regenerated fragments to replaced fragments, "where a fragment is a data structure in the software-managed code cache." App. Br. 16-17; Reply Br. 7-8. Appellants assert that merely resizing a cache does not teach the selection of a code caching scheme from among at least two different code caching schemes. App. Br. 16.

In response, the Examiner finds Bruening teaches "efficient schemes for code reuse and bounding code cache size to match the working application," which the Examiner interprets to mean identifying the proper cache size to hold the working set of the application. Ans. 53. The Examiner further explains Bruening teaches different caching schemes to accommodate the application: for small blocks, Bruening uses small sizes (i.e., a small caching scheme), and for larger blocks, Bruening teaches using larger block sizes (i.e., a larger scheme). Ans. 54 (citing Bruening § 6.3.4). Similarly, the Examiner finds Reddi also teaches two different schemes to store small block sizes and large block sizes. Ans. 55 (citing Reddi § 4.6).

As an initial matter, we note the claim language provides additional language regarding the code caching schemes: each scheme comprises "a different demarcation of the runtime data into separable divisions of runtime data that can each be individually removed, replaced, or have their entrances or exits modified." Additionally, in describing particular code caching schemes, the Specification identifies that a fine-grained caching scheme provides fine-grain control over code caches and allows unlinking (i.e., removing all incoming and outgoing jumps) and the deletion of individual blocks. Spec. ¶ 70. Additionally, a fine-grained scheme "uses a plurality of data structures for each block of received native code." Spec. ¶ 70. By contrast, a coarse-grained caching scheme is described as not supporting individual deletion or unlinking, and as using data structures per module rather than per block. Spec. ¶ 72.
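The contrast the Specification draws can be pictured with the following C sketch, in which the two schemes differ in how the runtime data is demarcated: per-block data structures that support unlinking and individual deletion, versus one per-module structure that does not. The structures and the selection heuristic are invented for illustration and are not Appellants' claimed method.

    /* Two demarcations of code-cache runtime data: per block vs. per
     * module, loosely following Spec. ¶¶ 70 and 72. */
    #include <stdbool.h>
    #include <stddef.h>
    #include <stdio.h>

    /* Fine-grained: bookkeeping per block, so a single block can be
     * unlinked (incoming/outgoing jumps removed) or deleted. */
    struct fine_block {
        void  *cache_pc;   /* the block's copy in the code cache   */
        void **incoming;   /* jumps into this block, for unlinking */
        void **outgoing;   /* jumps out of this block              */
        size_t n_in, n_out;
    };

    /* Coarse-grained: one data structure per module; individual
     * blocks cannot be separately unlinked or deleted. */
    struct coarse_module {
        const char *module_name;
        void *cache_region; /* one contiguous region for the module */
        size_t used;
    };

    enum scheme { FINE_GRAINED, COARSE_GRAINED };

    /* The kind of per-block selection the claim recites: choose a
     * demarcation of the runtime data for each block of native code.
     * The criteria here are invented for illustration. */
    static enum scheme select_scheme(bool block_may_change,
                                     bool module_is_shared)
    {
        if (block_may_change)
            return FINE_GRAINED;   /* must unlink/delete single blocks */
        if (module_is_shared)
            return COARSE_GRAINED; /* per-module data, cheap to share  */
        return FINE_GRAINED;
    }

    int main(void)
    {
        enum scheme s = select_scheme(true, false);
        printf("%s\n", s == FINE_GRAINED ? "fine-grained"
                                         : "coarse-grained");
        return 0;
    }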
Here, we agree with Appellants that the claimed different code caching schemes are not taught by merely resizing a code cache. Further, the Examiner has not provided sufficient evidence or technical explanation to support the finding that different cache sizes constitute a different cache scheme. Rather, the cited disclosures suggest the different cache sizes are the result of applying a single scheme, which alters the cache size depending on the size of the block to be cached.

For the reasons discussed supra, we are persuaded of Examiner error. Accordingly, we do not sustain the Examiner's rejection of independent claim 4. For similar reasons, we do not sustain the Examiner's rejection of independent claims 12 and 13, which recite commensurate limitations. Additionally, we do not sustain the Examiner's rejection of claims 5-11 and 14, which depend directly or indirectly therefrom.

DECISION

We reverse the Examiner's decision rejecting claims 1-14, 16, and 17.

REVERSED