Ex parte Brown et al., No. 11/468,107 (P.T.A.B. Apr. 23, 2014)

UNITED STATES PATENT AND TRADEMARK OFFICE
___________

BEFORE THE PATENT TRIAL AND APPEAL BOARD
___________

Ex parte DOUGLAS BROWN and ANITA RICHARDS
___________

Appeal 2011-010584
Application 11/468,107
Technology Center 2100
___________

Before HUBERT C. LORIN, ANTON W. FETTING, and MICHAEL W. KIM, Administrative Patent Judges.

FETTING, Administrative Patent Judge.

DECISION ON APPEAL

STATEMENT OF THE CASE[1]

[1] Our decision will make reference to the Appellants' Appeal Brief ("App. Br.," filed January 10, 2011) and the Examiner's Answer ("Ans.," mailed March 22, 2011).

Douglas Brown and Anita Richards (Appellants) seek review under 35 U.S.C. § 134 of a final rejection of claims 1-5 and 7-19, the only claims pending in the application on appeal. We have jurisdiction over the appeal pursuant to 35 U.S.C. § 6(b).

The Appellants invented a way of "managing a plurality of database systems" (Spec. 3, para. 0008).

An understanding of the invention can be derived from a reading of exemplary claim 1, which is reproduced below [bracketed matter and some paragraphing added].

1. A system for managing a plurality of database systems, the system including:
    [1] an interface
        for obtaining data
        indicative of one or more operational characteristics
        of each of the database systems;
    [2] a monitor
        that is responsive to the data
        for providing a signal
        indicative of an instruction
        to adjust one or more of the operational characteristics
        of a selected one of the database systems;
    [3] an input
        for receiving a request from a user;
    [4] a processor
        responsive to the interface
        for selecting one of the databases
        to process the request, and
    [5] an output
        for providing the request
        to the selected database system
        for processing.
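For orientation only, the following is a minimal sketch of how the five recited elements might fit together. It is an editorial illustration under assumed names (FakeDatabase, ManagedDatabases, and the stats/adjust/execute methods are all hypothetical), not the Appellants' disclosed implementation.

```python
# Hypothetical sketch of the arrangement recited in claim 1. All names
# are illustrative assumptions, not drawn from the application.

class FakeDatabase:
    """Stand-in for one database system under management."""

    def __init__(self, cpu_load):
        self.cpu_load = cpu_load      # an "operational characteristic"
        self.max_sessions = 100

    def stats(self):
        return {"cpu_load": self.cpu_load}

    def adjust(self, instruction):
        # Act on an instruction to adjust an operational characteristic.
        if instruction == "reduce_concurrency":
            self.max_sessions = max(1, self.max_sessions // 2)

    def execute(self, request):
        return f"processed {request!r}"


class ManagedDatabases:
    def __init__(self, databases):
        self.databases = databases    # name -> FakeDatabase

    def get_characteristics(self):
        # [1] interface: data indicative of operational characteristics.
        return {name: db.stats() for name, db in self.databases.items()}

    def monitor(self, data):
        # [2] monitor: responsive to the data, provide a signal
        # indicative of an instruction to adjust a selected system.
        for name, stats in data.items():
            if stats["cpu_load"] > 0.9:
                return {"target": name, "instruction": "reduce_concurrency"}
        return None

    def handle_request(self, request):
        # [3] input: receive a request from a user.
        data = self.get_characteristics()
        signal = self.monitor(data)
        if signal is not None:
            self.databases[signal["target"]].adjust(signal["instruction"])
        # [4] processor: select one of the databases to process the request.
        chosen = min(data, key=lambda name: data[name]["cpu_load"])
        # [5] output: provide the request to the selected database system.
        return self.databases[chosen].execute(request)


if __name__ == "__main__":
    system = ManagedDatabases({"dbs1": FakeDatabase(0.95),
                               "dbs2": FakeDatabase(0.40)})
    print(system.handle_request("SELECT 1"))  # routed to lighter-loaded dbs2
```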
The Examiner relies upon the following prior art:

Subramanyam    US 5,701,471           Dec. 23, 1997
Karlsson       US 2006/0294044 A1     Dec. 28, 2006

Claims 1-5 and 7-19 stand rejected under 35 U.S.C. § 103(a) as unpatentable over Subramanyam and Karlsson.

ISSUES

The issues of obviousness turn primarily on the degree of patentable weight afforded the limitation "to adjust one or more of the operational characteristics," the breadth of that limitation, and whether Karlsson shows it was known to use feedback to close a system loop.

FACTS PERTINENT TO THE ISSUES

The following enumerated Findings of Fact (FF) are believed to be supported by a preponderance of the evidence.

Facts Related to the Prior Art

Subramanyam

01. Subramanyam is directed to "testing the performance of database management systems (DBMS), and particularly to a modular benchmarking system that provides a uniform environment for consistently testing the performance of multiple database management systems so as to enable meaningful comparisons of the test results." Subramanyam 5-11.

02. Subramanyam discloses:

    a [DBMS] benchmark testing system for testing performance of a plurality of DBMS's. The system stores both DBMS independent and DBMS specific files in a computer memory. The DBMS specific files include performance statistics collection procedures for each said DBMS, task performance procedures for each said DBMS for executing checkpoints and other DBMS operations, and environmental parameter definition files for each DBMS for specifying DBMS environmental parameters that control the configuration and operation of each DBMS.

    Subramanyam 2:14-24.

03. DBMS independent test scripts specify operations to be performed by specified ones of the DBMS's so as to test performance of the DBMS's, and specify performance statistics to be collected by the performance statistics collection procedures while the DBMS performs the specified operations. Test result files store benchmark test results, which include performance statistics for each benchmark test executed by the system under the control of one of the test scripts, as well as information denoting the DBMS tested, the test script used to perform the benchmark test, the operations performed by the DBMS tested, and the DBMS environmental parameters for the DBMS tested. DBMS independent post test analysis procedures are used to analyze the information stored in the test result files. As a result, the files used to store benchmark test results are self-documenting with respect to the system configuration and database configuration on which the benchmark tests were run and with respect to the operations performed by the DBMS's during the execution of the benchmark tests. Subramanyam 2:25-43.

04. Referring to FIG. 1, the system includes a central processing unit, a user interface, random access memory, and secondary memory (e.g., disk storage). In addition, the system will typically include database storage, which generally includes additional hard disk storage devices for the storage of database tables and files used in the process of benchmark testing. Subramanyam 3:10-18.

05. Subramanyam discloses:

    [A] system includes a set of performance statistics collection modules . . . , which collect statistical information from the system and the DBMS's while they are performing a set of tasks. . . . [t]here are three levels of performance statistics collection procedures: system level, DBMS level, and test level procedures. The system level procedures for collecting performance statistics collect information such as: number of system calls made; CPU usage including amount of time the system spent in user mode and kernel mode; number of context switches made; network statistics such as number of packets sent and received per second and the packet collision rate; and I/O statistics such as, for each disk used in the benchmark test, average disk access time, number of disk accesses per second, and the average size of the data blocks accessed.

    Subramanyam 3:26-41.

06. Procedures for collecting DBMS level performance statistics collect statistics on matters such as the numbers of transactions aborted and completed, cache hit rates and contention for latches. Procedures for collecting test specific performance statistics collect specialized performance statistics, such as number of queries successfully handled and response rates when various different levels of queries were used, and other performance statistics associated with various phases of the tests performed. Subramanyam 3:42-50.
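FF 05 and FF 06 describe statistics gathered at three levels: system, DBMS, and test. A minimal sketch of that layering follows; the function names and the particular metric values are illustrative assumptions, not Subramanyam's disclosed procedures.

```python
# Hypothetical sketch of three-level performance statistics collection
# (system, DBMS, and test level), per FF 05-06. Names and values are
# illustrative placeholders, not Subramanyam's code.

import time


def collect_system_stats():
    # System level: OS-wide counters such as system calls, CPU time in
    # user and kernel mode, context switches, and disk I/O timing.
    return {
        "system_calls": 120456,
        "cpu_user_mode_s": 42.1,
        "cpu_kernel_mode_s": 7.3,
        "context_switches": 9870,
        "avg_disk_access_ms": 4.2,
    }


def collect_dbms_stats():
    # DBMS level: transactions aborted and completed, cache hit rate,
    # contention for latches.
    return {
        "txn_completed": 10000,
        "txn_aborted": 12,
        "cache_hit_rate": 0.97,
        "latch_waits": 85,
    }


def collect_test_stats(queries_issued, queries_ok, elapsed_s):
    # Test level: statistics specific to one benchmark test, such as
    # queries successfully handled and the response rate.
    return {
        "queries_ok": queries_ok,
        "success_rate": queries_ok / queries_issued,
        "queries_per_s": queries_issued / elapsed_s,
    }


def run_benchmark():
    started = time.monotonic()
    # ... issue the test script's queries against the DBMS under test ...
    elapsed = max(time.monotonic() - started, 1e-9)
    # Bundle all three levels into one self-documenting result (cf. FF 03).
    return {
        "system": collect_system_stats(),
        "dbms": collect_dbms_stats(),
        "test": collect_test_stats(100, 99, elapsed),
    }


if __name__ == "__main__":
    print(run_benchmark())
```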
07. "[T]he post test procedures . . . include an I/O analysis procedure that analyzes the disk performance statistics and bring to the user's attention any problems identified by the I/O analysis procedure." Subramanyam 4:51-54.

08. FIG. 2 shows the directory structure to store test scripts, procedures and parameter files associated with the DBMS benchmark tests. The top level directories include a "generic" directory that contains the main procedures, as well as the top level . . . (i.e., DBMS independent) test scripts, the test parameter template files and the test parameter files defined by users of the system when setting up benchmark tests to be executed. The top level directories also include a DBMS Vendors directory that contains subdirectories of files for each DBMS, a Tools directory that contains the procedures for collecting system level performance statistics as well as post test procedures for analyzing benchmark test results, a Control Files directory that contains procedures and data files for controlling access to the benchmark testing system, and an Output files directory for storing test result files. Subramanyam 5:21-39, Fig. 2.

09. Test specific subdirectories 144-1, 144-2, . . . , contain scripts and procedures specific to respective ones of the benchmark tests. For instance, these procedures generate database queries, transaction requests, and other DBMS commands in particular sequences and combinations. The test specific subdirectories 144 may also store test specific performance statistics collection procedures. Subramanyam 5:55-60.

Karlsson

10. Karlsson is directed to resource allocation, and more specifically to dynamically controlling weights assigned to consumers competing for a shared resource, wherein a proportional-share scheduler schedules access to the shared resource based on the assigned weights. Karlsson para. 0002.

11. Closed-loop systems have been employed for controlling performance of systems in various ways. Karlsson para. 0009.

12. A closed-loop system is described in which a controller monitors performance of a system and, based on the observed performance and a desired performance goal, "autonomously adjusts the weights assigned to competing consumers, wherein the weights are used by a proportional-share scheduler for scheduling access to the shared resource. Accordingly, user interaction is not required for adjusting the weights assigned to the competing consumers." A technique "can be used for controlling the weights assigned to competing consumers without requiring a priori knowledge of the system (i.e., the shared resource and/or interrelationships between the competing consumers)." Karlsson para. 0016 (emphasis added).
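FF 10-12 describe a feedback loop: observe performance, compare it against a goal, and adjust scheduler weights with no operator in the loop. The sketch below illustrates one control iteration of that general idea; the gain constant, the latency metric, and all names are assumptions for illustration, not Karlsson's disclosed controller.

```python
# Hypothetical sketch of closed-loop weight control for a
# proportional-share scheduler (cf. FF 10-12). The gain, metric, and
# names are illustrative assumptions, not Karlsson's disclosure.

def adjust_weights(weights, observed_latency, target_latency, gain=0.1):
    """One control iteration: consumers that miss their latency goal get
    a larger weight, and hence a larger proportional share. Only the
    observed error feeds back; no a priori model of the system is used."""
    new_weights = {}
    for consumer, weight in weights.items():
        error = observed_latency[consumer] - target_latency[consumer]
        # Positive error (too slow) raises the weight; negative lowers it.
        new_weights[consumer] = max(0.01, weight * (1.0 + gain * error))
    total = sum(new_weights.values())
    return {c: w / total for c, w in new_weights.items()}  # normalized


if __name__ == "__main__":
    weights = {"tenant_a": 0.5, "tenant_b": 0.5}
    observed = {"tenant_a": 12.0, "tenant_b": 8.0}  # ms, measured
    target = {"tenant_a": 10.0, "tenant_b": 10.0}   # ms, goal
    print(adjust_weights(weights, observed, target))
    # tenant_a missed its goal, so its share grows without user interaction.
```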
ANALYSIS

We are not persuaded by the Appellants' argument that the weights assigned to customers in Karlsson are not equivalent to operational characteristics of a selected database system. Br. 6.

The limitation at issue is that of being "responsive to the data for providing a signal indicative of an instruction to adjust one or more of the operational characteristics of a selected one of the database systems" (claim 1). We find the claim does not narrow the manner or degree in which a signal is provided, in which the signal is manifest, how the signal is indicative of an instruction, or how the instruction relates to adjusting characteristics. Further, the limitation "to adjust one or more of the operational characteristics" is aspirational rather than functional, not affecting the recited method. Thus, this limitation, being non-functional, is afforded no patentable weight.

In a non-precedential decision, our reviewing court reminded us of the applicability of the precedential In re Lowry, 32 F.3d 1579 (Fed. Cir. 1994), In re Gulack, 703 F.2d 1381 (Fed. Cir. 1983), and In re Bernhart, 417 F.2d 1395 (CCPA 1969) decisions:

    We have held that patent applicants cannot rely on printed matter to distinguish a claim unless "there exists [a] new and unobvious functional relationship between the printed matter and the substrate." In re Gulack, 703 F.2d 1381, 1386 (Fed. Cir. 1983). . . .
    . . . .
    . . . [T]he Board did not create a new "mental distinctions" rule in denying patentable weight . . . . On the contrary, the Board simply expressed the above-described functional relationship standard in an alternative formulation—consistent with our precedents—when it concluded that any given position label's function . . . is a distinction "discernable only to the human mind." . . . ; see In re Lowry, 32 F.3d 1579, 1583 (Fed. Cir. 1994) (describing printed matter as "useful and intelligible only to the human mind") (quoting In re Bernhart, . . . 417 F.2d 1395, 1399 (CCPA 1969)).

In re Jie Xiao, 462 Fed. Appx. 947, 950-52 (Fed. Cir. 2011) (non-precedential). Thus, non-functional descriptive material, being useful and intelligible only to the human mind, is given no patentable weight. See also In re Ngai, 367 F.3d 1336, 1339 (Fed. Cir. 2004). "The rationale behind this line of cases is preventing the indefinite patenting of known products by the simple inclusion of novel, yet functionally unrelated limitations." King Pharmaceuticals, Inc. v. Eon Labs, Inc., 616 F.3d 1267, 1279 (Fed. Cir. 2010) (citing Ngai, 367 F.3d at 1339). (The relevant inquiry here is whether the additional instructional limitation has a "new and unobvious functional relationship" with the method, that is, whether the limitation in no way depends on the method, and the method does not depend on the limitation.)

Here, the actual adjustment of one or more of the operational characteristics does not depend on providing a signal, and providing a signal does not depend on the adjustment.

So, as Subramanyam's collection of performance statistics provides signals in the form of initiating post test procedures indicative of instructions to operators to analyze the statistics to see how to adjust the operating parameters, this is within the scope of the limitation, with or without patentable weight afforded the "to" clause. The Examiner applied Karlsson to show that the use of feedback in a closed loop system was known to eliminate the operator from the loop. What particular feedback Karlsson used is irrelevant, as Karlsson was applied only to show the known use of a closed loop system in general.

CONCLUSIONS OF LAW

The rejection of claims 1-5 and 7-19 under 35 U.S.C. § 103(a) as unpatentable over Subramanyam and Karlsson is proper.
DECISION

The rejection of claims 1-5 and 7-19 is affirmed.

No time period for taking any subsequent action in connection with this appeal may be extended under 37 C.F.R. § 1.136(a). See 37 C.F.R. § 1.136(a)(1)(iv) (2011).

AFFIRMED