Ex parte Capps et al., No. 12/335,696 (P.T.A.B. June 29, 2016)

UNITED STATES PATENT AND TRADEMARK OFFICE

APPLICATION NO.: 12/335,696
FILING DATE: 12/16/2008
FIRST NAMED INVENTOR: Louis B. Capps Jr.
ATTORNEY DOCKET NO.: AUS920080149US1
CONFIRMATION NO.: 1580
EXAMINER: CALDWELL, ANDREW T.
ART UNIT: 2183
NOTIFICATION DATE: 07/01/2016
DELIVERY MODE: ELECTRONIC

65362 7590 07/01/2016
TERRILE, CANNATTI, CHAMBERS & HOLLAND, LLP
IBM Austin
P.O. Box 203518
Austin, TX 78720

UNITED STATES DEPARTMENT OF COMMERCE
United States Patent and Trademark Office
Address: COMMISSIONER FOR PATENTS, P.O. Box 1450, Alexandria, Virginia 22313-1450
www.uspto.gov

Please find below and/or attached an Office communication concerning this application or proceeding. The time period for reply, if any, is set in the attached communication. Notice of the Office communication was sent electronically on the above-indicated "Notification Date" to the following e-mail address(es): tmunoz@tcchlaw.com

PTOL-90A (Rev. 04/07)

UNITED STATES PATENT AND TRADEMARK OFFICE
BEFORE THE PATENT TRIAL AND APPEAL BOARD

Ex parte LOUIS B. CAPPS JR., RONALD E. NEWHART, THOMAS E. COOK, ROBERT H. BELL JR. and MICHAEL J. SHAPIRO

Appeal 2013-008032
Application 12/335,696
Technology Center 2100

Before STEVEN D.A. McCARTHY, MICHAEL L. HOELTER and JEREMY J. CURCURI, Administrative Patent Judges.

McCARTHY, Administrative Patent Judge.

DECISION ON APPEAL

STATEMENT OF THE CASE

The Appellants appeal under 35 U.S.C. § 134(a) from the Examiner's decision rejecting claims 1-20. We have jurisdiction under 35 U.S.C. § 6(b).

We sustain the rejection of claims 1-8 and 17-20. We do not sustain the rejection of claims 9-16.

[1] The Appellants identify International Business Machines Corporation as the real party in interest. (App. Br. 1).
CLAIMED SUBJECT MATTER

With reference to Figure 2, the Specification teaches:

    To accomplish efficient workload scheduling [among heterogeneous processing units in a multiprocessor system], processing characteristics of an instruction set are determined and then a processing unit having similar characteristics is used to execute the instruction set. An operating system 230 has a workload scheduler 232 that discovers an instruction set's workload characteristics by initiating execution of the workload across plural heterogeneous processing units. A performance analyzer 234 analyzes performance metrics for each processing unit that are provided by a performance sensor 236 located at each processing unit. Based on the performance metrics provided from each performance sensor 236, performance analyzer 234 determines workload characteristics of the instruction set for the processing unit associated with the performance sensor 236 and stores an instruction set identifier, workload characteristic and performance metric for each instruction set and analyzed processing unit in a performance analyzer database. Based on the analysis, workload scheduler 232 schedules a preferred processing unit to execute the instructions and stores the processing unit and workload characteristics in a scheduling database 240 for use in subsequent executions of the instruction set.

(Spec., para. 18; see also id., para. 21).

Claims 1, 9 and 17 are independent. Claim 1 recites:

1.
A method for processing information, the method comprising:

    executing a homogeneous instruction set at each of plural processing units, the processing units having heterogeneous characteristics including at least a first and second of the plural processing units each having plural cores, the first processing unit having a first ratio of integer cores to floating point cores, the second processing unit having a second ratio of integer cores to floating point cores, the first ratio different from the second ratio;

    monitoring one or more performance metrics associated with execution of the instructions at each of the plural processing units;

    comparing the performance metrics to determine one of the processing units having a desired performance; and

    selecting the processing unit having the desired performance to execute the instructions.

(App. Br. 10 (Claims App'x)).

REJECTIONS ON APPEAL

The Examiner rejects claims 9, 13, 14 and 16 under pre-AIA 35 U.S.C. § 102(b) as being unpatentable over Farkas (US 2005/0013705 A1, publ. Jan. 20, 2005). (See Final Off. Act. 2-4).

The Examiner rejects claims 1-5, 8 and 11 under pre-AIA 35 U.S.C. § 103(a) as being unpatentable over Farkas in view of the Examiner's Official Notice that "one of ordinary skill in the art would [have] recognize[d] that floating point units would constitute a 'resource'" possessed by at least certain processing cores. (Final Off. Act. 5; see also id. at 4-7).

The Examiner rejects claim 6 under § 103(a) in view of the teachings cited in the rejection of parent claim 1; and further in view of the Examiner's Official Notice that "one of ordinary skill in the art would [have] recognize[d] that a prefetch unit would constitute a 'resource'" possessed by at least certain processing cores (Final Off. Act. 7).
The Examiner rejects claim 7 in view of the teachings cited in the rejection of parent claim 6; and further in view of the Examiner's Official Notice that a "prefetch unit, as is known in the art, predicts or knows which instruction addresses will be fetched in the future, checks the cache to determine if [the instruction] is in the cache, and if not, fetches [the instruction] from memory in advance so the stall of waiting on the cache miss can be minimized or eliminated entirely" (see Final Act. 8).

The Examiner rejects claim 10 under § 103(a) as being unpatentable over Farkas in view of the Examiner's Official Notice that "one of ordinary skill in the art would [have] recognize[d] that integer units would constitute a 'resource'" possessed by at least certain processing cores. (Final Off. Act. 9; see also id. at 5).

The Examiner rejects claim 12 under § 103(a) over Farkas in view of the Examiner's Official Notice that "one of ordinary skill in the art would [have] recognize[d] that a prefetch unit would constitute a 'resource'" possessed by at least certain processing cores (Final Off. Act. 7).

The Examiner rejects claims 15 and 17-19 under § 103(a) over Farkas in view of the Examiner's Official Notice that:

    [a] prefetch unit, as is known in the art, predicts or knows which instruction addresses will be fetched in the future, checks the cache to determine if [the instruction] is in the cache, and if not, fetches [the instruction] from memory in advance so the stall of waiting on the cache miss can be minimized or eliminated entirely. Since a prefetch is essentially a cache operation, just done in advance[.] [C]hecking cache hit and miss rates is essentially the same as checking a prefetch hit and miss rate.

(Final Off. Act. 8 & 10; see also id. at 10-11).
The Examiner rejects claim 20 under § 103(a) in view of the teachings cited in the rejection of parent claim 17; and further in view of the Examiner's Official Notice that "one of ordinary skill in the art would [have] recognize[d] that a prefetch unit would constitute a 'resource'" possessed by at least certain processing cores (Final Off. Act. 7; see also id. at 24-25).

The Examiner cites D. Patterson & J. Hennessy, COMPUTER ARCHITECTURE: A QUANTITATIVE APPROACH 187-90 & 400-04 (Morgan Kaufmann Publs., San Francisco, Calif., 2d edition 1996) ("Patterson"), as support for each fact taken by Official Notice. (See Ans. 5).

ISSUES

First, would it have been obvious to modify the teachings of Farkas so as to:

    execut[e] a homogeneous instruction set at each of plural processing units, the processing units having heterogeneous characteristics including at least a first and second of the plural processing units each having plural cores, the first processing unit having a first ratio of integer cores to floating point cores, the second processing unit having a second ratio of integer cores to floating point cores, the first ratio different from the second ratio,

as recited in claim 1? (See App. Br. 4-6; Ans. 5-7 & 14-17; Reply Br. 2).

Second, does Farkas describe a multiprocessor system including "a workload scheduler operable to simultaneously initiate a homogeneous instruction set on ... plural heterogeneous processing units," as recited in claim 9? (See App. Br. 3-4; Ans. 11-12 & 13-14; Reply Br. 1-2).

Third, would it have been obvious to modify the teachings of Farkas to perform the recited method for scheduling instruction sets for execution by selected plural heterogeneous processing units having "different proportions of prefetch engines at each heterogeneous processing unit," as recited in claim 17? (See App. Br. 6; Ans. 11-12).
Fourth, would it have been obvious to modify the teachings of Farkas to perform the "method of Claim 6 wherein the homogeneous instructions comprise operating system instructions, the performance metric comprises prefetch hit rates, and selecting comprises selecting the processing unit having a disproportionately greater number of prefetch engines to execute the operating system instructions," as recited in claim 7? (See App. Br. 6-7; Ans. 9-10).

FINDINGS OF FACT

The record supports the following findings of fact ("FF") by a preponderance of the evidence.

1. Farkas describes a computer system including a pool of processing cores having different complexity, resource and performance measures. (See Farkas, paras. 15 & 22). Paragraph 15 of Farkas says that "[m]ulti-core processor system 100 is a heterogeneous multicore and core-switching implementation in a chip-level multi-core processor (CMP) with multiple, diverse processor cores that all execute the same instruction set."

2. According to Farkas, larger, more complex processing cores may provide higher performance for at least some jobs at the expense of occupying a greater marginal die area. (See Farkas, paras. 4 & 23).

3. Farkas also teaches that the "design choices in what processing performance levels to provide with the hardware can depend on a forecast of particular processing jobs that will be executed." (Farkas, para. 52).

4. Farkas teaches that the overall throughput of a multicore system can be optimized by assigning the execution of each individual job (that is, each thread or process) to the processor core best designed to handle the job. (See Farkas, paras. 20, 21, 23 & 29).

5. Farkas teaches that:

    The relative performance of jobs on cores of different size and complexity can be ascertained in a number of ways.
    The simplest method would be to have the jobs annotated by users or annotated from profiling of previous runs. Such results in static assignments for the duration of an application's execution. Another method would be to monitor the performance of jobs on the system in real time, to move jobs from one size processor to another, and to compute the relative performance obtained on different cores. Such results in a dynamic assignment.

(Farkas, para. 30; see also id., para. 51).

6. Paragraphs 20, 21 and 23 of Farkas describe a method for efficient workload scheduling resulting in dynamic assignments. According to Farkas, individual jobs may initially be assigned to larger, more complex cores. Subsequent jobs might be assigned to smaller, simpler cores. (Farkas, para. 23). "If there are more jobs available to run than large complex processors, then the available jobs are run first on one processor type and then switched to another processor type." (Farkas, para. 20).

7. Farkas describes using a timer to periodically interrupt the execution of the operating system. (Farkas, para. 20). The "decision to reassign the workloads to different cores is based on the metrics obtained during the test intervals, as well as other additional user-defined or workload-defined metrics." (Farkas, para. 21).

8. Among the metrics that Farkas teaches obtaining are the number of instructions executed per second and the number of cache misses per instruction. (Farkas, para. 18).

9. We adopt the Examiner's finding that "one of ordinary skill in the art would [have] recognize[d] that integer units would constitute a 'resource'" possessed by at least certain processing cores. (Final Off. Act. 9).

10.
We adopt the Examiner's finding that "one of ordinary skill in the art would [have] recognize[d] that floating point units would constitute a 'resource'" possessed by at least certain processing cores. (Final Off. Act. 5).

11. Pages 187-90 of Patterson describe instruction pipelines for a DLX reduced instruction set computing processor including an integer functional unit and floating point functional units. This teaching supports the Examiner's finding that it was known to combine integer and floating point functional units in a single processor unit. (See Ans. 6-7).

12. We adopt the Examiner's finding that "one of ordinary skill in the art would [have] recognize[d] that prefetch units [were] common in the art, as [prefetch units permitted] the cache to be warmed up ahead of time, so that there were fewer cache misses which stall[ed] the system out." (Ans. 8).

13. Pages 400-01 of Patterson teach that prefetching of either data or instructions is a technique that may be used to reduce cache misses. This teaching supports the Examiner's finding that known processing units might have different numbers of prefetch engines. (See Ans. 8).

14. We adopt the Examiner's finding that a "prefetch unit, as is known in the art, predicts or knows which instruction addresses will be fetched in the future, checks the cache to determine if [the instruction] is in the cache, and if not, fetches [the instruction] from memory in advance so the stall of waiting on the cache miss can be minimized or eliminated entirely" (see Final Off. Act. 9). This finding is supported by the teachings of pages 400-04 of Patterson.
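For readers outside the art, the prefetch-unit behavior adopted in FF 12-14 can be sketched as follows. The sketch is purely illustrative and forms no part of the record; the function and all values in it are hypothetical.

```python
# Illustrative sketch only (not part of the record): a prefetch unit
# predicts which addresses will be needed, checks the cache, and loads
# any missing lines from memory in advance, so that the later demand
# access hits in the warmed cache instead of stalling on a miss.

def prefetch(cache, predicted_addresses):
    """Warm the cache for the predicted addresses.

    Returns the number of lines fetched early -- each one a cache miss,
    and thus a stall, avoided at demand time.
    """
    fetched_early = 0
    for addr in predicted_addresses:
        if addr not in cache:      # would have missed at demand time
            cache.add(addr)        # fetch from memory ahead of use
            fetched_early += 1
    return fetched_early

cache = {0x100}                    # one line already resident
early = prefetch(cache, [0x100, 0x104, 0x108])
# 0x104 and 0x108 are now resident, so the later demand fetches hit.
```

On this sketch, checking the prefetch hit and miss rate amounts to checking how often the warmed cache is hit, which is consistent with the Examiner's observation that a prefetch is essentially a cache operation done in advance.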
ANALYSIS

First Issue

Claim 1 recites a method including the step of executing a homogeneous instruction set at each of:

    [plural] processing units having heterogeneous characteristics including at least a first and second of the plural processing units each having plural cores, the first processing unit having a first ratio of integer cores to floating point cores, the second processing unit having a second ratio of integer cores to floating point cores, the first ratio different from the second ratio.

Farkas teaches combining processing units having not only different sizes but different complexities. (See FF 1 & 2; see also Final Off. Act. 12-13). The Examiner correctly takes Official Notice that processing units were known to include both integer functional units and floating point functional units. (FF 9-11). Farkas also teaches that the type and proportion of resources, that is, functional units, to be included in each processing unit will depend on the instructions that the computing system will be expected to perform. (See FF 3). Because claim 1 does not positively recite the instruction sets to be executed by the plural processing units, one of ordinary skill in the art, based on the teachings of Farkas, would have had reason to include in a multiprocessor computer system processing units having different ratios of integer functional units to floating point functional units to perform tasks (or, in the language of claim 1, instruction sets) having different arithmetic requirements.

The Appellants' arguments are not persuasive of a different result. (See App. Br. 4-6; Reply Br. 2). Therefore, we sustain the rejection of independent claim 1 and dependent claims 2-5 and 8, which are not argued separately, under § 103(a) as being unpatentable over Farkas in view of facts taken by Official Notice.
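The four steps recited in claim 1 -- executing a homogeneous instruction set on each heterogeneous processing unit, monitoring a performance metric for each, comparing the metrics, and selecting the unit having the desired performance -- can be sketched as follows. The sketch is purely illustrative and forms no part of the record; the unit names, workload name, and performance figures are all hypothetical.

```python
# Illustrative sketch only (not part of the record) of the method of
# claim 1: run the same instruction set on each heterogeneous processing
# unit, monitor a performance metric for each, compare the metrics, and
# select the unit with the desired (here, highest) performance.

def select_processing_unit(units, instruction_set, run):
    """`run(unit, instruction_set)` is an assumed callback returning a
    performance metric, e.g. instructions per second (higher is better)."""
    metrics = {u: run(u, instruction_set) for u in units}   # execute + monitor
    return max(metrics, key=metrics.get)                    # compare + select

# Hypothetical figures: a floating-point-heavy workload runs faster on
# the unit with the greater proportion of floating point cores.
measured = {
    ("unit_a", "fp_workload"): 120.0,   # e.g. 3:1 integer-to-FP cores
    ("unit_b", "fp_workload"): 310.0,   # e.g. 1:1 integer-to-FP cores
}
best = select_processing_unit(["unit_a", "unit_b"], "fp_workload",
                              run=lambda u, s: measured[(u, s)])
```

On this reading, the different integer-to-floating-point core ratios matter only insofar as they produce different measured metrics for the same instruction set, which is the comparison the claim recites.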
Second Issue

Claim 9 recites a multiprocessor system including "a workload scheduler operable to simultaneously initiate a homogeneous instruction set on ... plural heterogeneous processing units." (App. Br. 11 (Claims App'x)). The Examiner rejects claims 9, 13, 14 and 16 under pre-AIA 35 U.S.C. § 102(b) as being anticipated by Farkas. (Final Act. 2). More specifically, the Examiner finds that paragraphs 15 and 21 describe the quoted limitation. (See Ans. 4-5 & 13-14; Final Off. Act. 3 & 12-13). The Examiner's finding turns on the interpretation of the phrase "simultaneously initiate a homogeneous instruction set."

The Specification uses the term "instruction set" in a nonstandard manner. The ordinary meaning of the term "instruction set" is the "set of machine instructions that a processor recognizes and can execute." (MICROSOFT COMPUTER DICTIONARY (Microsoft Press, Redmond, Washington, 5th edition 2002) ("instruction set"); see also McGRAW-HILL DICTIONARY OF SCIENTIFIC & TECH. TERMS (McGraw-Hill, 6th edition 2003) ("instruction set," def. 1: the "set of instructions which a computing or data-processing system is capable of performing")). This is the sense in which the Specification speaks of "a common set or subset of instructions and protocols" in paragraph 11. On the other hand, claim 9 recites a "performance analyzer operable to compare the performance metrics to select one of the plural heterogeneous processing units to execute the instruction set." Claims 8 and 18, for example, also explicitly refer to executing instruction sets. The Specification includes similar language. (See, e.g., Spec. paras. 5, 16, 19 & 21). This language implies that claim 9 is using the term "instruction set" to refer to executable code rather than to the universe of instructions a processing unit can execute.
Paragraphs 2, 15 and 21 of Farkas fail to describe "a workload scheduler operable to simultaneously initiate a homogeneous instruction set on the plural heterogeneous processing units." (See App. Br. 3-4; Reply Br. 1-2). For example, paragraph 2 describes multiple processes or threads running on multiple processors. It does not describe an identical ("homogeneous") instruction set running on multiple processing units.

The Appellants correctly argue that "[p]aragraph [15] does not state that a homogeneous instruction set is simultaneously initiated on plural heterogeneous processors." (App. Br. 4). The Appellants argue that this is the case because, "[a]lthough Farkas may show multiple processors executing the same application at the same time, Farkas does not disclose simultaneously initiating the same instruction set as recited by Claim 9." (Reply Br. 1). To the extent that this argument interprets the term "homogeneous instruction set" as being the same sequence of instructions, or an identical sequence of instructions, simultaneously initiated on plural heterogeneous processing units, the interpretation is consistent with the teachings of the Specification. For example, a comparison of the last two sentences of paragraph 16 of the Specification indicates that the Specification uses the words "homogeneous" and "identical" interchangeably to describe an instruction set. The term "homogeneous instruction set" may be contrasted with an "application," which may include one or more sequences of instructions, that is, threads[2] or processes.

Paragraph 15 of Farkas appears to use the term "instruction set" according to the standard meaning of the term, that is, to refer to the universe of instructions that a processor core is capable of executing. (See FF 1).
Although paragraph 15 explicitly says that "all [of the processor cores] execute the same instruction set," the paragraph as a whole describes the properties of the multi-core processing system 100 and not execution of code by the system. Because of this, it is more reasonable to read the words "all execute the same instruction set" as teaching that the processor cores in the multi-core processing system 100 should be capable of executing each instruction in a particular set, rather than as saying that all of the processor cores should execute identical sequences of instructions. A fair reading of paragraph 15 of Farkas does not describe any structure capable of "simultaneously initiat[ing] a homogeneous instruction set on the plural heterogeneous processing units." For this reason, paragraph 15 of Farkas does not describe the quoted limitation.

The Examiner cites paragraph 21 of Farkas as describing the quoted limitation (see Ans. 4; Final Off. Act. 3). The Examiner does not appear to explain the citation further in either the Final Office Action or the Answer. It suffices for present purposes to find that paragraphs 20, 21 and 23 of Farkas describe a method in which a job or instruction set initially assigned to a first processing unit may be switched to a second processing unit based on the performance of the first processing unit while executing the instruction set. (FF 6). Paragraph 21 describes serially, rather than simultaneously, initiating a homogeneous instruction set at two processing units.

[2] The last sentence of paragraph 19 of the Specification says that "workload scheduler 232 initiates an application thread." A comparison of this language with that of claim 9 indicates that an "instruction set" is akin to an application thread (and different from an application).
Therefore, we do not sustain the rejection of claims 9, 13, 14 and 16 under § 102(b) as being anticipated by Farkas. Furthermore, because the facts that the Examiner has taken by Official Notice, as detailed previously, in support of the rejections of claims 10-12 and 15 do not remedy the deficiencies in the disclosure of Farkas as applied to parent claim 9, we do not sustain the rejection of claims 10 and 11 under § 103(a) as unpatentable over Farkas in view of Official Notice.

Third Issue

Claim 17 recites, with italics added to indicate the limitation in dispute:

    17. A method for scheduling instruction sets for execution by a selected of plural heterogeneous processing units, each heterogeneous processing unit having a workload processing characteristic, the method comprising:

    associating a pending instruction set with a workload processing characteristic; and

    scheduling the instruction set for execution on the processing unit having the workload processing characteristic;

    wherein the workload processing characteristic comprises different proportions of prefetch engines at each heterogeneous processing unit.

(App. Br. 13 (Claims App'x)).

Farkas teaches that one processor unit performance metric to be monitored in a system that schedules instruction sets for plural heterogeneous processing units is cache miss rate. (See FF 8). This teaching would have provided one of ordinary skill in the art reason to include different numbers of prefetch engines in different processing units so as to optimize each processing unit for different tasks which the processing units might have been expected to perform. (See Final Off. Act. 10; FF 3, 12 & 13). The presence of prefetch engines, in turn, would have provided one of ordinary skill in the art reason to schedule tasks to processing units based, at least in part, on the number of prefetch engines.
(See Ans. 10).

The Appellants attempt to draw a distinction between the number of prefetch engines associated with each processing unit and the "proportions of prefetch engines at each heterogeneous processing unit," as recited in claim 17. (See App. Br. 6). The only "proportions of prefetch engines" discussed in the Specification appear to be the proportions of prefetch engines associated with one processing engine as opposed to another. (See Spec., para. 20). The Examiner's conclusion that it would have been obvious to associate different numbers of prefetch engines with different processing units necessarily would have resulted in associating different proportions of prefetch engines to the processing units. The Appellants' arguments are not persuasive.

Therefore, we sustain the rejection of independent claim 17 and dependent claims 18 and 19, which are not argued separately, under § 103(a) as being unpatentable over Farkas in view of facts taken by Official Notice. In addition, we sustain the rejection of claim 20 under § 103(a) on the basis of the reasons detailed here and in connection with the First Issue. For similar reasons, we sustain the rejection of claim 6, which depends from claim 1, and claim 12, which depends from claim 9, under § 103(a) as being unpatentable over Farkas in view of facts taken by the Examiner by Official Notice.

Fourth Issue

The Patent Owner argues the patentability of claim 7 separately from that of parent claim 1. The argument also applies to claim 15.

Claim 7 recites the "method of Claim 6 wherein the homogeneous instructions comprise operating system instructions, the performance metric comprises prefetch hit rates, and selecting comprises selecting the processing unit having a disproportionately greater number of prefetch engines to execute the operating system instructions."
The Appellants argue that "Farkas and Official Notice fail to teach, suggest or disclose selecting a processing unit with a disproportionately greater number of prefetch engines based upon a prefetch hit rate performance metric." (App. Br. 7). As discussed with respect to the Third Issue, one of ordinary skill in the art would have had reason to include different numbers of prefetch engines in different processing units so as to optimize each processing unit for different tasks which the processing units might have been expected to perform. (See Final Off. Act. 10; FF 3, 12 & 13). Farkas teaches monitoring cache miss rate. (FF 13). In view of this teaching, it would have been obvious to monitor a prefetch miss rate or its converse, a prefetch hit rate; and to schedule tasks to processing units on the basis of this metric. (See Final Off. Act. 8).

Therefore, we sustain the rejections of claims 7 and 15 under § 103(a) as being unpatentable over Farkas in view of facts taken by the Examiner by Official Notice.

DECISION

We AFFIRM the Examiner's decision rejecting claims 1-8 and 17-20.

We REVERSE the Examiner's decision rejecting claims 9-16.

No time period for taking any subsequent action in connection with this appeal may be extended under 37 C.F.R. § 1.136(a)(1). See 37 C.F.R. § 1.136(a)(1)(iv) (2013).

AFFIRMED-IN-PART