Ex parte Ludwig, No. 10/702,262 (B.P.A.I. Mar. 30, 2011)

UNITED STATES PATENT AND TRADEMARK OFFICE
____________
BEFORE THE BOARD OF PATENT APPEALS AND INTERFERENCES
____________
Ex parte LESTER F. LUDWIG
____________
Appeal 2009-009356
Application 10/702,262
Technology Center 2800
____________
Before JOSEPH F. RUGGIERO, JOHN A. JEFFERY, and MARC S. HOFF, Administrative Patent Judges.
JEFFERY, Administrative Patent Judge.

DECISION ON APPEAL

Appellant appeals under 35 U.S.C. § 134(a) from the Examiner’s rejection of claims 1-31. We have jurisdiction under 35 U.S.C. § 6(b). We affirm-in-part.

STATEMENT OF THE CASE

Appellant’s invention provides phase-staggered panning of multi-channel audio signals. See generally Spec. ¶¶ 0306-09. Claim 1 is illustrative:

1. A system for controlled staggered panning of multi-channel audio signals, said system comprising:
a master time-varying control signal;
a processor for generating at least three panning control signals responsive to a distinct phase-staggered function of said master time-varying control signal; and
at least three controllable audio signal panning elements individually structured for panning a separate incoming audio signal responsive to an associated one of said at least three panning control signals.

RELATED APPEALS

This appeal is said to be related to six other appeals in connection with Application Serial Numbers (1) 09/812,400 (Appeal No. 2009-002201); (2) 10/676,926 (Appeal No. 2009-006844); (3) 10/680,591 (Appeal No. 2009-008916); (4) 10/703,023 (Appeal No. 2009-010281); (5) 10/702,415 (Appeal No. 2009-008141); and (6) 11/040,163 (Appeal No. 2010-009424). App. Br. 3; Ans. 2.1 We previously decided four of these appeals (09/812,400, 10/676,926, 10/702,415, and 10/680,591). See Ex parte Ludwig, No.
2009-002201, 2009 WL 3793386 (BPAI 2009) (non-precedential) (reversing the Examiner’s anticipation and obviousness rejections); see also Ex parte Ludwig, No. 2009-006844, 2010 WL 4917799 (BPAI 2010) (non-precedential) (same); Ex parte Ludwig, No. 2009-008141, 2011 WL 486163 (BPAI 2011) (non-precedential) (reversing Examiner’s obviousness rejections); Ex parte Ludwig, No. 2009-008916, 2011 WL 794287 (BPAI 2011) (non-precedential) (affirming Examiner’s obviousness rejection in part).

1 Throughout this opinion, we refer to (1) the Appeal Brief filed August 20, 2008; (2) the Examiner’s Answer mailed November 14, 2008; and (3) the Reply Brief filed January 15, 2009.

CITED REFERENCES

The Examiner relies on the following as evidence of unpatentability:

Sgroi, US 5,357,048, Oct. 18, 1994
Kinoshita, US 5,734,724, Mar. 31, 1998

THE REJECTIONS

1. The Examiner rejected claims 1-8, 12, 14-21, 25, and 27-31 under 35 U.S.C. § 102(b) as anticipated by Kinoshita. Ans. 3-5.
2. The Examiner rejected claims 9-11, 13, 22-24, and 26 under 35 U.S.C. § 103(a) as unpatentable over Kinoshita and Sgroi. Ans. 6.

THE ANTICIPATION REJECTION

Regarding independent claim 1, the Examiner finds that Kinoshita discloses a system for controlled staggered panning in Figure 16 with every recited feature, including a “master time-varying control signal” which is said to be produced by Kinoshita’s “signal processing control part” 20 by detecting various connected terminals over time. Ans. 3-5, 7. The Examiner also finds that Kinoshita’s “parameter setting part” 14C generates at least three “panning control signals” that are said to be responsive to a “phase-staggered function” of the parameter setting part’s “master time-varying control signal.” Ans. 3-5, 7-9. Moreover, Kinoshita’s “sound image processing parts” 8-1 to 8-N are said to each pan separate incoming audio signals responsive to an associated panning control signal. Id.
Appellant argues that Kinoshita’s “signal processing control part” 20 does not produce a “master time-varying control signal” as claimed since Kinoshita’s signal from element 20 is based on detecting the number of conference participants—an event-driven factor that is said to be unrelated to time. App. Br. 16; Reply Br. 2-3. Appellant also contends that not only does Kinoshita’s “parameter setting part” 14C fail to generate signals—let alone panning control signals—Kinoshita’s parameters are also not responsive to a phase-staggered function of the master time-varying control signal as claimed. App. Br. 17-24; Reply Br. 3-6.

Appellant also argues that Kinoshita does not disclose (1) varying a common time-varying control signal (a) periodically (claim 15), or (b) according to a triggered transient signal (claim 16); (2) providing each panning control signal to a mixer unit to produce plural mixed output signals (claim 17); (3) implementing the recited method within one signal processing layer of a multi-layered signal processing system (claim 25); and (4) adjusting amplitude of a time-varying value of each of the panning control signals (claim 30). The issues before us, then, are as follows:

ISSUES

Under § 102, has the Examiner erred by finding that Kinoshita discloses:
(1) a master time-varying control signal as recited in claim 1?
(2) a processor for generating at least three panning control signals responsive to a distinct phase-staggered function of the master time-varying control signal as recited in claim 1?
(3) at least three controllable audio signal panning elements individually structured for panning a separate incoming audio signal responsive to an associated panning control signal as recited in claim 1?
(4) that a common time-varying control signal varies (a) periodically (claim 15), or (b) according to a triggered transient signal (claim 16)?
(5) providing each panning control signal to a mixer unit to produce plural mixed output signals as recited in claim 17?
(6) implementing the recited method within one signal processing layer of a multi-layered signal processing system as recited in claim 25?
(7) adjusting amplitude of a time-varying value of each of the panning control signals as recited in claim 30?

FINDINGS OF FACT (FF)

1. Kinoshita’s system controls audio signal processing in a multi-point teleconference via a communication network. To this end, conference participants’ terminals (e.g., TM-1 to TM-4) connect to audio communication control unit 100 (comprising switching part 11 and audio signal mixing control part 10) via network 40. Kinoshita, col. 1, ll. 4-8; col. 5, l. 17 – col. 6, l. 7; Fig. 5. Kinoshita’s multi-point teleconferencing system with an audio communication control unit in Figure 5 is reproduced below:

[Figure 5: Kinoshita’s multi-point teleconferencing system with an audio communication control unit]

2. In one embodiment, Kinoshita’s audio communication control unit 100 processes audio signals of N participants using different sets of acoustic transfer functions as sound image control parameters to localize the participants’ reproduced sounds at different spatial positions. To this end, switching part 11 selects “J” communication lines from an unspecified number of communication lines 40, where 1 ≤ J ≤ M (the number of terminals simultaneously connected to the network). “Signal processing control part” 20 receives a connection confirm signal and similar control signals that are transmitted from respective terminals via switching part 11. Signal processing control part 20 (1) detects the number “M” of connected terminals from such control signals, and (2) sends the detected number to (a) “amplification factor setting part” 35, and (b) “parameter setting part” 14C. Kinoshita, col. 15, l. 6 – col. 16, l. 53; Fig. 16.
Kinoshita’s audio communication control unit in Figure 16 is reproduced below:

[Figure 16: Kinoshita’s audio communication control unit]

3. Parameter setting part 14C sets acoustic transfer functions (HJL(θJ) and HJR(θJ)) corresponding to target spatial positions that “sound image processing parts” 8-1 to 8-N convolve with audio signals received from associated amplifiers 36-1 to 36-N to produce respective left and right audio signals. These audio signals are then sent to corresponding left and right mixers 5L, 5R of “mixing part” 15. Kinoshita, col. 6, ll. 50-56; col. 15, ll. 25-44; col. 16, ll. 7-53; Figs. 16-17. Kinoshita’s sound image processing part 8-1 and its convolution functions are shown in Figure 17 reproduced below:

[Figure 17: Kinoshita’s sound image processing part 8-1 and its convolution functions]

4. In another embodiment, Kinoshita’s system accounts for changes in the number of terminals participating in a teleconference by updating the target positions of sounds originating from the remaining terminals. The corresponding sets of acoustic transfer functions (HL(θJ) and HR(θJ)) are set accordingly. Kinoshita, col. 26, l. 53 – col. 27, l. 29; Fig. 25.

5. According to Appellant’s Specification:

The invention provides for a much more homogeneous method for multi-channel periodic-sweep auto-panning, namely that of arranging the signal pan images in a phase-staggered constellation swept by a single modulating sweep oscillator. A simple example is that of stereo cross-panning where two input signals pan between stereo speakers in synchronized complementary directions. Another example is that of staggering the phases of a multiple phase output modulating sweep oscillator in some preassigned arrangement, such as offset from each other by a common phase-offset value.
This may be used to pan the sounds from each individual vibrating element so that the individual panned sound images follow one another between two speakers.

Spec. ¶ 0308.

ANALYSIS

Claims 1-13 and 27-29

We begin by construing a key disputed limitation of claim 1 which calls for, in pertinent part, a “master time-varying control signal.” We emphasize the term “time-varying” here, for the Examiner and Appellant reach opposite conclusions regarding whether the signal from Kinoshita’s “signal processing control part” 20—a signal that is based on detecting the number of conference participants—is “time-varying.”

Based on the record before us, we see no reason why the signal from Kinoshita’s “signal processing control part” 20 cannot be “time-varying” in view of the term’s scope and breadth. Notably, Appellant does not squarely define the term “time-varying” in the Specification, let alone a “time-varying control signal.” We therefore construe a “time-varying control signal” to include control signals where at least one aspect or parameter of the signal varies with time.2, 3

2 Accord McGraw-Hill Dictionary of Elec. & Comp. Eng’g 588 (2004) (defining “time-varying system” as “[a] system in which certain quantities governing the system’s behavior change with time, so that the system will respond differently to the same input at different times.”); see also Comprehensive Dictionary of Electrical Engineering 694 (Phillip A. Laplante ed.) (2d ed. 2005) (defining “time-invariant system” as “the system in which the parameters are stationary with respect to time during the operation of the system.”). But cf. Ferrel G. Stremler, Introduction to Communication Systems (2d ed. 1982) (“A system is time-invariant if a time shift in the input results in a corresponding time shift in the output . . . .” and further noting that “[t]he output of a time-invariant system depends on time differences and not on absolute values of time.
Any system not meeting this requirement is said to be time-varying.”).

3 Although some of the cited references in n.2 supra were published after the effective filing date of Appellant’s invention, we see no difficulty in referring to these standard engineering sources here, for Appellant likewise cites definitions published after the effective filing date. See, e.g., App. Br. 22.

Despite Appellant’s contention that Kinoshita discloses an event-varying signal—not a time-varying signal (App. Br. 16; Reply Br. 2-3)—nothing in claim 1 precludes the signal from Kinoshita’s “signal processing control part” 20 from being “time-varying” since it is based on aspects that change over time, namely the detected number of conference participants. See FF 2. Simply put, the number of conference participants—a key factor in formulating the signal produced from Kinoshita’s “signal processing control part” 20 (id.)—can change over time, a fact confirmed by Kinoshita’s alternative embodiment, which accounts for these changes in a particular way, namely by updating the corresponding transfer functions. FF 4.

Appellant’s contention that this change in participants in Kinoshita does not produce a time-varying control signal (App. Br. 16) is unavailing. Since the value of “M” (the detected number of participants) changes over time, that change will likewise revise the corresponding control signal from Kinoshita’s “signal processing control part” 20. See FF 2, 4. Therefore, this control signal is “time-varying” in that sense.

And we see no reason why Kinoshita’s “parameter setting part” 14C does not generate at least three “panning control signals” as the Examiner indicates (Ans. 3-5, 7-9), particularly in view of the term’s scope and breadth. First, we agree with the Examiner that the outputs from parameter setting part 14C are “signals,” for they electrically transfer parameter-based information from the parameter setting part 14C to the respective sound image processing parts 8-1 to 8-N. See FF 2-3. Indeed, it is difficult to envision any other way to electrically transfer this parameter-based information to Kinoshita’s sound image processing parts other than via signals.

Appellant’s reliance on Wikipedia to support the contention that signals are different than parameters (App. Br. 20-21) is unavailing. Leaving aside the fact that Wikipedia is a non-peer-reviewed Internet source of dubious reliability with little probative value,4 the fact that parameters may be static (as Appellant contends) does not obviate their transmission via signals to the sound image processing parts. Even assuming, without deciding, that the parameters used to set the acoustic transfer functions in Kinoshita are static, these parameters will nevertheless be transmitted via signals. See FF 2-3. And since these control signals dictate the relative spatial position of the audio signal in the stereo field (i.e., the relative left and right channels of the stereo field), the signals fully meet “panning control signals” as claimed.

4 See, e.g., Ex parte Three-Dimensional Media Group, Ltd., No. 2009-004087, 2010 WL 3017280 (BPAI 2010) (non-precedential), at *17 (“Wikipedia is generally not considered to be as trustworthy as traditional sources for several reasons, for example, because (1) it is not peer reviewed; (2) the authors are unknown; and (3) apparently anyone can contribute to the source definition”).

We reach this conclusion even assuming, without deciding, that the left and right audio signals produced by Kinoshita’s sound image processing parts involve switching between channels or otherwise result in all audio being directed to one channel (left or right) relative to the other as Appellant suggests (App. Br. 22-24). Even in these extreme cases, the audio signals are
still panned—albeit “hard left” or “hard right.”5 In any event, Appellant’s contention that switching between audio channels somehow teaches away from panning is unavailing not only in view of the art’s recognition of “hard” panning, but also since such “teaching away” arguments are irrelevant to anticipation.6

5 The concept of panning—including “hard” panning—is described below:

There’s nothing mysterious about panning. Every channel is monaural, and when two output channels are used as a stereo pair, the panning function allows you to alter the percentage of the signal that is sent through each . . . . This, in turn, determines where the signal will image. For example, when the pan is centered, the volume of the left channel and the right channel are equal, and the instrument will image in the center. If you move the pan slightly to the left, you increase the volume of the left channel and decrease the volume of the right channel, and the instrument images farther to the left. Finally, if you position the pan “hard left,” meaning all the way to the left side, you maximize the volume of the left channel and completely cut off the right channel, and the instrument images to the far left.

Peter McIan & Larry Wichman, The Musician’s Guide to Home Recording 237 (1994).

6 “‘[T]eaching away’ is irrelevant to anticipation.” Leggett & Platt, Inc. v. VUTEk, Inc., 537 F.3d 1349, 1356 (Fed. Cir. 2008) (citation omitted).

That said, however, we do not find that Kinoshita’s panning control signals from “parameter setting part” 14C are responsive to a distinct phase-staggered function of the master time-varying control signal as claimed. Although Appellant does not squarely define the term “phase-staggered,” Appellant nevertheless indicates in the Specification that by staggering the phases of a signal (e.g., from a modulating sweep oscillator) in a predetermined arrangement, sounds can be panned dynamically such that individual panned sounds follow one another between speakers. FF 5. Although not limiting, this description nevertheless informs our construction of the term “phase-staggered function” which, at a minimum, must involve some sort of staggering of signal phases.

Although the Examiner’s Answer is not a model of clarity on this point, the Examiner apparently takes the position that since Kinoshita provides at least three outputs from different, unspecified “sound processing units,” a phase-staggered output is allegedly provided. Ans. 8 (emphasis added). Presumably, the Examiner’s position is based on the multiple outputs from “sound image processing parts” 8-1 to 8-N as being “phase-staggered.” See id.

But the claim requires that the panning control signals (which the Examiner equates to the signals generated by the “parameter setting part” 14C) are responsive to a distinct phase-staggered function of the master time-varying control signal (which the Examiner equates to the signal from “signal processing control part” 20). See Ans. 7. Since these panning control signals are recited as “responsive to” a phase-staggered function of the “master time-varying control signal,” to produce this response, the phase-staggered function must therefore be upstream from the device that generates the panning control signals, namely the “parameter setting part” 14C. The Examiner’s basis for the rejection is therefore problematic for this reason alone, for the Examiner relies on the outputs from “sound image processing parts” 8-1 to 8-N in connection with the recited “phase-staggered function”—components that are downstream from the “parameter setting part” 14C.
Nevertheless, the Examiner simply fails to demonstrate how the “master time-varying control signal” generated by Kinoshita’s “signal processing control part” 20 has distinct phases at all—let alone that they are staggered in a predetermined fashion as would be the case with a distinct phase-staggered function of such a signal.7 See FF 5.

7 See generally William H. Hayt, Jr. & Jack E. Kemmerly, Engineering Circuit Analysis 268-70 (3d ed. 1978) (discussing phase relationships of sinusoidal signals).

We are therefore persuaded that the Examiner erred in rejecting independent claim 1 and dependent claims 2-8, 12, and 27-29 for similar reasons. Since our decision is dispositive regarding our reversing independent claim 1, we need not address Appellant’s separate arguments regarding dependent claims 2-4, 8, 12, and 28 (App. Br. 24-27; Reply Br. 6-8).8

8 We do, however, address commensurate arguments made in connection with claims depending on independent claim 14 since, as noted infra, we affirm the Examiner’s rejection of that claim.

Claim 14

We will, however, sustain the Examiner’s rejection of independent claim 14. Claim 14 recites “[a] method of controlled staggered panning of multi-channel audio signals” with limitations similar to those recited in claim 1, but lacks claim 1’s requirement that the panning control signals are responsive to a distinct phase-staggered function of a time-varying control signal.

It is this distinction that is dispositive, for nothing in claim 14 precludes Kinoshita’s generating panning control signals using an associated function of a common time-varying control signal noted above. Our analysis regarding the commensurate features in claim 1 applies equally here. We are therefore not persuaded that the Examiner erred in rejecting claim 14.
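Purely as an illustration, and not part of the record: the phase-staggered auto-panning arrangement described in Appellant’s Specification (FF 5)—several pan images driven by a single modulating sweep oscillator, with phases offset from each other by a common phase-offset value—together with the linear pan law described in n.5, can be sketched as follows. All function names and parameter values here are hypothetical, chosen only for the sketch.

```python
import math

def panning_control_signals(t, num_channels=3, sweep_hz=0.25,
                            phase_offset=2 * math.pi / 3):
    """Pan positions in [0, 1] for each channel at time t (seconds).

    A single master sweep-oscillator phase is evaluated at per-channel
    phase offsets, so each panning control signal is a distinct
    phase-staggered function of the one master signal (cf. FF 5).
    """
    master = 2 * math.pi * sweep_hz * t  # master time-varying phase
    return [0.5 + 0.5 * math.sin(master + i * phase_offset)
            for i in range(num_channels)]

def pan_gains(p):
    """Linear pan law per the description in n.5: position p in [0, 1];
    p = 0 is 'hard left', p = 1 is 'hard right', p = 0.5 is centered."""
    return (1.0 - p, p)  # (left gain, right gain)
```

Because the pan positions differ only by fixed per-channel offsets applied to one master phase, the panned images “follow one another” between the two speakers in the sense the Specification describes.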
Claim 15

We will not, however, sustain the Examiner’s rejection of claim 15 which recites that the common time-varying control signal varies periodically. We cannot say—nor has the Examiner shown—that the time-varying signal from Kinoshita’s “signal processing control part” necessarily varies periodically—it may vary in a non-periodic (or even random) fashion. See FF 2-4. Indeed, even if it is probable that this control signal varies periodically, that too is insufficient for inherent anticipation.9 We are therefore persuaded that the Examiner erred in rejecting claim 15.

9 “Inherency . . . may not be established by probabilities or possibilities. The mere fact that a certain thing may result from a given set of circumstances is not sufficient.” In re Robertson, 169 F.3d 743, 745 (Fed. Cir. 1999) (citations omitted).

Claim 16

We will, however, sustain the Examiner’s rejection of claim 16 calling for the common time-varying control signal to vary according to a triggered transient signal. Since the value of “M” (the detected number of participants) in Kinoshita changes over time, that change will likewise revise the corresponding control signal from Kinoshita’s “signal processing control part” 20. See FF 2, 4. This detection of participants would be based, at least in part, on signals associated with switching part 11 that are sent to signal processing control part 20. See id. Since these participant-based signals would be (1) triggered at least by the participants’ connection to the conference, and (2) transient by their very nature (and therefore comport with the Examiner’s undisputed definition of “transient” (Ans. 9)), they constitute “triggered transient signals” which would vary the common time-varying control signal as noted previously. See id. We are therefore not persuaded that the Examiner erred in rejecting claim 16.
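For illustration only (the names and values below are hypothetical, not drawn from the record), the contrast between claims 15 and 16 can be sketched as two ways a control value may vary with time: periodically, versus only when a triggered transient event—such as a participant connecting—occurs.

```python
import math

def periodic_control(t, freq_hz=1.0):
    """Claim 15 sense: the control value is a periodic function of time."""
    return math.sin(2 * math.pi * freq_hz * t)

def event_driven_control(connect_times, t):
    """Claim 16 sense, as read on Kinoshita (cf. FF 2, 4): the control
    value changes only when a transient event fires — here, the count
    "M" of terminals that have connected by time t."""
    return sum(1 for ct in connect_times if ct <= t)
```

The first value changes continuously and repeats; the second is piecewise constant and steps only at event times, which is why the periodic variation of claim 15 could not be found inherent in Kinoshita while the event-triggered variation of claim 16 could.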
Claims 17-21

We will not, however, sustain the Examiner’s rejection of claim 17 which recites providing each panning control signal to a mixer unit to produce plural mixed output signals. Although the Examiner alleges that Kinoshita’s panning control signals are provided to mixer unit (5L, 5R) (Ans. 4), these identified control signals are actually provided to the sound image processing parts (8-1 to 8-N)—not the mixing part 15 which includes the relied-upon left and right mixers 5L, 5R. See FF 2-3.

To the extent that the Examiner’s position is based on Kinoshita’s sound image processing parts as constituting part of the recited “mixer unit” (e.g., the “mixer unit” collectively constituting the (1) sound image processing parts 8-1 to 8-N, and (2) mixing part 15), that position has simply not been articulated on this record. Nor will we speculate in this regard here in the first instance on appeal.

We are therefore persuaded that the Examiner erred in rejecting claim 17, and dependent claims 18-21 for similar reasons. Since our decision is dispositive regarding our reversing claim 17, we need not address Appellant’s separate arguments regarding dependent claim 21 (App. Br. 26; Reply Br. 7).

Claim 25

We will, however, sustain the Examiner’s rejection of claim 25 which calls for implementing the method within one signal processing layer of a multi-layered signal processing system. We see no error in the Examiner’s mapping each sound image processing part 8-1 to 8-N in Kinoshita as corresponding to “signal processing layers” (Ans. 10), particularly in view of their commensurate parallel signal-processing functionality with respect to the number of detected conferees. See FF 2-3. We are therefore not persuaded that the Examiner erred in rejecting claim 25.

Claims 30 and 31

We will not, however, sustain the Examiner’s rejection of claim 30 which recites adjusting amplitude of a time-varying value of each of the panning control signals.
The Examiner has not shown—nor can we find—anything in Kinoshita indicating that the amplitude of each panning control signal (i.e., the signals from “parameter setting part” 14C) is adjusted, let alone adjusting the amplitude of a time-varying value of these control signals as claimed. As Appellant indicates (App. Br. 27; Reply Br. 8), the Examiner’s reliance on the alleged inherent “control portion” of an audio signal in Kinoshita (Ans. 10) is inapposite, for the recited amplitude adjustment pertains not to audio signals, but rather to the panning control signals that (1) were identified as generated by “parameter setting part” 14C (Ans. 7), and (2) are located upstream from the generated audio signals from sound image processing parts 8-1 to 8-N. See FF 2-3.

To the extent that the Examiner’s position is based on some other audio signals in Kinoshita as constituting at least in part the recited “panning control signals,” that position has simply not been clearly articulated on this record, and, in any event, conflicts with the Examiner’s previous mapping of these control signals to those generated by Kinoshita’s parameter setting part 14C.10 Nor will we speculate in this regard here in the first instance on appeal.

10 Compare Ans. 7 (noting that “[e]lement 14C generates at least three panning control signals”) with Ans. 10 (alleging that “audio signals will inherently have a control portion to define the output produced by the signal”) (emphases added).

We are therefore persuaded that the Examiner erred in rejecting claim 30, and dependent claim 31 for similar reasons.

THE OBVIOUSNESS REJECTION

The Examiner finds that Kinoshita discloses every recited feature except for (1) controlling various recited functions via incoming MIDI signals (claims 22-24), and (2) implementing the recited method within a spatially-distributed timbral realization system (claim 26).
The Examiner, however, cites Sgroi as teaching these features in concluding the claims would have been obvious. Ans. 6, 10.

Regarding claims 22-24, Appellant argues that the Examiner’s reliance on Sgroi is flawed since not only does it fail to teach or suggest the recited control functions, but there is also no reason to combine the references since, among other things, (1) Kinoshita’s teleconferencing system has no need for MIDI, and (2) Sgroi’s MIDI controller has no place in Kinoshita’s teleconferencing system. App. Br. 27-28; Reply Br. 8-9. Appellant reiterates the improper combinability argument regarding claim 26. App. Br. 28. The issue before us, then, is as follows:

ISSUE

Under § 103, has the Examiner erred in rejecting claims 22-24 and 26 by finding that Kinoshita and Sgroi collectively would have taught or suggested the recited limitations of these claims? This issue turns on whether the Examiner’s reason to combine the teachings of these references is supported by articulated reasoning with some rational underpinning to justify the Examiner’s obviousness conclusion.

ADDITIONAL FINDINGS OF FACT

6. Sgroi’s MIDI controller can automatically randomize four key elements of sound (i.e., timbre, pitch, volume, and dynamic response) on multiple MIDI channels to produce new sounds. Sgroi, Abstract.

7. Sgroi’s Figure 2 illustrates a MIDI electronic music system including (1) MIDI source 22 (e.g., a MIDI controller, synthesizer, sequencer, etc.); (2) MIDI sound generators 24; and (3) sound system 26. Sgroi, col. 3, ll. 17-27, 50-55; Fig. 2.

8. Sgroi’s Figure 2 shows (1) a MIDI data path between MIDI source 22 and MIDI sound generators 24 (dashed arrows), and (2) an audio signal path between the MIDI sound generators 24 and sound system 26 (solid arrows). Sgroi, Fig. 2.

9. Sgroi’s Figure 3 shows a MIDI-controllable electronic synthesizer. A MIDI signal 46 is routed to MIDI data processor 30.
Notes are generated by waveform generators 34 and modified by waveform modifiers 36-40. Then, the outputs are summed 42 to produce the audio output 44. Sgroi, col. 3, ll. 28-49; Fig. 3.

10. Sgroi’s Figure 4 illustrates a MIDI controller 51 comprising (1) note modifiers 48; (2) switch inputs 52; and (3) keyboard 56 whose outputs are routed to scanner 54. The scanner (1) generates events (e.g., note on, note off, timbre change, volume change, pitch bend, etc.) based on changes on the received inputs, and (2) writes the events to event compiler 58 (a buffer). Sgroi, col. 3, l. 56 – col. 4, l. 23; Fig. 4.

11. Sgroi’s MIDI controller processor 62 implements a series of subroutines that convert events into MIDI commands. Specifically, processor 62 (1) retrieves the next event from the event compiler; (2) executes the subroutine corresponding to that event; and (3) transmits the appropriate MIDI command from the MIDI Out port 66. Sgroi, col. 5, ll. 1-58; Fig. 4.

12. Sgroi’s MIDI controller 51 includes a randomizer 64 that performs a high level additive synthesis where entire timbres (waveforms and modifiers) are combined to create new sounds. This is fulfilled through the MIDI standard protocol. Sgroi, col. 6, ll. 25-29; Fig. 4.

13. Sgroi enables changing timbre through MIDI. Sgroi, col. 7, l. 47 – col. 8, l. 26; Figs. 7-8.

ANALYSIS

We will not sustain the Examiner’s rejection of claims 22-24 and 26 essentially for the reasons indicated by Appellant. App. Br. 27-28; Reply Br. 8-9. Although MIDI is a well known musical instrument digital interface protocol as the Examiner indicates (Ans. 10)—a fact amply evidenced by Sgroi’s MIDI generation and control functions (FF 6-12)—that hardly means that it would have been obvious to use these kinds of musical control signals in Kinoshita’s teleconferencing system. Compare FF 1-4 with FF 6-12.
As its name suggests, Kinoshita’s teleconferencing system is a technology used for an entirely different purpose, namely teleconferencing, and has nothing to do with music, let alone MIDI. To say that skilled artisans could somehow use MIDI control signals in Kinoshita’s teleconferencing system as the Examiner proposes simply strains reasonable limits.

To be sure, patentability under § 103 requires that an improvement must be more than the predictable use of prior art elements according to their established functions. KSR Int’l Co. v. Teleflex Inc., 550 U.S. 398, 417 (2007). But here, we cannot say—nor has the Examiner shown—that utilizing Sgroi’s MIDI control signals as proposed (FF 6-13) would yield a predictable result regarding the recited control functions in claims 22-24 in Kinoshita’s teleconferencing system that has nothing to do with music generation whatsoever—let alone MIDI protocols.

Nor has the Examiner shown why skilled artisans would have a rational basis for applying Sgroi’s MIDI-based timbral change capabilities to Kinoshita’s teleconferencing system. Compare FF 1-4 with FF 13. Here again, we see no rational basis on this record why skilled artisans would apply a MIDI-based scheme such as that in Sgroi to Kinoshita’s teleconferencing system absent impermissible hindsight using Appellant’s own disclosure as a blueprint.11

We are therefore persuaded that the Examiner erred in rejecting claims 22-24 and 26. For similar reasons, we likewise reverse the Examiner’s obviousness rejection of claims 9-11 and 13. Nor has the Examiner shown that Sgroi cures the previously-noted deficiencies of Kinoshita even if these references were combinable—which they are not.

CONCLUSION

Under § 102, the Examiner did not err in rejecting claims 14, 16, and 25, but erred in rejecting claims 1-8, 12, 15, 17-21, and 27-31. Under § 103, the Examiner erred in rejecting claims 9-11, 13, 22-24, and 26.
11 “It is impermissible to use the claimed invention as an instruction manual or ‘template’ to piece together the teachings of the prior art so that the claimed invention is rendered obvious . . . .” In re Fritch, 972 F.2d 1260, 1266 (Fed. Cir. 1992).

ORDER

The Examiner’s decision rejecting claims 1-31 is affirmed-in-part. No time period for taking any subsequent action in connection with this appeal may be extended under 37 C.F.R. § 1.136(a)(1)(iv).

AFFIRMED-IN-PART

pgc

Notice of References Cited (PTO-892), Application/Control No. 10/702,262, Examiner Marlo Fletcher, Art Unit 2800 — Non-Patent Documents:

U. William H. Hayt, Jr. & Jack E. Kemmerly, Engineering Circuit Analysis 268-70 (3d ed. 1978).
V. Ferrel G. Stremler, Introduction to Communication Systems (2d ed. 1982).

A copy of these references is not being furnished with this Office action. See MPEP § 707.05(a).