Ex parte Aarts, Appeal 2014-008694, Application 12/996,034 (P.T.A.B. Aug. 31, 2016)

UNITED STATES PATENT AND TRADEMARK OFFICE

APPLICATION NO.: 12/996,034
FILING DATE: 12/03/2010
FIRST NAMED INVENTOR: Ronald M. Aarts
ATTORNEY DOCKET NO.: 2008P00626WOUS
CONFIRMATION NO.: 5089

24737 7590 09/02/2016
PHILIPS INTELLECTUAL PROPERTY & STANDARDS
465 Columbus Avenue, Suite 340
Valhalla, NY 10595

UNITED STATES DEPARTMENT OF COMMERCE
United States Patent and Trademark Office
Address: COMMISSIONER FOR PATENTS, P.O. Box 1450, Alexandria, Virginia 22313-1450
www.uspto.gov

EXAMINER: DOUGHERTY, SEAN PATRICK
ART UNIT: 3736
NOTIFICATION DATE: 09/02/2016
DELIVERY MODE: ELECTRONIC

Please find below and/or attached an Office communication concerning this application or proceeding. The time period for reply, if any, is set in the attached communication. Notice of the Office communication was sent electronically on the above-indicated "Notification Date" to the following e-mail addresses:

marianne.fox@philips.com
debbie.henn@philips.com
patti.demichele@Philips.com

PTOL-90A (Rev. 04/07)

UNITED STATES PATENT AND TRADEMARK OFFICE

BEFORE THE PATENT TRIAL AND APPEAL BOARD

Ex parte RONALD M. AARTS

Appeal 2014-008694
Application 12/996,034
Technology Center 3700

Before DONALD E. ADAMS, JEFFREY N. FREDMAN, and TIMOTHY G. MAJORS, Administrative Patent Judges.

PER CURIAM

DECISION ON APPEAL

This is an appeal1 under 35 U.S.C. § 134 involving claims to an acoustical patient monitoring system using a classifier and a microphone. The Examiner rejected the claims on the grounds of anticipation and obviousness. We have jurisdiction under 35 U.S.C. § 6(b). We affirm.

1 Appellant identifies the Real Party in Interest as Koninklijke Philips Electronics N.V. (see App. Br. 1).
Statement of the Case

Background

Appellant's invention relates to "a patient monitoring system [that] includes one or more microphones that detect acoustic events generated by a patient and generate signals comprising information describing the acoustic events, a processor that timestamps the acoustic event signals, and a classifier that classifies each acoustic event signal into one of a plurality of acoustic event classes" (Spec. 2:3-7).

The Claims

Claims 1-11 and 13-20 are on appeal. Independent claim 1 is representative and reads as follows (emphasis added):

1. A patient monitoring system, including:
one or more microphones that detect acoustic events generated by a patient, and generate signals comprising information describing the acoustic events;
a processor that timestamps the acoustic event signals;
a classifier that classifies each acoustic event signal into one of a plurality of acoustic event classes; and
a memory that stores computer-executable instructions that are executed by the processor, including instructions for:
filtering the acoustic event signal and identifying signatures associated with the acoustic event signal prior to digitizing the acoustic event signal;
digitizing the acoustic event signal;
classifying the acoustic event signal into one of a plurality of acoustic event classes as a function of the identified signatures.

The Issues

A. The Examiner rejected claims 1-7 under 35 U.S.C. § 102(b) as being anticipated by Shalon2 (Ans. 2-4).

B. The Examiner rejected claims 8-11 and 13-20 under 35 U.S.C. § 103(a) as obvious over Shalon, Meyer,3 and Thiagarajan4 (Ans. 6-8).

A.
35 U.S.C. § 102(b) over Shalon

The Examiner finds that Shalon teaches

a classifier that classifies each acoustic event signal into one of a plurality of acoustic event classes (deriving an activity related signature therefrom, thereby enabling classification of a specific activity associated with the non-verbal acoustic energy, paragraph 0010; distinction between, for example, chewing and swallowing, paragraph 0261); and filtering the acoustic event signal (a preprocessing stage filters out noise, paragraph 0240) and identifying signatures associated with the acoustic event signal prior to digitizing the acoustic event signal (a preprocessing module detects the presence of eating activity and automatically conditions the signal using automatic gain control on the analog signal prior to being digitized [emphasis added], paragraph 0254) . . . .

(Ans. 3.)

The issue with respect to this rejection is: Does the evidence of record support the Examiner's findings that Shalon anticipates Appellant's claimed invention?

2 Shalon et al., US 2006/0064037 A1, published Mar. 23, 2006.
3 Meyer et al., US 6,063,043, issued May 16, 2000.
4 Thiagarajan, US 7,806,833 B2, issued Oct. 5, 2010.

Findings of Fact

1. Shalon teaches

a system for detecting activity related to non-verbal acoustic energy generated by a subject comprising: (a) a sensor mountable on or in a body region of the subject, the sensor being capable of sensing the non-verbal acoustic energy; and (b) a processing unit being capable of processing the non-verbal acoustic energy sensed by the sensor and deriving an activity related signature therefrom, thereby enabling classification of a specific activity associated with the non-verbal acoustic energy.

(Shalon ¶ 10; see also Ans. 2-3.)

2.
Shalon teaches

Acoustic energy generated by chewing, swallowing, biting, sipping, drinking, teeth grinding, teeth clicking, tongue clicking, tongue movement, jaw muscles or jaw bone movement, spitting, clearing of the throat, coughing, sneezing, snoring, breathing rate, breathing depth, nature of the breath, heart beat, digestion, motility to or through the intestines, tooth brushing, smoking, screaming, user's voice or speech, other user generated sounds, and ambient noises in the user's immediate surroundings can be monitored through one or more sensors (e.g. microphones) positioned in or around the ear area, on the skull, neck, throat, chest, back or abdomen regions. . . . Microphones in different positions or orientations can be tuned to detect sounds originating within the user's body as opposed to ambient sounds surrounding the user. . . . Each microphone can be optimized to receive a specific range of sound frequencies corresponding to the signal to be measured. The sensing element can be designed to be sensitive to a wide range of frequencies of the acoustic energy generated in the head region, ranging from approximately 0.001 hertz up to approximately 100 kilohertz. The sensing element can be sensitive to just a narrow range of frequencies and a multiplicity of sensing elements used to cover a broader range of frequencies.

(Shalon ¶ 117; see also Ans. 4.)

3. Shalon teaches

The sensed ingestion activity can be transmitted without being preprocessed in which case it can represent a simple time and duration of ingestion, intensity and rate of ingestion, count of bites, chews, swallows, etc., over a time period. If desired, transmitted non-processed data, such as by way of example raw audio recordings of ingestion sounds, can be processed by the third party to derive activity related signatures etc.

(Shalon ¶ 192; see also Ans. 3.)

4.
Shalon teaches

The bone conduction microphone is designed to sense the acoustic energy generated within the mouth during eating. The microphone's analogue electrical output is transmitted to processing unit 14 for signal processing. A preprocessing stage filters out noise, normalizes the energy level, and segments the sampled sound into analysis frames. Features are then extracted from the signal using spectral signature analysis to identify waveforms with eating microstructure events (signatures). The extracted components are then evaluated by a statistical classifier that combines the observed data (the features) with prior information about the patterns to segment the input data into specific event categories such as chews, sips, and speech. The extracted acoustic energy patterns are then mapped into food intake events.

(Shalon ¶ 240; see also Ans. 3.)

5. Shalon teaches

Sensor unit 12 (bone conduction microphone in this case) records the sounds made by chewing, swallowing, biting, sipping, and drinking. The salient acoustic features are extracted using a statistical-based pattern recognition system to classify the sounds into specific events. The output of the recognizer can be a hypothesized event sequence that can be used to track the flow of ingested food. The accuracy of the hypothesized output can be validated using a database of sounds annotated by a panel of human expert listeners.

(Shalon ¶ 252; see also Ans. 3-4.)

6. Shalon teaches that "[a] preprocessing module detects the presence of eating activity and automatically conditions the signal using automatic gain control on the analog signal prior to being digitized" (Shalon ¶ 254; see also Ans. 3).

7. Shalon teaches "band-pass filters (BPF) 104 and low-pass filter (LPF) 106 to allow signals of certain frequency through" (Shalon ¶ 271; see also Ans. 4).
Principles of Law

A prior art reference can only anticipate a claim if it discloses all the claimed limitations "arranged or combined in the same way as in the claim." Wm. Wrigley Jr. Co. v. Cadbury Adams USA LLC, 683 F.3d 1356, 1361 (Fed. Cir. 2012).

Analysis

We adopt the Examiner's findings of fact and reasoning regarding the scope and content of the prior art (Ans. 2-11; FF 1-7) and agree that claims 1-7 are anticipated by Shalon. We address Appellant's arguments below.

Claim 1

Appellant contends that "[n]othing in the cited passage describes the claimed feature of filtering detected acoustic events and identifying signal signatures prior to digitization" (App. Br. 6; see also Reply Br. 3). Appellant further argues that "Shalon states 'The signals received from sensor unit 12 (and other input systems) are preferably first digitized,'" and that "Shalon teaches away from identifying signatures in a signal prior to digitization of the signal, in favor of digitizing the signal prior to any processing or signal signature identification" (App. Br. 6; see also Reply Br. 3-5).

We are not persuaded. Shalon teaches that "[a] preprocessing module detects the presence of eating activity and automatically conditions the signal using automatic gain control on the analog signal prior to being digitized" (FF 6 (emphasis added)), and "band-pass filters (BPF) 104 and low-pass filter (LPF) 106 to allow signals of certain frequency through" (FF 7 (emphasis added)). Therefore, Shalon is expressly teaching an example where signals are filtered prior to being digitized, even if this is not preferred (FF 6). See Merck & Co. Inc. v. Biocraft Labs. Inc., 874 F.2d 804, 807 (Fed. Cir.
1989) ("'the fact that a specific [embodiment] is taught to be preferred is not controlling, since all disclosures of the prior art, including unpreferred embodiments, must be considered.'"). Moreover, the concept of teaching away is inapplicable when, as here, the rejection is based on anticipation. See Celeritas Techs., Ltd. v. Rockwell Int'l Corp., 150 F.3d 1354, 1361 (Fed. Cir. 1998) ("Thus, the question whether a reference 'teaches away' from the invention is inapplicable to an anticipation analysis.").

Therefore, we agree with the Examiner that Shalon

discloses the filtering of the acoustic event signal in a preprocessing stage that filters out noise (paragraph 0240) and the identifying of signatures associated with an acoustic event signal prior to digitizing the acoustic event, as Shalon discloses a preprocessing module that detects the presence of eating activity and automatically conditions the signal using automatic gain control on the analog signal prior to being digitized as set forth at paragraph 0254 of Shalon.

(Ans. 9.)

We also agree with the Examiner that "the detection performed by the preprocessing module reads upon the 'identifying' limitation, the signal which is conditioned and normalized prior to digitization reads upon the 'signal' limitation, and the amplitude of the signal generated by the sensor unit 12 reads upon the 'signature' limitation" (id.), and that "the gain control performed on the signal detected by the preprocessing module, as set forth in paragraph 0254, is another type of identifying a signal signature performed before digitization" (id. at 10).
We recognize, but find unpersuasive, Appellant's contention that "[s]ince the microphone is designed to detect a single type of acoustic event (chewing), Shalon is not concerned with identifying signatures associated with the acoustic event, nor does it classify the acoustic event since the type of event is known upon detection by the specially-designed bone conduction microphone" (Reply Br. 4). Shalon teaches

Acoustic energy generated by chewing, swallowing, biting, sipping, drinking, teeth grinding, teeth clicking, tongue clicking, tongue movement, jaw muscles or jaw bone movement, spitting, clearing of the throat, coughing, sneezing, snoring, breathing rate, breathing depth, nature of the breath, heart beat, digestion, motility to or through the intestines, tooth brushing, smoking, screaming, user's voice or speech, other user generated sounds, and ambient noises in the user's immediate surroundings can be monitored through one or more sensors (e.g. microphones) positioned in or around the ear area, on the skull, neck, throat, chest, back or abdomen regions.

(FF 2.) Therefore, Shalon teaches detecting acoustic events other than chewing. "Attorney's argument in a brief cannot take the place of evidence." In re Pearson, 494 F.2d 1399, 1405 (CCPA 1974).

Claim 4

Appellant argues that "the cited passage mentions that each microphone can be optimized to receive a specific frequency range, but is silent with regard to employing active noise cancellation, let alone ANC that suppresses non-patient generated acoustic events" (App. Br. 7; see also Reply Br. 8-9). Appellant also argues that "the fact that Shalon optimizes microphone position and tuning to avoid unwanted frequencies is further evidence that Shalon does not employ active noise cancellation to suppress such frequencies" (Reply Br. 9). We do not find these arguments persuasive.
Shalon teaches

Microphones in different positions or orientations can be tuned to detect sounds originating within the user's body as opposed to ambient sounds surrounding the user. . . . Each microphone can be optimized to receive a specific range of sound frequencies corresponding to the signal to be measured. The sensing element can be designed to be sensitive to a wide range of frequencies of the acoustic energy generated in the head region, ranging from approximately 0.001 hertz up to approximately 100 kilohertz. The sensing element can be sensitive to just a narrow range of frequencies and a multiplicity of sensing elements used to cover a broader range of frequencies.

(FF 2 (emphasis added).) Accordingly, we agree with the Examiner that the

suppression of non-patient generated acoustic even[t]s is also fully disclosed because Shalon explicitly states at paragraph 0117 that by using ANC, "sounds surrounding the user" can be avoided by optimizing positions, orientations and frequency ranges of the microphones.

(Ans. 11; cf. Shalon ¶ 121 ("[t]he ambient noise or the noise of the speaker can be cancelled out from the microphone input using passive and active means"); ¶ 340 ("[t]he system can also be used as noise cancellation headphones by monitoring the ambient noise arriving in the ear region of the user and then generating the opposite sound pattern and transmitting this cancellation wave through the bones of the user to the ear").) See In re Sneed, 710 F.2d at 1548.

Accordingly, we affirm the rejection of claim 1. Appellant does not argue claims 2, 3, and 5-7 separately, and therefore, claims 2, 3, and 5-7 fall with claim 1.

B.
35 U.S.C. § 103(a) over Shalon, Meyer, and Thiagarajan

Claims 11, 19, and 20

Appellant similarly argues that "[n]othing in the cited passage describes the claimed feature of filtering detected acoustic events and identifying signal signatures prior to digitization," and "Shalon states 'The signals received from sensor unit 12 (and other input systems) are preferably first digitized'" (App. Br. 8-11; see also Reply Br. 6-8). We are not persuaded for the reasons discussed above.

Claim 17

Appellant similarly argues that "the cited passage mentions that each microphone can be optimized to receive a specific frequency range, but is silent with regard to employing active noise cancellation, let alone ANC that suppresses non-patient generated acoustic events" (App. Br. 10; see also Reply Br. 8-9). We are not persuaded for the reasons discussed above.

Accordingly, we affirm the rejection of claims 11, 17, 19, and 20. Appellant does not argue the other claims separately, and therefore, claims 13-16 fall with claim 11, and claim 19 falls with claim 18.

SUMMARY

In summary, we affirm the rejection of claim 1 under 35 U.S.C. § 102(b) as being anticipated by Shalon. Claims 2-7 fall with claim 1. We affirm the rejection of claims 8-11, 19, and 20 under 35 U.S.C. § 103(a) as obvious over Shalon, Meyer, and Thiagarajan. Claims 13-17 fall with claim 11, and claim 19 falls with claim 18.

No time period for taking any subsequent action in connection with this appeal may be extended under 37 C.F.R. § 1.136(a).

AFFIRMED