Ex parte Lee, No. 13/756,428 (P.T.A.B. July 7, 2017)

UNITED STATES PATENT AND TRADEMARK OFFICE

BEFORE THE PATENT TRIAL AND APPEAL BOARD

Ex parte BOWON LEE and RONALD W. SCHAFER

Appeal 2017-001328
Application 13/756,428 (filed January 31, 2013)
Technology Center 2100
Examiner: Nhat Huy T. Nguyen, Art Unit 2172

Before JOHN A. EVANS, JOYCE CRAIG, and STEVEN M. AMUNDSON, Administrative Patent Judges.

AMUNDSON, Administrative Patent Judge.

DECISION ON APPEAL

Appellants¹ seek our review under 35 U.S.C. § 134(a) from a final rejection of claims 1–16, i.e., all pending claims. We have jurisdiction under 35 U.S.C. § 6(b). We affirm.

¹ Appellants identify the real party in interest as Hewlett-Packard Development Company, LP. App. Br. 3.

STATEMENT OF THE CASE

The Invention

According to the Specification, the invention relates to a “method performed by a processing system” that includes “providing, to an audio service, a virtual microphone selection corresponding to at least one of a set of audio source devices determined to be in proximity to the processing system” and “receiving, from the audio service, an output audio stream that is formed from one of a set of source audio streams received from the set of audio source devices and corresponds to the virtual microphone selection.” Abstract.²

² This decision uses the following abbreviations: “Spec.” for the Specification, filed January 31, 2013; “Final Act.” for the Final Office Action, mailed January 16, 2015; “App. Br.” for the Appeal Brief, filed June 12, 2015; “Ans.” for the Examiner’s Answer, mailed September 1, 2016; and “Reply Br.” for the Reply Brief, filed October 31, 2016.

Exemplary Claims

Independent claims 1 and 9 exemplify the subject matter of the claims under consideration and read as follows:

1. A method performed by a processing system, the method comprising:
   generating a user interface including a representation of a set of audio source devices in proximity to the processing system, wherein an arrangement of the representation of the set of audio source devices on the user interface corresponds to physical positions of the audio source devices relative to the processing system in a setting, wherein the setting comprises a room, an auditorium, or an event site;
   receiving, via the user interface, a virtual microphone selection corresponding to a first one of the set of audio source devices;
   providing, to an audio service, the virtual microphone selection; and
   receiving, from the audio service, an output audio stream that is formed from a first one of a set of source audio streams received from the set of audio source devices and corresponds to the virtual microphone selection.

9. An article comprising at least one non-transitory machine-readable storage medium storing instructions that, when executed by a processing system of an audio service, cause the processing system of the audio service to:
   receive source audio streams from audio source devices having a defined relationship;
   receive a first virtual microphone selection corresponding to a first one of the audio source devices, wherein the first virtual microphone selection is received from a second one of the audio source devices; and
   in response to the first virtual microphone selection, provide, to the second audio source device, a first output audio stream that is at least partially formed from a source audio stream received from the first audio source device.

App. Br. 18, 20 (Claims App.).
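[Editor's note: as an orientation aid only, the client/service exchange recited in claims 1 and 9 can be pictured with a minimal sketch. No code appears in the record; every name below (the classes, the in-memory "audio service," and the byte-string stand-ins for audio streams) is a hypothetical illustration, not the applicants' implementation.]

```python
# Illustrative sketch only -- not from the application or the record.
# Models the claim 1 / claim 9 exchange: a processing system displays
# nearby audio source devices, the user picks one as a "virtual
# microphone," and an audio service returns the matching output stream.
from dataclasses import dataclass

@dataclass
class SourceDevice:
    device_id: str
    position: tuple  # (x, y) relative to the processing system
    stream: bytes    # stand-in for a live source audio stream

class AudioService:
    """Hypothetical service side (claim 9): one source stream per device."""
    def __init__(self, devices):
        self.streams = {d.device_id: d.stream for d in devices}

    def output_stream(self, selection: str) -> bytes:
        # Form the output stream from the selected device's source stream.
        return self.streams[selection]

class ProcessingSystem:
    """Hypothetical client side (claim 1)."""
    def __init__(self, devices, service):
        self.devices, self.service = devices, service

    def render_ui(self):
        # The arrangement mirrors physical positions relative to this system.
        for d in self.devices:
            print(f"[{d.device_id}] at {d.position}")

    def select_virtual_microphone(self, device_id: str) -> bytes:
        # Provide the selection to the service; receive the output stream.
        return self.service.output_stream(device_id)

devices = [SourceDevice("mic-A", (1.0, 2.0), b"streamA"),
           SourceDevice("mic-B", (-3.0, 0.5), b"streamB")]
system = ProcessingSystem(devices, AudioService(devices))
system.render_ui()
assert system.select_virtual_microphone("mic-A") == b"streamA"
```

The sketch simply makes concrete that claim 1 recites the client half of the exchange while claim 9 recites the service half.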
The Prior Art Supporting the Rejections on Appeal

As evidence of unpatentability, the Examiner relies on the following prior art:

Singer et al. (“Singer”), US 5,889,843, issued Mar. 30, 1999
Urisaka et al. (“Urisaka”), US 2001/0024233 A1, published Sept. 27, 2001
Yin et al. (“Yin”), US 8,681,203 B1, issued Mar. 25, 2014 (filed Aug. 20, 2012)

The Rejections on Appeal

Claims 1–8 and 13–16 stand rejected under 35 U.S.C. § 103(a) as unpatentable over Singer and Urisaka. Final Act. 5–12; Ans. 2–9.

Claims 9–12 stand rejected under 35 U.S.C. § 103(a) as unpatentable over Singer, Urisaka, and Yin. Final Act. 12–15; Ans. 10–12.

ANALYSIS

We have reviewed the rejections of claims 1–16 in light of Appellants’ arguments that the Examiner erred. For the reasons explained below, we disagree with Appellants’ assertions regarding error by the Examiner. We adopt the Examiner’s findings in the Final Office Action (Final Act. 2–15) and Answer (Ans. 2–15). We add the following to address and emphasize specific findings and arguments.

The § 103(a) Rejection of Claims 1–8 and 13–16

Independent Claim 1: “A Representation” Corresponding to the “Physical Positions of the Audio Source Devices Relative to the Processing System”

Appellants argue that the Examiner erred in rejecting claim 1 because Singer fails to teach or suggest “generating a user interface including a representation of a set of audio source devices in proximity to the processing system” where “the representation of the set of audio source devices on the user interface” corresponds to the “physical positions of the audio source devices relative to the processing system.” App. Br. 7–10; Reply Br. 4–6. More specifically, Appellants assert that “the arrangement of the icons 172, 174, 176” in Singer Figure 8a “does not correspond to the physical positions of persons 172, 174, 176 relative to user 184” but instead “corresponds to who is currently conversing with user 184.” App. Br. 8. Appellants also assert that “the sensors 82, 84, and 86” in Singer Figure 3 “do not sense the physical positions of objects 62, 64, and 66 relative to a processing system, or the proximity from the objects 62, 64, and 66 to a processing system.” Id. at 9.
Appellants further assert that “[s]ensing the positions of objects 62, 64, and 66 within a physical space does not indicate the positions of objects 62, 64, and 66 relative to a processing system.” Id.

Appellants’ assertions do not persuade us of Examiner error. The Examiner finds that Singer teaches that (1) “sensors 82, 84 and 86 are attached to audio source[s] 62, 64 and 66” respectively and (2) the sensors are used “to update the audio objects’ locations dynamically.” Ans. 12 (citing Singer 5:50–65, 6:5–8, Fig. 3). In particular, Singer discloses “a method of audio communication between a plurality of users” where “a user has a bidirectional audio communication link” with audio input and output devices. Singer 1:17–19, 1:52–54, 3:20–27, 3:37–44, 4:64–5:6, Fig. 1. Singer teaches that each user has his or her own graphical user interface. Id. at 9:19–22. In addition, Singer discloses that an interface provides “metaphorical representations ... in the form of physical representations” relating to the users. Id. at 1:57–59, 4:28–31, 5:44–45, 5:52–54. Thus, the objects 62, 64, and 66 in the Figure 3 interface represent users with audio source devices and interfaces. Id. at 5:42–47, 5:55–59. Also, Singer discloses that the sensors 82, 84, and 86 “provide dynamic position sensing so that changes in the physical location” of the objects 62, 64, and 66 “can be determined expeditiously” and “changed dynamically” in the Figure 3 interface. Id. at 5:42–47, 5:63–6:8, Fig. 3; see App. Br. 9 (quoting Singer 5:52–6:1). The Examiner finds that the objects 62, 64, and 66 are displayed “relative to some coordinate system” and “their location is relative to each other.” Final Act. 3, 5; Ans. 12–13.

The Specification explains that “the terms processing system and device are used interchangeably,” such that a processing system can act as an audio source device and an audio source device can act as a processing system. Spec. ¶ 11; see id. ¶ 13. Consequently, one of the three objects 62, 64, and 66 in the Figure 3 interface can act as a processing system (e.g., object 66), and the other two objects can act as audio source devices (e.g., objects 62 and 64). If the objects 62 and 64 move in their respective subspaces and the object 66 remains stationary, the interface will “change[] dynamically” to reflect that movement relative to object 66. Thus, the interface includes a representation of a set of two audio source devices (e.g., objects 62 and 64) corresponding to the “physical positions of the audio source devices relative to” a processing system (e.g., object 66).

Appellants assert that “sensors 82, 84, and 86 of Singer do not sense the positions of the objects 62, 64, and 66 relative to the processing system that displays the arrangement on the user interface.” Reply Br. 4–5 (citing Singer 5:52–6:1, Fig. 3). But claim 1 does not specify how physical positions are sensed or even require a sensor. So that assertion does not distinguish claim 1 from Singer.
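[Editor's note: the panel's point that sensed absolute positions also yield positions relative to whichever object is treated as the processing system is a change of origin. The following sketch is purely illustrative; the object labels and coordinates are hypothetical, not Singer's implementation.]

```python
# Illustrative sketch only -- hypothetical names, not Singer's code.
# Sensing absolute positions also gives positions *relative to* any one
# sensed object: the relative arrangement is just a change of origin.

def relative_arrangement(positions: dict, reference: str) -> dict:
    """Re-express sensed absolute (x, y) positions relative to one object."""
    rx, ry = positions[reference]
    return {obj: (x - rx, y - ry)
            for obj, (x, y) in positions.items() if obj != reference}

# Sensed absolute positions of Singer-style objects 62, 64, and 66.
sensed = {"62": (2.0, 4.0), "64": (5.0, 1.0), "66": (3.0, 3.0)}

# Treat object 66 as the processing system; 62 and 64 become the
# represented audio source devices.
print(relative_arrangement(sensed, "66"))  # {'62': (-1.0, 1.0), '64': (2.0, -2.0)}

# If 62 and 64 move while 66 stays put, recomputing updates the
# interface arrangement dynamically relative to 66.
sensed["62"] = (2.5, 4.5)
print(relative_arrangement(sensed, "66"))  # {'62': (-0.5, 1.5), '64': (2.0, -2.0)}
```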
Independent Claim 1: “A Setting” That “Comprises a Room, an Auditorium, or an Event Site”

Appellants argue that the Examiner erred in rejecting claim 1 because Singer fails to teach or suggest “generating a user interface including a representation of a set of audio source devices in proximity to the processing system” where “the representation of the set of audio source devices on the user interface” corresponds to “a setting” that “comprises a room, an auditorium, or an event site.” App. Br. 7–10; Reply Br. 6–7. Referring to Singer Figure 8a, Appellants argue that “persons 172, 174, [and] 176 are in sites that are remote from the site of user 184” and “[b]eing in different remote sites is not the same as being in a room, auditorium, or event site.” App. Br. 8.

Appellants’ arguments do not persuade us of Examiner error because the Examiner finds that Singer discloses audio conferencing in the auditory environment of a meeting room. Final Act. 3, 5–6 (citing Singer 1:10–50, 5:50–65, 9:49–10:10, Fig. 3). More specifically, Singer discloses three audio sources corresponding to “three remotely-located persons A, B, and C.” Singer 7:58–61, 9:49–61; see Final Act. 3, 5–6. Singer explains, however, that “the user is currently conversing with person C.” Singer 10:4–5; see Final Act. 3, 5. Singer also explains that while persons A and B are “located away” from the user, “either of persons A or B” may “verbally get the attention of the user, and furthermore, the user can spatially distinguish between persons A and B.” Singer 10:5–10; see Final Act. 3, 5. Consequently, Singer teaches or suggests that the user and persons A, B, and C occupy the same room to permit (1) person C to “converse” with the user, (2) persons A and B to “verbally” get the user’s attention, and (3) the user to “spatially distinguish persons A and B.”

In addition, Singer discloses that “a pair of speakers 190” in Figure 8b creates the “auditory environment” shown in the interface. Singer 9:56–63; see Final Act. 6. Figure 8b depicts the pair of speakers 190, the user, and persons A, B, and C located within that auditory environment, i.e., within the same room. Singer Fig. 8b. Thus, Singer teaches or suggests an interface including a representation of a set of audio source devices corresponding to “a setting” that “comprises a room, an auditorium, or an event site.” Moreover, we observe that Urisaka discloses several audio source devices in the same room. See, e.g., Urisaka ¶¶ 74, 86, 141, Fig. 3 (microphones 71 and 72).
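[Editor's note: one conventional way a pair of speakers can make sources at different interface positions "spatially distinguishable" is constant-power stereo panning. The sketch below is a hypothetical simplification for illustration; it is not Singer's disclosed implementation, and the positions are invented.]

```python
# Illustrative sketch only -- hypothetical simplification, not Singer's
# implementation. Constant-power stereo panning maps each icon's
# horizontal offset to left/right speaker gains, so sources at different
# positions are heard from different directions.
import math

def pan_gains(x: float, x_min: float = -1.0, x_max: float = 1.0):
    """Map a horizontal icon position to (left, right) speaker gains."""
    t = (x - x_min) / (x_max - x_min)        # 0.0 (far left) .. 1.0 (far right)
    angle = t * math.pi / 2                  # constant-power pan law
    return math.cos(angle), math.sin(angle)  # (left gain, right gain)

# Persons A and B at different positions get distinct left/right balances,
# so a listener can tell them apart by direction.
for person, x in (("A", -0.8), ("B", 0.6)):
    left, right = pan_gains(x)
    print(f"person {person}: L={left:.2f} R={right:.2f}")
```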
Independent Claim 13

Appellants argue that the Examiner erred in rejecting independent claim 13 because “Singer in view of Urisaka fails to teach or suggest” the following limitations in claim 13:

   receiving a virtual microphone selection corresponding to a first one of the set of audio source devices in the representation via the user interface;
   providing the virtual microphone selection to an audio service that receives a set of source audio streams from the set of audio source devices; and
   receiving, from the audio service, an output audio stream that corresponds to the virtual microphone selection and is formed from a source audio stream received by the audio service from the first one of the set of audio source devices.

App. Br. 11–12; Reply Br. 7–8. In particular, Appellants contend that Singer does not disclose a processing system that (1) “receives a selection corresponding to one of the devices . . . and provides that selection to an audio service that receives audio streams from all of the devices” and (2) “receives an audio stream corresponding to the selected device . . . from the audio service.” App. Br. 12; see Reply Br. 7–8. In addition, Appellants contend that Urisaka’s control circuit “does not receive a selection of a virtual microphone, does not provide the virtual microphone selection to an audio service, and does not receive an audio signal corresponding to the selected device from the audio service.” App. Br. 13.

Appellants’ contentions do not persuade us of Examiner error because they address the references individually, and the rejection rests on the combination of references. Final Act. 5–6. Where a rejection rests on a combination of references, an appellant cannot establish nonobviousness by attacking the references individually. See In re Merck & Co., 800 F.2d 1091, 1097 (Fed. Cir. 1986).

Here, the Examiner finds that Urisaka discloses receiving a virtual microphone selection corresponding to an audio source device. Final Act. 3–4, 6 (citing Urisaka ¶¶ 86, 109, 141, Figs. 3, 15, 28); see Ans. 13–14. For example, Urisaka Figure 3 illustrates an interface with icons representing the positions of several cameras 66 and microphones 71 and 72 arranged with respect to “a seat layout or the like in an office or the like.” Urisaka ¶¶ 17, 71, 74, 86, 141, Fig. 3. The Examiner finds that Urisaka discloses that when a user selects a particular camera icon in the interface, “the system will select ‘a set of source audio streams’ associated with the camera.” Ans. 13–14 (citing Urisaka ¶ 109); see Final Act. 6 (citing step S702 in Urisaka Fig. 6); see also Urisaka ¶ 109, Fig. 6. Hence, by selecting a particular camera icon in the interface, i.e., a virtual camera, the user also selects a microphone icon in the interface, i.e., a virtual microphone. In addition, Urisaka teaches that (1) “audio inputs may be controlled independently of the camera control” and (2) a user may remotely control a microphone “independently of the camera control” by “clicking the corresponding microphone icon” in the interface. Urisaka ¶¶ 138–142.

Further, the Examiner finds that Singer discloses receiving from an audio service an output audio stream formed from one of a set of source audio streams and corresponding to a microphone. Final Act. 6 (citing Singer 9:49–10:10, Figs. 8a, 8b, 9, 10); Ans. 13 (citing Singer 3:39–44, 9:52–54, Fig. 8b). More specifically, Singer discloses “using a plurality of microphones configured to capture a spatial representation of the auditory space being sensed.” Singer 3:39–42; see Ans. 13. Singer also discloses an audio conferencing system with: (1) audio transceivers at various locations, (2) graphical user interfaces associated with the audio transceivers having icons corresponding to the locations of the audio transceivers, (3) a processor associated with each interface for generating control signals, and (4) an audio mixer that (a) receives audio signals from each of the audio transceivers, (b) variably amplifies one or more received audio signals based on the control signals from the processor, and (c) provides audio signals to an audio output device. Singer 2:7–30, 7:61–8:1, 8:15–18, 8:35–40, 9:1–5, 9:19–22, 9:62–10:10, 10:43–46, Figs. 6, 8a, 8b, 9, 10. Singer explains that any user can manipulate an icon on his or her interface to alter the sound for the audio source corresponding to the icon, e.g., change the sound volume of another user. Id. at 6:53–58, 8:4–6, 9:66–10:2, 11:32–38.

Consequently, the combination of disclosures in Singer and Urisaka taken as a whole teaches or suggests the disputed limitations in claim 13.
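[Editor's note: the mixer architecture the decision attributes to Singer, per-source control signals (gains) from an interface, variably amplified and summed into one output, can be reduced to a weighted sum. The sketch below is a hypothetical toy, not Singer's code; the stream names and sample values are invented.]

```python
# Illustrative sketch only -- hypothetical names; a toy version of the
# mixer the decision describes: an interface produces per-source gains,
# and the mixer variably amplifies each received stream before output.

def mix(streams: dict, gains: dict) -> list:
    """Weighted sum of sample-aligned source streams -> one output stream."""
    length = min(len(s) for s in streams.values())
    return [sum(gains.get(name, 0.0) * stream[i]
                for name, stream in streams.items())
            for i in range(length)]

# Source audio streams from three transceivers (toy sample values).
streams = {"A": [0.1, 0.2, 0.3], "B": [0.5, 0.5, 0.5], "C": [0.9, 0.8, 0.7]}

# Moving person A's icon closer raises A's gain; dragging B and C away
# lowers theirs -- here all the way to zero, isolating A's stream.
gains = {"A": 1.0, "B": 0.0, "C": 0.0}
print(mix(streams, gains))  # [0.1, 0.2, 0.3] -- effectively selecting A
```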
Referring to Singer Figure 8b, Appellants assert that “Singer does not teach or suggest that the user 184 can select one of the objects 160, 162, and 164 on the user interface to receive an audio stream from the selected object.” Reply Br. 8. But Singer discloses that “each user has the ability to personalize his/her spatial auditory environment by using the graphical user interface 120 to place each of the other users at corresponding locations in the display space.” Singer 9:19–22. In Figure 8b, reference numerals 160, 162, and 164 denote three additional users called persons A, B, and C. Id. at 9:51–55. Because any user can manipulate an icon on his or her interface to alter the sound for the audio source corresponding to the icon, the user in Figures 8a and 8b could move the icon corresponding to person A to increase the sound volume for person A and also move the icons corresponding to persons B and C to decrease the sound volume, e.g., down to zero. See id. at 6:53–58, 8:4–6, 9:49–10:2, 11:32–38. Accordingly, Singer teaches or suggests user selection of an individual audio stream.

Summary for Independent Claims 1 and 13

For the reasons discussed above, Appellants’ arguments have not persuaded us that the Examiner erred in rejecting claims 1 and 13 for obviousness based on Singer and Urisaka. Hence, we sustain the rejection of claims 1 and 13.

Dependent Claims 2, 4–8, and 14–16

Claims 2 and 4–8 depend from claim 1, while claims 14–16 depend from claim 13. Appellants argue that these dependent claims are “allowable by virtue of being dependent from allowable independent claims 1 and 13.” App. Br. 13. Because Appellants do not make separate substantive patentability arguments for these dependent claims, we sustain the obviousness rejection of these dependent claims for the same reasons as claims 1 and 13. See 37 C.F.R. § 41.37(c)(1)(iv).

Dependent Claim 3

Claim 3 depends from claim 1 and requires that “the arrangement [on the interface] is based on the positions of the set of audio source devices relative to the processing system.” App. Br. 18 (Claims App.). Appellants attempt to distinguish claim 3 from Singer on the same grounds as claim 1. Compare App. Br. 13–14, with id. at 8–10. Accordingly, we sustain the obviousness rejection of claim 3 for the same reasons as claim 1.

The § 103(a) Rejection of Claims 9–12

Appellants argue that the Examiner erred in rejecting independent claim 9 because “Yin fails to teach or suggest” the following limitations in claim 9:

   receive a first virtual microphone selection corresponding to a first one of the audio source devices, wherein the first virtual microphone selection is received from a second one of the audio source devices; and
   in response to the first virtual microphone selection, provide, to the second audio source device, a first output audio stream that is at least partially formed from a source audio stream received from the first audio source device.

App. Br. 15–16 (emphasis omitted); Reply Br. 9–10 (emphasis omitted). Appellants assert that “in Yin, the device does not receive a selection of a particular device from a second device, and does not provide an output audio stream from that particular device to the second device.” App. Br. 15. Appellants also assert that “the muting of a first group of audio streams in Yin is based on the property of the audio streams, and not based on a selection of a first device received from a second device.” Id. Appellants further assert that microphone muting according to Yin is the opposite of “providing an output audio stream received from the selected device to the second device.” Id.; Reply Br. 10.

Appellants’ assertions do not persuade us of Examiner error because they address the references individually, and the rejection rests on the combination of references. Final Act. 5–6, 13; see Ans. 14–15. For example, the Examiner relies on Urisaka, not Yin, for disclosing the limitation concerning receiving a virtual microphone selection corresponding to an audio source device. Final Act. 3–4, 6, 13; see Ans. 13–14. As discussed above for claim 13, Appellants have not apprised us of error in the Examiner’s findings or reasoning regarding Urisaka.

Further, the Examiner finds that Yin discloses selecting certain audio streams to mute and other audio streams to distribute during a communication session. Final Act. 13 (citing Yin Abstract, 1:38–55, 28:19–55 (claim 1), Figs. 1, 4). For example, Yin explains that a device for executing communication processes “distributes a second group of the audio streams” during a communication session “while muting the first group of audio streams.” Yin Abstract. Referring to the control panel buttons shown in Figure 4, Yin explains that a user may operate one button to “turn the smart mute feature off or on” and operate another button to “manually mute [or unmute] the audio input” from the user’s device. Id. at 24:10–43, Fig. 4 (interface 23B including control panel 28B with buttons 122 and 124). Thus, Yin teaches or suggests providing an audio stream from a first audio source device to a second audio source device in response to a user selection.
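[Editor's note: the grouping behavior quoted from Yin's abstract, distributing one group of streams while muting the other, amounts to a filter over the set of streams. The sketch below is a hypothetical toy illustration, not Yin's implementation; the device names are invented.]

```python
# Illustrative sketch only -- hypothetical names; a toy version of the
# behavior quoted from Yin's abstract: distribute a second group of
# audio streams while muting the first group.

def split_and_distribute(streams: dict, muted: set) -> dict:
    """Return only the streams to distribute; the muted group is dropped."""
    return {name: data for name, data in streams.items() if name not in muted}

streams = {"device-1": b"...", "device-2": b"...", "device-3": b"..."}

# A selection received from device-2 keeps device-1 audible and mutes
# the rest; device-2 would then receive the distributed device-1 stream.
muted = set(streams) - {"device-1"}
print(split_and_distribute(streams, muted))  # {'device-1': b'...'}
```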
Additionally, as discussed above for claim 13, Singer discloses that any user can manipulate an icon on his or her interface to alter the sound for the audio source corresponding to the icon. Singer 6:53–58, 8:4–6, 9:66–10:2, 11:32–38. Thus, Singer also teaches or suggests providing an audio stream from a first audio source device to a second audio source device in response to a user selection.

Because the combination of disclosures in Singer, Urisaka, and Yin taken as a whole teaches or suggests the disputed limitations in claim 9, we sustain the obviousness rejection of claim 9.

Claims 10–12 depend from claim 9. Appellants argue that these dependent claims are “allowable by virtue of being dependent from allowable independent claim 9.” App. Br. 16. Because Appellants do not make separate substantive patentability arguments for these dependent claims, we sustain the obviousness rejection of these dependent claims for the same reasons as claim 9. See 37 C.F.R. § 41.37(c)(1)(iv).

DECISION

We affirm the Examiner’s decision to reject claims 1–16.

No time period for taking any subsequent action in connection with this appeal may be extended under 37 C.F.R. § 1.136(a)(1)(iv). See 37 C.F.R. § 41.50(f).

AFFIRMED