Ex parte Makela
Patent Trial and Appeal Board
Appeal 2011-011525, Application 11/173,781 (P.T.A.B. Dec. 30, 2013)

UNITED STATES PATENT AND TRADEMARK OFFICE
__________

BEFORE THE PATENT TRIAL AND APPEAL BOARD
__________

Ex parte MIKKO K. MAKELA
__________

Appeal 2011-011525
Application 11/173,781
Technology Center 2100
__________

Before TONI R. SCHEINER, ERIC GRIMES, and JEFFREY N. FREDMAN, Administrative Patent Judges.

FREDMAN, Administrative Patent Judge.

DECISION ON APPEAL

This is an appeal[1] under 35 U.S.C. § 134 involving claims to a method and apparatus for managing user inputs with a controller. The Examiner rejected the claims as obvious. We have jurisdiction under 35 U.S.C. § 6(b). We reverse.

[1] Appellant identifies the Real Party in Interest as Nokia Corporation (App. Br. 2).

Statement of the Case

Background

"The teachings in accordance with the exemplary embodiments of this invention relate generally to browsers capable of displaying pages containing textual elements and, more specifically, relate to browsers used with mobile devices having a limited display screen area and a limited user input capability" (Spec. 1, ll. 7-10).

The Claims

Claims 1-30 and 32-38 are on appeal. Claim 1 is representative and reads as follows (emphasis added):

    1. A method, comprising:
    receiving signals from a plurality of user inputs, where at least one user input is a multi-function input operable in a first mode to provide a display control input, and in a second mode to provide another function; and
    when operating the at least one user input in the first mode, inhibiting operation of the at least one user input in the second mode, where inhibiting operation is performed automatically by a controller in response to a change in content shown on a display in response to operation in the first mode.

The issue

The Examiner rejected claims 1-30 and 32-38 under 35 U.S.C. § 103(a) as obvious over Nishiyama[2] and Bear[3] (Ans. 4-11).

[2] Nishiyama, US 7,019,731 B2, issued March 28, 2006.
[3] Bear et al., US 2004/0257341 A1, published Dec. 23, 2004.

The Examiner finds that Nishiyama teaches "a method to operate a user interface having an output display and an input" (id. at 4). The Examiner finds that Nishiyama teaches

    [R]eceiving signals from a plurality of user inputs, where at least one user input is a multi-function input operable in a first mode to provide a display control input, and in a second mode to provide another function (column 1 lines 52-58, line 67 to column 2 line 3 → Nishiyama discloses a hand held device with a ten-key keypad wherein the keys can contain the function of inputting numerals and also to operate an image displayed on display).

(Id. at 4-5.)

In addition, the Examiner finds that Nishiyama teaches "the selection of the function of use for the keys are based upon the current state of the device such as when a Web page or a map is presented then the function of operating an image displayed will be available to the user" (id. at 5). The Examiner finds that "[w]hile providing one skilled in the art with reasonable expectation that the functions of the keys depend upon the focused content on the display, Nishiyama does not explicitly teach this limitation" (id.).
The Examiner finds that Bear et al. teaches

    [W]hen operating the at least one user input in the first mode, inhibiting operation of the at least one user input in the second mode, where inhibiting operation is performed automatically by a controller in response to a change in content shown on a display (page 4 paragraph [0086], page 5 paragraphs [0087]-[0088], page 6 paragraphs [0105]-[0106] → Bear discloses wherein the input that affects an object is dependent of the object that is the focus of attention).

(Id.)

The Examiner finds that

    While Bear does not explicitly teach this limitation taking place in response to operation in the first mode, it is reasonably suggestive to one skilled in the art by way of the examples provided such as the various objects that provide the determination of the logical input (selection objects, content objects, movable drawing objects, etc[.]) that use of the navigation functions in both Nishiyama and Bear to non-content objects (links and graphics) would prevent the use of numerical or text input options

(id.).

The Examiner finds it obvious "to have combined the keypad on a handheld device of Nishiyama with the automatic function selection of Bear to provide an effective approach toward a reduction in size of an electronic apparatus needing keys for operating a screen, with improved operability while providing the basis for an interface device that users can immediately identify an[d] use to navigate information in a simple and consistent way" (id. at 5-6).

The issue with respect to this rejection is: Does the evidence of record support the Examiner’s conclusion that Nishiyama and Bear et al. render the claims obvious?

Findings of Fact

1. Nishiyama teaches

    A ten-key keypad for inputting numerals mounted on a portable telephone T is adapted to a zoom function and a scroll function for operating an image displayed on a display. The user can use the keys of the ten-key keypad to select the mode in which case the keys also serve as image operation keys.

(Nishiyama, abstract.)

2. Nishiyama teaches

    [A] method of using a ten-key keypad has a first feature of comprising the step of adapting the ten-key keypad, mounted on an electronic apparatus for inputting numerals, to an image-operation function for operating an image displayed on a display of the electronic apparatus, to allow the keys of the ten-key keypad to serve as image-operation keys by means of mode selection. . . . Due to this adaptation, upon mode selection, the ten-key keypad normally used for inputting numbers is allowed to serve also as an image-operation keypad

(Nishiyama, col. 1, l. 52 – col. 2, l. 3).

3. Nishiyama teaches that

    To attain the aforementioned object, a method of using a ten-key keypad has a sixth feature, in addition to the configuration of the first feature, of further comprising the step of adapting any one of the keys of the ten-key keypad to a switching function for selecting image-operation mode for the image-operation function.

    With the method of using the ten-key keypad in the sixth feature, the switching function of selecting the image-operation mode is assigned to any one of the keys of the ten-key keypad. By pressing the relevant key, the image-operation mode assigned to the ten-key keypad is selected. This method makes it possible to adapt the ten-key keypad to a plurality of types of image-operation mode, such as a zoom operation, a scroll operation, for example.

(Nishiyama, col. 3, ll. 10-23.)
4. Nishiyama teaches that

    A function key ‘F’ mounted on the operating section Tb of the portable telephone T is adapted to a switching function of switching the ten-key keypad between normal numeral-input mode and image-operation mode as described later. In the ten-key keypad switched to image-operation mode by actuating the function key F, a numeric key ‘5’ located at the center of the ten-key keypad is adapted to a function of switching between scroll mode and zoom mode

(Nishiyama, col. 6, ll. 58-66; see Figure 1).

5. Nishiyama teaches that

    In the above portable telephone T, the numerals for, e.g., a telephone number are inputted through the ten-key keypad on the operating section Tb in a conventional manner. When operating an image displayed on the display Ta, first, the user activates the switching function assigned to the function key ‘F’ to switch the ten-key keypad to the image-operation mode. Then, the user presses the numeric key ‘5’ to select the desired operation mode from among the zoom mode, scroll mode 1 and scroll mode 2 for operation through the ten-key keypad

(Nishiyama, col. 8, ll. 6-15; see Figure 1).

6. Bear teaches "systems, methods, and products for enhanced user navigation to compliment [sic] (but not necessarily replace) a computer keyboard . . . by providing a robust navigation interface" (Bear, abstract).

7. Bear teaches that "[t]he present invention may comprise: a minimally necessary group of commands; combining the functionality a set of at least two command calls into a single logical button" (id.).

8. Bear teaches the invention being based on "objects" (Bear 4-5 ¶ 86). Bear teaches that:

    An "object" . . . constitute[s], without limitation, a dialog box, menu, web page, text page, movable drawing object, or some other such item in a computer system as such are known and appreciated by those of skill in the art. For the purpose of describing the invention, it will be presumed that all objects can be conveniently divided into one of four categories: (1) selection objects, such as a dialog box, menu, etc., where a user selects an element from among a plurality of elements; (2) content objects, such as an editable text object; (3) movable drawing objects (MDOs); and (4) audio objects.

(Id.)

9. Bear teaches that

    Whenever a button is pressed . . . such elemental physical interactions create appropriate electronic signals constituting a logical input for use with the invention as described herein . . . . However, for convenience, references to the elements available for physical interactions (e.g., a button) shall constitute a direct reference to the logical input resulting from each such physical interaction. In other words, input device elements—including buttons . . . shall constitute logical inputs for the embodiments described herein when physically acted upon. Thus, by way of unlimited example only, an "ENTER button" is one form of a "logical input for ENTER."

(Id. at 5 ¶ 87.)

10. Bear teaches that

    At the heart of the various embodiments of the present invention is a main button . . . group which provides the basis for an interface device that users can immediately identify and use to navigate information in a simple and consistent way.
    The embodiments generally comprises [sic] a core group of logical buttons for a minimally necessary group of commands (core commands) and, in some embodiments, additional logical buttons for a secondary set of navigation commands (secondary commands). . . . In other embodiments, comprising relatively few physical components but possessing a substantial number of logical button [sic], tremendous navigational functionality is possible that goes far beyond core commands and secondary commands, but may also include general commands which, in some cases, may be object, application, or device specific and/or revisable.

(Id. at 5 ¶ 88.)

11. Bear et al. teaches that

    FIG. 3A is a flow chart depicting the logic for the ENTER button in certain embodiments of the present invention. When the ENTER button is pressed at block 302, the ENTER button system determines, at block 304, if the object is a selection object (and not a content object or a movable drawing object) and, if not, for the present embodiment no other event occurs and the system returns at block 350. (Events other than the null event of the present invention are certainly possible for content objects and movable drawing objects, as will be appreciated by those possessing sufficient skill in the relevant art.) On the other hand, if the object is in fact a selection object, at block 306 the system determines if an active element in the object is already selected. If an active element is already selected, at block 312 an "execute" event occurs that is equivalent to depressing the Enter key on a keyboard (and which results in an Open, Accept, or OK of the selected element as appropriate, and as such events are known and appreciated by those of skill in the art); the system then returns at block 350. On the other hand, if an active element is not already selected, then at block 308 the system then makes a determination as to whether an element of the object has been marked as the Initial Focus (as a default selection element) and if so, then at block 314 the element marked as the Initial Focus is selected and thereafter the system returns at block 350. Finally, if there is no Initial Focus, then at block 316 the system selects the first listed element of the object and returns at block 350.

(Id. at 6 ¶ 105.)

12. Bear et al. teaches that

    Naturally, variations to the logic flow depicted in FIG. 3B can and will be desirable under certain circumstances. For example, consider FIG. 3B which is a flow chart depicting just such a variation in the logic for the ENTER button depicted in FIG. 3A. In this embodiment—and after already determining (a) at block 304 that the object is a selection object, (b) at block 306 that an active element has not already been selected, and (c) at block 308 that the object has no Initial Focus (identical to the method of FIG. 3A)—at block 310 the system of FIG. 3B further determines whether any active elements are visible and, if so, at block 318 would then select the first visible element or, if not, at block 316 the system would then select the first listed element. This and other such subtle variations in logic are herein disclosed by the present invention.

(Id. at 6 ¶ 106.)

Principles of Law

"'[R]ejections on obviousness grounds cannot be sustained by mere conclusory statements; instead, there must be some articulated reasoning with some rational underpinning to support the legal conclusion of obviousness.'"
KSR Int’l Co. v. Teleflex Inc., 550 U.S. 398, 418 (2007) (citing In re Kahn, 441 F.3d 977, 988 (Fed. Cir. 2006)).

Analysis

Nishiyama reasonably teaches a method of receiving signals from a plurality of user inputs where at least one user input is a multi-function input in a first mode to provide a display control input, and in a second mode to provide another function (FF 1-5). Nishiyama further teaches that when operating the at least one user input in the first mode, operation of the at least one user input in the second mode may be inhibited by the user’s actuation of a specific function key (FF 1-5). Bear teaches a button that can execute more than one command (FF 6-7). Bear teaches automatically inhibiting the operation of certain "objects" based on the "focus" of the object within a certain display and executing the command associated with that "focus" (FF 8-12).

However, regarding the claim step of "when operating the at least one user input in the first mode, inhibiting operation of the at least one user input in the second mode, where inhibiting operation is performed automatically by a controller in response to a change in content shown on a display in response to operation in the first mode," the Examiner finds that

    Bear allows the system to automatically determine which mode/function should be executed based on a current state or focus of an application (content shown on a display). Bear describes how the command issued by the system takes into consideration the current object of focus and issues the command accordingly; thereby a change in content by a user using the first mode would be issued accordingly.

(Ans. 13.)

Appellant contends that "nowhere is it taught or suggested that such an operation would be performed in response to a change in content shown on the display in response to an operation in the first mode" (App. Br. 13). Appellant further contends that

    Nowhere in the disclosure of Bear is it taught or suggested that the state of an application or the current focus is directly related to the content shown on the display. Conversely, the state of an application and the current focus can be independent from the content shown on the display. Thus, Bear does not allow the system to automatically determine which mode/function should be executed based on the content shown on the display . . .

(Reply Br. 2).

We find that the Appellant has the better position. The Examiner acknowledges that Nishiyama does not teach "the function of the keys depend[ing] upon the focused content on the display" (Ans. 5). The Examiner provided no evidence that Bear performs this step, and provides no evidence or argument that the step of automatically "inhibiting operation of the at least one user input in the second mode" in response to the content shown on a display would have been obvious to one of ordinary skill in the art. Instead, the Examiner relies upon Bear to show how the active focus element within a display determines logical input (id. at 5). However, as Appellant points out, the "object of focus and the content shown on the display are mutually exclusive in the embodiments taught by Bear" (Reply Br. 2). Consequently, Bear cannot reasonably be interpreted to suggest automatic control of the focus and content relative to one another as required by the "inhibiting operation" limitation in claim 1.
Thus, the Examiner has not provided evidence that Bear teaches the claim limitation that is missing from Nishiyama, and therefore has not satisfied the burden of providing sufficient evidence for a case of obviousness.

Conclusion of Law

The evidence of record does not support the Examiner’s conclusion that Nishiyama and Bear render the claims obvious.

SUMMARY

In summary, we reverse the rejection of claims 1-30 and 32-38 under 35 U.S.C. § 103(a) as obvious over Nishiyama and Bear.

REVERSED

cdc