Ex parte Masuda, No. 12/995,610 (P.T.A.B. May 24, 2018)

UNITED STATES PATENT AND TRADEMARK OFFICE

UNITED STATES DEPARTMENT OF COMMERCE
United States Patent and Trademark Office
Address: COMMISSIONER FOR PATENTS, P.O. Box 1450, Alexandria, Virginia 22313-1450
www.uspto.gov

APPLICATION NO.: 12/995,610
FILING DATE: 12/01/2010
FIRST NAMED INVENTOR: Tooru Masuda
ATTORNEY DOCKET NO.: 353134US8PCT
CONFIRMATION NO.: 2117
EXAMINER: LEE, KWANG B
ART UNIT: 2617
NOTIFICATION DATE: 06/07/2018
DELIVERY MODE: ELECTRONIC

OBLON, MCCLELLAND, MAIER & NEUSTADT, L.L.P.
1940 DUKE STREET
ALEXANDRIA, VA 22314

Please find below and/or attached an Office communication concerning this application or proceeding. The time period for reply, if any, is set in the attached communication. Notice of the Office communication was sent electronically on the above-indicated "Notification Date" to the following e-mail address(es): patentdocket@oblon.com, oblonpat@oblon.com, tfarrell@oblon.com

UNITED STATES PATENT AND TRADEMARK OFFICE
BEFORE THE PATENT TRIAL AND APPEAL BOARD

Ex parte TOORU MASUDA

Appeal 2016-007581¹
Application 12/995,610
Technology Center 2600

Before CARLA M. KRIVAK, CAROLYN D. THOMAS, and JOSEPH P. LENTIVECH, Administrative Patent Judges.

LENTIVECH, Administrative Patent Judge.

DECISION ON APPEAL

Pursuant to 35 U.S.C. § 134(a), Appellant² appeals from the final rejection of claims 1-20, which constitute all the claims pending in this application. An oral hearing was held on April 24, 2018. We have jurisdiction over the pending claims under 35 U.S.C. § 6(b). We affirm-in-part.

¹ The record includes a transcript of the oral hearing held April 24, 2018.
² According to Appellant, the real party in interest is Sony Corporation. App. Br. 2.
STATEMENT OF THE CASE

Appellant's Invention

Appellant's invention generally relates to displaying additional information with a feature point in an image. Spec. ¶ 1. For example, Appellant's invention allows adding hand-drawn data to decorate a moving image in moving image content. Spec. ¶ 23. Claims 1 and 10, which are illustrative, read as follows:

1. An image processing device comprising:
circuitry configured to
detect a feature point from stored moving image data;
associate the feature point with additional information, the feature point being selected based on feature point selection information, wherein the additional information includes image data that is displayed as added to a reproduction of the stored moving image data;
analyze a changing behavior of the feature point through the reproduction of the stored moving image data;
generate data indicating change content to change display of the image data of the additional information associated with the feature point based on motion information indicating the behavior of the feature point analyzed and a display scenario indicating a change pattern to change the additional information associated with the feature point according to the behavior of the feature point; and
control a display device to display the reproduction of the stored moving image data and to display the image data of the additional information based on the change content such that the image data of the additional information has motion corresponding to motion of the feature point of the stored moving image data.

10. The image processing device according to claim 1, wherein when selection information to select at least two feature points is input, the circuitry is configured to generate data to generate a display size of the additional information according to a distance between the at least two feature points.
References

The Examiner relies on the following prior art in rejecting the claims:

Ohmori et al.    US 6,339,431 B1    Jan. 15, 2002
Sonoda et al.    US 2006/0126963 A1    June 15, 2006
Kowald           US 7,606,397 B2    Oct. 20, 2009

Rejection

Claims 1-20 stand rejected under 35 U.S.C. § 103(a) as being unpatentable over the combination of Kowald, Ohmori, and Sonoda. Final Act. 2-17.

ANALYSIS

Claim 1

Issue 1: Did the Examiner err by finding that the combination of Kowald, Ohmori, and Sonoda teaches or suggests "generate data indicating change content to change display of the image data of the additional information associated with the feature point based on motion information indicating the behavior of the feature point analyzed and a display scenario indicating a change pattern to change the additional information associated with the feature point according to the behavior of the feature point," as recited in claim 1?

Appellant contends the cited references do not teach or suggest the disputed limitations. App. Br. 6-8; Reply Br. 1-3. According to Appellant, the Examiner incorrectly finds Kowald's metadata teaches the claimed "additional information." App. Br. 6 (citing Final Act. 3). Appellant argues Kowald does not teach displaying the metadata as added to a reproduction of the edited sequence or generating data indicating change content to change a display of the metadata, as required by claim 1. App. Br. 6-7. Appellant further argues "even assuming arguendo that using metadata (524) as prompts in the editing process in Kowald corresponds to displaying the metadata (524), Kowald still fails to describe generating the change content and changing display of the metadata (524) based on the change content such that the metadata (524) has a motion corresponding to the associated features." App. Br. 7.
Appellant argues "Sonoda does not describe displaying additional information together with the reproduction of the photo movie, much less generating data indicating change content to change the additional information based on the scenario file." Id.; see also Reply Br. 2-3. Appellant argues Sonoda, instead, teaches using the scenario file for creating or editing the photo movie. App. Br. 7. Appellant argues the cited references, therefore, "at best describe creating a movie by altering the sequence of various video segments based on the scenario file, and cannot reasonably be interpreted as describing moving the prompts or annotations based on the scenario file during the playback of the movie." App. Br. 8.

We do not find Appellant's arguments persuasive. The Examiner finds Kowald teaches capturing images via a digital video camera or a digital still camera and storing image data corresponding to the captured images. Ans. 3 (citing Kowald, Fig. 5; 5:10-13, 27-44). Kowald teaches that a classification system analyzes the captured images and outputs classification data, configured as metadata, associated with each image. Kowald 5:27-32. The Examiner also finds Kowald teaches using a template-based approach to output an edited sequence of the captured images based on the metadata. Ans. 3-4 (citing Kowald 7:40-55; 8:20-30; 11:42-50). The Examiner finds Ohmori teaches a system for allowing a user to add an annotation to a dynamic image displayed on a portable terminal. Ans. 7-8 (citing Ohmori, Figs. 1, 2). The Examiner also finds Ohmori teaches displaying the annotation matched to the content of the dynamic image while displaying the dynamic image changing over time. Ans. 8. The Examiner finds Sonoda describes an editing program containing various accompanying data, the accompanying data including a scenario file and decorative images to decorate frame images. Ans. 8.
Thus, the Examiner asserts:

By combining the above editing methods such as the Kowald's [] editing system, the Ohmori's editing annotation, and the Sonoda's editing scenario selection, the combined method can clearly support the reproduction of the images and the video data with changing the additional information based on the scenario file, therefore it would be obvious the method in Kowald as modified by Ohmori and Sonoda to provide a display scenario indicating a change pattern to change the additional information associated with the moving object according to the behavior of the object.

Ans. 8.

As such, the Examiner relies upon the combined teaching of the references to teach or suggest the disputed limitations. Appellant's arguments do not persuasively address the Examiner's findings regarding the combined teachings of the references and, therefore, are unpersuasive of error. See In re Merck & Co. Inc., 800 F.2d 1091, 1097 (Fed. Cir. 1986).

Appellant's argument that the cited references "at best describe creating a movie by altering the sequence of various video segments based on the scenario file, and cannot reasonably be interpreted as describing moving the prompts or annotations based on the scenario file during the playback of the movie" (App. Br. 8) is unpersuasive because it is essentially based on a bodily incorporation of the teachings of Ohmori and Sonoda into the system of Kowald. However, "it is not necessary that the inventions of the references be physically combinable to render obvious the invention under review." In re Sneed, 710 F.2d 1544, 1550 (Fed. Cir. 1983). The relevant inquiry is whether the claimed subject matter would have been obvious to those of ordinary skill in the art in light of the combined teachings of those references. See In re Keller, 642 F.2d 413, 425 (CCPA 1981).
We note for emphasis that Ohmori teaches "if the annotation is added to a person's image in the dynamic image and the person's image moves to another position ... the annotation is also moved by moving of the person's image in the dynamic image." Ohmori 7:41-53. Ohmori further teaches:

[T]he annotation is moved the same distance as the object. However, as shown in FIG. 17, the annotation F5 often disappears when the display position of the annotation moves in synchronization with the object OBJ1 movement. In this case, as shown in FIGS. 18A and 18B, if all image area of the annotation is located inside the frame when the display position of the annotation is moved, the annotation is displayed as it is. As shown in FIG. 18C, when all image area of the annotation is located outside the frame when the display position of the annotation is moved, a reflected image F5' of the annotation F5 may be displayed at opposite position of the object OBJ1. As a method to generate the reflected image F5', a symmetric image of the annotation F5 for a vertical line passing through a center of gravity of the object OBJ1 is created on the frame.

Ohmori 9:35-49; see also Ohmori Figs. 18A-C. Thus, Ohmori teaches or suggests "generat[ing] data indicating change content to change display of the image data of the additional information associated with the feature point [e.g., generation of the reflected image F5'] based on motion information indicating the behavior of the feature point analyzed and a display scenario indicating a change pattern to change the additional information associated with the feature point according to the behavior of the feature point [e.g., when the image area of the annotation is located outside the frame when the display position of the annotation is moved]."
For the foregoing reasons, we are not persuaded the Examiner erred in finding the combination of Kowald, Ohmori, and Sonoda teaches or suggests the disputed limitations.

Issue 2: Did the Examiner err by combining Kowald, Ohmori, and Sonoda?

Appellant contends the combination of Kowald, Ohmori, and Sonoda is improper. App. Br. 8; Reply Br. 3-4. In particular, Appellant contends the combination of cited references is improper because Kowald and Sonoda are non-analogous art. App. Br. 8. In this regard, Appellant argues:

Kowald describes a method and system for automated classification of digital images and/or shots for convenient video editing by a filmmaker. Sonoda describes a frame classification information providing device for creating a photo movie. Therefore, at best Kowald and Sonoda are directed to technology pertinent to editing a piece of moving image data itself. In contrast, Claim 1 of the present application is directed to arranging movement of an image that is displayed as added to a reproduction of a piece of moving image data. As such, the structure and method of editing a video in Kowald and Sonoda and the present application cannot be considered by a person having ordinary skill in the art as being directed to the same field of endeavor, and the matter Kowald and Sonoda deal with cannot logically commend itself to the attention of one of ordinary skill in the art in considering the problem he/she faced.

App. Br. 8.

We are not persuaded by Appellant's contentions. "A reference qualifies as prior art for an obviousness determination under § 103 only when it is analogous to the claimed invention." In re Klein, 647 F.3d 1343, 1348 (Fed. Cir. 2011) (citation omitted).
"Two separate tests define the scope of analogous prior art: (1) whether the art is from the same field of endeavor, regardless of the problem addressed and, (2) if the reference is not within the field of the inventor's endeavor, whether the reference still is reasonably pertinent to the particular problem with which the inventor is involved." In re Bigio, 381 F.3d 1320, 1325 (Fed. Cir. 2004) (citation omitted); see also Klein, 647 F.3d at 1348.

Although the Specification describes and recites displaying image data of additional information based on change content such that the image data of the additional information has motion corresponding to motion of a feature point of moving image data in the written description and claims, "[t]he field of endeavor of a patent is not limited to the specific point of novelty, the narrowest possible conception of the field, or the particular focus within a given field." Unwired Planet, LLC v. Google Inc., 841 F.3d 995, 1001 (Fed. Cir. 2016). Appellant's Specification provides that the invention relates to "an image processing device, an image processing method, and computer program that display additional information displayed to be accompanied with a feature point in an image." Spec. ¶ 1; see also Spec., Title. Thus, we find the field of endeavor is image processing devices and methods. Kowald is related to automated classification of digital images and editing a sequence of images based upon the classification to achieve a desired aesthetic effect. Kowald, Abstract. Sonoda is related to an image editing device for creating a photo movie based on frame classification information. Sonoda, Abstract. As such, we find Kowald and Sonoda are both directed to image processing devices and methods and, therefore, within Appellant's field of endeavor.
For the foregoing reasons, we are not persuaded the Examiner erred in combining Kowald, Ohmori, and Sonoda.

Claim 10

Issue 3: Did the Examiner err by finding the combination of Kowald, Ohmori, and Sonoda teaches or suggests "wherein when selection information to select at least two feature points is input, the circuitry is configured to generate data to generate a display size of the additional information according to a distance between the at least two feature points," as recited in claim 10?

With respect to claim 10, the Examiner finds Kowald teaches "adding extra information and selecting a desired template to achieve a desired visual effect" and, therefore, teaches or suggests "wherein when selection information to select at least two feature points is input." Final Act. 9 (citing Kowald 1:31-32, 10:1-17). The Examiner finds Ohmori teaches or suggests the remaining limitations of claim 10. Id. (citing Ohmori, Figs. 15-16; claim 1; col. 8:47-63, 9:7-22). In the Answer, the Examiner finds:

Kowald discloses determining a size of the located face with respect to a size of the image; and classifying the image based on the relative size of the face with respect to the image in col. 4:26-34 (Kowald). In addition, the Kowald uses a zoom feature, so that during the zoom, the image is automatically cropped to retain a size within that of the display. Once the correct sequence is formed, the sequence is edited in step [720] by applying the selected template to the sequence, and this results in the output presentation of step [722] which may be sent for storage or directly reproduced to a display arrangement in col. 11:11-19, 38-50 (Kowald).

Ans. 10.
Appellant contends the combination of Kowald, Ohmori, and Sonoda does not teach or suggest the limitations recited in claim 10 because the cited references do not teach or suggest determining a display size of an annotation based on two selected objects. App. Br. 9; Reply Br. 4. According to Appellant, "Ohmori describes that, when a user inputs an annotation, the system selects the object that has the least distance to the input annotation among plural objects as the object associated with the input annotation." App. Br. 9 (citing Ohmori 8:47-63). Appellant argues Ohmori, however, "does not describe determining a display size of the annotation based on two selected objects," as required by claim 10. Id. Appellant argues Kowald's teaching regarding the use of a zoom feature fails to teach or suggest the disputed limitation because Kowald does not teach that the zoom feature generates data for generating a display size of the additional information according to a distance between at least two feature points, as required by claim 10. Reply Br. 4.

We find Appellant's arguments persuasive. Ohmori teaches "[b]y referring to the display position [of an object] stored as the variables M, m, the annotation transformation processing section 105f calculates a moving distance (Xm, Ym) of the object OBJ1 from the previous frame to the present frame, and moves the display position of the annotation as (Xm, Ym) on the present frame," thus the object and the annotation are moved an equal distance. Ohmori 9:6-18; see also Ohmori, Fig. 16; col. 9:60-63, 10:32-34. Ohmori, therefore, teaches determining a movement of additional information (e.g., an annotation) according to a distance between two feature points.
With respect to a size of the annotation, Ohmori teaches that the size of the annotation is enlarged or reduced in proportion to the enlargement or reduction of the size of the object and not according to the distance between at least two feature points, as required by claim 10. See Ohmori col. 9:50-55. We agree with the Examiner that Kowald teaches determining a size of a face with respect to a size of an image and automatically cropping an image to retain a size within that of the display. Ans. 10 (citing Kowald 4:26-34; 11:11-19, 38-50). However, the Examiner's findings fail to explain how determining a size of a face with respect to a size of an image and automatically cropping the image to retain a size within that of the display, as taught by Kowald, teaches or suggests generating a display size of the additional information according to a distance between the at least two feature points, as required by claim 10. Accordingly, we do not sustain the Examiner's rejection of claim 10.

DECISION

We affirm the Examiner's rejection of claims 1-9 and 11-20. We reverse the Examiner's rejection of claim 10.

No time period for taking any subsequent action in connection with this appeal may be extended under 37 C.F.R. § 1.136(a)(1)(iv).

AFFIRMED-IN-PART