Ex parte Sundar, Appeal No. 2004-0278 (B.P.A.I. Apr. 20, 2004)

The opinion in support of the decision being entered today was not written for publication and is not binding precedent of the Board.

Paper No. 22

UNITED STATES PATENT AND TRADEMARK OFFICE

BEFORE THE BOARD OF PATENT APPEALS AND INTERFERENCES

Ex parte SATISH SUNDAR

Appeal No. 2004-0278
Application No. 09/626,362

ON BRIEF

Before FRANKFORT, McQUADE, and NASE, Administrative Patent Judges.

NASE, Administrative Patent Judge.

DECISION ON APPEAL

This is a decision on appeal from the examiner's final rejection of claims 6 to 11 and 15 to 20. Claims 1 to 5 and 12 to 14, the only other claims pending in this application, have not been appealed (brief, p. 2).

We REVERSE.

BACKGROUND

The appellant's invention generally relates to automatic calibration of a robot in a processing system (specification, p. 1). A copy of the dependent claims under appeal is set forth in the appendix to the appellant's brief.1

1 Claim 18 reads as follows: "The method of claim 10, the path generated is within an useable free-space of the processing system." However, parent claim 10 is not a method claim but an apparatus claim. The appellant may wish to amend claim 18 to read as follows: "The apparatus of claim 10 wherein the path generated is within an useable free-space of the processing system."

Claims 6 and 10, the independent claims under appeal, read as follows:

6.
A method for calibrating a robot in a processing chamber, comprising:

a) determining a useable free-space in the processing chamber;
b) positioning an emitter at a target point;
c) positioning a receiver at a point on an end effector of the robot;
d) determining a distance between the point on the end effector of the robot and the target point using signals from the emitter and the receiver; and
e) determining a path from the point on the end effector of the robot and the target point, using a path planning algorithm that minimizes a distance function between the point on the end effector of the robot and the target point within the useable free space.

10. An apparatus for calibrating a robot in a processing system, comprising:

a) a sensor disposed at a location to be taught;
b) a receiver disposed on a robot end effector; and
c) a microprocessor connected to receive signals from the sensor and the receiver to determine a distance between the sensor and the receiver, wherein the microprocessor generates a path using incremental movements that minimizes the distance between the sensor and the receiver.

The prior art references of record relied upon by the examiner in rejecting the appealed claims are:

Mizuno et al. (Mizuno)    5,876,325    Mar. 2, 1999
Greer                     6,128,585    Oct. 3, 2000

Claims 6 to 11 and 15 to 20 stand rejected under 35 U.S.C. § 102(a) as being anticipated by Mizuno.

Claims 6 to 11 and 15 to 20 stand rejected under 35 U.S.C. § 102(a) as being anticipated by Greer.

Rather than reiterate the conflicting viewpoints advanced by the examiner and the appellant regarding the above-noted rejections, we make reference to the final rejection (Paper No. 8, mailed March 27, 2002) and the answer (Paper No. 18, mailed March 7, 2003) for the examiner's complete reasoning in support of the rejections, and to the brief (Paper No. 17, filed November 27, 2002) and reply brief (Paper No.
19, filed May 6, 2003) for the appellant's arguments thereagainst.

OPINION

In reaching our decision in this appeal, we have given careful consideration to the appellant's specification and claims, to the applied prior art references, and to the respective positions articulated by the appellant and the examiner. As a consequence of our review, we make the determinations which follow.

A claim is anticipated only if each and every element as set forth in the claim is found, either expressly or inherently described, in a single prior art reference. Verdegaal Bros. Inc. v. Union Oil Co., 814 F.2d 628, 631, 2 USPQ2d 1051, 1053 (Fed. Cir.), cert. denied, 484 U.S. 827 (1987).

The anticipation based on Mizuno

We will not sustain the rejection of claims 6 to 11 and 15 to 20 under 35 U.S.C. § 102(a) as being anticipated by Mizuno.

Mizuno provides a surgical manipulator system which comprises a manipulator and a remote-control device for controlling the manipulator, and in which the manipulator can be moved in a desired direction, not restricted by the positional relation between it and the remote-control device. Figure 11 is a schematic representation of one embodiment of the surgical manipulator system. As shown, the system comprises a surgical manipulator 51 and a 3D position sensor 52. The manipulator 51 has a structure similar to the slave manipulators 3 and 4 incorporated in the embodiment shown in Figure 1. The manipulator 51 is secured to one side of an operating table and is controlled in the same way as either slave manipulator used in the first embodiment. The 3D position sensor 52 is one designed to remote-control a manipulator, and is used to remote-control the surgical manipulator 51. To state more precisely, the sensor 52 comprises a source coil 53 and a sense coil 54.
Either coil is substantially a cube and has three coil elements wound around three orthogonal axes, for generating and receiving magnetic fields. A drive circuit (not shown) supplies pulse currents, one after another, to the coil elements of the source coil 53. The three coil elements of the source coil 53 generate three reference magnetic fields in the space occupied by the surgical manipulator system, as shown in Figure 13. Hence, the coil elements constitute a transmitting section. The sense coil 54 detects the reference magnetic fields generated by the source coil 53 and determines its own position and orientation. Hence, the sense coil 54 functions as a receiving section. Both coils 53 and 54, or at least the sense coil 54, is small enough to be gripped with the hand 56 as illustrated in Figure 11. Alternatively, the sense coil 54 may be fixed to the HMD (Head Mounted Display) which the surgeon wears, so that the slave manipulator may be operated as the surgeon's head moves.

As shown in Figure 12, the sensor 52 further comprises a relative position detecting circuit 58 for detecting the position and orientation of the sense coil 54, both relative to the source coil 53. Provided outside the sensor 52 is a manipulator controller 59. The controller 59 is designed to remote-control the surgical manipulator 51 in accordance with the relative 3D position detected by the coils 53 and 54 of the 3D position sensor 52. The 3D position sensor 52 and the manipulator controller 59 constitute a remote-control system 55 for controlling the manipulator 51.

Mizuno teaches (column 14, line 35, to column 15, line 18) that:

The remote-control system 55 can be applied to the first embodiment, i.e., the surgical manipulator system shown in FIG. 1, as illustrated in FIG. 14. As shown in FIG.
14, two slave manipulators 3 and 4 are secured at their bases 7 and 8 to the right and left sides of an operating table 1, respectively. Two source coils 53, each functioning as a transmitting section, are secured to the right side of the operating table 1. Two sense coils 54, each functioning as a receiving section, are held in the hands of a surgeon or attached to the head band the surgeon wears. The first source coil 53 and the first sense coil 54 constitute a first 3D position sensor 52a. The second source coil 53 and the second sense coil 54 constitute a second 3D position sensor 52b.

The first slave manipulator 3 is operated in accordance with the relative 3D positions of the coils 53 and 54 of the first position detector 52a. A task coordinate system 62 is set, in addition to the base coordinate system 61 of the first 3D position sensor 52a. The slave manipulator 3 is operated such that the vector P, representing the position which the sensor coordinate system 63 set for the sense coil 54 assumes in the task coordinate system 62, coincides with the vector p, representing the position which the TCP of the first slave manipulator 3 takes in the base coordinate system 13 set for the first slave manipulator 3. The task coordinate system 62 can be freely altered, by using the sensor coordinate system 63 (an orthogonal coordinate system) set for the sense coil 54. For example, a sensor coordinate system 63 set at a time may be used as the task coordinate system 62. Once the task coordinate system 62 has been set at any desired position, the base coordinate system 13 of the slave manipulator 3 (in which the TCP is present) becomes identical in orientation to the task coordinate system 62 (in which the sensor coordinate system 63 exists). This makes it easy for the surgeon to operate the slave manipulator 3.
The second slave manipulator 4 is operated in accordance with the relative 3D positions of the coils 53 and 54 of the second position detector 52b. The second slave manipulator 4 is operated such that the vector Q, representing the position which the sensor coordinate system 63 set for the sense coil 54 assumes in the base coordinate system 61 set for the second position detector 52b, coincides with the vector q, representing the position which the TCP of the second slave manipulator 4 takes in the task coordinate system 65 set freely for the second slave manipulator 4. The task coordinate system 65 can be freely altered, by the first or second method applied in the first embodiment. Once the task coordinate system 65 has been set, the vector Q and the vector q become identical in orientation as shown in FIG. 14. The surgeon can therefore operate the second slave manipulator 4 with ease.

In the rejection of claims 6 to 11 and 15 to 20 based on Mizuno, the examiner (final rejection, pp. 2-3) determined that (1) the claimed sensor/emitter was readable on Mizuno's sense coil 54; (2) the claimed receiver disposed on a robot end effector was readable on Mizuno's detecting circuit 58; and (3) the claimed microprocessor to determine a distance between the sensor/emitter and the receiver was readable on Mizuno's manipulator controller 59. The examiner further stated that Mizuno's manipulator controller 59 inherently "will take the most efficient path between the current position of the end effector and the target [i.e., a path using incremental movements that minimizes the distance between the sense coil 54 and the detecting circuit 58]."
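For orientation, the disputed limitation (a path generated by incremental movements that monotonically reduces the distance between the receiver point and the target, while remaining inside a useable free-space) can be illustrated with a short sketch. The sketch below is the editor's illustration only; it is not code from the application or from the references, and the step size, tolerance, and free-space test are invented placeholders.

```python
import math

def incremental_path(start, target, in_free_space, step=1.0, tol=0.5):
    """Build a path as a list of waypoints, each an incremental move that
    reduces the Euclidean distance to `target`; every waypoint is checked
    against a caller-supplied useable free-space predicate."""
    point = tuple(float(c) for c in start)
    target = tuple(float(c) for c in target)
    path = [point]
    while True:
        remaining = math.dist(point, target)
        if remaining <= tol:
            return path
        # Move at most `step` units straight toward the target.
        scale = min(step, remaining) / remaining
        candidate = tuple(p + scale * (t - p) for p, t in zip(point, target))
        if not in_free_space(candidate):
            raise ValueError("incremental move would leave the useable free-space")
        point = candidate
        path.append(point)

# Example: a box-shaped stand-in for the chamber's free-space.
inside_box = lambda p: all(abs(c) <= 20.0 for c in p)
path = incremental_path((0, 0, 0), (10, 0, 0), inside_box)
```

With these placeholder values the sketch yields eleven waypoints whose distances to the target strictly decrease, which is the behavior the examiner read into the references but which, as discussed below, neither reference actually discloses.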
The appellant argues throughout the briefs that Mizuno does not disclose either (1) determining a path from the point on the end effector of the robot and the target point, using a path planning algorithm that minimizes a distance function between the point on the end effector of the robot and the target point within the useable free space, as recited in claim 6, or (2) a microprocessor connected to receive signals from the sensor and the receiver to determine a distance between the sensor and the receiver, wherein the microprocessor generates a path using incremental movements that minimizes the distance between the sensor and the receiver, as recited in claim 10. We agree. In that regard, there is no disclosure whatsoever in Mizuno of determining the distance between the source coil 53 and the sense coil 54, or of the generation of a path that minimizes the distance between the source coil 53 and the sense coil 54. As such, claims 6 and 10 are not anticipated by Mizuno. The examiner's position that such limitations are inherent in Mizuno is sheer speculation unsupported by the teachings of Mizuno.

Since all the limitations of claims 6 and 10 are not disclosed in Mizuno for the reasons set forth above, the decision of the examiner to reject independent claims 6 and 10, and claims 7 to 9, 11 and 15 to 20 dependent thereon, under 35 U.S.C. § 102(a) as anticipated by Mizuno is reversed.

The anticipation based on Greer

We will not sustain the rejection of claims 6 to 11 and 15 to 20 under 35 U.S.C. § 102(a) as being anticipated by Greer.

Greer discloses a method and apparatus for calibrating a noncontact gauging sensor with respect to an external coordinate system. A sensor array is positioned at a vantage point to detect and calibrate its reference frame to the external reference frame demarcated by light-emitting reference indicia.
The sensor array encompasses a wide view calibration field and provides data indicating the spatial position of light sources placed within the calibration field. A tetrahedron framework with light-emitting diodes at the vertices serves as a portable reference target that is placed in front of the feature sensor to be calibrated. The sensor array reads and calibrates the position of the light-emitting diodes at the vertices while the structured light of the feature sensor is projected onto the framework of the reference target. The structured light intersects with and reflects from the reference target, providing the feature sensor with positional and orientation data. These data are correlated to map the coordinate system of the feature sensor to the coordinate system of the external reference frame. A computer-generated virtual image display compares desired and actual sensor positions through a real-time feedback system, allowing the user to properly position the feature sensor.

A typical gauging station for an automotive vehicle part as shown in Figure 1 could take the form shown in Figure 2. Workpieces to be gauged at gauging station 200 rest on transporting pallets 220, which are moved along an assembly line via pallet guides 230 that pass through guide channels 231 in the pallet. At the gauging station 200, a sensor mounting frame 210 (only one half of which is shown in perspective in Figure 2) surrounds the workpiece 100 to be gauged and provides a plurality of mounting positions for a series of optical gauging sensors or feature sensors 240-1 through 240-n. The calibration system includes a calibration sensor array 300 that may be positioned at a convenient vantage point, such as above the space that is occupied by the workpiece in the gauging station. The calibration system further includes a portable reference target 400.
The reference target 400 can be mounted on any suitable fixture, allowing it to be positioned in front of the feature sensor 240 for the calibration operation. In Figure 3, a first reference target 400 is shown attached to a simple tripod stand 402 with cantilevered arm 404. A second reference target 400b is attached by bracket directly to the feature sensor 240. Referring to Figure 6, the portable reference target 400 comprises a framework of individual struts 402 with light-emitting diodes 404 at the vertices. The light-emitting diodes are preferably separately controllable to allow them to be selectively and individually illuminated by the calibration system.

Greer teaches (column 7, lines 28-57) that:

Referring to FIGS. 7a-7c, the calibration technique will now be described. Referring to FIG. 7a, the sensor array 300 is first calibrated with respect to the fixed reference frame RE. This is done by sequentially illuminating light-emitting diodes 280a-280c while the sensor array collects data on the position of each. These data are then used to calibrate the sensor array with respect to the external reference frame RE. This is illustrated diagrammatically by the dashed line C1. Referring next to FIG. 7b, the portable reference target is placed within the calibration field of the sensor array and the light-emitting diodes at the vertices of the reference target 400 are sequentially illuminated while data are collected by the sensor array. These data are then used to calibrate the position of the portable reference target with respect to the fixed reference frame RE. This is illustrated diagrammatically by dashed line C2. Finally, the feature sensor 240 projects structured light onto the portable reference target 400 and the feature sensor collects reflected light data from the portable reference target.
Specifically, the position of the straight edges of the target are ascertained and these are used to describe the spatial position of the reference target in the reference frame of the feature sensor. Once these data are collected, the feature sensor is then calibrated to the fixed reference frame RE, as diagrammatically illustrated by dashed line C3. In the foregoing example, the sensor array was calibrated first, the position of the portable reference target was calibrated second, and the feature sensor was calibrated third. Of course, these operations could be performed in a different sequence to achieve the same end result.

Greer further teaches (column 8, line 42, to column 9, line 11) that:

Referring to FIG. 9, the system for providing this virtual image sensor positioning feature is illustrated. FIG. 9 is a software block diagram showing the components that may be used to generate the virtual image. The desired position of the feature sensor is stored in data structure 500 as a vector representing the six degrees of freedom (x, y, z, pitch, yaw and roll) needed to specify the position and orientation of the feature sensor. The desired position data can be predetermined by calibration using an existing workpiece, such as a master part. However, in many applications it may be more expedient to supply the sensor position data directly from the computer-aided design workstation that is used to generate the manufactured workpiece specifications. In this regard, CAD workstation 502 is illustrated in FIG. 9. Meanwhile, the software system must also ascertain the current position of the feature sensor, and this information is stored in data structure 504, also as a vector representing the six degrees of freedom. The current position of the feature sensor is generated using data that is captured in real time from the calibration sensor array and the feature sensor being positioned.
Specifically, the sensor array is used to detect the reference indicia that are related to the external reference frame. These data are stored in data structure 506. The portable reference target is placed in front of the feature sensor; it may be physically attached to the front of the sensor by means of a convenient bracket, if desired. As will be seen, the precise positioning of the reference target in front of the feature sensor is not critical. The light-emitting diodes at the vertices of the reference target are sequentially illuminated and read by the sensor array. These data are captured as reference target LED data and stored in data structure 508. Finally, the feature sensor is used to project structured light onto the reference target and the reflected pattern returned from that target is read by the feature sensor as structured light data. These data are stored in data structure 510.

In the rejection of claims 6 to 11 and 15 to 20 based on Greer, the examiner (final rejection, pp. 5-6) determined that (1) the claimed sensor/emitter was readable on Greer's portable reference target 400; (2) the claimed receiver disposed on a robot end effector was readable on Greer's feature sensor 240; and (3) the claimed microprocessor to determine a distance between the sensor/emitter and the receiver was readable on Greer's CAD workstation 502. The examiner further stated that Greer's CAD workstation 502 inherently "will take the most efficient path between the current position of the end effector and the target [i.e., a path using incremental movements that minimizes the distance between the portable reference target 400 and the feature sensor 240]."
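Greer's three calibrations (C1: sensor array to the external reference frame RE; C2: reference target to RE via the array; C3: feature sensor to RE via the target) amount to chaining coordinate-frame transforms. The sketch below is the editor's illustration, not code from Greer; for brevity it uses translation-only homogeneous matrices, and all names and numeric offsets are invented examples.

```python
def matmul4(a, b):
    """Product of two 4x4 homogeneous transforms (row-major lists)."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def translation(dx, dy, dz):
    """Pure-translation homogeneous transform (rotation omitted for brevity)."""
    return [[1.0, 0.0, 0.0, dx],
            [0.0, 1.0, 0.0, dy],
            [0.0, 0.0, 1.0, dz],
            [0.0, 0.0, 0.0, 1.0]]

# C1: sensor array pose calibrated in the external reference frame RE.
re_from_array = translation(0.0, 0.0, 3.0)
# C2: portable reference target pose as measured by the sensor array.
array_from_target = translation(1.0, 2.0, -3.0)
# C3: feature-sensor pose recovered from its structured-light view of the target.
target_from_feature = translation(0.5, 0.0, 0.2)

# Chaining C1, C2, and C3 calibrates the feature sensor to RE.
re_from_feature = matmul4(matmul4(re_from_array, array_from_target),
                          target_from_feature)
# Translation column of re_from_feature: (1.5, 2.0, 0.2)
```

Note that the chain, as Greer observes, can be evaluated in a different order because matrix composition of the three calibrations yields the same end result; what it computes is a static pose, not a distance-minimizing path.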
The appellant argues throughout the briefs that Greer does not disclose either (1) determining a path from the point on the end effector of the robot and the target point, using a path planning algorithm that minimizes a distance function between the point on the end effector of the robot and the target point within the useable free space, as recited in claim 6, or (2) a microprocessor connected to receive signals from the sensor and the receiver to determine a distance between the sensor and the receiver, wherein the microprocessor generates a path using incremental movements that minimizes the distance between the sensor and the receiver, as recited in claim 10. We agree. In that regard, there is no disclosure in Greer of determining the distance between the portable reference target 400 and the feature sensor 240, or of the generation of a path that minimizes the distance between the portable reference target 400 and the feature sensor 240. As such, claims 6 and 10 are not anticipated by Greer. The examiner's position that such limitations are inherent in Greer is total speculation unsupported by the teachings of Greer.

Since all the limitations of claims 6 and 10 are not disclosed in Greer for the reasons set forth above, the decision of the examiner to reject independent claims 6 and 10, and claims 7 to 9, 11 and 15 to 20 dependent thereon, under 35 U.S.C. § 102(a) as anticipated by Greer is reversed.

CONCLUSION

To summarize, the decision of the examiner to reject claims 6 to 11 and 15 to 20 under 35 U.S.C. § 102(a) is reversed.

REVERSED

CHARLES E. FRANKFORT, Administrative Patent Judge
JOHN P. McQUADE, Administrative Patent Judge
JEFFREY V. NASE, Administrative Patent Judge
BOARD OF PATENT APPEALS AND INTERFERENCES

APPLIED MATERIALS, INC.
2881 SCOTT BLVD.
M/S 2061
SANTA CLARA, CA 95050

JVN/jg