Weka.IO LTD
Patent Trial and Appeal Board
July 9, 2021
2020002043 (P.T.A.B. Jul. 9, 2021)

UNITED STATES PATENT AND TRADEMARK OFFICE
UNITED STATES DEPARTMENT OF COMMERCE
United States Patent and Trademark Office
Address: COMMISSIONER FOR PATENTS, P.O. Box 1450, Alexandria, Virginia 22313-1450, www.uspto.gov

APPLICATION NO.: 15/283,553
FILING DATE: 10/03/2016
FIRST NAMED INVENTOR: Maor Ben Dayan
ATTORNEY DOCKET NO.: 60041US02
CONFIRMATION NO.: 9545

23446 7590 07/09/2021
MCANDREWS HELD & MALLOY, LTD
500 WEST MADISON STREET, SUITE 3400
CHICAGO, IL 60661

EXAMINER: VERDERAMO III, RALPH A
ART UNIT: 2136
NOTIFICATION DATE: 07/09/2021
DELIVERY MODE: ELECTRONIC

Please find below and/or attached an Office communication concerning this application or proceeding. The time period for reply, if any, is set in the attached communication. Notice of the Office communication was sent electronically on the above-indicated "Notification Date" to the following e-mail address(es): mhmpto@mcandrews-ip.com

UNITED STATES PATENT AND TRADEMARK OFFICE
BEFORE THE PATENT TRIAL AND APPEAL BOARD

Ex parte MAOR BEN DAYAN, LIRAN ZVIBEL, and OMRI PALMON

Appeal 2020-002043
Application 15/283,553
Technology Center 2100

Before JOHN A. EVANS, JUSTIN BUSCH, and JOHN P. PINKERTON, Administrative Patent Judges.

PINKERTON, Administrative Patent Judge.

DECISION ON APPEAL

Appellant1 appeals under 35 U.S.C. § 134(a) from the Examiner's Final Rejection of claims 1, 3–8, 10–14, and 16–54. Claims 2, 9, and 15 are canceled. We have jurisdiction under 35 U.S.C. § 6(b). We AFFIRM IN PART.

1 We use the word "Appellant" to refer to "applicant" as defined in 37 C.F.R. § 1.42. Appellant identifies Weka.IO LTD as the real party in interest. Appeal Br. 1.

STATEMENT OF THE CASE

Introduction

Appellant generally describes the disclosed and claimed invention as relating to a storage system comprising one or more computing devices configured to provide one or more distributed file systems to client applications. Spec. ¶ 7. Figure 1 is reproduced below.

Figure 1 depicts an example implementation of storage system 100. Id. ¶ 11. As shown in Figure 1, client application 101 may generate a network file system ("NFS") request or file system call, in response to which an I/O request may be generated by NFS server 103 or file system driver 105 and sent to storage system front end 107. Id. Storage system front end 107 relays the data to the relevant storage back end 111 of storage system back ends 109, which stores the relevant information on object storage 115 and communicates with storage system SSD agent 119. Id. SSD agent 119 contains SSD 121 and provides the back ends 109 with access to it. Id. ¶ 16. File system metadata is stored on an SSD, and relevant data may be migrated from an SSD to object storage as a background asynchronous process. Id. ¶ 27.

Claims 1, 8, 14, 21, 27, 32, 38, 44, and 49 are independent.
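To make the data path summarized above (Spec. ¶¶ 11, 16, 27) easier to follow, the sketch below models that flow in Python: a front end receives an I/O request, selects a back end, and the back end writes data and metadata to an SSD through an SSD agent. Every class, function, and routing choice here is a hypothetical illustration added for this summary; none of it is taken from the Specification, the claims, or Youngworth.

```python
# Hypothetical sketch of the data path summarized above; all names are
# illustrative and are not taken from the Specification or the claims.
from dataclasses import dataclass, field


@dataclass
class SSD:
    """A solid state drive holding data blocks and file system metadata."""
    blocks: dict = field(default_factory=dict)
    metadata: dict = field(default_factory=dict)


@dataclass
class SSDAgent:
    """Mediates access to one SSD on behalf of the back ends."""
    ssd: SSD

    def write(self, key, data, meta):
        self.ssd.blocks[key] = data
        self.ssd.metadata[key] = meta   # metadata communicated via the agent


@dataclass
class BackEnd:
    """One storage system back end; several may run on each server."""
    agent: SSDAgent

    def handle_write(self, key, data):
        self.agent.write(key, data, meta={"length": len(data)})


class FrontEnd:
    """Front end on a server: receives I/O requests and routes them."""

    def __init__(self, back_ends):
        self.back_ends = back_ends

    def receive_write(self, key, data):
        # Determine the relevant back end for this request (hash routing
        # is an assumption made only for this sketch).
        back_end = self.back_ends[hash(key) % len(self.back_ends)]
        back_end.handle_write(key, data)   # relay the request information


if __name__ == "__main__":
    back_ends = [BackEnd(SSDAgent(SSD())) for _ in range(3)]
    FrontEnd(back_ends).receive_write("file-1/block-0", b"hello")
```

The erasure-coded striping across failure domains and the "background asynchronous" migration to an object store recited in claim 1 are not modeled here; the migration step is sketched separately in the Analysis section below.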
Independent claims 1 and 21, which are reproduced below, are illustrative of the subject matter on appeal:

1. A method for operating a storage system, comprising:
receiving an I/O request by a storage system front end on a first server of a plurality of servers;
determining a relevant storage system back end, of a plurality of storage system back ends, according to the I/O request, each server of the plurality of servers comprising one or more storage system back ends of the plurality of storage system back ends, wherein the plurality of storage system back ends are organized into a plurality of erasure-coded stripes, and wherein each erasure-coded stripe of the plurality of erasure-coded stripes spans more than one server of the plurality of servers, and wherein each server spanned by an erasure-coded stripe is located in a different failure domain;
relaying information associated with the I/O request to the relevant storage system back end;
communicating metadata associated with the I/O request between the relevant storage system back end and a first solid state drive (SSD) of a plurality of SSDs via an SSD agent of a plurality of SSD agents;
writing the information associated with the I/O request to the first SSD, wherein the I/O request is a write operation; and
migrating the information associated with the I/O request from the first SSD to an object store as a background asynchronous process.

Appeal Br. 21 (Claims App.).

21. A method for operating a storage system, comprising:
receiving an I/O request by a storage system front end on a first server of a plurality of servers;
determining a relevant storage system back end, of a plurality of storage system back ends, according to the I/O request, each server of the plurality of servers comprising one or more storage system back ends of the plurality of storage system back ends, wherein the plurality of storage system back ends are organized into a plurality of erasure-coded stripes, and wherein each erasure-coded stripe of the plurality of erasure-coded stripes spans more than one server of the plurality of servers, and wherein each server spanned by an erasure-coded stripe is located in a different failure domain;
relaying information associated with the I/O request to the relevant storage system back end;
communicating metadata associated with the I/O request between the relevant storage system back end and a first solid state drive (SSD) of a plurality of SSDs via an SSD agent of a plurality of SSD agents;
adding a second SSDs to the plurality of SSDs of the storage system; and
redistributing data already written to the plurality of SSDs, wherein the redistribution is a virtualization of the storage system across the plurality of SSDs.

Id. at 25–26 (Claims App.).

Rejection on Appeal

Claims 1, 3–8, 10–14, and 16–54 stand rejected under 35 U.S.C. § 102(a)(1) as being anticipated by Youngworth (US 2015/0006846 A1; published Jan. 1, 2015). Final Act. 2–38.

ANALYSIS

Claims 1, 3–8, 10–14, 16–20, 22, 23, 28, 33, 34, 39, 40, 45, 50, and 51

Claim 1 recites, in relevant part, "migrating the information associated with the I/O request from the first SSD to an object store as a background asynchronous process." Appeal Br. 21 (Claims App.). Claims 3–8, 10–14, 16–20, 22, 23, 28, 33, 34, 39, 40, 45, 50, and 51 recite similar limitations. See id. at 21–25, 26, 27, 29, 30, 32, 33 (Claims App.).
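The limitation quoted above, "migrating ... as a background asynchronous process," is the focus of the analysis that follows. As a generic illustration of that technique (not a description of Appellant's system or of Youngworth, and with all names hypothetical), the sketch below shows a write completing once data reaches a fast tier while a separate background worker later moves it to an object store without blocking the write path.

```python
# Generic illustration of a background asynchronous migration -- hypothetical,
# not drawn from the Specification or from Youngworth.
import queue
import threading


ssd_tier = {}       # fast tier: data lands here first
object_store = {}   # capacity tier: data migrates here later
migration_queue = queue.Queue()


def write(key, data):
    """Foreground write path: returns as soon as the SSD tier holds the data."""
    ssd_tier[key] = data
    migration_queue.put(key)        # schedule migration; do not wait for it


def migrator():
    """Background task: drains the queue and moves data to the object store."""
    while True:
        key = migration_queue.get()
        if key is None:             # sentinel used to stop the demo cleanly
            break
        object_store[key] = ssd_tier.pop(key)


if __name__ == "__main__":
    worker = threading.Thread(target=migrator, daemon=True)
    worker.start()
    write("file-1/block-0", b"hello")   # returns immediately
    migration_queue.put(None)           # stop the background worker
    worker.join()
    print(sorted(object_store))         # ['file-1/block-0']
```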
In rejecting claim 1, the Examiner finds Youngworth discloses this limitation. Final Act. 4–5 (citing Youngworth ¶¶ 36, 142–44, 176, 224–25); see also Ans. 40–41 (additionally citing Youngworth ¶¶ 178, 181). Appellant contends the Examiner's findings for this limitation are erroneous. Appeal Br. 14–16; Reply Br. 4–5; see also Appeal Br. 16–20 (referring to the same arguments for claims 8, 14, 22, 28, 33, 39). For the reasons stated below, we agree with Appellant that the Examiner erred.

Youngworth is related to storing files in a computer storage system including a plurality of memory-storage hosts. Youngworth ¶ 18, Abstract. In operation, an I/O (such as a read or write) request targeting a block or chunk of data may be made by a client and satisfied by a portion of a local or remote cluster of physical storage disks based on certain mapping algorithms. Id. ¶¶ 41, 58, 71, 146.

Figure 4 of Youngworth is reproduced below.

Figure 4 of Youngworth depicts a storage system 400 for provisioning logical unit numbers ("LUN") as storage units. Id. ¶ 66. As shown in Figure 4, physical node 402 includes Saratoga Speed Block Level Cluster ("SSBLC") controller 404, which sits behind SAN Target 406, and storage 408 that provides a local mirror for stored data. Id. Controller 404 includes data structures 410, which relate exported LUN 412 to NA_LUN 414. Id. NA_LUN 414 includes chunk records for stored chunks of data, and CRUSH Hash module 416 maps a data chunk to location(s) in storage 408 (for example, a local mirror) or storage 418 (for example, a remote mirror). Id. Chunks may map to the physical storage of more than one disk on more than one physical node for mirroring or some other form of RAID. Id. ¶ 137.

Each physical node in Youngworth operates as a back end and as a front end. Id. ¶ 37. In the front end, storage may be exported from physical node 402 to external clients via fiber channel 422 (or equivalent networking structure) to access SAN Target 406's target LUN 424, which corresponds to exported LUN 412. Id. ¶¶ 37, 66. In the back end, mapping to the local and clustered set of disks may be employed to satisfy storage requests. Id. ¶ 37. LUNs 412 may be backed by storage segmented into chunks with IDs hashed by crush hash 416 to spread the locations to which chunk data will be assigned across multiple disks and chassis spanning the mirrors according to LUN policy. Id. ¶ 243. The back-end where data is sent may be contained in an SSBLC back end object such as an SSBLC Object Storage Disk. Id.

Figure 11 of Youngworth is reproduced below.

Figure 11 depicts system 1100 including SSBLC controller 1102 and disk storage system (SSBLC object storage disk) 1104 that supports object storage based on chunk and related identifiers. Id. ¶ 224. On read and write requests (read or write command 1108 identifying chunk ID, NA_LUN ID, and Node ID) issued from SSBLC controller 1102, disk interface/controller 1106 accesses physical disk storage. Id. ¶ 225. Operations that relate chunk ID to the memory location (block size and offset) at object storage disk 1104 are performed at disk interface/controller 1106, which maintains data structures in FLASH/RAM memory, including disk metadata 1110 and cached data objects 1112. Id. ¶ 226.

Youngworth describes that when a back-end node discovers it must upgrade, it sends messages to the out-of-date clients attempting new I/Os, informing them that they must upgrade. Id. ¶ 187.
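As a rough, simplified stand-in for the chunk-to-location mapping the decision attributes to Youngworth's Figures 4 and 11 (a hash over chunk identifiers selecting mirror locations), the sketch below shows how a deterministic hash can spread chunks across disks. It is not Youngworth's CRUSH algorithm; the disk layout, mirror count, and function names are assumptions made solely for illustration.

```python
# Simplified, hypothetical stand-in for the chunk-to-location mapping that the
# decision describes for Youngworth's Figures 4 and 11; this is NOT the CRUSH
# algorithm, and the disk layout and names below are assumptions.
import hashlib

# Disks interleaved across two nodes so adjacent picks fall on different nodes.
DISKS = ["node1/disk0", "node2/disk0", "node1/disk1", "node2/disk1"]
MIRRORS = 2  # per-LUN policy assumed for this sketch: keep two copies


def chunk_locations(na_lun_id, chunk_id):
    """Deterministically map a (LUN, chunk) pair to MIRRORS distinct disks."""
    digest = hashlib.sha256(f"{na_lun_id}:{chunk_id}".encode()).digest()
    start = int.from_bytes(digest[:8], "big")
    return [DISKS[(start + i) % len(DISKS)] for i in range(MIRRORS)]


if __name__ == "__main__":
    # The same identifiers always hash to the same mirror locations.
    print(chunk_locations("NA_LUN-414", 7))
```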
Youngworth also describes that chunk metadata, such as chunk IDs, timestamps, and CRUSH IDs, may be compiled into lists, which may be updated and traversed, in response to which actions with respect to the chunks, such as requests, may be taken. Id. ¶¶ 178–181.

Among other arguments, Appellant contends that none of the cited descriptions of Youngworth discloses "migrating the information associated with the I/O request from the first SSD to an object store as a background asynchronous process," as recited. Appeal Br. 14–16; Reply Br. 4–5. In support of this contention, Appellant makes the following assertions. First, paragraphs 224 through 226 of Youngworth fail to show that Youngworth's local mirror 408 and remote mirror 418 (a "plurality of SSDs" according to the Examiner) comprise Youngworth's cache memory ("the first SSD" according to the Examiner). Appeal Br. 15. Second, the Final Office Action fails to show that Youngworth's cached data objects 1112 (see Youngworth ¶¶ 224–26), out of band commands (see id. ¶ 36), metadata (see id. ¶¶ 142–44), or asynchronous broadcast PAX OS update message (see id. ¶ 176) is migrated "from the first SSD to an object store" or "as a background asynchronous process." Appeal Br. 15, 16. Third, the Examiner fails to show that Youngworth's "out of band commands and events," metadata, or asynchronous broadcast PAX OS update message is equivalent to Youngworth's "messages to the out-of-date clients" ("information associated with the I/O request" according to the Examiner). Id. at 16.

In response to Appellant's arguments, the Examiner asserts that "paragraph [0178] of Youngworth discloses when a node enters state 2 it goes through the collection of chunks it holds in its backing store," and "[a]ny chunk present is checked to see whether it still belongs on the node with the new CRUSH algorithm." Ans. 40–41 (citing Youngworth ¶ 178). The Examiner further finds that "[p]aragraph [0181] describes that in the background the update list is traversed and when an entry with a higher generation number is found or when an entry is available for a missing item in backing store, a request is made," such that "[o]nce the list is traversed, the node is considered data sync'd." Id. at 41 (citing Youngworth ¶ 181). The Examiner then finds that "[t]hese citations of Youngworth appear to show data migration process that proceeds in the background in order for the system to remain online." Id.

Appellant responds that Youngworth's paragraphs 178 and 181 do not identify an "I/O request," "information associated with the I/O request," a "first SSD," "migrating the information associated with the I/O request," or an "object store" as claimed. Reply Br. 5.

We are persuaded of Examiner error. Youngworth discloses writing the information associated with an I/O request to the first SSD, for example, with its description of writing data to a particular chunk within one of the local or remote mirror disks on a physical node. Youngworth ¶¶ 56, 119–23, 127, 180. Youngworth describes writing information associated with the I/O request (for example, Chunk ID, NA_LUN ID, or Node ID) from the SSBLC controller to an SSBLC Object Storage Disk (an object store). Id. ¶¶ 224–26, Fig. 11. Youngworth further describes the creation and consolidation of lists (about nodes and chunk metadata) for consideration prior to executing read or write operations to particular chunks. Id. ¶¶ 178–82.
Youngworth also describes performing data recovery reads and writes (for example, after a crash), sending chunk metadata lists to a recovering node, assembling "migration lists," and running metadata and data consolidation processes in the background. Youngworth ¶¶ 54, 90–92, 142, 155, 157, 170, 172, 225. But the Examiner has not shown, nor is the record otherwise clear, that Youngworth describes migrating one or more of the above-described types of request-related information from one of its back-end disk drives (for example, local mirror 408 or remote mirror 418) to an object store (for example, SSBLC object storage disk 1104), let alone performing the migration as "a background asynchronous process."

Therefore, we are persuaded the Examiner erred in finding Youngworth discloses "migrating the information associated with the I/O request from the first SSD to an object store as a background asynchronous process" as recited. Because the Examiner has not adequately shown that Youngworth discloses this limitation, we are constrained by this record to reverse the rejection of independent claim 1 under 35 U.S.C. § 102(a)(1) for anticipation based on Youngworth. For similar reasons, we reverse the rejection of independent claims 8 and 14, and dependent claims 3–7, 10–13, 16–20, 22, 23, 28, 33, 34, 39, 40, 45, 50, and 51 under 35 U.S.C. § 102(a)(1) for anticipation based on Youngworth. Each of these claims recites a limitation identical to or commensurate with the one at issue in claim 1, and the Examiner does not provide any other finding that cures the deficiencies discussed above. See Final Act. 5–14, 16–17, 20, 24–25, 28, 32, 36.

Claims 21, 24–27, 29–32, 35–38, 41–44, 46–49, and 52–54

Unlike the claims discussed above, claims 21, 24–27, 29–32, 35–38, 41–44, 46–49, and 52–54 do not recite a "writing" or "migrating" limitation. See Appeal Br. 25–34 (Claims App.). Accordingly, Appellant's arguments for these limitations (Appeal Br. 14–16; Reply Br. 4–5) are not commensurate with the scope of these claims and, thus, do not persuade us of Examiner error. Nor are we persuaded by Appellant's remaining arguments, which we address in turn below.

Appellant argues the Examiner erred in finding Youngworth discloses "relaying information associated with the I/O request to the relevant storage system back end," as recited in claim 21, because Youngworth's "messages to the out-of-date clients" flow in the wrong direction: they are sent from Youngworth's SSBLC controller 1102 to Youngworth's Disk 1104—not to Youngworth's SSBLC controller 1102. Reply Br. 4; Appeal Br. 14, 17.

"A reference may be read for all that it teaches, including uses beyond its primary purpose." In re Mouttet, 686 F.3d 1322, 1331 (Fed. Cir. 2012). Appellant's argument is not persuasive because it focuses on a single teaching of Youngworth and does not consider all that Youngworth discloses. In particular, while it is true that Youngworth describes relaying information from a storage system back end because its back-end node sends messages to out-of-date clients, this occurs when "a back-end node discovers that it must upgrade," which indicates the relaying of information to a storage system back end. See Final Act. 4 (citing Youngworth ¶ 187). In addition, Youngworth describes that once an I/O request is made by a client, a particular mapping involving storage policy data may be made, and the appropriate chunk ID may be found to meet the request at the back end. Youngworth ¶¶ 146–147.
And, as cited by the Examiner, Youngworth describes sending data to the back-end, whose storage has been segmented into chunks with IDs hashed to spread the locations to which the data of the chunks will be assigned across multiple disks and chassis according to LUN policy. See Ans. 40 (citing Youngworth ¶ 243). These descriptions also show that information associated with a request has been relayed to the back end so it can determine which portion of the storage to write to or read from to satisfy the request. See Ans. 40 (explaining that in Youngworth, "mapping is used to determine distributed storage locations and route data to those locations accordingly.").

Claim 21 further recites that "the plurality of storage system back ends are organized into a plurality of erasure-coded stripes." Appeal Br. 21 (Claims App.). The Examiner finds that the prior art anticipates this limitation and offers the following explanation:

    Youngworth discloses that each node operates as a back end and a front end (page 3, paragraph [0037]). Youngworth also states that depending on the policy of the LUN they are associated with they may map to the physical storage of more than one disk on more than one physical node for mirroring or some other form of RAID (page 11, paragraph [0137]). This citation clearly suggest that the LUN may span a plurality of nodes. As explained previously, "some other form of RAID" would include RAID 5, which would have been well known at the time, which was believed to be an example of erasure coded stripes. Furthermore, as extrinsic evidence, Examiner has referred to a techtarget online article about erasure coding. On the 6th page it is stated that, "just about every IT shop uses RAID 5 and RAID 6, which are very commonly used types of erasure coding." The use of RAID, as mentioned in Youngworth, is believed to include RAID 5 which would teach an erasure-coded organization of data.

Final Act. 3–4, 37–38 (citing Youngworth ¶¶ 37, 137; Carol Sliwa, Erasure coding definition: RAID 5, RAID 6 are most common forms, SearchStorage, TechTarget (Sept. 2013), http://web.archive.org/web/20131006063050/https://searchstorage.techtarget.com/podcast/Erasure-coding-definition-RAID-5-RAID-6-are-most-common-forms.).2

2 The Examiner first cited Sliwa in the Non-Final Office Action. Non-Final Office Action 37–38 (mailed February 21, 2019). The cited version of Sliwa, however, did not include page numbers. We identify the pages of this version of Sliwa as if they were numbered consecutively, starting with "Sliwa 1" and ending with "Sliwa 11."

Appellant argues the claim limitation is not met because the Examiner "states that the alleged 'erasure-coded stripes' are formed by Youngworth's mirrors 408 and 418 (the alleged 'SSDs') – not by Youngworth's SSBLC controllers 404 (the alleged 'storage system back ends')." Reply Br. 3; see Appeal Br. 11–13. Appellant argues further that "the publically available information about RAID 5 does not equate disks/drives with storage system back ends," nor does the Examiner show how the disks in a RAID 5 arrangement "could reasonably be considered 'storage system back ends.'" Appeal Br. 13.

In response to Appellant's arguments, the Examiner finds Youngworth describes a storage system including a plurality of physical nodes operating as back ends, LUN mirror data that resides across clustered nodes, chunks of data that "may map to the physical storage of more than one disk on more than one physical node for mirroring or some other form of RAID," and disks that may be presented under some form of RAID. Ans. 39 (citing Youngworth ¶¶ 35, 37, 137, 140).
The Examiner "believes that other forms of RAID implemented by Youngworth would similarly be spread across clustered nodes" and "that the citations of Youngworth . . . explain that the other forms of RAID would be done across nodes . . . operating as back ends." See id. The Examiner further finds Youngworth "anticipates the use of RAID 5 when it refers to 'other forms of RAID' because RAID 5 is conventionally used in the art" and "known as an example of erasure coded stripes, as described by the techtarget online article previously cited by Examiner as extrinsic evidence." Id. Also, the Examiner finds that "mapping . . . data chunks to more than one disk on more than one physical node for some other form of RAID [such as RAID 5] would create an erasure-coded stripe." Id. at 39–40.

We are not persuaded of Examiner error. As an initial matter, Appellant does not meaningfully rebut the Examiner's finding that erasure coding striping is anticipated by the prior art's disclosure of other forms of RAID, which the Examiner finds includes RAID 5. Appellant generally denies this finding, but does not offer any persuasive evidence to support its denial. See Reply Br. 3; Appeal Br. 12–13. "Arguments of counsel cannot take the place of evidence lacking in the record." In re Jones, 10 F. App'x 822, 828 (Fed. Cir. 2001) (citation omitted). Appellant argues that the applied prior art does not disclose "storage system back ends organized into erasure coded stripes," but Appellant cannot require more detail from the prior art than that of its own Specification. See In re Epstein, 32 F.3d 1559, 1568 (Fed. Cir. 1994) (upholding decision of the Board where the Board observed that appellant did not provide the type of detail in his specification that he argued was necessary in prior art references).

According to the Appeal Brief's Summary of Claimed Subject Matter, paragraphs 21 and 24 of the Specification describe "[t]he plurality of storage system back ends are organized into a plurality of erasure-coded stripes" as claimed. Appeal Br. 2 (citing Spec. ¶¶ 21, 24). While these paragraphs generally describe the storage system as using erasure coding, including erasure-coded stripes, they do not mention the storage system's "back ends" at all, much less describe how the "back ends" themselves would be organized into erasure-coded stripes. See Spec. ¶¶ 21, 24; see also id. ¶¶ 8, 10, 23. The Specification's lack of detail about how storage system back ends are organized into erasure-coded stripes indicates that a person of ordinary skill would have understood this technique was among the known applications of the disclosed data storage technology, which is consistent with the Examiner's rationale. See Epstein, 32 F.3d at 1568.
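For context on the technical premise underlying the Examiner's rationale, namely that RAID 5 is a commonly used form of erasure coding (see the Examiner's explanation and Sliwa, quoted above), the sketch below illustrates the core idea of a RAID-5-style stripe: one XOR parity block per stripe allows any single lost member to be rebuilt. It is a generic illustration only, not drawn from Youngworth, Sliwa, or the Specification.

```python
# Generic illustration of RAID-5-style parity as erasure coding; not drawn
# from Youngworth, Sliwa, or the Specification.
from functools import reduce


def xor_blocks(blocks):
    """XOR together byte strings of equal length."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)


def make_stripe(data_blocks):
    """Return the stripe: the data blocks followed by one parity block."""
    return list(data_blocks) + [xor_blocks(data_blocks)]


def rebuild(stripe, lost_index):
    """Recover the block at lost_index from the surviving stripe members."""
    survivors = [blk for i, blk in enumerate(stripe) if i != lost_index]
    return xor_blocks(survivors)


if __name__ == "__main__":
    stripe = make_stripe([b"AAAA", b"BBBB", b"CCCC"])  # 3 data + 1 parity
    assert rebuild(stripe, 1) == b"BBBB"               # one failure tolerated
```

Spreading the members of each such stripe across different drives, or, as in claim 1, across servers in different failure domains, is what allows the loss of any one member to be tolerated.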
Moreover, a preponderance of the evidence supports the Examiner's determination that "storage system back ends are organized into a plurality of erasure-coded stripes" is anticipated by the prior art. See, e.g., Youngworth ¶¶ 35, 37, 137, 140, Figs. 4, 6; Sliwa 6. As the Examiner finds, the cited descriptions of Youngworth show the use of RAID 0 and RAID 1—or "some other form of RAID"—logic as part of a storage system that maps or spreads chunks of data on more than one disk across a cluster of nodes that can operate as back ends. See, e.g., Youngworth, Fig. 4 (items 408, 418 (RAID 1)), Fig. 6 (items 604 (RAID 0), 606 (RAID 1)), ¶¶ 35 ("The LUN's mirror data will reside across the clustered nodes."), 37 ("Each physical node operates as a back end and a front end. . . . To satisfy the storage request the back end is employed. It is here that the mapping to the local and clustered set of disks is employed to provide the properties of the SSBLC system"), 137 ("Chunks . . . . may map to the physical storage of more than one disk on more than one physical node for mirroring or some other form of RAID. . . . The chunk ID along with the LUN policy and the LUN ID are used to map the chunk's storage."), 140 ("[D]isk controllers expose physical disks to SSBLC; these disks may presented one to one or as collections under some form of RAID."). Moreover, on this record, we are persuaded by the Examiner's finding that "some other form of RAID" "anticipates the use of RAID 5" in Youngworth's storage system at the time of Appellant's invention—and, thus, back ends "organized into a plurality of erasure coded stripes"—because Sliwa describes that, nearly two years before the filing date of the present application, "[j]ust about every IT shop use[d] RAID 5 and RAID 6, which are very commonly used types of erasure coding." Final Act. 38; Sliwa 6. It bears noting here that Appellant did not cite any persuasive evidence to rebut this finding of the Examiner. See Reply Br. 3. In view of the foregoing, we are not persuaded the Examiner erred in finding "storage system back ends are organized into a plurality of erasure-coded stripes" is anticipated by the prior art.

For the foregoing reasons, we sustain the Examiner's rejection of claim 21 under 35 U.S.C. § 102(a)(1) for anticipation based on Youngworth. For those same reasons, we also sustain the Examiner's § 102(a)(1) rejection of claims 24–27, 29–32, 35–38, 41–44, 46–49, and 52–54, which Appellant does not argue separately with particularity.

CONCLUSION

We affirm the Examiner's decision to reject claims 21, 24–27, 29–32, 35–38, 41–44, 46–49, and 52–54 under 35 U.S.C. § 102(a)(1).

We reverse the Examiner's decision to reject claims 1, 3–8, 10–14, 16–20, 22, 23, 28, 33, 34, 39, 40, 45, 50, and 51 under 35 U.S.C. § 102(a)(1).

DECISION SUMMARY

Claims Rejected | 35 U.S.C. § | Basis/Reference(s) | Affirmed | Reversed
21, 24–27, 29–32, 35–38, 41–44, 46–49, 52–54 | 102(a)(1) | Youngworth | 21, 24–27, 29–32, 35–38, 41–44, 46–49, 52–54 |
1, 3–8, 10–14, 16–20, 22, 23, 28, 33, 34, 39, 40, 45, 50, 51 | 102(a)(1) | Youngworth | | 1, 3–8, 10–14, 16–20, 22, 23, 28, 33, 34, 39, 40, 45, 50, 51
Overall Outcome | | | 21, 24–27, 29–32, 35–38, 41–44, 46–49, 52–54 | 1, 3–8, 10–14, 16–20, 22, 23, 28, 33, 34, 39, 40, 45, 50, 51

No period for taking any subsequent action in connection with this appeal may be extended under 37 C.F.R. § 1.136(a)(1)(iv). See 37 C.F.R. § 41.50(f).

AFFIRMED IN PART