Adoption of Recommendations

86 Fed. Reg. 36075 (Jul. 8, 2021)

AGENCY:

Administrative Conference of the United States.

ACTION:

Notice.

SUMMARY:

The Administrative Conference of the United States adopted four recommendations at its virtual Seventy-fourth Plenary Session. The appended recommendations are: (a) Managing Mass, Computer-Generated, and Falsely Attributed Comments; (b) Periodic Retrospective Review; (c) Early Input on Regulatory Alternatives; and (d) Virtual Hearings in Agency Adjudication. A fifth proposed recommendation, Clarifying Access to Judicial Review of Agency Action, was considered but was remanded to the Committee on Judicial Review for further consideration.

FOR FURTHER INFORMATION CONTACT:

For Recommendation 2021-1, Danielle Schulkin; for Recommendation 2021-2, Gavin Young; for Recommendation 2021-3, Mark Thomson; and for Recommendation 2021-4, Jeremy Graboyes. For each of these actions the address and telephone number are: Administrative Conference of the United States, Suite 706 South, 1120 20th Street NW, Washington, DC 20036; Telephone 202-480-2080.

SUPPLEMENTARY INFORMATION:

The Administrative Conference Act, 5 U.S.C. 591-596, established the Administrative Conference of the United States. The Conference studies the efficiency, adequacy, and fairness of the administrative procedures used by Federal agencies and makes recommendations to agencies, the President, Congress, and the Judicial Conference of the United States for procedural improvements (5 U.S.C. 594(1)). For further information about the Conference and its activities, see www.acus.gov. At its virtual Seventy-fourth Plenary Session on June 17, 2021, the Assembly of the Conference adopted four recommendations.

Recommendation 2021-1, Managing Mass, Computer-Generated, and Falsely Attributed Comments. This recommendation offers agencies best practices for managing mass, computer-generated, and falsely attributed comments in agency rulemakings. It provides guidance for agencies on using technology to process such comments in the most efficient way possible while ensuring that the rulemaking process is transparent to prospective commenters and the public more broadly.

Recommendation 2021-2, Periodic Retrospective Review. This recommendation offers practical suggestions to agencies about how to establish periodic retrospective review plans. It provides guidance for agencies on identifying regulations for review, determining the optimal frequency of review, soliciting public feedback to enhance their review efforts, identifying staff to participate in review, and coordinating review with other agencies.

Recommendation 2021-3, Early Input on Regulatory Alternatives. This recommendation offers guidance about whether, when, and how agencies should solicit input on alternatives to rules under consideration before issuing notices of proposed rulemaking. It identifies specific, targeted measures for obtaining public input on regulatory alternatives from knowledgeable persons in ways that are cost-effective and equitable and that maximize the likelihood of obtaining diverse, useful responses.

Recommendation 2021-4, Virtual Hearings in Agency Adjudication. This recommendation addresses the use of virtual hearings—that is, proceedings in which participants attend remotely using a personal computer or mobile device—in agency adjudications. Drawing heavily on agencies' experiences during the COVID-19 pandemic, the recommendation identifies best practices for improving existing virtual-hearing programs and establishing new ones in accord with principles of fairness and efficiency and with due regard for participant satisfaction.

The Appendix below sets forth the full texts of these four recommendations, as well as three timely filed Separate Statements associated with Recommendation 2021-1, Managing Mass, Computer-Generated, and Falsely Attributed Comments. The Conference will transmit the recommendations to affected agencies, Congress, and the Judicial Conference of the United States, as appropriate. The recommendations are not binding, so the entities to which they are addressed will make decisions on their implementation.

The Conference based these recommendations on research reports that are posted at: https://www.acus.gov/meetings-and-events/plenary-meeting/74th-plenary-session-virtual. Committee-proposed drafts of the recommendations, and public comments received in advance of the plenary session, are also available using the same link.

Dated: July 2, 2021.

Shawne C. McGibbon,

General Counsel.

Appendix—Recommendations of the Administrative Conference of the United States

Administrative Conference Recommendation 2021-1

Managing Mass, Computer-Generated, and Falsely Attributed Comments

Adopted June 17, 2021

Under the Administrative Procedure Act (APA), agencies must give members of the public notice of proposed rules and the opportunity to offer their “data, views, or arguments” for the agencies' consideration. For each proposed rule subject to these notice-and-comment procedures, agencies create and maintain an online public rulemaking docket in which they collect and publish the comments they receive along with other publicly available information about the proposed rule. Agencies must then process, read, and analyze the comments received. The APA requires agencies to consider the “relevant matter presented” in the comments received and to provide a “concise general statement of [the rule's] basis and purpose.” When a rule is challenged on judicial review, courts have required agencies to demonstrate that they have considered and responded to any comment that raises a significant issue. The notice-and-comment process is an important opportunity for the public to provide input on a proposed rule and for the agency to “avoid errors and make a more informed decision” on its rulemaking.

5 U.S.C. 553. This requirement is subject to a number of exceptions. See id.

See E-Government Act of 2002 § 206, 44 U.S.C. 3501 note (establishing the eRulemaking Program to create an online system for conducting the notice-and-comment process); see also Admin. Conf. of the U.S., Recommendation 2013-4, Administrative Record in Informal Rulemaking, 78 FR 41358 (July 10, 2013) (distinguishing between “the administrative record for judicial review,” “rulemaking record,” and the “public rulemaking docket”).

5 U.S.C. 553.

Perez v. Mortg. Bankers Ass'n, 575 U.S. 92, 96 (2015) (“An agency must consider and respond to significant comments received during the period for public comment.”).

Azar v. Allina Health Services, 139 S. Ct. 1804, 1816 (2019).

Technological advances have expanded the public's access to agencies' online rulemaking dockets and made it easier for the public to comment on proposed rules in ways that the Administrative Conference has encouraged. At the same time, in recent high-profile rulemakings, members of the public have submitted comments in new ways or in numbers that can challenge agencies' current approaches to processing these comments or managing their online rulemaking dockets.

See Admin. Conf. of the U.S., Recommendation 2018-7, Public Engagement in Rulemaking, 84 FR 2146 (Feb. 6, 2019); Admin. Conf. of the U.S., Recommendation 2013-5, Social Media in Rulemaking, 78 FR 76269 (Dec. 17, 2013); Admin. Conf. of the U.S., Recommendation 2011-8, Agency Innovations in eRulemaking, 77 FR 2264 (Jan. 17, 2012); Admin. Conf. of the U.S., Recommendation 2011-2, Rulemaking Comments, 76 FR 48791 (Aug. 9, 2011).

Agencies have confronted three types of comments that present distinctive management challenges: (1) Mass comments, (2) computer-generated comments, and (3) falsely attributed comments. For the purposes of this Recommendation, mass comments are comments submitted in large volumes by members of the public, including the organized submission of identical or substantively identical comments. Computer-generated comments are comments whose substantive content has been generated by computer software rather than by humans. Falsely attributed comments are comments attributed to people who did not submit them.

The ability to automate the generation of comment content may also remove human interaction with the agency and facilitate the submission of large volumes of comments in cases in which software can repeatedly submit comments via Regulations.gov.

These three types of comments, which have been the subject of recent reports by both federal and state authorities, can raise challenges for agencies in processing, reading, and analyzing the comments they receive in some rulemakings. If not managed well, the processing of these comments can contribute to rulemaking delays or can raise other practical or legal concerns for agencies to consider.

See Permanent Subcommittee on Investigations, U.S. Senate Comm. on Homeland Security and Gov't Affairs, Staff Report, Abuses of the Federal Notice-and-Comment Rulemaking Process (2019); U.S. Gov't Accountability Off., GAO-20-413T, Selected Agencies Should Clearly Communicate How They Post Public Comments and Associated Identity Information (2020); U.S. Gov't Accountability Off., GAO-19-483, Selected Agencies Should Clearly Communicate Practices Associated with Identity Information in the Public Comment Process (2019).

N.Y. State Off. of the Att'y Gen., Fake Comments: How U.S. Companies & Partisans Hack Democracy to Undermine Your Voice (2021).

In addressing the three types of comments in a single recommendation, the Conference does not mean to suggest that agencies should treat these comments in the same way. Rather, the Conference is addressing these comments in the same Recommendation because, despite their differences, they can present similar or even overlapping management concerns during the rulemaking process. In some cases, agencies may also confront all three types of comments in the same rulemaking.

The challenges presented by these three types of comments are by no means identical. With mass comments, agencies may encounter processing or cataloging challenges simply as a result of the volume as well as the identical or substantively identical content of some comments they receive. Without the requisite tools, agencies may also find it difficult or time-consuming to digest or analyze the overall content of all comments they receive.

In contrast with mass comments, computer-generated comments and falsely attributed comments may mislead an agency or raise issues under the APA and other statutes. One particular problem that agencies may encounter is distinguishing computer-generated comments from comments written by humans. Computer-generated comments may also raise potential issues for agencies as a result of the APA's provision for the submission of comments by “interested persons.” Falsely attributed comments can harm people whose identities are appropriated and may create the possibility of prosecution under state or federal criminal law. False attribution may also deceive agencies or diminish the informational value of a comment, especially when the commenter claims to have situational knowledge or the identity of the commenter is otherwise relevant. The informational value that both of these types of comments provide to agencies is likely to be limited or at least different from that of comments that are neither computer-generated nor falsely attributed.

This Recommendation is limited to how agencies can better manage the processing challenges associated with mass, computer-generated, and falsely attributed comments. By addressing these processing challenges, the Recommendation is not intended to imply that widespread participation in the rulemaking process, including via mass comments, is problematic. Indeed, the Conference has explicitly endorsed widespread public participation on multiple occasions, and this Recommendation should help agencies cast a wide net when seeking input from all individuals and groups affected by a rule. The Recommendation aims to enhance agencies' ability to process comments they receive in the most efficient way possible and to ensure that the rulemaking process is transparent to prospective commenters and the public more broadly.

This Recommendation does not address what role particular types of comments should play in agency decision making or what consideration, if any, agencies should give to the number of comments in support of a particular position.

See Recommendation 2018-7, supra note 6; Admin. Conf. of the U.S., Recommendation 2017-3, Plain Language in Regulatory Drafting, 82 FR 61728 (Dec. 29, 2017); Admin. Conf. of the U.S., Recommendation 2017-2, Negotiated Rulemaking and Other Options for Public Engagement, 82 FR 31040 (July 5, 2017); Admin. Conf. of the U.S., Recommendation 2014-6, Petitions for Rulemaking, 79 FR 75117 (Dec. 17, 2014); Recommendation 2013-5, supra note 6; Recommendation 2011-8, supra note 6; Admin. Conf. of the U.S., Recommendation 2011-7, Federal Advisory Committee Act: Issues and Proposed Reforms, 77 FR 2261 (Jan. 17, 2012); Recommendation 2011-2, supra note 6.

Agencies can advance the goals of public participation by being transparent about their comment policies or practices and by providing educational information about public involvement in the rulemaking process. Agencies' ability to process comments can also be enhanced by digital technologies. As part of its eRulemaking Program, for example, the General Services Administration (GSA) has implemented technologies on the Regulations.gov platform that make it easier for agencies to verify that a commenter is a human being. GSA's Regulations.gov platform also includes an application programming interface (API)—a feature of a computer system that enables different systems to communicate with it—to facilitate mass comment submission. This technology platform allows partner agencies to better manage comments from identifiable entities that submit large volumes of comments. Some federal agencies also use a tool, sometimes referred to as de-duplication software, to identify and group identical or substantively identical comments.

For an example of educational information on rulemaking participation, see the “Commenter's Checklist” that the eRulemaking Program currently displays in a pop-up window for every rulemaking web page that offers the public the opportunity to comment. See Commenter's Checklist, Gen. Servs. Admin., https://www.Regulations.gov (last visited May 24, 2021) (navigate to any rulemaking with an open comment period; click comment button; then click “Commenter's Checklist”). In addition, the text of this checklist appears on the project page for this Recommendation on the ACUS website.

This software is distinct from identity validation technologies that force commenters to prove their identities.

See Regulations.gov API, Gen. Servs. Admin., https://open.gsa.gov/api/regulationsgov/ (last visited May 24, 2021).
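To make the de-duplication concept described above concrete, the following sketch is purely illustrative and is not an ACUS, GSA, or agency tool. It groups comments whose normalized text is identical, and it assumes the comment texts have already been retrieved (for example, through the Regulations.gov API cited above); all names in the sketch are hypothetical, and production de-duplication software typically also clusters near-duplicate comments rather than only exact matches.

    # Illustrative sketch only: group identical (after normalization) comments
    # so that an analyst can review one representative comment per group.
    import hashlib
    import re
    from collections import defaultdict

    def normalize(text: str) -> str:
        # Lowercase, strip punctuation, and collapse whitespace so trivially
        # varied copies of a form letter hash to the same value.
        text = re.sub(r"[^\w\s]", "", text.lower())
        return re.sub(r"\s+", " ", text).strip()

    def group_duplicates(comments: list[str]) -> dict[str, list[int]]:
        # Map the hash of each normalized comment to the indices of all
        # comments sharing that text; groups of size > 1 are duplicates.
        groups: dict[str, list[int]] = defaultdict(list)
        for i, text in enumerate(comments):
            digest = hashlib.sha256(normalize(text).encode("utf-8")).hexdigest()
            groups[digest].append(i)
        return dict(groups)

    # Example: the second and third comments collapse into one group.
    sample = [
        "I support the proposed rule because it improves safety.",
        "Please adopt this rule.",
        "Please adopt this rule!",
    ]
    for digest, indices in group_duplicates(sample).items():
        print(digest[:8], indices)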

New software and technologies to manage public comments and enhance public participation in rulemaking will likely continue to emerge, and agencies will need to stay apprised of these developments. Agencies might also consider adopting alternative methods for encouraging public participation that augment the notice-and-comment process, particularly to the extent that doing so ameliorates some of the management challenges described above.

See Steve Balla, Reeve Bull, Bridget Dooling, Emily Hammond, Michael Herz, Michael Livermore, & Beth Simone Noveck, Mass, Computer-Generated, and Fraudulent Comments 43-48 (June 1, 2021) (report to the Admin. Conf. of the U.S.).

Not all agencies will encounter mass, computer-generated, or falsely attributed comments. But some agencies have confronted all three, sometimes in the same rulemaking. In offering the best practices that follow, the Conference recognizes that agency needs and resources will vary. For this reason, agencies should tailor the best practices in this Recommendation to their particular rulemaking programs and the types of comments they receive or expect to receive.

Recommendation

Managing Mass Comments

1. The General Services Administration's (GSA) eRulemaking Program should provide a common de-duplication tool for agencies to use, although GSA should allow agencies to modify the de-duplication tool to fit their needs or to use another tool, as appropriate. When agencies find it helpful to use other software tools to perform de-duplication or extract information from a large number of comments, they should use reliable and appropriate software. Such software should provide agencies with enhanced search options to identify the unique content of comments, such as the technologies used by commercial legal databases like Westlaw or LexisNexis.

2. To enable easier public navigation through online rulemaking dockets, agencies may welcome any person or entity organizing mass comments to submit comments with multiple signatures rather than separate identical or substantively identical comments.

3. Agencies may wish to consider alternative approaches to managing the display of comments online, such as by posting only a single representative example of identical comments in the online rulemaking docket or by breaking out and posting only non-identical content in the docket, taking into consideration the importance to members of the public of being able to verify that their comments were received and placed in the agency record. When agencies decide not to display all identical comments online, they should provide publicly available explanations of their actions, of the criteria for verifying the receipt of individual comments or locating identical comments in the docket, and of the criteria for deciding which comments to display.

4. When an agency decides not to include all identical or substantively identical comments in its online rulemaking docket to improve the navigability of the docket, it should ensure that any reported total number of comments (such as in Regulations.gov or in the preambles to final rules) includes the number of identical or substantively identical comments. If resources permit, agencies should separately report the total number of identical or substantively identical comments they receive. Agencies should also consider providing an opportunity for interested members of the public to obtain or access all comments received.

Managing Computer-Generated Comments

5. To the extent feasible, agencies should flag any comments they have identified as computer-generated or display or store them separately from other comments. If an agency flags a comment as computer-generated, or displays or stores it separately from the online rulemaking docket, the agency should note its action in the docket. The agency may also choose to notify the submitter directly if doing so does not violate any relevant policy prohibiting direct contact with senders of “spam” or similar communications.

6. Agencies that operate their own commenting platforms should consider using technology that verifies that a commenter is a human being, such as reCAPTCHA or a similar verification tool. The eRulemaking Program should continue to retain this functionality.

7. When publishing a final rule, agencies should note any comments on which they rely that they know are computer-generated and state whether they removed from the docket any comments they identified as computer-generated.

Managing Falsely Attributed Comments

8. Agencies should provide opportunities (including after the comment deadline) for individuals whose names or identifying information have been attached to comments they did not submit to identify such comments and to request that the comment be anonymized or removed from the online rulemaking docket.

9. If an agency flags a comment as falsely attributed or removes such a comment from the online rulemaking docket, it should note its action in the docket. Agencies may also choose to notify the purported submitter directly if doing so does not violate any agency policy.

10. If an agency relies on a comment it knows is falsely attributed, it should include an anonymized version of that comment in its online rulemaking docket. When publishing a final rule, agencies should note any comments on which they rely that are falsely attributed and should state whether they removed from the docket any falsely attributed comments.

Enhancing Agency Transparency in the Comment Process

11. Agencies should inform the public about their policies concerning the posting and use of mass, computer-generated, and falsely attributed comments. These policies should take into account the meaningfulness of the public's opportunity to participate in the rulemaking process and should balance goals such as user-friendliness, transparency, and informational completeness. In their policies, agencies may provide for exceptions in appropriate circumstances.

12. Agencies and relevant coordinating bodies (such as GSA's eRulemaking Program, the Office of Information and Regulatory Affairs, and any other governmental bodies that address common rulemaking issues) should consider providing publicly available materials that explain to prospective commenters what types of responses they anticipate would be most useful, while also welcoming any other comments that members of the public wish to submit and remaining open to learning from them. These materials could be presented in various formats—such as videos or FAQs—to reach different audiences. These materials may also include statements within the notice of proposed rulemaking for a given agency rule or on agencies' websites to explain the purpose of the comment process and explain that agencies seriously consider any relevant public comment from a person or organization.

13. To encourage the most relevant submissions, agencies that have specific questions or are aware of specific information that may be useful should identify those questions or such information in their notices of proposed rulemaking.

Additional Opportunities for Public Participation

14. Agencies and relevant coordinating bodies should stay abreast of new technologies for facilitating informative public participation in rulemakings. These technologies may help agencies to process mass comments or identify and process computer-generated and falsely attributed comments. In addition, new technologies may offer new opportunities to engage the public, both as part of or as a supplement to the notice-and-comment process. Such opportunities may help ensure that agencies receive input from communities that may not otherwise have an opportunity to participate in the conventional comment process.

Coordination and Training

15. Agencies should work closely with relevant coordinating bodies to improve existing technologies and develop new technologies to address issues associated with mass, computer-generated, and falsely attributed comments. Agencies and relevant coordinating bodies should share best practices and relevant innovations for addressing challenges related to these comments.

16. Agencies should develop and offer opportunities for ongoing training and staff development to respond to the rapidly evolving nature of technologies related to mass, computer-generated, and falsely attributed comments and to public participation more generally.

17. As authorized by 5 U.S.C. 594(2), the Conference's Office of the Chairman should provide for the “interchange among administrative agencies of information potentially useful in improving” agency comment processing systems. The subjects of interchange might include technological and procedural innovations, common management challenges, and legal concerns under the Administrative Procedure Act and other relevant statutes.

Separate Statement for Administrative Conference Recommendation 2021-1 by Senior Fellow Randolph J. May

Filed June 18, 2021

I attended several of the Committee meetings that considered the preparation of this Recommendation. So, I have a good sense of the hard work that went into the preparation of the Recommendation by the Consultants, the Rulemaking Committee Chair Cary Coglianese, the Committee members, and the ACUS staff, and I am grateful for their dedication.

I support adoption of the Recommendation in the context of the express limitation of the scope of the project as stated: “This Recommendation does not address what role particular types of comments should play in agency decision making or what consideration, if any, agencies should give to the number of comments in support of a particular position.”

I wish to associate myself generally with the Comment of Senior Fellow Richard Pierce, dated May 25, 2021, especially his concern that the ACUS Recommendation not be misconstrued to foster “the widespread but mistaken public belief that notice and comment rulemaking can and should be considered a plebiscite in which the number of comments filed for or against a proposed rule is an accurate measure of public opinion that should influence the agency's decision whether to adopt the proposed rule.”

I have submitted comments and/or reply comments in every “net neutrality” proceeding, however denominated, the Federal Communications Commission has conducted over the last fifteen years—and, yes, the back-and-forth battle over various “net neutrality” proposals has been going on that long and there have been at least a dozen comment cycles. However, especially in the last two “net neutrality” rulemaking cycles, in 2014-2015 and 2017, there has been a major escalation—you could call it exercising the “nuclear option”—in the effort, by both opposing sides, to generate as many mass, computer-generated form comments as possible. By “form comments” I mean comments that concededly contain little or no information beyond cursorily stating a “pro” or “con” position.

The startling results of going nuclear, in terms of generating the sheer number of mass, computer-generated form comments in the latest “net neutrality” round, are now well-known. The phenomenon has been the subject of federal and state studies cited in the Recommendation's Preamble, with some of the most significant details cited in Professor Pierce's separate statement. Aside from any other concerns, I can personally testify that the deluge of approximately 22 million mass, computer-generated form comments often overwhelmed the FCC's ability to keep its electronic filing system operating properly and often made it well-nigh impossible to search for comments that might possibly contain relevant data and information.

And, of course, the costs expended by private parties engaging in the effort that led to the submission of approximately 22 million mass, computer-generated form comments (including the 18 million “fake” comments) were enormous, not to mention the direct and indirect costs imposed on the government merely to compile, process, and review the comments.

It is blinking reality not to recognize that the pro- and con- net neutrality interests responsible for generating 22 million comments assumed, in some significant way, that the outcome of the rulemaking would be impacted by which side “won” the comment battle. In other words, it must have been assumed that, in some meaningful sense, the rulemaking would be decided on the basis of a plebiscite, “counting comments,” not on the basis of the quality of the data, evidence, and arguments submitted.

So, while I accept the constraints imposed by the parameters of this Recommendation—which, on its own terms, contains useful guidance to assist agencies—I hope that, going forward, ACUS will initiate a project that considers the appropriateness of curbing the submission of mass, computer-generated form comments, and, if so, how best to accomplish this. Certainly public education, including by government officials, and especially the pertinent agency officials, regarding the objectives of the rulemaking process in general, and specific rulemakings in particular, can play an important role.

I wish to make clear that I recognize the value of widespread participation by “interested persons,” as the Administrative Procedure Act puts it, in the rulemaking process, not only because of the value of the evidence put on the record through such participation, but because of the instrumental value bestowed upon interested persons by the opportunity to participate in government decision-making processes that affect them.

With due deliberation, with recognition of the need to exercise care in drawing relevant distinctions among various types of rulemaking proceedings and their objectives, there ought to be a proper way to discourage the type of “comment war” that occurred in the two most recent FCC net neutrality proceedings, while, at the same time, encouraging the type of widespread public participation that is most helpful to agencies in promulgating sound public policies.

Separate Statement for Administrative Conference Recommendation 2021-1 by Senior Fellow Nina A. Mendelson

Filed June 27, 2021 (This Is an Abbreviated Version of a Statement That Is Available on the ACUS Website.)

This Recommendation, the product of much hard work, will help guide agencies managing mass comments and addressing falsely attributed and computer-generated comments. But these rulemaking-related challenges raise very different concerns. Comments from ordinary individuals, whatever their volume, and whether they supply situated knowledge or views, can be relevant, useful, and even important to many rulemakings. The Recommendation correctly does not imply otherwise. The Conference should address the proper agency response to such comments separately, and soon.

First, public comment's function encompasses more than the purely “technical,” whether that is supplying data or critiquing an agency's economic analysis. For some statutory issues, certainly, public comments transmitting views are less relevant. Under the Endangered Species Act, for example, an agency determining whether an animal is endangered must assess its habitat and likelihood of continued existence. Public affection for a species is not directly relevant.

But agencies address numerous issues that, by statute, extend far beyond technocratic questions, encompassing value-laden issues. An agency deciding what best serves public-regarding statutory goals must balance all such considerations.

Nonexclusive examples relevant to agency statutory mandates include:

  • The importance of nearby accessible bathrooms to the dignity of wheelchair users, at issue in a 2010 Americans with Disabilities Act regulation.
  • Weighing potential public resource uses. For multiple-use public lands, the Bureau of Land Management must, by regulation, balance recreation and “scenic, scientific and historical values” with resource extraction uses, including timbering and mining.
  • Potential public resistance to an action, such as the Coast Guard's ultimately abandoned decision creating live-fire zones in the Great Lakes for weapons practice in the early 2000s. Had the agency seriously sought out public comment, it would have detected substantial public resistance to this action, which, without the benefit of participation, the agency considered justified and minimally risky.
  • Public resistance to a possible mandate as unduly paternalistic, burdensome, or exclusionary, whether an ignition interlock or a vaccine passport requirement. Justice Rehnquist identified this issue in Motor Vehicle Mfrs. Ass'n v. State Farm Mut. Auto. Ins. Co., 463 U.S. 29 (1983). Though Justice Rehnquist's dissent linked the issue to presidential elections, he underscored its relevance to rulemaking.
  • Environmental justice/quality of life matters. In a 2020 rule implementing the National Environmental Policy Act, the Council on Environmental Quality decided that an agency need no longer assess a proposed action's cumulative impacts in its environmental impact analysis. This decision will especially impact low-income communities and communities of color, including Southwest Detroit, where multiple polluting sources adjoin residential neighborhoods. Whether to require cumulative impacts analysis is not a technical issue. It is a policy decision whether community environmental and quality of life concerns are important enough to justify lengthier environmental analyses. The comment process enables communities to express directly the importance of these issues.

Rulemaking is certainly not a plebiscite. Besides representativeness concerns, that is mainly because statutes typically require agencies to consider multiple factors, not only public views. But ordinary people's views and preferences are nonetheless relevant and thus appropriately communicated to the agency. The text of 5 U.S.C. 553(c) is express here: “interested persons” are entitled to submit “data, views, or arguments.”

Second, the identity of individual commenters may provide critical context. That a comment on a proposed ADA regulation's importance is from a wheelchair user should matter. The same is true for religious group members describing potential interference with their practices, residents near a pipeline addressing safety or public notice requirements, or Native American tribal members speaking to spiritual values and historical significance of public lands.

Third, a meaningfully open comment process supports broader public engagement by otherwise underrepresented individuals and communities, whether because of race, ethnicity, gender identity, or something else. Studies consistently show that industry groups and regulated entities, with disproportionate resources, access to agency meetings, and ability to exert political pressure, punch above their weight in the comment process. Suggesting that agencies can appropriately ignore comments from individuals would simply reinforce this disparate influence. It would also undercut the Conference's position in Recommendation 2018-7, Public Engagement in Rulemaking, that agencies should act to broaden and enhance public participation.

Moreover, while groups can support participation, agencies should not assume that group action sufficiently conveys individual views. Many individual interests—even important ones—are underrepresented. With respect to employees such as truck drivers, for example, unions represent only 10% of U.S. wage workers.

Where groups do support individual comment submission, their involvement should not be understood to taint participation. Well-funded regulated entities typically hire attorneys to draft their comments. We nonetheless attribute those views to the commenters. We should treat individual comments similarly even if they incorporate group-suggested language.

Fourth, although mass comments in certain rulemakings may have encouraged computer-generated and falsely attributed comments, agencies should directly tackle these latter problems. And while comments from individuals vary in usefulness and sophistication, that is true of all comments. In short, agencies should respond to large volumes of individual comments not by attempting to deter them but instead, following Recommendation paragraphs 11-13, by providing clear, visible public information on how to draft a valuable comment.

Finally, the most difficult issue is how, exactly, agencies should respond to individual comments that convey views as well as, or instead of, specific information regarding a rule's need or impacts. Large comment volumes, most pragmatically, may signal an agency regarding the rule's political context, including potential congressional concern. Further, large comment quantities can alert agencies to underappreciated or undercommunicated issues or reveal potential public resistance. Such comments might constitute a yellow flag for an agency to investigate, including by reaching out to particular communities to assess the basis and intensity of their views.

At a minimum, an agency should acknowledge and answer such comments, even briefly. The agency might judge that particular public views are outweighed by other considerations. But an answer will communicate, importantly, that individuals have been heard. The Federal Communications Commission's responses to large comment volumes in recent net neutrality proceedings are reasonable examples.

I urge the Conference to consider these issues soon and provide guidance to rulemaking agencies.

Separate Statement for Administrative Conference Recommendation 2021-1 by Senior Fellow Richard J. Pierce, Jr.

Filed June 29, 2021 (This Is an Abbreviated Version of a Statement That Is Available on the ACUS Website.)

These three phenomena and the many problems that they create have only one source—the widespread but mistaken public belief that notice and comment rulemaking can and should be considered a plebiscite in which the number of comments filed for or against a proposed rule is an accurate measure of public opinion that should influence the agency's decision whether to adopt the proposed rule. I believe that ACUS can and should assist agencies in explaining to the public why the notice and comment process is not, and cannot be, a plebiscite, and why the number of comments filed in support of, or in opposition to, a proposed rule should not, and cannot, be a factor in an agency's decision making process.

The Notice and Comment Process Allows Agencies To Issue Rules That Are Based on Evidence

The notice and comment process is an extraordinarily valuable tool that allows agencies to issue rules that are based on evidence. It begins with the issuance of a notice of proposed rulemaking in which an agency describes a problem and proposes one or more ways in which the agency can address the problem by issuing a rule.

The agency then solicits comments from interested members of the public. The comments that assist the agency in evaluating its proposed rule are rich in data and analysis. Some support the agency's views with additional evidence, while others purport to undermine the evidentiary basis for the proposed rule. The agency then makes a decision whether to adopt the proposed rule or some variant of the proposed rule in light of its evaluation of all of the evidence in the record, including both the studies that the agency relied on in its notice and the data and analysis in the comments submitted in response to the notice. Courts require agencies to address all of the issues that were raised in all well-supported substantive comments and to explain adequately why the agency issued, or declined to issue, the rule it proposed or some variation of that rule in light of all of the evidence the agency had before it. If the agency fails to fulfill that duty, the court rejects the rule as arbitrary and capricious.

ACUS has long supported efforts to assist the intended beneficiaries of rules in their efforts to overcome the obstacles to their ability to participate effectively in rulemakings. ACUS should continue to help members of the public file comments that assist an agency in crafting a rule that addresses a problem effectively.

Mass Comments Are Not Helpful to Agency Decision Making and Create Major Problems

Sometimes the companies and advocacy organizations that support or oppose a proposed rule organize campaigns in which they induce members of the public to file purely conclusory comments in which they merely state their support for or opposition to a proposed rule. The proponents or opponents then argue that the large number of such comments proves that there is strong public support for the position taken in those comments. Comments of that type have no value in an agency's decision-making process. Every scholar who has studied the issue has concluded that the number of comments filed for or against a proposed rule is not, and cannot be, a reliable measure of the public's views with respect to the proposed rule.

Mass comment campaigns create major problems in the notice and comment process. Many of those problems were evident in the 2017 net neutrality rulemaking. The New York Attorney General documented the results of the well-orchestrated mass comment campaign in that rulemaking in the report that she issued on May 6, 2021. She labeled as “fake” 18 million of the 22 million comments that were filed in the docket. The number of “fake” comments filed in support of net neutrality was approximately equal to the number of “fake” comments filed by the opponents of net neutrality. One college student filed 7.7 million comments in support of net neutrality, while ISPs paid consulting firms 8.2 million dollars to generate comments against net neutrality.

Two things are easy to predict if the public continues to believe that the number of comments for or against a proposed rule is an important factor in an agency's decision-making process. First, the next net neutrality rulemaking will elicit even more millions of comments as the warring parties on both sides escalate their efforts to maximize the “vote” on each side of the issue. Second, the firms that have a lot of money at stake in other rulemakings will begin to replicate the behavior of the firms that are on each side of the net neutrality debate. The results will be massive, unmanageable dockets in which the “noise” created by the mass comments will make it increasingly difficult for agencies and reviewing courts to focus their attention on the substantive comments that provide the evidence that should be the basis for the agency's decision.

ACUS Should Initiate Another Project To Address Mass Comments in Rulemakings

I think that ACUS should initiate a new project in which it decides whether to discourage mass comments, computer-generated comments and fraudulent comments and, if so, how best to accomplish that. I believe that ACUS can and should discourage these practices by, for instance, encouraging agencies to assist in educating the public about the types of comments that can assist agencies in making evidence-based decisions and the types of comments that are not helpful to agencies and that instead create a variety of problems in managing the notice and comment process.

Administrative Conference Recommendation 2021-2

Periodic Retrospective Review

Adopted June 17, 2021

Retrospective review is the process by which agencies assess existing regulations and decide whether they need to be revisited. Consistent with longstanding executive-branch policy, the Administrative Conference has endorsed the practice of retrospective review of agency regulations and has urged agencies to consider conducting retrospective review under a specific timeframe, which is often known as “periodic retrospective review.” Agencies may conduct periodic retrospective review in different ways. One common way is for an agency to undertake review of some or all of its regulations on a pre-set schedule (e.g., every ten years). Another way is for the agency to set a one-time date for reviewing a regulation and, when that review is performed, set a new date for the next review, and so on. This latter method enables the agency to adjust the frequency of a regulation's periodic retrospective review in light of experience.

See Exec. Order No. 12866, 58 FR 51735, 51739-51740 (Sept. 30, 1993); see also Joseph E. Aldy, Learning from Experience: An Assessment of the Retrospective Reviews of Agency Rules and the Evidence for Improving the Design and Implementation of Regulatory Policy 27 (Nov. 17, 2014) (report to the Admin. Conf. of the U.S.) (“The systematic review of existing regulations across the executive branch dates back, in one form or another, to the Carter Administration.”).

See Admin. Conf. of the U.S., Recommendation 2017-6, Learning from Regulatory Experience, 82 FR 61738 (Dec. 29, 2017); Admin. Conf. of the U.S., Recommendation 2014-5, Retrospective Review of Agency Rules, 79 FR 75114 (Dec. 17, 2014); Admin. Conf. of the U.S., Recommendation 95-3, Review of Existing Agency Regulations, 60 FR 43108 (Aug. 18, 1995).

Recommendation 95-3, supra note 2.

Periodic retrospective review may occur because a statute requires it or because an agency chooses to do it on its own initiative. Statutes requiring periodic retrospective review may specify a time interval over which review should be conducted or leave the frequency up to the agency. The Clean Air Act, for example, requires the Environmental Protection Agency to review certain ambient air quality regulations every five years. On the other hand, the Transportation Recall Enhancement, Accountability, and Documentation (TREAD) Act provides that the Department of Transportation must “specify procedures for the periodic review and update” of its rule on early warning reporting requirements for manufacturers of motor vehicles without specifying how often that review must occur. Even when periodic retrospective review is not mandated by statute, agencies have sometimes voluntarily implemented periodic retrospective review programs.

42 U.S.C. 7409(d)(1).

49 U.S.C. 30166(m)(5).

See Lori S. Bennear & Jonathan B. Wiener, Periodic Review of Agency Regulation 33-38 (June 7, 2021) (report to the Admin. Conf. of the U.S.) (discussing periodic retrospective review plans issued by several agencies, including the Department of Transportation, the Securities and Exchange Commission, and the Federal Emergency Management Agency).

Periodic retrospective review can enhance the quality of agencies' regulations by helping agencies determine whether regulations continue to meet their statutory objectives. Such review can also help agencies evaluate regulatory performance (e.g., the benefits, costs, ancillary impacts, and distributional impacts of regulations), assess whether and how a regulation should be revised in a new rulemaking, determine the accuracy of the assessments they made before issuing their regulations (including assessments regarding forecasts of benefits, costs, ancillary impacts, and distributional impacts), and identify ways to improve the accuracy of the underlying assessment methodologies. Agencies that have incorporated standards by reference in their regulations also can—and, indeed, should—arrange to be notified by the adopting standards organizations of relevant revisions to those standards and consider adopting those revisions, thus ensuring that regulations remain current.

An ancillary impact is an “impact of the rule that is typically unrelated or secondary to the statutory purpose of the rulemaking . . . .” Off. of Mgmt. & Budget, Exec. Off. of the President, Circular A-4, Regulatory Analysis 26 (2003).

A distributional impact is an “impact of a regulatory action across the population and economy, divided up in various ways (e.g., by income groups, race, sex, industrial sector, geography).” Id. at 14.

Id. at 8.

But there can also be drawbacks associated with periodic retrospective review. Some regulations may not be strong candidates for such review because the need for the regulations is unlikely to change and the benefits associated with periodically revisiting them are likely to be small. There are also costs associated with collecting and analyzing data, and time spent reviewing existing regulations may come at the cost of other important regulatory activities. For this reason, agencies might reasonably decide to limit periodic retrospective review to certain types of regulations, such as important regulations that affect large numbers of people or that have particularly pronounced effects on specific groups. Periodic retrospective review can also generate uncertainty regarding whether a regulation will be retained or modified. Agencies, therefore, should tailor their periodic retrospective review plans carefully to account for these drawbacks.

See, e.g., Recommendation 2014-5, supra note 2, ¶ 5 (providing a list of factors for agencies to consider when prioritizing some regulations as important).

Mindful of both the value of periodic retrospective review and the tradeoffs associated with it, this Recommendation offers practical suggestions to agencies about how to establish periodic retrospective review plans. It does so by, among other things, identifying the types of regulations that lend themselves well to periodic retrospective review, proposing factors for agencies to consider in deciding the optimal review frequency when they have such discretion, and identifying different models for staffing periodic retrospective review. In doing so, it builds upon the Conference's longstanding endorsement of public participation in all aspects of the rulemaking process, including retrospective review, by encouraging agencies to seek public input both to help identify the types of regulations that lend themselves well to periodic retrospective review and to inform that review.

See, e.g., Admin. Conf. of the U.S., Recommendation 2018-7, Public Engagement in Rulemaking, 84 FR 2146 (Feb. 6, 2019); Admin. Conf. of the U.S., Recommendation 2017-2, Negotiated Rulemaking and Other Options for Public Engagement, 82 FR 31040 (July 5, 2017).

See supra note 2.

This Recommendation also recognizes the important role that the Office of Management and Budget (OMB) plays in agencies' periodic retrospective review efforts as well as the significance of the Foundations for Evidence-Based Policymaking Act (the Evidence Act) and associated OMB-issued guidance. It encourages agencies to work with OMB to help facilitate data collection relevant to reviewing regulations. It also calls attention to the Evidence Act's requirements that certain agencies create Learning Agendas, which identify questions for agencies to address regarding their regulatory missions, and Annual Evaluation Plans, which lay out specific measures agencies will take to answer those questions. Consistent with the Evidence Act, the Recommendation provides that agencies can incorporate periodic retrospective review in their Learning Agendas and Annual Evaluation Plans by undertaking and documenting certain activities as they carry out their review.

See Bennear & Wiener, supra note 6.

5 U.S.C. 312(a)-(b); Off. of Mgmt. & Budget, Exec. Off. of the President, Memorandum M-19-23, Phase 1 Implementation of the Foundations for Evidence-Based Policymaking Act of 2018: Learning Agendas, Personnel, and Planning Guidance (2019); Off. of Mgmt. & Budget, Exec. Off. of the President, Memorandum M-20-12, Phase 4 Implementation of the Foundations for Evidence-Based Policymaking Act of 2018: Program Evaluation Standards and Practices (2020).

In issuing this Recommendation, the Conference recognizes that agencies will need to consider available resources in deciding whether a periodic retrospective review program should be implemented and, if so, what form it should take. The recommendations offered below are subject to that qualification.

Recommendation

Selecting the Types of Regulations to Subject to Periodic Retrospective Review and the Frequency of Review

1. Agencies should identify any specific regulations or categories of regulations that are subject to statutory periodic retrospective review requirements.

2. For regulations not subject to statutory periodic retrospective review requirements, agencies should establish a periodic retrospective review plan. In deciding which regulations, if any, should be subject to such a review plan, agencies should consider the public benefits of periodic retrospective review, including potential gains from learning more about regulatory performance, and the costs, including the administrative burden associated with performing the review and any disruptions to reliance interests and investment-backed expectations. When agencies adopt new regulations for which plans regarding periodic retrospective review have not been established, agencies should, as part of the process of developing such regulations, decide whether those regulations should be subject to periodic retrospective review.

3. When agencies plan for periodic retrospective review, they should not limit themselves to reviewing a specific final regulation when a review of a larger regulatory program would be more constructive.

4. When agencies decide to subject regulations to periodic retrospective review, they should decide whether to subject some or all of the regulations to a pre-set schedule of review or whether, for some or all of the regulations, it is preferable to set only an initial date for review and decide, as part of that review, when to undertake the next review. In selecting the frequency of review or setting the first or any subsequent date of review, agencies should consider, among others, the following factors:

a. The pace of change of the technology, science, sector of the economy, or part of society affected by the regulation. A higher pace of change may warrant more frequent review;

b. The degree of uncertainty about the accuracy of the initial estimates of regulatory benefits, costs, ancillary impacts, and distributional impacts. Greater uncertainty may warrant more frequent review;

c. Changes in the statutory framework under which the regulation was issued. More changes may warrant more frequent review;

d. Comments, complaints, requests for waivers or exemptions, petitions for the modification or repeal of existing rules, or suggestions received from interested persons. The level of public interest or amount of new evidence regarding changing the regulation may warrant more frequent review;

e. The difficulties arising from implementation of the regulation, as demonstrated by poor compliance rates, requests for waivers or exemptions, the amount of clarifying guidance issued, remands from the courts, or other factors. Greater difficulties may warrant more frequent review;

f. The administrative burden in conducting periodic retrospective review. Larger burdens, such as greater staff time, involved in reviewing the regulation may warrant less frequent review; and

g. Reliance interests and investment-backed expectations connected with the regulation. Steps taken by persons in reliance on a particular regulation or with the expectation that it will remain unaltered may favor less frequent review.

5. In making the decisions outlined in Paragraphs 1 through 4, agencies should recognize that public input can help them identify which regulations should be subject to periodic retrospective review and with what frequency. Agencies should consider soliciting public input by means such as convening meetings of interested persons, engaging in targeted outreach efforts to historically underrepresented or under-resourced groups that may be affected by the agencies' regulations, and posting requests for information.

6. Agencies should publicly disclose their periodic retrospective review plans, which should cover issues such as which regulations are subject to periodic retrospective review, how frequently those regulations are reviewed, what the review entails, and whether the review is conducted pursuant to a legal requirement or the agencies' own initiative. Agencies should include these notifications on their websites and consider publishing them in the Federal Register, even if the law does not require it.

7. With respect to regulations subject to a pre-set schedule of periodic retrospective review, agencies should periodically reassess the regulations that should be subject to periodic retrospective review and the optimal frequency of review.

Publishing Results of Periodic Retrospective Review and Soliciting Public Feedback on Regulations Subject to Review

8. Agencies should publish, in a prominent, easy-to-find place on the portion of their websites dealing with rulemaking matters, a document or set of documents explaining how they conducted a given periodic retrospective review, what information they considered, and what public outreach they undertook. They should also include this document or set of documents on Regulations.gov. To the extent appropriate, agencies should organize the data in the document or set of documents in ways that allow private parties to re-create the agencies' work and run additional analyses concerning existing regulations' effectiveness. When feasible, agencies should also explain in plain language the significance of their data and how they used the data to shape their review.

9. Agencies should seek input from relevant parties when conducting periodic retrospective review. Possible outreach methods include convening meetings of interested persons; engaging in targeted outreach efforts, such as proactively bringing the regulation to the attention of historically underrepresented or under-resourced groups; and posting requests for information regarding the regulation. Agencies should integrate relevant information from the public into their periodic retrospective reviews.

10. Agencies should work with the Office of Management and Budget (OMB) to properly invoke any flexibilities within the Paperwork Reduction Act that would enable them to gather relevant data expeditiously.

Ensuring Adequate Resources and Staffing

11. Agencies should decide how best to structure their staffing of periodic retrospective reviews to foster a culture of retrospective review and ongoing learning. Below are examples of some staffing models, which may be used in tandem or separately:

a. Assigning the same staff the same regulation, or category of regulation, each time it is reviewed. This approach allows staff to gain expertise in a particular kind of regulation, thereby potentially improving the efficiency of the review;

b. Assigning different staff the same regulation, or category of regulation, each time it is reviewed. This approach promotes objectivity by allowing differing viewpoints to enter into the analysis;

c. Engaging or cooperating with agency or non-agency subject matter experts to review regulations; and

d. Pairing subject matter experts, such as engineers, economists, sociologists, and scientists, with other agency employees in conducting the review. This approach maximizes the likelihood that both substantive considerations, such as the net benefits and distributional and ancillary impacts of the regulation, and procedural considerations, such as whether the regulation conflicts with other regulations or complies with plain language requirements, will enter into the review.

Using Evidence Act Processes

12. Consistent with the Evidence Act, agencies should incorporate periodic retrospective reviews in their Learning Agendas and Annual Evaluation Plans. In doing so, agencies should ensure that they include:

a. The precise questions they intend to answer using periodic retrospective review. Those questions should include how frequently particular regulations should be reviewed and should otherwise be keyed to the factors set forth in Section 5 of Executive Order 12866 for periodic retrospective review of existing significant regulations;

b. The information needed to adequately review the regulations subject to the periodic retrospective reviews. Agencies should state whether they will undertake new information collection requests or use existing information to conduct the reviews;

c. The methods the agencies will use in conducting their reviews, which should comport with the federal program evaluation standards set forth by OMB;

d. The challenges the agencies anticipate encountering during the reviews, if any, such as obstacles to collecting relevant data; and

e. The ways the agencies will use the results of the reviews to inform policymaking.

Interagency Coordination

13. Agencies that are responsible for coordinating activities among other agencies, such as the Office of Information and Regulatory Affairs, should, as feasible, regularly convene agencies to identify and share best practices on periodic retrospective review. These agencies should address questions such as how to improve the timeliness and analytic quality of review and how to determine the optimal frequency of discretionary review.

14. To promote a coherent regulatory scheme, agencies should coordinate their periodic retrospective reviews with other agencies that have issued related regulations.

Administrative Conference Recommendation 2021-3

Early Input on Regulatory Alternatives

Adopted June 17, 2021

Agency development of and outreach concerning regulatory alternatives prior to issuing a notice of proposed rulemaking (NPRM) on important issues often results in a better-informed notice-and-comment process, facilitates decision making, and improves rules. In this context, the term “regulatory alternative” is used broadly and could mean, among other things, a different method of regulating, a different level of stringency in the rule, or not regulating at all. Several statutes and executive orders, including the National Environmental Policy Act (NEPA), the Regulatory Flexibility Act (RFA), and Executive Order 12866, require federal agencies to identify and consider alternative regulatory approaches before proposing certain new rules. This Recommendation suggests best practices for soliciting early input during the process of developing regulatory alternatives, whether or not such solicitation is required by law or executive order, before publishing an NPRM. It also provides best practices for publicizing the alternatives considered when agencies are promulgating important rules.

See Christopher Carrigan & Stuart Shapiro, Developing Regulatory Alternatives Through Early Input 8 (June 4, 2021) (report to the Admin. Conf. of the U.S.).

42 U.S.C. 4332(C)(iii) (requiring agencies to consider alternatives in environmental impact statements under NEPA).

5 U.S.C. 603(c) (requiring agencies to consider alternatives in regulatory flexibility analyses conducted under the RFA, as amended by the Small Business Regulatory Enforcement Fairness Act).

Exec. Order No. 12866, § 1, 58 FR 51735, 51735-36 (Sept. 30, 1993).

See Admin. Conf. of the U.S., Recommendation 2014-5, Retrospective Review of Agency Rules, ¶ 6, 79 FR 75114, 75116-17 (Dec. 17, 2014).

The Administrative Conference has previously recommended that agencies engage with the public throughout the rulemaking process, including by seeking input while agencies are still in the early stages of shaping a rule. Agencies might conduct this outreach while developing their regulatory priorities, including in the proposed regulatory plans agencies are required to prepare under Executive Order 12866. Seeking early input before issuing a notice of proposed rulemaking can help agencies identify alternatives and learn more about the benefits, costs, distributional impacts, and technical feasibility of alternatives to the proposal they are considering. Doing so is particularly important, even if not required by law or executive order, for a proposal likely to draw significant attention because of its economic impact or other significance. It can also be especially valuable for agencies seeking early input on regulatory alternatives to reach out to a wide range of interested persons, including affected groups that often are underrepresented in the administrative process and may suffer disproportionate harms from a proposed rule.

See Admin. Conf. of the U.S., Recommendation 2018-7, Public Engagement in Rulemaking, ¶ 5, 84 FR 2146, 2148 (Feb. 6, 2019); see also, e.g., Admin. Conf. of the U.S., Recommendation 2017-6, Learning from Regulatory Experience, 82 FR 61728 (Dec. 29, 2017); Admin. Conf. of the U.S., Recommendation 2017-2, Negotiated Rulemaking and Other Options for Public Engagement, 82 FR 31040 (July 5, 2017); Admin. Conf. of the U.S., Recommendation 85-2, Agency Procedures for Performing Regulatory Analysis of Rules, 50 FR 28364 (July 12, 1985); Michael Sant'Ambrogio & Glen Staszewski, Public Engagement with Agency Rulemaking 62-77 (Nov. 19, 2018) (report to the Admin. Conf. of the U.S.).

See Exec. Order No. 12866, supra note 4, § 4(c).

A distributional impact is an “impact of a regulatory action across the population and economy, divided up in various ways (e.g., income groups, race, sex, industrial sector, geography).”

See Exec. Order. No. 13985, 86 FR 7009 (Jan. 25, 2021) (directing the Office of Management and Budget, in partnership with agencies, to ensure that agency policies and actions are equitable with respect to race, ethnicity, religion, income, geography, gender identity, sexual orientation, and disability); Memorandum on Modernizing Regulatory Review, 86 FR 7223 (Jan. 26, 2021) (requiring the Office of Management and Budget to produce recommendations regarding improving regulatory review that, among other things, “propose procedures that take into account the distributional consequences of regulations . . . to ensure that regulatory initiatives appropriately benefit and do not inappropriately burden disadvantaged, vulnerable, or marginalized communities”).

When seeking early input on regulatory alternatives, agencies might consider approaches modeled on practices that other agencies already use. In so doing, they might look at agency practices that are required by statute (e.g., the Small Business Regulatory Enforcement Fairness Act) or agency rules (e.g., the Department of Energy's “Process Rule”), or practices that agencies have voluntarily undertaken in the absence of any legal requirement.

5 U.S.C. 609.

10 CFR 430, subpart C, app. A.

Nevertheless, seeking early input on alternatives may not be appropriate in all cases and may trigger certain procedural requirements. In some instances, the alternatives may be obvious. In others, the subject matter may be so obscure that public input is unlikely to prove useful. And in all cases, agencies face resource constraints and competing priorities, so agencies may wish to limit early public input to a subclass of rules such as those with substantial impact. Agencies will need to consider whether the benefits of early outreach outweigh the costs, including the resources required to conduct the outreach and any delays entailed. When agencies do solicit early input, they will still want to tailor their outreach to ensure that they are soliciting input in a way that is cost-effective, is equitable, and maximizes the likelihood of obtaining diverse, useful responses.

See, e.g., Federal Advisory Committee Act, 5 U.S.C. app. 2 §§ 1-16.

Recommendation

1. When determining whether to seek early input from knowledgeable persons to identify potential regulatory alternatives or respond to alternatives an agency has already identified, the agency should consider factors such as:

a. The extent of the agency's familiarity with the policy issues and key alternatives;

b. The extent to which the conduct being regulated or any of the alternatives suggested are novel;

c. The degree to which potential alternatives implicate specialized technical or technological expertise;

d. The complexity of the underlying policy question and the proposed alternatives;

e. The potential magnitude of the costs and benefits of the alternatives proposed;

f. The likelihood that the selection of an alternative will be controversial;

g. The time and resources that conducting such outreach would require;

h. The extent of the agency's discretion to select among alternatives, given the statutory language being implemented;

i. The deadlines the agency faces, if any, and the harms that might occur from the delay required to solicit and consider early feedback;

j. The extent to which certain groups that are affected by the proposed regulation and have otherwise been underrepresented in the agency's administrative process may suffer adverse distributional effects from generally beneficial proposals; and

k. The extent to which experts in other agencies may have valuable input on alternatives.

2. In determining what outreach to undertake concerning possible regulatory alternatives, an agency should consider using, consistent with available resources and feasibility, methods of soliciting public input including:

a. Meetings with interested persons, held episodically or as needed based on rulemaking activities;

b. Listening sessions;

c. Internet and social media forums;

d. Focus groups;

e. Advisory committees, including those tasked with conducting negotiated rulemaking;

f. Advance notices of proposed rulemakings; and

g. Requests for information.

The agency should also consider how to ensure that its interactions with outside persons are transparent, to the maximum extent permitted by law.

3. An agency should consider whether the methods it uses to facilitate early outreach in its rulemaking process will engage a wide range of interested persons, including individuals and groups that are affected by the rule and are traditionally underrepresented in the agency's rulemaking processes. The agency should consider which methods would best facilitate such outreach, including providing materials designed for the target participants. For example, highly technical language may be appropriate for some, but not all, audiences. The agency should endeavor to make participation by interested persons who have less time and fewer resources as easy as possible, particularly when those potential participants do not have experience in the rulemaking process. The agency should explain possible consequences of the potential rulemaking to help potential participants understand the importance of their input and to encourage their participation in the outreach.

4. If an agency is unsure what methods of soliciting public input will best meet its needs and budget, it should consider testing different methods to generate alternatives or receive input on the regulatory alternatives it is considering before issuing notices of proposed rulemaking (NPRMs). As appropriate, the agency should describe the outcomes of using these different methods in the NPRMs for rules in which they are used.

5. An agency should ensure that all of its relevant officials, including economists, scientists, and other experts, have an opportunity to identify potential regulatory alternatives during the early input process. As appropriate, the agency should also reach out to select experts in other agencies for input on alternatives.

6. An agency should consider providing in the NPRM a discussion of the reasonable regulatory alternatives it has considered or that have been suggested to it, including alternatives it is not proposing to adopt, together with the reasons it is not proposing to adopt those alternatives. To the extent the agency is concerned about revealing the identity of the individuals or groups offering proposed alternatives due to privacy or confidentiality concerns, it should consider characterizing the identity (e.g., industry representative, environmental organization, etc.) or listing the alternatives without ascribing them to any particular person.

7. When an agency discusses regulatory alternatives in the preamble of a proposed or final rule, it should also consider including a discussion of any reasonable alternatives suggested or considered through early public input, but which the agency believes are precluded by statute. The discussion should also include an explanation of the agency's views on the legality of those alternatives.

8. To help other agencies craft best practices for early engagement with the public, an agency should, when feasible, share data and other information about the effectiveness of its efforts to solicit early input on regulatory alternatives.

Administrative Conference Recommendation 2021-4

Virtual Hearings in Agency Adjudication

Adopted June 17, 2021

The use of video teleconferencing (VTC) to conduct administrative hearings and other adjudicative proceedings has become increasingly prevalent over the past few decades due to rapid advances in technology and telecommunications coupled with reduced personnel, increased travel costs, and the challenges of the COVID-19 pandemic. As the Administrative Conference has recognized, “[s]ome applaud the use of VTC by administrative agencies because it offers potential efficiency benefits, such as reducing the need for travel and the costs associated with it, reducing caseload backlog, and increasing scheduling flexibility for agencies and attorneys as well as increasing access for parties.” At the same time, as the Conference has acknowledged, critics have suggested that the use of VTC may “hamper communication” among participants—including parties, their representatives, and the decision maker—or “hamper a decision-maker's ability to make credibility determinations.”

Admin. Conf. of the U.S., Recommendation 2011-4, Agency Use of Video Hearings: Best Practices and Possibilities for Expansion, 76 FR 48795, 48795-96 (Aug. 9, 2011).

Id.

The Conference has encouraged agencies, particularly those with high-volume caseloads, to consider “whether the use of VTC would be beneficial as a way to improve efficiency and/or reduce costs while also preserving the fairness and participant satisfaction of proceedings.” Recognizing that the use of VTC may not be appropriate in all circumstances and must be legally permissible, the Conference has identified factors for agencies to consider when determining whether to use VTC to conduct hearings. They include whether the nature and type of adjudicative hearings conducted by an agency are conducive to the use of VTC; whether VTC can be used without adversely affecting case outcomes or representation of parties; and whether the use of VTC would affect costs, productivity, wait times, or access to justice. The Conference has also set forth best practices and practical guidelines for conducting video hearings.

Id.

Id. ¶ 2.

Admin. Conf. of the U.S., Recommendation 2014-7, Best Practices for Using Video Teleconferencing for Hearings, 79 FR 75114 (Dec. 17, 2014); Recommendation 2011-4, supra note 1; see also Martin E. Gruen & Christine R. Williams, Admin. Conf. of the U.S., Handbook on Best Practices for Using Video Teleconferencing in Adjudicatory Hearings (2015).

When the Conference issued these recommendations, most video participants appeared in formal hearing rooms equipped with professional-grade video screens, cameras, microphones, speakers, and recording systems. Because these hearing rooms were usually located in government facilities, agencies could ensure that staff were on site to maintain and operate VTC equipment, assist participants, and troubleshoot any technological issues. This setup, which this Recommendation calls a “traditional video hearing,” gives agencies a high degree of control over VTC equipment, telecommunications connections, and hearing rooms.

Videoconferencing technology continues to evolve, with rapid developments in internet-based videoconferencing software, telecommunications infrastructure, and personal devices. Recently, many agencies have also allowed, or in some cases required, participants to appear remotely using internet-based videoconferencing software. Because individual participants can run these software applications on personal computers, tablets, or smartphones, they can appear from a location of their choosing, such as a home or office, rather than needing to travel to a video-equipped hearing site. This Recommendation uses the term “virtual hearings” to refer to proceedings in which individuals appear in this manner. This term includes proceedings in which all participants appear virtually, as well as hybrid proceedings in which some participants appear virtually while others participate by alternative remote means or in person.

For example, some tribunals around the world are now exploring the use of telepresence systems, which rely on high-quality video and audio equipment to give participants at different, specially equipped sites the experience of meeting in the same physical space. See Fredric I. Lederer, The Evolving Technology-Augmented Courtroom Before, During, and After the Pandemic, 23 Vand. J. Ent. & Tech. L. 301, 326 (2021).

See Jeremy Graboyes, Legal Considerations for Remote Hearings in Agency Adjudications 3 (June 16, 2020) (report to the Admin. Conf. of the U.S.).

Although some agencies used virtual hearings before 2020, their use expanded dramatically during the COVID-19 pandemic, when agencies maximized telework, closed government facilities to the public and employees, and required social distancing. Agencies gained considerable experience conducting virtual hearings during this period, and this Recommendation draws heavily on these experiences.

Id. at 1.

See Fredric I. Lederer & the Ctr. for Legal & Ct. Tech., Analysis of Administrative Agency Adjudicatory Hearing Use of Remote Appearances and Virtual Hearings 7 (June 3, 2021) (report to the Admin. Conf. of the U.S.).

Virtual hearings can offer several benefits to agencies and parties compared with traditional video hearings. Participants may be able to appear from their home using their own personal equipment, from an attorney's office, or from another location such as a public library or other conveniently located governmental facility, without the need to travel to a video-equipped hearing site. As a result, virtual hearings can simplify scheduling for parties and representatives and may facilitate the involvement of other participants such as interpreters, court reporters, witnesses, staff or contractors who provide administrative or technical support, and other interested persons. Given this flexibility, virtual hearings may be especially convenient for short and relatively informal adjudicative proceedings, such as pre-hearing and settlement conferences.

See id. at 3.

Because virtual hearings allow participants to appear from a location of their choosing without needing to travel to a facility suitable for conducting an in-person or traditional video hearing, they have the potential to expand access to justice for individuals who belong to certain underserved communities. Virtual hearings may be especially beneficial for individuals whose disabilities make it difficult to travel to hearing facilities or participate in public settings; individuals who live in rural areas and may need to travel great distances to hearing facilities; and low-income individuals for whom it may be difficult to secure transportation to hearing facilities or take time off work or arrange for childcare to participate in in-person or traditional video hearings. The use of virtual hearings may also expand access to representation, especially for individuals who live in areas far from legal aid organizations.

See Alicia Bannon & Janna Adelstein, Brennan Ctr. for Justice, The Impact of Video Proceedings on Fairness and Access to Justice in Court 9-10 (2020); Nat'l Ctr. for State Cts., Call to Action: Achieving Civil Justice for All 37-38 (2016); Lederer, supra note 6, at 338; Susan A. Bandes & Neal Feigenson, Virtual Trials: Necessity, Invention, and the Evolution of the Courtroom, 68 Buff. L. Rev. 1275, 1313-14 (2020).

But virtual hearings can pose significant challenges as well. The effectiveness of virtual hearings depends on individuals' access to a suitable internet connection, a personal device, and a space from which to participate, as well as their ability to effectively participate in an adjudicative proceeding by remote means while operating a personal device and videoconferencing software. As a result, virtual hearings may create a barrier to access for individuals who belong to underserved communities, such as low-income individuals for whom it may be difficult to obtain access to high-quality personal devices or private internet services, individuals whose disabilities prevent effective engagement in virtual hearings or make it difficult to set up and manage the necessary technology, and individuals with limited English proficiency. Some individuals may have difficulty, feel uncomfortable, or lack experience using a personal device or internet-based videoconferencing software to participate in an adjudicative proceeding. Some critics have also raised concerns that virtual participation can negatively affect parties' satisfaction, engagement with the adjudicative process, or perception of justice.

See Lederer, supra note 9, at 8-12, 18.

Agencies have devised several methods to address these concerns. The Board of Veterans' Appeals conducts virtual hearings using the same videoconferencing application that veterans use to access agency telehealth services. To enhance the formality of virtual hearings, many adjudicators use a photographic backdrop that depicts a hearing room, seal, or flag. Many agencies use pre-hearing notices and online guides to explain virtual hearings to participants. Several agencies provide general or pre-hearing training sessions at which agency staff, often attorneys, can familiarize participants with the procedures and standards of conduct for virtual hearings. Though highly effective, these sessions require staff time and availability.

See id. at 12, 16-17.

Virtual hearings can also pose practical and logistical challenges. They can suffer from technical glitches, often related to short-term internet bandwidth issues. Virtual hearings may sometimes require agencies to take special measures to ensure the integrity of adjudicative proceedings. Such measures may be necessary, for example, to safeguard classified, legally protected, confidential, or other sensitive information, or to monitor or sequester witnesses to ensure third parties do not interfere with their testimony. Agencies may also need to take special measures to ensure that interested members of the public can observe virtual hearings in appropriate circumstances by, for example, streaming live audio or video of a virtual hearing or providing access to a recording afterward.

See id. at 12, 17.

For evidentiary hearings not required by the Administrative Procedure Act (APA), the Conference has recommended that agencies “adopt the presumption that their hearings are open to the public, while retaining the ability to close the hearings in particular cases, including when the public interest in open proceedings is outweighed by the need to protect: (a) National security; (b) Law enforcement; (c) Confidentiality of business documents; and (d) Privacy of the parties to the hearing.” Admin. Conf. of the U.S., Recommendation 2016-4, Evidentiary Hearings Not Required by the Administrative Procedure Act, ¶ 18, 81 FR 94312, 94316 (Dec. 23, 2016). Similar principles may also apply in other proceedings, including those conducted under the APA's formal-hearing provisions. See Graboyes, supra note 7, at 22-23.

Recording virtual hearings may raise additional legal, policy, and practical concerns. To the extent that such recordings become part of the administrative record or serve as the official record of the proceeding, agencies may need to consider whether and for what purposes appellate reviewers may rely on them. Creating recordings may trigger obligations under federal information and record-keeping laws and policies, including the Freedom of Information Act, Privacy Act, and Federal Records Act. Agencies may need to review contract terms when considering the use of videoconferencing software applications to determine whether any other entities own or can access or use recordings made through the applications, or whether an agency may obtain ownership and possession of the recording. Steps may be necessary to ensure that agencies do not inadvertently disclose classified, protected, or sensitive information or make it easy for people to use publicly available recordings for improper purposes. Practically, unless agencies store recordings on external servers, such as in the cloud, they would need sufficient technological capacity to store the volume of recordings associated with virtual hearings. Agencies would also need personnel qualified and available to manage and, as appropriate, prepare recordings for public access.

5 U.S.C. 552.

Id. § 552a.

44 U.S.C. 3101 et seq.

This Recommendation builds on Recommendation 2011-4, Agency Use of Video Hearings: Best Practices and Possibilities for Expansion, and Recommendation 2014-7, Best Practices for Using Video Teleconferencing for Hearings, by identifying factors for agencies to consider as they determine when and how to conduct virtual hearings. Specifically, this Recommendation provides best practices for conducting virtual hearings in appropriate circumstances and encourages agencies to monitor technological and procedural developments that may facilitate remote participation.

As emphasized in Recommendation 2014-7, the Conference is committed to the principles of fairness, efficiency, and participant satisfaction in the conduct of adjudicative proceedings. When virtual hearings are used, they should be used in a manner that promotes these principles, which form the cornerstones of adjudicative legitimacy. The Conference recognizes that the use of virtual hearings is not suitable for every kind of adjudicative proceeding but believes greater familiarity with existing agency practices and awareness of the improvements in technology will encourage broader use of such technology in appropriate circumstances. This Recommendation aims to ensure that, when agencies choose to offer virtual hearings, they are able to provide a participant experience that meets or even exceeds the in-person hearing experience.

This Recommendation does not take a position on when parties should be entitled to, or may request, an in-person hearing.

Recommendation

Procedural Practices

1. If legally permissible, agencies should offer virtual hearings consistent with their needs, in accord with principles of fairness and efficiency, and with due regard for participant satisfaction. In developing policies regarding virtual hearings, agencies should consider, at a minimum, the following:

a. Whether the nature and type of adjudicative proceedings are conducive to the use of virtual hearings and whether virtual hearings can be used without affecting the procedural fairness or substantive outcomes of cases;

b. Whether virtual hearings are likely to result in significant benefits for agency and non-agency participants, including improved access to justice, more efficient use of time for adjudicators and staff, reduced travel costs and delays, and reduced wait times and caseload backlogs;

c. Whether virtual hearings are likely to result in significant costs for agency and non-agency participants, including those associated with purchasing, installing, and maintaining equipment and software, obtaining and using administrative and technical support, and providing training;

d. Whether the use of virtual hearings would affect the representation of parties;

e. Whether the use of virtual hearings would affect communication between hearing participants (including adjudicators, parties, representatives, witnesses, interpreters, agency staff, and others);

f. Whether the use of virtual hearings would create a potential barrier to access for individuals who belong to underserved communities, such as low-income individuals for whom it may be difficult to obtain access to high-quality personal devices or private internet services, individuals whose disabilities prevent effective engagement in virtual hearings or make it difficult to set up and manage the necessary technology, and individuals with limited English proficiency, or for other individuals who may have difficulty using a personal device or internet-based videoconferencing software to participate in adjudicative proceedings;

g. Whether the use of virtual hearings would affect adjudicators' ability to make credibility determinations; and

h. Whether there is a reasonable concern that the use of virtual hearings would enable someone to improperly interfere with participants' testimony.

2. Agencies should revise any provisions of their codified rules of practice that unintentionally restrict adjudicators' discretion to allow individuals to participate virtually, when such participation would otherwise satisfy the principles in Paragraph 1.

3. Agencies should adopt the presumption that virtual hearings are open to the public, while retaining the ability to close the hearings in particular cases, including when the public interest in open proceedings is outweighed by the need to protect:

a. National security;

b. Law enforcement;

c. Confidentiality of business documents; or

d. Privacy of hearing participants.

For virtual hearings that are open to the public, agencies should provide a means for interested persons to attend or view the hearing.

4. If agencies record virtual hearings, they should consider the legal, practical, and technical implications of doing so and establish guidelines to seek to ensure, at a minimum, compliance with applicable information and recordkeeping laws and policies and guard against misuse of recordings.

5. Agencies should work with information technology and data security professionals to develop protocols to properly safeguard classified, legally protected, confidential, and other sensitive information during virtual hearings and also to ensure the integrity of the hearing process.

6. Agencies that offer virtual hearings should develop guidelines for conducting them, make those guidelines publicly available prominently on their websites, and consider which of those guidelines to include in their codified rules of practice. Such guidelines should address, as applicable:

a. Any process by which parties, representatives, and other participants can request to participate virtually;

b. Circumstances in which an individual's virtual participation may be inappropriate;

c. Any process by which parties, representatives, and other participants can, as appropriate, object to or express concerns about participating virtually;

d. Technological requirements for virtual hearings, including those relating to access to the internet-based videoconferencing software used for virtual hearings and any technical suggestions for participants who appear virtually;

e. Standards of conduct for participants during virtual hearings, such as those requiring participants to disclose whether they are joined or assisted by any silent, off-camera individuals;

f. The availability of or requirement to attend a general training session or pre-hearing conference to discuss technological requirements, procedural rules, and standards of conduct for virtual hearings;

g. Any protocols or best practices for participating in virtual hearings, such as those addressing:

i. When and how to join virtual hearings using either a personal device or equipment available at another location, such as a public library or other governmental facility;

ii. How to submit exhibits before or during virtual hearings;

iii. Whether and how to use screen sharing or annotation tools available in the videoconferencing software;

iv. How to make motions, raise objections, or otherwise indicate that a participant would like to speak;

v. How to participate effectively in a virtual setting (e.g., recommending that participants not appear while operating a moving vehicle and, to account for audio delays, that they wait several seconds after others finish talking before speaking);

vi. How to indicate that there is a technical problem or request technical support;

vii. When adjudicators will stop or postpone virtual hearings due to technical problems and what actions will be taken to attempt to remedy the problems while preserving participants' hearing rights;

viii. How to examine witnesses who participate virtually and monitor or sequester them, as necessary;

ix. How parties and their representatives can consult privately with each other;

x. When participants should have their microphones or cameras on or off;

xi. Whether participants may communicate with each other using a videoconferencing software's chat feature or other channels of communication, and, if so, how;

xii. How to properly safeguard classified, legally protected, confidential, or other sensitive information;

xiii. Whether participants or interested persons may record proceedings;

xiv. Whether and how other interested persons can attend or view streaming video; and

xv. Whether and how participants or interested persons may access recordings of virtual hearings maintained by the agency.

7. Agencies should provide information on virtual hearings in pre-hearing notices to participants. Such notices should include or direct participants to the guidelines described in Paragraph 6.

Facilities and Equipment

8. When feasible, agencies should provide adjudicators with spaces, such as offices or hearing rooms, that are equipped and maintained for the purpose of conducting hearings that involve one or more remote participants. When designing such a space, agencies should provide for:

a. Dedicated cameras, lighting, and microphones to capture and transmit audio and video of the adjudicator to remote participants;

b. Adjudicators' access to a computer and a minimum of two monitors—one for viewing remote participants and another for viewing the record—and potentially a third for performing other tasks or accessing other information during proceedings; and

c. High-quality bandwidth.

9. Agencies should provide adjudicators who appear from a location other than a space described in Paragraph 8 with a digital or physical backdrop that simulates a physical hearing room or other official space.

Training and Support

10. Agencies should provide training for adjudicators on conducting virtual hearings.

11. Agencies should provide adjudicators with adequate technical and administrative support so that adjudicators are not responsible for managing remote participants (e.g., admitting or removing participants, muting and unmuting participants, managing breakout rooms) or troubleshooting technical issues for themselves or other participants before or during proceedings. Agencies should provide advanced training for administrative and technical support staff to ensure they are equipped to manage virtual hearings and troubleshoot technical problems that may arise before or during proceedings.

12. Agencies should consider providing general training sessions or pre-hearing conferences at which staff can explain expectations, technological requirements, and procedural rules for virtual hearings to parties and representatives.

Assessment and Continuing Development

13. Agencies should try to measure how virtual hearings compare with proceedings conducted using other formats, including whether the use of virtual hearings affects procedural fairness or produces different substantive outcomes. Agencies should recognize the methodological challenges in measuring procedural fairness and comparing substantive outcomes to determine whether different hearing formats, apart from other relevant factors and case-specific circumstances, produce comparable results.

14. Agencies should collect anonymous feedback from participants (e.g., using post-hearing surveys) to determine and assess participants' satisfaction with the virtual format and identify any concerns. Agencies should also maintain open lines of communication with representatives in order to receive feedback about the use of virtual hearings. Agencies should collect feedback in a manner that complies with the Paperwork Reduction Act and review this feedback on a regular basis to determine whether any previously unrecognized deficiencies exist.

15. Agencies should monitor technological and procedural developments to seek to ensure that options for individuals to participate remotely in adjudicative proceedings remain current and that those options reasonably comport with participants' expectations.

16. Agencies should share information with each other to reduce costs, increase efficiency, and provide a hearing experience that seeks to ensure fairness and participant satisfaction. To help carry out this Recommendation, the Conference's Office of the Chairman should provide, as authorized by 5 U.S.C. 594(2), for the “interchange among administrative agencies of information potentially useful in improving” virtual hearings and other forms of remote participation in agency adjudicative proceedings.

[FR Doc. 2021-14597 Filed 7-7-21; 8:45 am]

BILLING CODE 6110-01-P