Colo. Rev. Stat. § 6-1-1703

Current through Chapter 492 of the 2024 Legislative Session
Section 6-1-1703 - Deployer duty to avoid algorithmic discrimination - risk management policy and program
(1) On and after February 1, 2026, a deployer of a high-risk artificial intelligence system shall use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination. In any enforcement action brought on or after February 1, 2026, by the attorney general pursuant to section 6-1-1706, there is a rebuttable presumption that a deployer of a high-risk artificial intelligence system used reasonable care as required under this section if the deployer complied with this section and any additional requirements or obligations as set forth in rules promulgated by the attorney general pursuant to section 6-1-1707.
(2)
(a) On and after February 1, 2026, and except as provided in subsection (6) of this section, a deployer of a high-risk artificial intelligence system shall implement a risk management policy and program to govern the deployer's deployment of the high-risk artificial intelligence system. The risk management policy and program must specify and incorporate the principles, processes, and personnel that the deployer uses to identify, document, and mitigate known or reasonably foreseeable risks of algorithmic discrimination. The risk management policy and program must be an iterative process planned, implemented, and regularly and systematically reviewed and updated over the life cycle of a high-risk artificial intelligence system, requiring regular, systematic review and updates. A risk management policy and program implemented and maintained pursuant to this subsection (2) must be reasonable considering:
(I)
(A) The guidance and standards set forth in the latest version of the "Artificial Intelligence Risk Management Framework" published by the national institute of standards and technology in the United States department of commerce, standard ISO/IEC 42001 of the International Organization for Standardization, or another nationally or internationally recognized risk management framework for artificial intelligence systems, if the standards are substantially equivalent to or more stringent than the requirements of this part 17; or
(B) Any risk management framework for artificial intelligence systems that the attorney general, in the attorney general's discretion, may designate;
(II) The size and complexity of the deployer;
(III) The nature and scope of the high-risk artificial intelligence systems deployed by the deployer, including the intended uses of the high-risk artificial intelligence systems; and
(IV) The sensitivity and volume of data processed in connection with the high-risk artificial intelligence systems deployed by the deployer.
(b) A risk management policy and program implemented pursuant to subsection (2)(a) of this section may cover multiple high-risk artificial intelligence systems deployed by the deployer.
(3)
(a) Except as provided in subsections (3)(d), (3)(e), and (6) of this section:
(I) A deployer, or a third party contracted by the deployer, that deploys a high-risk artificial intelligence system on or after February 1, 2026, shall complete an impact assessment for the high-risk artificial intelligence system; and
(II) On and after February 1, 2026, a deployer, or a third party contracted by the deployer, shall complete an impact assessment for a deployed high-risk artificial intelligence system at least annually and within ninety days after any intentional and substantial modification to the high-risk artificial intelligence system is made available.
(b) An impact assessment completed pursuant to this subsection (3) must include, at a minimum, and to the extent reasonably known by or available to the deployer:
(I) A statement by the deployer disclosing the purpose, intended use cases, and deployment context of, and benefits afforded by, the high-risk artificial intelligence system;
(II) An analysis of whether the deployment of the high-risk artificial intelligence system poses any known or reasonably foreseeable risks of algorithmic discrimination and, if so, the nature of the algorithmic discrimination and the steps that have been taken to mitigate the risks;
(III) A description of the categories of data the high-risk artificial intelligence system processes as inputs and the outputs the high-risk artificial intelligence system produces;
(IV) If the deployer used data to customize the high-risk artificial intelligence system, an overview of the categories of data the deployer used to customize the high-risk artificial intelligence system;
(V) Any metrics used to evaluate the performance and known limitations of the high-risk artificial intelligence system;
(VI) A description of any transparency measures taken concerning the high-risk artificial intelligence system, including any measures taken to disclose to a consumer that the high-risk artificial intelligence system is in use when the high-risk artificial intelligence system is in use; and
(VII) A description of the post-deployment monitoring and user safeguards provided concerning the high-risk artificial intelligence system, including the oversight, use, and learning process established by the deployer to address issues arising from the deployment of the high-risk artificial intelligence system.
(c) In addition to the information required under subsection (3)(b) of this section, an impact assessment completed pursuant to this subsection (3) following an intentional and substantial modification to a high-risk artificial intelligence system on or after February 1, 2026, must include a statement disclosing the extent to which the high-risk artificial intelligence system was used in a manner that was consistent with, or varied from, the developer's intended uses of the high-risk artificial intelligence system.
(d) A single impact assessment may address a comparable set of high-risk artificial intelligence systems deployed by a deployer.
(e) If a deployer, or a third party contracted by the deployer, completes an impact assessment for the purpose of complying with another applicable law or regulation, the impact assessment satisfies the requirements established in this subsection (3) if the impact assessment is reasonably similar in scope and effect to the impact assessment that would otherwise be completed pursuant to this subsection (3).
(f) A deployer shall maintain the most recently completed impact assessment for a high-risk artificial intelligence system as required under this subsection (3), all records concerning each impact assessment, and all prior impact assessments, if any, for at least three years following the final deployment of the high-risk artificial intelligence system.
(g) On or before February 1, 2026, and at least annually thereafter, a deployer, or a third party contracted by the deployer, must review the deployment of each high-risk artificial intelligence system deployed by the deployer to ensure that the high-risk artificial intelligence system is not causing algorithmic discrimination.
(4)
(a) On and after February 1, 2026, and no later than the time that a deployer deploys a high-risk artificial intelligence system to make, or be a substantial factor in making, a consequential decision concerning a consumer, the deployer shall:
(I) Notify the consumer that the deployer has deployed a high-risk artificial intelligence system to make, or be a substantial factor in making, a consequential decision before the decision is made;
(II) Provide to the consumer a statement disclosing the purpose of the high-risk artificial intelligence system and the nature of the consequential decision; the contact information for the deployer; a description, in plain language, of the high-risk artificial intelligence system; and instructions on how to access the statement required by subsection (5)(a) of this section; and
(III) Provide to the consumer information, if applicable, regarding the consumer's right to opt out of the processing of personal data concerning the consumer for purposes of profiling in furtherance of decisions that produce legal or similarly significant effects concerning the consumer under section 6-1-1306 (1)(a)(I)(C).
(b) On and after February 1, 2026, a deployer that has deployed a high-risk artificial intelligence system to make, or be a substantial factor in making, a consequential decision concerning a consumer shall, if the consequential decision is adverse to the consumer, provide to the consumer:
(I) A statement disclosing the principal reason or reasons for the consequential decision, including:
(A) The degree to which, and manner in which, the high-risk artificial intelligence system contributed to the consequential decision;
(B) The type of data that was processed by the high-risk artificial intelligence system in making the consequential decision; and
(C) The source or sources of the data described in subsection (4)(b)(I)(B) of this section;
(II) An opportunity to correct any incorrect personal data that the high-risk artificial intelligence system processed in making, or as a substantial factor in making, the consequential decision; and
(III) An opportunity to appeal an adverse consequential decision concerning the consumer arising from the deployment of a high-risk artificial intelligence system, which appeal must, if technically feasible, allow for human review unless providing the opportunity for appeal is not in the best interest of the consumer, including in instances in which any delay might pose a risk to the life or safety of such consumer.
(c)
(I) Except as provided in subsection (4)(c)(II) of this section, a deployer shall provide the notice, statement, contact information, and description required by subsections (4)(a) and (4)(b) of this section:
(A) Directly to the consumer;
(B) In plain language;
(C) In all languages in which the deployer, in the ordinary course of the deployer's business, provides contracts, disclaimers, sale announcements, and other information to consumers; and
(D) In a format that is accessible to consumers with disabilities.
(II) If the deployer is unable to provide the notice, statement, contact information, and description required by subsections (4)(a) and (4)(b) of this section directly to the consumer, the deployer shall make the notice, statement, contact information, and description available in a manner that is reasonably calculated to ensure that the consumer receives the notice, statement, contact information, and description.
(5)
(a) On and after February 1, 2026, and except as provided in subsection (6) of this section, a deployer shall make available, in a manner that is clear and readily available on the deployer's website, a statement summarizing:
(I) The types of high-risk artificial intelligence systems that are currently deployed by the deployer;
(II) How the deployer manages known or reasonably foreseeable risks of algorithmic discrimination that may arise from the deployment of each high-risk artificial intelligence system described pursuant to subsection (5)(a)(I) of this section; and
(III) In detail, the nature, source, and extent of the information collected and used by the deployer.
(b) A deployer shall periodically update the statement described in subsection (5)(a) of this section.
(6) Subsections (2), (3), and (5) of this section do not apply to a deployer if, at the time the deployer deploys a high-risk artificial intelligence system and at all times while the high-risk artificial intelligence system is deployed:
(a) The deployer:
(I) Employs fewer than fifty full-time equivalent employees; and
(II) Does not use the deployer's own data to train the high-risk artificial intelligence system;
(b) The high-risk artificial intelligence system:
(I) Is used for the intended uses that are disclosed to the deployer as required by section 6-1-1702 (2)(a); and
(II) Continues learning based on data derived from sources other than the deployer's own data; and
(c) The deployer makes available to consumers any impact assessment that:
(I) The developer of the high-risk artificial intelligence system has completed and provided to the deployer; and
(II) Includes information that is substantially similar to the information in the impact assessment required under subsection (3)(b) of this section.
(7) If a deployer deploys a high-risk artificial intelligence system on or after February 1, 2026, and subsequently discovers that the high-risk artificial intelligence system has caused algorithmic discrimination, the deployer, without unreasonable delay, but no later than ninety days after the date of the discovery, shall send to the attorney general, in a form and manner prescribed by the attorney general, a notice disclosing the discovery.
(8) Nothing in subsections (2) to (5) and (7) of this section requires a deployer to disclose a trade secret or information protected from disclosure by state or federal law. To the extent that a deployer withholds information pursuant to this subsection (8) or section 6-1-1705 (5), the deployer shall notify the consumer and provide a basis for the withholding.
(9) On and after February 1, 2026, the attorney general may require that a deployer, or a third party contracted by the deployer, disclose to the attorney general, no later than ninety days after the request and in a form and manner prescribed by the attorney general, the risk management policy implemented pursuant to subsection (2) of this section, the impact assessment completed pursuant to subsection (3) of this section, or the records maintained pursuant to subsection (3)(f) of this section. The attorney general may evaluate the risk management policy, impact assessment, or records to ensure compliance with this part 17, and the risk management policy, impact assessment, and records are not subject to disclosure under the "Colorado Open Records Act", part 2 of article 72 of title 24. In a disclosure pursuant to this subsection (9), a deployer may designate the statement or documentation as including proprietary information or a trade secret. To the extent that any information contained in the risk management policy, impact assessment, or records includes information subject to attorney-client privilege or work-product protection, the disclosure does not constitute a waiver of the privilege or protection.

C.R.S. § 6-1-1703

Added by 2024 Ch. 198, § 1, eff. 5/17/2024.