AGENCY:
Centers for Medicare & Medicaid Services (CMS), HHS.
ACTION:
Request for information.
SUMMARY:
Section 101 of the Medicare Access and CHIP Reauthorization Act of 2015 (MACRA) repeals the Medicare sustainable growth rate (SGR) methodology for updates to the physician fee schedule (PFS) and replaces it with a new Merit-based Incentive Payment System (MIPS) for MIPS eligible professionals (MIPS EPs) under the PFS. Section 101 of the MACRA sunsets payment adjustments under the current Physician Quality Reporting System (PQRS), the Value-Based Payment Modifier (VM), and the Electronic Health Records (EHR) Incentive Program. It also consolidates aspects of the PQRS, VM, and EHR Incentive Program into the new MIPS. Additionally, section 101 of the MACRA promotes the development of Alternative Payment Models (APMs) by providing incentive payments for certain eligible professionals (EPs) who participate in APMs, by exempting EPs from MIPS if they participate in APMs, and by encouraging the creation of physician-focused payment models (PFPMs). In this request for information (RFI), we seek public and stakeholder input to inform our implementation of these provisions.
DATES:
To be assured consideration, written or electronic comments must be received at one of the addresses provided below, no later than 5 p.m. on November 2, 2015.
ADDRESSES:
In commenting, refer to file code CMS-3321-NC. Because of staff and resource limitations, we cannot accept comments by facsimile (FAX) transmission.
You may submit comments in one of four ways (please choose only one of the ways listed):
1. Electronically. You may submit electronic comments on this regulation to http://www.regulations.gov. Follow the “Submit a comment” instructions.
2. By regular mail. You may mail written comments to the following address ONLY:
Centers for Medicare & Medicaid Services, Department of Health and Human Services, Attention: CMS-3321-NC, P.O. Box 8016, Baltimore, MD 21244-8016.
Please allow sufficient time for mailed comments to be received before the close of the comment period.
3. By express or overnight mail. You may send written comments to the following address ONLY:
Centers for Medicare & Medicaid Services, Department of Health and Human Services, Attention: CMS-3321-NC, Mail Stop C4-26-05, 7500 Security Boulevard, Baltimore, MD 21244-1850.
4. By hand or courier. Alternatively, you may deliver (by hand or courier) your written comments ONLY to the following addresses:
a. For delivery in Washington, DC—
Centers for Medicare & Medicaid Services, Department of Health and Human Services, Room 445-G, Hubert H. Humphrey Building, 200 Independence Avenue SW., Washington, DC 20201
(Because access to the interior of the Hubert H. Humphrey Building is not readily available to persons without Federal government identification, commenters are encouraged to leave their comments in the CMS drop slots located in the main lobby of the building. A stamp-in clock is available for persons wishing to retain a proof of filing by stamping in and retaining an extra copy of the comments being filed.)
b. For delivery in Baltimore, MD—
Centers for Medicare & Medicaid Services, Department of Health and Human Services, 7500 Security Boulevard, Baltimore, MD 21244-1850.
If you intend to deliver your comments to the Baltimore address, call telephone number (410) 786-7195 in advance to schedule your arrival with one of our staff members.
Comments erroneously mailed to the addresses indicated as appropriate for hand or courier delivery may be delayed and received after the comment period.
FOR FURTHER INFORMATION CONTACT:
Molly MacHarris, (410) 786-4461.
Alison Falb, (410) 786-1169.
SUPPLEMENTARY INFORMATION:
Inspection of Public Comments: All comments received before the close of the comment period are available for viewing by the public, including any personally identifiable or confidential business information that is included in a comment. We post all comments received before the close of the comment period on the following Web site as soon as possible after they have been received: http://www.regulations.gov. Follow the search instructions on that Web site to view public comments.
Comments received timely will also be available for public inspection as they are received, generally beginning approximately 3 weeks after publication of a document, at the headquarters of the Centers for Medicare & Medicaid Services, 7500 Security Boulevard, Baltimore, Maryland 21244, Monday through Friday of each week from 8:30 a.m. to 4 p.m. To schedule an appointment to view public comments, phone 1-800-743-3951.
I. Background
Section 101 of the Medicare Access and CHIP Reauthorization Act of 2015 (MACRA) (Pub. L. 114-10, enacted April 16, 2015) amended sections 1848(d) and (f) of the Social Security Act (the Act) to repeal the sustainable growth rate (SGR) formula for updating Medicare physician fee schedule (PFS) payment rates and substitute a series of specified annual update percentages. It establishes a new methodology that ties annual PFS payment adjustments to value through a Merit-Based Incentive Payment System (MIPS) for MIPS eligible professionals (MIPS EPs). Section 101 of the MACRA also creates an incentive program to encourage participation by eligible professionals (EPs) in Alternative Payment Models (APMs). In the “Medicare Program; Revisions to Payment Policies under the Physician Fee Schedule and Other Revisions to Part B for CY 2016; Proposed Rule” (80 FR 41686) (hereinafter referred to as the CY 2016 PFS proposed rule), the Secretary of Health and Human Services (the Secretary) solicited comments regarding implementation of certain aspects of the MIPS and broadly sought public comments on the topics in section 101 of the MACRA, including the incentive payments for participation in APMs and increasing transparency of physician-focused payment models. As we move forward with the implementation of these provisions, there are additional areas on which we would like to receive public and stakeholder input and feedback.
A. The Merit-Based Incentive Payment System (MIPS)
Section 1848(q) of the Act, as added by section 101(c) of the MACRA, requires establishment of the MIPS, applicable beginning with payments for items and services furnished on or after January 1, 2019, under which the Secretary is required to: (1) Develop a methodology for assessing the total performance of each MIPS EP according to performance standards for a performance period for a year; (2) using the methodology, provide for a composite performance score for each MIPS EP for each performance period; and (3) use the composite performance score of the MIPS EP for a performance period for a year to determine and apply a MIPS adjustment factor (and, as applicable, an additional MIPS adjustment factor) to the MIPS EP for the year. Under section 1848(q)(2)(A) of the Act, a MIPS EP's composite performance score is determined using four performance categories: Quality, resource use, clinical practice improvement activities, and meaningful use of certified EHR technology (CEHRT). Section 1848(q)(10) of the Act requires the Secretary to consult with stakeholders (through a request for information (RFI) or other appropriate means) in carrying out the MIPS, including for the identification of measures and activities for each of the four performance categories under the MIPS, the methodology to assess each MIPS EP's total performance to determine their MIPS composite performance score, the methodology to specify the MIPS adjustment factor for each MIPS EP for a year, and regarding the use of qualified clinical data registries (QCDRs) for purposes of the MIPS. We intend to use the feedback we receive on the CY 2016 PFS proposed rule and on this RFI as we develop our proposed policies for the MIPS.
B. Alternative Payment Models
Section 101(e) of the MACRA promotes the development of, and participation in, APMs for physicians and certain practitioners. The statutory amendments made by this section have payment implications for EPs beginning in 2019. Specifically, this section: (1) Creates a payment incentive program that applies to EPs who are qualifying APM participants (QPs) for years from 2019 through 2024; (2) requires the establishment of a process for stakeholders to propose PFPMs to an independent “Physician-Focused Payment Model Technical Advisory Committee” (the Committee) that will review, comment on, and provide recommendations to the Secretary on the proposed PFPMs; and (3) requires the establishment of criteria for PFPMs for use by the Committee in making comments and recommendations to the Secretary. Section 1868(c)(2)(A) of the Act requires the use of an RFI in establishing the criteria for PFPMs that the Committee will use. Additionally, section 101(c) of the MACRA exempts QPs from the MIPS.
We are issuing this RFI to obtain input on policy considerations for APMs and for PFPMs. Topics of particular interest include: (1) Requirements to be considered an eligible alternative payment entity and QP; (2) the relationship between APMs and the MIPS; and (3) criteria for the Committee to use to provide comments and recommendations on PFPMs.
C. Technical Assistance to Small Practices and Practices in Health Professional Shortage Areas
Section 1848(q)(11) of the Act, as added by section 101(c) of the MACRA, provides for technical assistance to MIPS EPs in small practices and practices in health professional shortage areas (HPSAs). In general, the section requires the Secretary to enter into contracts or agreements with appropriate entities (such as quality improvement organizations, regional extension centers (as described in section 3012(c) of the Public Health Service Act (PHSA)), or regional health collaboratives) to offer guidance and assistance to MIPS EPs in practices of 15 or fewer professionals (with priority given to such practices located in rural areas, HPSAs (as designated under section 332(a)(1)(A) of the PHSA), and medically underserved areas, and practices with low composite scores) with respect to the MIPS performance categories or in transitioning to the implementation of, and participation in, an APM. As we continue to develop our policies and approach for this support, we seek input on the best practices that should be used in providing this technical assistance.
II. Solicitation of Comments
A. The Merit-Based Incentive Payment System (MIPS)
We are soliciting public input as we move forward with the planning and implementation of the MIPS. We are requesting information regarding the following areas:
1. MIPS EP Identifier and Exclusions
Section 1848(q)(1)(C) of the Act defines a MIPS EP for the first 2 years for which the MIPS applies to payments (and the performance periods for such years) as a physician (as defined in section 1861(r) of the Act), a physician assistant (PA), nurse practitioner (NP) and clinical nurse specialist (CNS) (as those are defined in section 1861(aa)(5) of the Act), a certified registered nurse anesthetist (CRNA) (as defined in section 1861(bb)(2) of the Act), and a group that includes such professionals. Beginning with the third year of the program and for succeeding years, the statute defines a MIPS EP to include all the types of professionals identified for the first 2 years. It also gives the Secretary discretion to specify additional EPs, as that term is defined in section 1848(k)(3)(B) of the Act, which could include a certified nurse midwife (as defined in section 1861(gg)(2) of the Act), a clinical social worker (as defined in section 1861(hh)(1) of the Act), a clinical psychologist (as defined by the Secretary for purposes of section 1861(ii) of the Act), a registered dietician or nutrition professional, a physical or occupational therapist, a qualified speech-language pathologist, or a qualified audiologist (as defined in section 1861(ll)(3)(B) of the Act).
Section 1848(q)(5)(I)(ii) of the Act requires that the Secretary establish a process to allow individual MIPS EPs and group practices of not more than 10 MIPS EPs to elect, with respect to a performance period for a year, to be a virtual group with at least one other individual MIPS EP or group practice. Section 1848(q)(5)(I)(iii)(III) of the Act requires that the process provide that a virtual group be a combination of Tax Identification Numbers (TINs).
CMS currently uses a variety of identifiers to identify an EP under its different programs. For example, under the PQRS for individual reporting, CMS uses a combination of a TIN and National Provider Identifier (NPI) to assess eligibility and participation, where each unique TIN and NPI combination is treated as a distinct EP and is separately assessed for purposes of the program. Under the Group Practice Reporting Option (GPRO) under PQRS, eligibility and participation are assessed at the TIN level. Under the EHR Incentive Program, CMS utilizes the NPI to assess eligibility and participation. Under the VM, performance and payment adjustments are assessed at the TIN level. Additionally, under certain models such as the Pioneer Accountable Care Organization (ACO) Model, CMS also assigns a program-specific identifier (in the case of the Pioneer ACO Model, an ACO ID) to the organization(s), and associates that identifier with individual EPs that are, in turn, identified through a combination of a TIN and an NPI. CMS will need to select and operationalize a specific identifier to associate with an individual MIPS EP or a group practice.
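For illustration only, the following sketch (in Python, with hypothetical TIN and NPI values) shows how the choice among these existing identifier conventions determines whether the same clinician is treated as one reporting unit or several; it is not a CMS specification.

# Illustrative sketch only: hypothetical identifier conventions for a single clinician.
# The TINs, NPIs, and program rules shown here are examples, not CMS specifications.

from dataclasses import dataclass

@dataclass(frozen=True)
class Clinician:
    npi: str   # National Provider Identifier
    tin: str   # Tax Identification Number of the billing practice

def pqrs_individual_key(c: Clinician) -> tuple:
    # PQRS individual reporting assesses each unique TIN/NPI combination separately.
    return (c.tin, c.npi)

def vm_key(c: Clinician) -> str:
    # The VM assesses performance and applies payment adjustments at the TIN level.
    return c.tin

def ehr_incentive_key(c: Clinician) -> str:
    # The EHR Incentive Program assesses eligibility and participation by NPI.
    return c.npi

# A clinician billing under two practices is one NPI but two TIN/NPI combinations.
dr_a_practice_1 = Clinician(npi="1234567893", tin="11-1111111")
dr_a_practice_2 = Clinician(npi="1234567893", tin="22-2222222")
print(pqrs_individual_key(dr_a_practice_1) != pqrs_individual_key(dr_a_practice_2))  # True: two distinct EPs
print(ehr_incentive_key(dr_a_practice_1) == ehr_incentive_key(dr_a_practice_2))      # True: one EP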
We seek comment on what specific identifier(s) should be used to appropriately identify MIPS EPs for purposes of determining eligibility, participation, and performance under the MIPS performance categories. Specifically, we seek comment on the following questions:
- Should we use a MIPS EP's TIN, NPI or a combination thereof? Should we create a distinct MIPS Identifier?
- What are the advantages/disadvantages associated with using existing identifiers, either individually or in combination?
- What are the advantages/disadvantages associated with creating a distinct MIPS identifier?
- Should a different identifier be used to reflect eligibility, participation, or performance as a group practice vs. as an individual MIPS EP? If so, should CMS use an existing identifier or create a distinct identifier?
- How should we calculate performance for MIPS EPs that practice under multiple TINs?
- Should practitioners in a virtual group and virtual group practices have a unique virtual group identifier that is used in addition to the TIN?
- How often should we require an EP or group practice to update any such identifier(s) within the Medicare Provider Enrollment, Chain, and Ownership System (PECOS)? For example, should EPs be required to update their information in PECOS or a similar system that would pertain to the MIPS on an annual basis?
Additionally, we note that depending upon the identifier(s) chosen for MIPS EPs, there could be situations where a given MIPS EP may be part of a “split TIN”. For example, in the scenario where the identifier chosen for MIPS EPs is a TIN (as is utilized by the VM currently), and a portion of that TIN is exempt from MIPS due to being part of a qualifying APM, we will have a split TIN.
- In the above scenario, what safeguards should be in place to ensure that we are appropriately assessing MIPS EPs and exempting only those EPs that are not eligible for MIPS?
We also recognize that depending upon the identifier(s) chosen for MIPS EPs, there could be situations where a given MIPS EP would be assessed under the MIPS using multiple identifiers. For example, as noted above, individual EPs are assessed under the PQRS based on unique TIN/NPI combinations. Therefore, an individual EP (with a unique NPI) who practices under multiple TINs is assessed under the PQRS as a distinct EP for each TIN/NPI combination. As a result, under the PQRS an EP could receive a negative payment adjustment under one unique TIN/NPI combination, but not receive it under another unique TIN/NPI combination.
- What safeguards should be in place to ensure that MIPS EPs do not switch identifiers if they are considered “poor-performing”?
- What safeguards should be in place to address any unintended consequences, if the chosen identifier is a unique TIN/NPI combination, to ensure an appropriate assessment of the MIPS EP's performance?
2. Virtual Groups
Section 1848(q)(5)(I) of the Act requires the Secretary to establish a process to allow an individual MIPS EP or a group practice of not more than 10 MIPS EPs to elect for a performance period for a year to be a virtual group with other such MIPS EPs or group practices. CMS quality programs, such as the PQRS, have used common identifiers such as a group practice's TIN to assess individual EPs' quality together as a group practice. The virtual group option under the MIPS allows a group's performance to be tied together even if the EPs in the group do not share the same TIN. CMS seeks comment on what parameters should be established for these virtual groups. We seek comment on the following questions:
- How should eligibility, participation, and performance be assessed under the MIPS for voluntary virtual groups?
- Assuming that some, but not all, members of a TIN could elect to join a virtual group, how should remaining members of the TIN be treated under the MIPS, if we allow TINs to split?
- Should there be a maximum or a minimum size for virtual groups? For example, should there be limitations on the size of a virtual group, such as a minimum of 10 MIPS EPs, or no more than 100 MIPS EPs that can elect to be in a given virtual group?
- Should there be a limit placed on the number of virtual group elections that can be made for a particular performance period for a year as this provision is rolled out? We are considering limiting the number of voluntary virtual groups to no more than 100 for the first year this provision is implemented in order for CMS to gain experience with this new reporting configuration. Are there other criteria we should consider? Should we limit the mechanisms by which virtual groups can report data under the quality performance category to specific methods, such as QCDRs or the GPRO Web Interface?
- If a limit is placed on the number of virtual group elections within a performance period, should this be done on a first-come, first-served basis? Should limits be placed on the size of virtual groups or the number of groups?
- Under the voluntary virtual group election process, what type of information should be required in order to make the election for a performance period for a year? What other requirements would be appropriate for the voluntary virtual group election process?
Section 1848(q)(5)(I)(ii) of the Act provides that a virtual group may be based on appropriate classifications of providers, such as by specialty designations or by geographic areas. We seek comment on the following questions:
- Should there be limitations, such as requiring that MIPS EPs electing a virtual group be located within a specified 50-mile radius of one another (or otherwise in close proximity) and practice in the same specialty?
3. Quality Performance Category
Section 1848(q)(2)(B)(i) of the Act describes the measures and activities for the quality performance category under the MIPS. Under section 1848(q)(2)(D) of the Act, the Secretary must, through notice and comment rulemaking by November 1 of the year before the first day of each performance period under the MIPS, establish the list of quality measures from which MIPS EPs may choose for purposes of assessment for a performance period for a year. CMS' experience under other quality programs, namely the PQRS and the VM, will help shape processes and policies for this performance category. We seek comment on the following areas:
a. Reporting Mechanisms Available for Quality Performance Category
EPs can report under the PQRS in two ways, either as an individual EP or as part of a group practice. For reporting periods that occur during 2015, there are collectively 7 mechanisms available to report data to CMS as an individual EP or as a group practice participating in the PQRS GPRO: Claims-based reporting; qualified registry reporting; QCDR reporting; direct EHR products; EHR data submission vendor products; Consumer Assessment of Healthcare Providers and Systems (CAHPS) for PQRS; and the GPRO Web Interface. Generally, to avoid the PQRS payment adjustment, EPs and group practices are required to report for the applicable reporting period on a specified number of measures covering a specified number of National Quality Strategy domains. (See 42 CFR 414.90 for more information regarding the PQRS reporting criteria.) If data are submitted on fewer measures than required, an EP is subject to a Measure Applicability Validation (MAV) process, which looks across an EP's services to determine whether other quality measures could have been reported. We seek comment on the following questions related to these reporting mechanisms and criteria:
- Should we maintain all PQRS reporting mechanisms noted above under MIPS?
- If so, what policies should be in place for determining which data should be used to calculate a MIPS EP's quality score if data are received via multiple methods of submission? What considerations should be made to ensure a patient's data is not counted multiple times? For example, if the same measure is reported through different reporting mechanisms, the same patient could be reported multiple times.
- Should we maintain the same or similar reporting criteria under MIPS as under the PQRS? What is the appropriate number of measures on which a MIPS EP's performance should be based?
- Should we maintain the policy that measures cover a specified number of National Quality Strategy domains?
- Should we require that certain types of measures be reported? For example, should a minimum number of measures be outcomes-based? Should more weight be assigned to outcomes-based measures?
- Should we require that reporting mechanisms include the ability to stratify the data by demographic characteristics such as race, ethnicity, and gender?
- For the CAHPS for PQRS reporting option specifically, should this still be considered as part of the quality performance category or as part of the clinical practice improvement activities performance category? What considerations should be made as we further implement CAHPS for all practice sizes? How can we leverage existing CAHPS reporting by physician groups?
- How do we apply the quality performance category to MIPS EPs in specialties that may not have enough measures to meet our defined criteria? Should we maintain a Measure Applicability Validation (MAV) process? If we customize the performance requirements for certain types of MIPS EPs, how should we go about identifying the MIPS EPs to whom specific requirements apply?
- What are the potential barriers to successfully meeting the MIPS quality performance category?
b. Data Accuracy
CMS' experience under the PQRS has shown that data quality is related to the mechanism selected for reporting. Some potential data quality issues specific to reporting via a qualified registry, QCDR, and/or certified EHR technology include: Inaccurate TIN and/or NPI, inaccurate or incomplete calculations of quality measures, missing data elements, etc. Since accuracy of the data is critical to the accurate calculation of a MIPS composite score, we seek comment on what additional data integrity requirements should be in place for the reporting mechanisms referenced above. Specifically:
- What should CMS require in terms of testing of the qualified registry, QCDR, or direct EHR product, or EHR data submission vendor product? How can testing be enhanced to improve data integrity?
- Should registries and qualified clinical data registries be required to submit data to CMS using certain standards, such as the Quality Reporting Document Architecture (QRDA) standard, which certified EHRs are required to support?
- Should CMS require that qualified registries, QCDRs, and health IT systems undergo review and qualification by CMS to ensure that CMS' form and manner are met? For example, CMS uses a specific file format for qualified registry reporting. The current version is available at: https://www.qualitynet.org/imageserver/pqrs/registry2015/index.htm. What should be involved in the testing to ensure CMS' form and manner requirements are met?
- What feedback from CMS during testing would be beneficial to these stakeholders?
- What thresholds for data integrity should CMS have in place for accuracy, completeness, and reliability of the data? For example, if a QCDR's calculated performance rate cannot be reconciled with the distinct performance values it submits (such as a numerator that exceeds the denominator), should CMS re-calculate the rate based on the numerator and denominator values provided, as illustrated in the sketch following these questions? Should CMS not require MIPS EPs to submit a calculated performance rate (and instead have CMS calculate all rates)? Alternatively, for example, if a QCDR omits data elements that make validation of the reported data infeasible, should the data be discarded? What threshold of errors in submitted data should be acceptable?
- If CMS determines that the MIPS EP (participating as an individual EP or as part of a group practice or virtual group) has used a data reporting mechanism that does not meet our data integrity standards, how should CMS assess the MIPS EP when calculating their quality performance category score? Should there be any consequences for the qualified registry, QCDR or EHR vendor in order to correct future practices? Should the qualified registry, QCDR or EHR vendor be disqualified or unable to participate in future performance periods? What consequences should there be for MIPS EPs?
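As one way to make the data integrity questions above concrete, the following sketch illustrates a simple reconciliation check of a reported performance rate against its submitted numerator and denominator. The tolerance value and the handling of mismatched or invalid records are illustrative assumptions only, not CMS requirements.

# Minimal sketch of one possible data-integrity check for submitted measure results.
# The field names, tolerance, and handling rules are illustrative assumptions only.

def check_performance_rate(numerator: int, denominator: int, reported_rate: float,
                           tolerance: float = 0.001) -> dict:
    """Recompute the performance rate from the submitted numerator and denominator
    and flag submissions whose reported rate cannot be reconciled with them."""
    if denominator <= 0 or numerator < 0 or numerator > denominator:
        return {"status": "invalid", "recalculated_rate": None}
    recalculated = numerator / denominator
    if abs(recalculated - reported_rate) > tolerance:
        # One option raised in this RFI: use the recalculated rate instead of the
        # reported rate; another is to reject the record entirely.
        return {"status": "mismatch", "recalculated_rate": recalculated}
    return {"status": "ok", "recalculated_rate": recalculated}

print(check_performance_rate(45, 40, 1.125))  # numerator exceeds denominator -> invalid
print(check_performance_rate(30, 40, 0.90))   # reported 90% vs recalculated 75% -> mismatch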
c. Use of Certified EHR Technology (CEHRT) Under the Quality Performance Category
Currently under the PQRS, the reporting mechanisms that use CEHRT require that the quality measures be derived from CEHRT and must be transmitted in specific file formats. For example, EHR technology that meets the CEHRT definition must be able to record, calculate, report, import, and export clinical quality measure (CQM) data using the standards that the Office of the National Coordinator for Health Information Technology (ONC) has specified, including use of the Quality Reporting Document Architecture (QRDA) Category I and III standards. We seek input on the following questions:
- Under the MIPS, what should constitute use of CEHRT for purposes of reporting quality data?
- Instead of requiring that the EHR be utilized to transmit the data, should it be sufficient to use the EHR to capture and/or calculate the quality data? What standards should apply for data capture and transmission?
4. Resource Use Performance Category
Section 1848(q)(2)(B)(ii) of the Act describes the resource use performance category under MIPS as “the measurement of resource use for such period under section 1848(p)(3) of the Act, using the methodology under section 1848(r) of the Act as appropriate, and, as feasible and applicable, accounting for the cost of drugs under Part D.” Section 1848(p)(3) of the Act specifies that costs shall be evaluated, to the extent practicable, based on a composite of appropriate measures of costs for purposes of the VM under the PFS. Section 1848(r) of the Act (as added by section 101(f) of the MACRA) specifies a series of steps and deliverables for the Secretary to develop “care episode and patient condition groups and classification codes” and “patient relationship categories and codes” for purposes of attribution of patients to practitioners, and provides for the use of these in a specified methodology for measurement of resource use. Under the MIPS, the Secretary must evaluate costs based on a composite of appropriate measures of costs using the methodology for resource use analysis specified in section 1848(r)(5) of the Act that involves the use of certain codes and claims data and condition and episode groups, as appropriate. CMS' experience under the VM will help shape this performance category. Currently under the VM, we use the following cost measures: (1) Total Per Capita Costs for All Attributed Beneficiaries measure; (2) Total Per Capita Costs for Beneficiaries with Specific Conditions (Diabetes, Coronary artery disease, Chronic obstructive pulmonary disease, and Heart failure); and (3) Medicare Spending per Beneficiary (MSPB) measure. We seek comment on the following questions, which are followed by a simplified illustration of a per capita cost calculation:
- Apart from the cost measures noted above, are there additional cost or resource use measures (such as measures associated with services that are potentially harmful or over-used, including those identified by the Choosing Wisely initiative) that should be considered? If so, what data sources would be required to calculate the measures?
- How should we apply the resource use category to MIPS EPs for whom there may not be applicable resource use measures?
- What role should episode-based costs play in calculating resource use and/or providing feedback reports to MIPS EPs under section 1848(q)(12) of the Act?
- How should CMS consider aligning measures used under the MIPS resource use performance category with resource use based measures used in other parts of the Medicare program?
- How should we incorporate Part D drug costs into MIPS? How should this be measured and calculated?
- What peer groups or benchmarks should be used when assessing performance under the resource use performance category?
- CMS has received stakeholder feedback encouraging us to align resource use measures with clinical quality measures. How could the MIPS methodology, which includes domains for clinical quality and resource use, be designed to achieve such alignment?
We also note that there will be forthcoming opportunities to comment on further development of care episode and patient condition groups and classification codes, and patient relationship categories and codes, as required by section 1848(r) of the Act.
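For illustration only, the following simplified sketch shows the general shape of a total per capita cost calculation and a comparison against a peer benchmark. The payment amounts, attribution, and benchmark are hypothetical, and the sketch omits the risk adjustment, payment standardization, and attribution rules that apply under the VM cost measures.

# Simplified, illustrative calculation of a total-per-capita-cost measure and a
# comparison against a peer benchmark. Payment figures, attribution, and benchmark
# values are hypothetical.

def total_per_capita_cost(payments_by_beneficiary: dict) -> float:
    """Sum payments for all attributed beneficiaries and divide by the number of
    attributed beneficiaries."""
    beneficiaries = len(payments_by_beneficiary)
    if beneficiaries == 0:
        raise ValueError("no attributed beneficiaries")
    return sum(payments_by_beneficiary.values()) / beneficiaries

attributed = {"bene_1": 9200.0, "bene_2": 14750.0, "bene_3": 6100.0}
per_capita = total_per_capita_cost(attributed)
peer_benchmark = 11000.0  # hypothetical peer-group average
print(round(per_capita, 2),
      "above benchmark" if per_capita > peer_benchmark else "at or below benchmark")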
5. Clinical Practice Improvement Activities Performance Category
Section 1848(q)(2)(B)(iii) of the Act specifies that the measures and activities for the clinical practice improvement activities performance category must include at least the following subcategories of activities: Expanded practice access, population management, care coordination, beneficiary engagement, patient safety and practice assessment, and participation in an APM. The Secretary has discretion under this provision to add other subcategories of activities as well. The term “clinical practice improvement activity” is defined under section 1848(q)(2)(C)(v)(III) of the Act as an activity that relevant eligible professional organizations and other relevant stakeholders identify as improving clinical practice or care delivery and that the Secretary determines, when effectively executed, is likely to result in improved outcomes. Under section 1848(q)(2)(C)(v) of the Act, we are required to use an RFI to solicit recommendations from stakeholders to identify and specify criteria for clinical practice improvement activities. In the CY 2016 PFS proposed rule (80 FR 41879), the Secretary sought comment on what activities could be classified as clinical practice improvement activities under the subcategories specified in section 1848(q)(2)(B)(iii) of the Act. In this RFI, we seek comment on other potential clinical practice improvement activities (and subcategories of activities), and on the criteria that should be applicable for all clinical practice improvement activities. We also seek comment on the following subcategories, in particular how measures or other demonstrations of activity may be validated and evaluated:
- A subcategory of Promoting Health Equity and Continuity, including (a) serving Medicaid beneficiaries, including individuals dually eligible for Medicaid and Medicare, (b) accepting new Medicaid beneficiaries, (c) participating in the network of plans in the Federally-facilitated Marketplace or state exchanges, and (d) maintaining adequate equipment and other accommodations (for example, wheelchair access, accessible exam tables, lifts, scales, etc.) to provide comprehensive care for patients with disabilities.
- A subcategory of Social and Community Involvement, such as measuring completed referrals to community and social services or evidence of partnerships and collaboration with the community and social services.
- A subcategory of Achieving Health Equity, as its own category or as a multiplier where the achievement of high quality in traditional areas is rewarded at a more favorable rate for EPs that achieve high quality for underserved populations, including persons with behavioral health conditions, racial and ethnic minorities, sexual and gender minorities, people with disabilities, people living in rural areas, and people in HPSAs.
- A subcategory of emergency preparedness and response, such as measuring EP participation in the Medical Reserve Corps, measuring registration in the Emergency System for Advance Registration of Volunteer Health Professionals, measuring relevant reserve and active duty military EP activities, and measuring EP volunteer participation in humanitarian medical relief work.
- A subcategory of integration of primary care and behavioral health, such as measuring or evaluating such practices as: Co-location of behavioral health and primary care services; shared/integrated behavioral health and primary care records; and cross-training of EPs.
We also seek comment on what mechanisms should be used for the Secretary to receive data related to clinical practice improvement activities. Specifically, we seek comment on the following:
- Should EPs be required to attest directly to CMS through a registration system, Web portal or other means that they have met the required activities and to specify which activities on the list they have met? Or alternatively, should qualified registries, QCDRs, EHRs, or other health IT systems be able to transmit results of the activities to CMS?
- What information should be reported and what quality checks and/or data validation should occur to ensure successful completion of these activities?
- How often should providers report or attest that they have met the required activities?
Additionally, we seek comment on the following questions about how we should assess performance in the clinical practice improvement activities performance category. Specifically:
- What threshold or quantity of activities should be established under the clinical practice improvement activities performance category? For example, should performance in this category be based on completion of a specific number of clinical practice improvement activities, or, for some categories, a specific number of hours? If so, what is the minimum number of activities or hours that should be completed? How many activities or hours would be needed to earn the maximum possible score for the clinical practice improvement activities in each performance subcategory? Should the threshold or quantity of activities increase over time? Should performance in this category be based on demonstrated availability of specific functions and capabilities?
- How should the various subcategories be weighted? Should each subcategory have equal weight, or should certain subcategories be weighted more than others?
- How should we define the subcategory of participation in an APM?
Lastly, section 1848(q)(2)(B)(iii) of the Act requires the Secretary, in establishing the clinical practice improvement activities, to give consideration to the circumstances of small practices (15 or fewer professionals) and practices located in rural areas and in HPSAs (as designated under section 332(a)(1)(A) of the PHSA). We seek comment on the following questions relating to this requirement:
- How should the clinical practice improvement activities performance category be applied to EPs practicing in these types of small practices or rural areas?
- Should a lower performance threshold or different measures be established that will better allow those EPs to reach the payment threshold?
- What methods should be leveraged to appropriately identify these practices?
- What best practices should be considered to develop flexible and adaptable clinical practice improvement activities based on the needs of the community and its population?
6. Meaningful Use of Certified EHR Technology Performance Category
Section 1848(q)(2)(B)(iv) of the Act specifies that the measures and activities for the meaningful use of certified EHR technology performance category under the MIPS are the requirements established under section 1848(o)(2) of the Act for determining whether an eligible professional is a meaningful EHR user of CEHRT. Under section 1848(q)(5)(E)(i)(IV) of the Act, 25 percent of the composite performance score under the MIPS must be determined based on performance in the meaningful use of certified EHR technology performance category. Section 1848(q)(5)(E)(ii) of the Act gives the Secretary discretion to reduce the percentage weight for this performance category (but not below 15 percent) in any year in which the Secretary estimates that the proportion of eligible professionals who are meaningful EHR users is 75 percent or greater, resulting in an increase in the applicable percentage weights of the other performance categories. We seek comment on the methodology for assessing performance in this performance category. Additionally, we note that we are only seeking comments on the meaningful use performance category under the MIPS; we are not seeking comments on the Medicare and Medicaid EHR Incentive Programs.
- Should the performance score for this category be based solely on full achievement of meaningful use? For example, an EP might receive full credit (for example, 100 percent of the allotted 25 percentage points of the composite performance score) under this performance category for meeting or exceeding the thresholds of all meaningful use objectives and measures; however, failing to meet or exceed all objectives and measures would result in the EP receiving no credit (for example, zero percent of the allotted 25 percentage points of the composite performance score) for this performance category. We seek comment on this approach to scoring; an illustrative comparison of this approach and a tiered alternative follows these questions.
- Should CMS use a tiered methodology for determining levels of achievement in this performance category that would allow EPs to receive a higher or lower score based on their performance relative to the thresholds established in the Medicare EHR Incentive program's meaningful use objectives and measures? For example, an EP who scores significantly higher than the threshold and higher than their peer group might receive a higher score than the median performer. How should such a methodology be developed? Should scoring in this category be based on an EP's under- or over-performance relative to the required thresholds of the objectives and measures, or should the scoring methodology of this category be based on an EP's performance relative to the performance of his or her peers?
- What alternate methodologies should CMS consider for this performance category?
- How should hardship exemptions be treated?
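The following sketch contrasts the all-or-nothing and tiered scoring approaches described in the questions above. The 25 percentage point category weight reflects the statute; the objectives, thresholds, and tiering rule shown are hypothetical examples offered only for illustration.

# Illustrative comparison of two possible scoring approaches for the meaningful use
# performance category. The 25-point weight comes from the statute; the objectives,
# thresholds, and tiering logic are hypothetical examples, not CMS policy.

CATEGORY_WEIGHT = 25.0  # percentage points of the composite performance score

def all_or_nothing_score(measure_results: dict, thresholds: dict) -> float:
    """Full credit only if every objective meets or exceeds its threshold."""
    met_all = all(measure_results[m] >= thresholds[m] for m in thresholds)
    return CATEGORY_WEIGHT if met_all else 0.0

def tiered_score(measure_results: dict, thresholds: dict) -> float:
    """Partial credit proportional to the share of objectives met (one hypothetical tiering)."""
    met = sum(1 for m in thresholds if measure_results[m] >= thresholds[m])
    return CATEGORY_WEIGHT * met / len(thresholds)

thresholds = {"objective_a": 0.50, "objective_b": 0.10, "objective_c": 0.05}
results = {"objective_a": 0.72, "objective_b": 0.08, "objective_c": 0.40}
print(all_or_nothing_score(results, thresholds))  # 0.0 (one objective missed)
print(tiered_score(results, thresholds))          # ~16.7 (2 of 3 objectives met)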
7. Other Measures
Section 1848(q)(2)(C)(ii) of the Act allows the Secretary to use measures that are used for a payment system other than the PFS, such as measures for inpatient hospitals, for the purposes of the quality and resource use performance categories (but not measures for hospital outpatient departments, except in the case of items and services furnished by emergency physicians, radiologists, and anesthesiologists). We seek comment on how we could best use this authority, including the following specific questions:
- What types of measures (that is, process, outcomes, populations, etc.) used for other payment systems should be included for the quality and resource use performance categories under the MIPS?
- How could we leverage measures that are used under the Hospital Inpatient Quality Reporting Program, the Hospital Value-Based Purchasing Program, or other quality reporting or incentive payment programs? How should we attribute the performance on the measures that are used under other quality reporting or value-based purchasing programs to the EP?
- To which types of EPs should these be applied? Should this option be available to all EPs or only to those EPs who have limited measure options under the quality and resource use performance categories?
- How should CMS link an EP to a facility in order to use measures from other payment systems? For example, should the EP be allowed to elect to be analyzed based on the performance on measures for the facility of his or her choosing? If not, what criteria should CMS use to attribute a facility's performance on a given measure to the EP or group practice?
Additionally, section 1848(q)(2)(C)(iii) of the Act allows and encourages the Secretary to use global measures and population-based measures for the purposes of the quality performance category. We seek comment on the following questions:
- What types of global and population-based measures should be included under MIPS? How should we define these types of measures?
- What data sources are available, and what mechanisms exist to collect data on these types of measures?
Lastly, section 1848(q)(2)(C)(iv) of the Act requires the Secretary, in specifying measures and activities for the MIPS performance categories, to give consideration to the circumstances of professional types (or subcategories of those types based on practice characteristics) who typically furnish services that do not involve face-to-face interaction with patients. For example, EPs in certain specialties, such as pathologists and certain types of radiologists, do not typically have face-to-face interactions with patients. If measures and activities for the MIPS performance categories focus on face-to-face encounters, these specialists may have more limited opportunities to be assessed, which could negatively affect their MIPS composite performance scores as compared to other specialties. We seek comment on the following questions:
- How should we define the professional types that typically do not have face-to-face interactions with patients?
- What criteria should we use to identify these types of EPs?
- Should we base this designation on their specialty codes in PECOS, use encounter codes that are billed to Medicare, or use an alternate criterion?
- How should we apply the four MIPS performance categories to non-patient-facing EPs?
- What types of measures and/or clinical practice improvement activities (new or from other payment systems) would be appropriate for these EPs?
8. Development of Performance Standards
Section 1848(q)(3)(B) of the Act requires the Secretary, in establishing performance standards with respect to measures and activities for the MIPS performance categories, to consider: historical performance standards, improvement, and the opportunity for continued improvement. We seek comment on the following questions:
- Which specific historical performance standards should be used? For example, for the quality and resource use performance categories, how should CMS select quality and cost benchmarks? Should CMS use providers' historical quality and cost performance benchmarks and/or thresholds from the most recent year feasible prior to the commencement of MIPS? Should performance standards be stratified by group size or other criteria? Should we use a model similar to the performance standards established under the VM?
- For the clinical practice improvement activities performance category, what, if any, historical data sources should be leveraged?
- How should we define improvement and the opportunity for continued improvement? For example, section 1848(q)(5)(D) of the Act requires the Secretary, beginning in the second year of the MIPS, if there are available data sufficient to measure improvement, to take into account improvement of the MIPS EP in calculating the performance score for the quality and resource use performance categories.
- How should CMS incorporate improvement into the scoring system or design an improvement formula?
- What should be the threshold(s) for measuring improvement?
- How would different approaches to defining the baseline period for measuring improvement affect EPs' incentives to increase quality performance? Would periodically updating the baseline period penalize EPs who increase performance by holding them to a higher standard in future performance periods, thereby undermining the incentive to improve? Could assessing improvement relative to a fixed baseline period avoid this problem? If so, would this approach have other consequences CMS should consider?
- Should CMS use the same approach for assessing improvement as is used for the Hospital Value-Based Purchasing Program? What are the advantages and disadvantages of this approach? (A simplified sketch of that general approach follows these questions.)
- Should CMS consider improvement at the measure level, performance category level (that is, quality, clinical practice improvement activity, resource use, and meaningful use of certified EHR technology), or at the composite performance score level?
- Should improvements in health equity and the reductions of health disparities be considered in the definition of improvement? If so, how should CMS incorporate health equity into the formula?
- In the CY 2016 PFS proposed rule (80 FR 41812), the Secretary proposed to publicly report on Physician Compare an item-level benchmark derived using the Achievable Benchmark of Care (ABC™) methodology. We seek comment on using this methodology for determining the MIPS performance standards for one or more performance categories.
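For illustration, the following simplified sketch is loosely modeled on the general Hospital Value-Based Purchasing approach of crediting the higher of achievement (measured against a benchmark) and improvement (measured against the EP's own baseline). The point scale, thresholds, and rates shown are hypothetical and do not reflect any established MIPS methodology.

# Simplified sketch, loosely modeled on crediting the higher of achievement (relative
# to a benchmark) and improvement (relative to the EP's own baseline). All ranges and
# point values are illustrative assumptions.

def linear_points(rate: float, floor: float, benchmark: float, max_points: int) -> float:
    """Scale a performance rate between a floor and a benchmark onto a point range."""
    if benchmark <= floor:
        return float(max_points)
    fraction = (rate - floor) / (benchmark - floor)
    return max(0.0, min(1.0, fraction)) * max_points

def measure_score(rate: float, achievement_threshold: float, benchmark: float,
                  baseline_rate: float, max_points: int = 10) -> float:
    achievement = linear_points(rate, achievement_threshold, benchmark, max_points)
    improvement = linear_points(rate, baseline_rate, benchmark, max_points)
    return max(achievement, improvement)

# An EP whose rate is below the peer achievement threshold can still earn credit by
# improving substantially over its own baseline.
print(measure_score(rate=0.62, achievement_threshold=0.70, benchmark=0.90, baseline_rate=0.40))  # 4.4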
9. Flexibility in Weighting Performance Categories
Section 1848(q)(5)(F) of the Act requires the Secretary, if there are not sufficient measures and activities applicable and available to each type of EP, to assign different scoring weights (including a weight of zero) from those that apply generally under the MIPS. We seek comment on the following questions:
- Are there situations where certain EPs could not be assessed at all for purposes of a particular performance category? If so, how should we account for the percentage weight that is otherwise applicable for that category? Should it be evenly distributed across the remaining performance categories (see the illustrative sketch following these questions)? Or should the weights be increased for one or more specific performance categories, such as the quality performance category?
- Generally, what methodologies should be used as we determine whether there are not sufficient measures and activities applicable and available to types of EPs such that the weight for a given performance category should be modified or should not apply to an EP? Should this be based on an EP's specialty? Should this determination occur at the measure or activity level, or separately at the specialty level?
- What minimum case threshold should CMS consider for the different performance categories?
- What safeguards should we have in place to ensure statistical significance when establishing performance thresholds? For example, under the VM one standard deviation is used. Should we apply a similar threshold under MIPS?
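The following sketch illustrates one possible reweighting approach referenced above, in which the weight of an inapplicable performance category is set to zero and spread evenly across the remaining categories. It assumes, for illustration only, baseline weights of 30, 30, 15, and 25 percent for the quality, resource use, clinical practice improvement activities, and meaningful use performance categories, without reflecting any transition-year adjustments.

# Illustrative sketch of one possible reweighting approach: set an inapplicable
# performance category's weight to zero and spread that weight evenly across the
# remaining categories. Baseline weights and the redistribution rule are assumptions
# for illustration, not CMS policy.

BASE_WEIGHTS = {"quality": 0.30, "resource_use": 0.30,
                "clinical_practice_improvement": 0.15, "meaningful_use": 0.25}

def reweight(base_weights: dict, inapplicable: set) -> dict:
    """Zero out inapplicable categories and redistribute their weight evenly."""
    remaining = [c for c in base_weights if c not in inapplicable]
    freed = sum(base_weights[c] for c in inapplicable)
    share = freed / len(remaining)
    return {c: (0.0 if c in inapplicable else base_weights[c] + share)
            for c in base_weights}

new_weights = reweight(BASE_WEIGHTS, {"resource_use"})
print({c: round(w, 2) for c, w in new_weights.items()})
# e.g. {'quality': 0.4, 'resource_use': 0.0, 'clinical_practice_improvement': 0.25, 'meaningful_use': 0.35}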
10. MIPS Composite Performance Score and Performance Threshold
Section 1848(q)(5)(A) of the Act requires the Secretary to develop a methodology for assessing the total performance of each MIPS EP based on performance standards with respect to applicable measures and activities in each of the four performance categories. The methodology is to provide for a composite assessment for each MIPS EP for the performance period for the year using a scoring scale of 0 to 100. Section 1848(q)(6)(D) of the Act requires the Secretary to compute a performance threshold to which the MIPS EP's composite performance score is compared for purposes of determining the MIPS adjustment factor for a year. The performance threshold must be either the mean or median of the composite performance scores for all MIPS EPs with respect to a prior period specified by the Secretary. Section 1848(q)(6)(D)(iii) of the Act requires the Secretary, for the first 2 years of the MIPS, prior to the performance period for those years, to establish a performance threshold that is based on a period prior to the performance periods for those years. Additionally, the Act requires the Secretary to take into account available data with respect to performance on measures and activities that may be used under the MIPS performance categories and other factors deemed appropriate. From our experience with the PQRS, VM, and the Medicare EHR Incentive Program, there is information available for prior periods for all MIPS performance categories except for clinical practice improvement activities. We are requesting information from the public on the following questions, which are followed by a simplified scoring illustration:
- How should we assess performance on each of the 4 performance categories and combine the assessments to determine a composite performance score?
- For the quality and resource use performance categories, should we use a methodology (for example, equal weighting of quality and resource use measures across National Quality Strategy domains) similar to what is currently used for the VM?
- How should we use the existing data on quality measures and resource use measures to translate the data into a performance threshold for the first two years of the program?
- What minimum case size thresholds should be utilized? For example, should we leverage all data that is reported even if the denominators are small? Or should we employ a minimum patient threshold, such as a minimum of 20 patients, for each measure?
- How can we establish a base threshold for the clinical practice improvement activities? How should this be incorporated into the overall performance threshold?
- What other considerations should be made as we determine the performance threshold for the total composite performance score? For example, should we link performance under one category to another?
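For illustration only, the following sketch shows one way a weighted composite performance score on a 0 to 100 scale could be computed and compared against a performance threshold set at the mean (or median) of prior-period scores. The category scores, weights, and prior-period data are hypothetical assumptions, not a proposed methodology.

# Minimal sketch of a weighted composite performance score on a 0-100 scale, compared
# against a performance threshold computed from prior-period scores. All category
# scores, weights, and prior-period values are hypothetical.

from statistics import mean

def composite_score(category_scores: dict, weights: dict) -> float:
    """Each category score is on a 0-100 scale; the weights sum to 1.0."""
    return sum(category_scores[c] * weights[c] for c in weights)

weights = {"quality": 0.30, "resource_use": 0.30,
           "clinical_practice_improvement": 0.15, "meaningful_use": 0.25}
scores = {"quality": 82.0, "resource_use": 64.0,
          "clinical_practice_improvement": 90.0, "meaningful_use": 100.0}

cps = composite_score(scores, weights)                # 82.3
prior_period_scores = [48.0, 55.0, 61.0, 73.0, 80.0]  # hypothetical prior-period composites
threshold = mean(prior_period_scores)                 # the statute permits the mean or the median
print(round(cps, 1), "above threshold" if cps > threshold else "at or below threshold")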
11. Public Reporting
We also seek comment on what the minimum threshold should be for publicly reporting MIPS measures and activities for all of the MIPS performance categories on the Physician Compare Web site.
In the CY 2016 PFS proposed rule (80 FR 41809), we indicated that we will continue using a minimum 20 patient threshold for public reporting through Physician Compare of quality measures (in addition to assessing the reliability, validity and accuracy of the measures). An alternative to a minimum patient threshold for public reporting would be to use a minimum reliability threshold. We seek comment on both concepts in regard to public reporting of MIPS quality measures on the Physician Compare Web site. We additionally seek comment on the following question, which is followed by an illustrative reliability calculation:
- Should CMS include individual EP and group practice-level quality measure data stratified by race, ethnicity and gender in public reporting (if statistically appropriate)?
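For illustration, the following sketch shows one common signal-to-noise way of estimating measure-level reliability, which is one way a minimum reliability threshold for public reporting could be operationalized. The variance components and the 0.70 cutoff are hypothetical assumptions.

# Illustrative sketch of a simple signal-to-noise reliability estimate for a quality
# measure. The variance components and the 0.70 cutoff are hypothetical assumptions,
# not CMS requirements.

def measure_reliability(between_provider_variance: float,
                        within_provider_variance: float,
                        n_patients: int) -> float:
    """Reliability = signal variance / (signal variance + noise variance for this EP)."""
    noise = within_provider_variance / n_patients
    return between_provider_variance / (between_provider_variance + noise)

MIN_RELIABILITY = 0.70  # hypothetical cutoff for public reporting

r = measure_reliability(between_provider_variance=0.004,
                        within_provider_variance=0.20,
                        n_patients=25)
print(round(r, 2), "report publicly" if r >= MIN_RELIABILITY else "suppress")  # 0.33 suppress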
12. Feedback Reports
Section 1848(q)(12)(A) of the Act requires the Secretary, beginning July 1, 2017, to provide confidential feedback on performance to MIPS EPs. Specifically, we are required to make available timely confidential feedback to MIPS EPs on their performance in the quality and resource use performance categories, and we have discretion to make available confidential feedback to MIPS EPs on their performance in the clinical practice improvement activities and meaningful use of certified EHR technology performance categories. This feedback can be provided through various mechanisms, including the use of a web-based portal or other mechanisms determined appropriate by the Secretary. We seek comment on the following questions:
- What types of information should we provide to EPs about their practice's performance within the feedback report? For example, what level of detail on performance within the performance categories will be beneficial to practices?
- Would it be beneficial for EPs to receive feedback information related to the clinical practice improvement activities and meaningful use of certified EHR technology performance categories? If so, what types of feedback?
- What other mechanisms should be leveraged to make feedback reports available? Currently, CMS provides feedback reports for the PQRS, VM, and the Physician Feedback Program through a web-based portal. Should CMS continue to make feedback available through this portal? What other entities and vehicles could CMS partner with to make feedback reports available? How should CMS work with partners to enable feedback reporting to incorporate information from other payers, and what types of information should be incorporated?
- Who within the EP's practice should be able to access the reports? For example, currently under the VM, only the authorized group practice representative and/or their designees can access the feedback reports. Should other entities be able to access the feedback reports, such as an organization providing MIPS-focused technical assistance, another provider participating in the same virtual group, or a third-party data intermediary that submits data to CMS on behalf of the EP, group practice, or virtual group?
- With what frequency is it beneficial for an EP to receive feedback? Currently, CMS provides Annual Quality and Resource Use Reports (QRUR), mid-year QRURs and supplemental QRURs. Should we continue to provide feedback to MIPS EPs on this cycle? Would there be value in receiving interim reports based on rolling performance periods to make illustrative calculations about the EP's performance? Are there certain performance categories on which it would be more important to receive interim feedback than others? What information that is currently contained within the QRURs should be included? More information on what is available within the QRURs is at https://www.cms.gov/Medicare/Medicare-Fee-for-Service-Payment/PhysicianFeedbackProgram/2014-QRUR.html.
- Should the reports include data that is stratified by race, ethnicity and gender to monitor trends and address gaps towards health equity?
- What types of information about items and services furnished to the EP's patients by other providers would be useful? In what format and with what frequency?
B. Alternative Payment Models
We are requesting information regarding the following areas:
1. Information Regarding APMs
Section 1833(z)(1) of the Act, as added by section 101(e)(2) of the MACRA, establishes incentive payments for EPs who are QPs with respect to a year. The term “qualifying APM participant” is defined under section 1833(z)(2) of the Act; in part, the definition requires that a specified percentage (which differs depending on the year) of an EP's payments during the most recent period for which data are available be attributable to services furnished through an “eligible alternative payment entity” (EAPM entity), as that term is defined under section 1833(z)(3)(D) of the Act.
The term APM, as defined in section 1833(z)(3)(C) of the Act, includes: Models under section 1115A of the Act (other than health care innovation awards); the Shared Savings Program under section 1899 of the Act; demonstrations under section 1866C of the Act (the Health Care Quality Demonstration Program); and demonstrations required by federal law.
Under section 1833(z)(3)(D) of the Act, an EAPM entity is an entity that: (1) Participates in an APM that requires participants to use certified EHR technology and provides for payment for covered professional services based on quality measures comparable to the MIPS quality measures established under section 1848(q)(2)(B)(i) of the Act and (2) either bears financial risk for monetary losses under the APM that are in excess of a nominal amount or is a medical home expanded under section 1115A(c) of the Act.
For the years 2019 through 2024, EPs who are QPs for a given year will receive an incentive payment equal to 5 percent of the estimated aggregate Part B Medicare payment amounts for covered professional services for the preceding year. Under section 1833(z)(1)(A) of the Act, the estimated aggregate Medicare Part B payment amount for the preceding year may be based on a period of the preceding year that is less than the full year.
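The following minimal sketch illustrates only the arithmetic of the 5 percent incentive payment described above; the dollar amount shown is hypothetical.

# Illustrative arithmetic only: the 5 percent APM incentive payment applied to an
# estimated aggregate amount of Part B payments for covered professional services
# for the preceding year. The dollar figure is hypothetical.

INCENTIVE_RATE = 0.05  # applies for payment years 2019 through 2024

def apm_incentive_payment(estimated_prior_year_part_b_payments: float) -> float:
    return INCENTIVE_RATE * estimated_prior_year_part_b_payments

print(apm_incentive_payment(200_000.00))  # 10000.0 (5 percent of $200,000)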
a. QPs and Partial Qualifying APM Participants (Partial QPs)
Under section 1833(z)(2) of the Act, an EP may be determined to be a QP through: (1) Beginning for 2019, a Medicare payment threshold option that assesses the percent of Medicare Part B payments for covered professional services in the most recent period that is attributable to services furnished through an EAPM entity; or (2) beginning for 2021, either a Medicare payment threshold option or a combination all-payer and Medicare payment threshold option. The combination all-payer and Medicare payment threshold option assesses both: (1) The percent of Medicare payments for covered professional services in the most recent period that is attributable to services furnished through an EAPM entity; and (2) the percent of the combined Part B Medicare payments for covered professional services attributable to an EAPM entity and all other payments made by other payers made under similarly defined arrangements (except payments made by the Department of Defense or Veterans Affairs and payments made under Title XIX in a state in which no medical home or alternative payment model is available under the State program under that title). These arrangements must be arrangements in which: (1) Quality measures comparable to those used under the MIPS apply; (2) certified EHR technology is used; and (3) either the entity bears more than nominal financial risk if actual expenditures exceed expected expenditures or the entity is a medical home under Title XIX that meets criteria comparable to medical homes expanded under section 1115A(c) of the Act. For the combined all-payer and Medicare payment threshold option, the EP is required to provide to the Secretary the necessary information to make a determination as to whether the EP meets the all-payer portion of the threshold.
For 2019 and 2020, the Medicare-only payment threshold requires that at least 25 percent of Medicare Part B payments for covered professional services be attributable to services furnished through an EAPM entity. This threshold increases to 50 percent for 2021 and 2022, and to 75 percent for 2023 and later years. The combination all-payer and Medicare payment threshold option is available beginning in 2021; its thresholds are, respectively, 50 percent of all-payer payments and 25 percent of Medicare payments in 2021 and 2022, and 75 percent of all-payer payments and 25 percent of Medicare payments in 2023 and later years.
Section 1848(q)(1)(C)(ii) of the Act specifies that partial QPs are EPs who would be QPs if the threshold payment percentages under section 1833(z)(2) of the Act for the year were lower. For partial QPs, the Medicare-only payment thresholds are 20 percent (instead of 25 percent) for 2019 and 2020, 40 percent (instead of 50 percent) for 2021 and 2022, and 50 percent (instead of 75 percent) for 2023 and later years. For partial QPs, the combination all-payer and Medicare payment thresholds are, respectively, 40 percent (instead of 50 percent) all-payer and 20 percent (instead of 25 percent) Medicare in 2021 and 2022, and 50 percent (instead of 75 percent) all-payer and 20 percent (instead of 25 percent) Medicare in 2023 and later years.
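To summarize the threshold percentages described above, the following illustrative sketch classifies an EP as a QP, a partial QP, or neither for a given payment year. It restates the statutory percentages only; the function names, inputs, and the simplifying assumption that a single percentage figure is already available for each EP are hypothetical and do not reflect any CMS determination process.

```python
def _band(year: int) -> str:
    """Group payment years into the bands used by the statutory thresholds."""
    if 2019 <= year <= 2020:
        return "2019-2020"
    if 2021 <= year <= 2022:
        return "2021-2022"
    if year >= 2023:
        return "2023+"
    raise ValueError("QP determinations begin with payment year 2019")

# Thresholds per band: (Medicare-only percent, (all-payer percent, Medicare percent)).
# The combination all-payer and Medicare option is unavailable before 2021 (None).
QP_THRESHOLDS = {
    "2019-2020": (25, None),
    "2021-2022": (50, (50, 25)),
    "2023+":     (75, (75, 25)),
}
PARTIAL_QP_THRESHOLDS = {
    "2019-2020": (20, None),
    "2021-2022": (40, (40, 20)),
    "2023+":     (50, (50, 20)),
}

def classify(year: int, medicare_pct: float, all_payer_pct=None) -> str:
    """Return 'QP', 'Partial QP', or 'Neither' given the percent of Medicare Part B
    payments for covered professional services attributable to an EAPM entity and,
    optionally, the corresponding all-payer percent."""
    band = _band(year)
    for label, table in (("QP", QP_THRESHOLDS), ("Partial QP", PARTIAL_QP_THRESHOLDS)):
        medicare_only, combination = table[band]
        if medicare_pct >= medicare_only:
            return label
        if combination is not None and all_payer_pct is not None:
            all_payer_min, medicare_min = combination
            if all_payer_pct >= all_payer_min and medicare_pct >= medicare_min:
                return label
    return "Neither"

# Example: in 2021, an EP with 30 percent of Medicare payments and 55 percent of
# all-payer payments through qualifying arrangements meets the combination option
# for QP status even though the 50 percent Medicare-only threshold is not met.
print(classify(2021, medicare_pct=30, all_payer_pct=55))  # QP
```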
Partial QPs are not eligible for incentive payments for APM participation under section 1833(z) of the Act. Partial QPs who, for the MIPS performance period for the year, do not report applicable MIPS measures and activities are not considered MIPS EPs. Partial QPs who choose to participate in MIPS, however, are considered MIPS EPs and will be subject to payment adjustments under MIPS.
b. Payment Incentive for APM Participation
To help us establish criteria and a process for determining whether an EP is a QP or partial QP, this RFI requests information on the following issues.
- How should CMS define “services furnished under this part through an EAPM entity”?
- What policies should the Secretary consider for calculating incentive payments for APM participation when the prior period payments were made to an EAPM entity rather than directly to a QP, for example, if payments were made to a physician group practice or an ACO? What are the advantages and disadvantages of those policies? What are the effects of those policies on different types of EPs (for example, those in physician-focused APMs versus hospital-focused APMs, etc.)? How should CMS consider payments made to EPs who participate in more than one APM?
- What policies should the Secretary consider related to estimating the aggregate payment amounts when payments are made on a basis other than fee-for-service (for example, if payments were made on a capitated basis)? What are the advantages and disadvantages of those policies? What are their effects on different types of EPs (for example, those in physician-focused APMs versus hospital-focused APMs, etc.)?
- What types of data and information can EPs submit to CMS for purposes of determining whether they meet the non-Medicare share of the Combination All-Payer and Medicare Payment Threshold, and how can that information be securely shared with the federal government?
c. Patient Approach
Under section 1833(z)(2)(D) of the Act, the Secretary can use percentages of patient counts in lieu of percentages of payments to determine whether an EP is a QP or partial QP.
- What are examples of methodologies for attributing and counting patients in lieu of using payments to determine whether an EP is a QP or partial QP?
- Should this option be used in all or only some circumstances? If only in some circumstances, which ones and why?
d. Nominal Financial Risk
- What is the appropriate type or types of “financial risk” under section 1833(z)(3)(D)(ii)(I) of the Act to be considered an EAPM entity?
- What is the appropriate level of financial risk “in excess of a nominal amount” under section 1833(z)(3)(D)(ii)(I) of the Act to be considered an EAPM entity?
- What is the appropriate level of “more than nominal financial risk if actual aggregate expenditures exceed expected aggregate expenditures” that should be required by a non-Medicare payer for purposes of the Combination All-Payer and Medicare Payment Threshold under sections 1833(z)(2)(B)(iii)(II)(cc)(AA) and 1833(z)(2)(C)(iii)(II)(cc)(AA) of the Act?
- What are some points of reference that should be considered when establishing criteria for the appropriate type or level of financial risk, for example, the MIPS or private-payer models?
e. Medicaid Medical Homes or Other APMs Available Under State Medicaid Programs
EPs may meet the criteria to be QPs or partial QPs under the Combination All-Payer and Medicare Payment Threshold Option based, in part, on payments from non-Medicare payers attributable to services furnished through an entity that, with respect to beneficiaries under Title XIX, is a medical home that meets criteria comparable to medical homes expanded under section 1115A(c) of the Act. In addition, payments made under some State Medicaid programs, not associated with Medicaid medical homes, may meet the criteria to be included in the calculation of the combination all-payer and Medicare payment threshold option.
- What criteria could the Secretary consider for determining comparability of state Medicaid medical home models to medical home models expanded under section 1115A(c) of the Act?
- Which states' Medicaid medical home models might meet criteria comparable to medical homes expanded under section 1115A(c) of the Act?
- Which current Medicaid alternative payment models, besides Medicaid medical homes, are likely to meet the criteria for comparability to medical homes expanded under section 1115A(c) of the Act and should be considered when determining the all-payer portion of the Combination All-Payer and Medicare Payment Threshold Option?
f. EAPM Entity Requirements
An EAPM entity is defined as an entity that: (1) Participates in an APM that requires participants to use certified EHR technology (as defined in section 1848(o)(4) of the Act) and provides for payment for covered professional services based on quality measures comparable to measures under the performance category described in section 1848(q)(2)(B)(i) of the Act (the quality performance category); and (2) either bears financial risk for monetary losses under the APM that are in excess of a nominal amount or is a medical home expanded under section 1115A(c) of the Act.
(1) Definition
- What entities should be considered EAPM entities?
(2) Quality Measures
- What criteria could be considered when determining “comparability” to MIPS of quality measures used to identify an EAPM entity? Please provide specific examples for measures, measure types (for example, structure, process, outcome, and other types), data source for measures (for example, patients/caregivers, medical records, billing claims, etc.), measure domains, standards, and comparable methodology.
- What criteria could be considered when determining “comparability” to MIPS of quality measures required by a non-Medicare payer to qualify for the Combination All-Payer and Medicare Payment Threshold? Please provide specific examples for measures, measure types (for example, structure, process, outcome, and other types), recommended data sources for measures (for example, patients/caregivers, medical records, billing claims, etc.), measure domains, and comparable methodology.
(3) Use of Certified EHR Technology
- What components of certified EHR technology as defined in section 1848(o)(4) of the Act should APM participants be required to use? Should APM participants be required to use the same certified EHR technology currently required for the Medicare and Medicaid EHR Incentive Programs, or should CMS consider other requirements around certified health IT capabilities?
- What are the core health IT functions that providers need to manage patient populations, coordinate care, engage patients and monitor and report quality? Would certification of additional functions or interoperability requirements in health IT products (for example, referral management or population health management functions) help providers succeed within APMs?
- How should CMS define “use” of certified EHR technology as defined in section 1848(o)(4) of the Act by participants in an APM? For example, should the APM require participants to report quality measures to all payers using certified EHR technology or only payers who require EHR reported measures? Should all professionals in the APM in which an eligible alternative payment entity participates be required to use certified EHR technology or a particular subset?
2. Information Regarding Physician-Focused Payment Models
Section 101(e)(1) of the MACRA adds a new subsection (c) to section 1868 of the Act, entitled “Increasing the Transparency of Physician-Focused Payment Models.” This section establishes an independent “Physician-focused Payment Model Technical Advisory Committee” (the Committee). The Committee will review and provide comments and recommendations to the Secretary on PFPMs submitted by stakeholders. Section 1868(c)(2)(A) of the Act requires the Secretary to establish, through notice and comment rulemaking following an RFI, criteria for PFPMs, including models for specialist physicians, that could be used by the Committee for making its comments and recommendations. In this RFI, we are seeking input on potential criteria that the Committee could use for making comments and recommendations to the Secretary on PFPMs proposed by stakeholders. CMS published an RFI requesting information on Specialty Practitioner Payment Model Opportunities on February 11, 2014, available at http://innovation.cms.gov/files/x/specialtypractmodelsrfi.pdf. The comments received in response to that RFI will also be considered in developing the proposed rule for the criteria for PFPMs.
PFPMs are not required by the MACRA to meet the criteria to be considered APMs as defined under section 1833(z)(3)(C) of the Act or to involve an EAPM entity as defined under section 1833(z)(3)(D) of the Act. However, we are interested in encouraging model proposals from stakeholders that will provide EPs the opportunity to become QPs and receive incentive payments (in other words, model proposals that would involve EAPM entities as defined in section 1833(z)(3)(D) of the Act). PFPMs proposed by stakeholders and selected for implementation by CMS will take time and resources to implement after being reviewed by the Committee and the Secretary. To expedite our ability to implement such models, we are interested in receiving comments now on criteria that would support development of PFPMs that involve EAPM entities.
a. Definition of Physician-Focused Payment Models
- How should “physician-focused payment model” be defined?
b. Criteria for Physician-Focused Payment Models
We are required by section 1868(c)(2)(A) of the Act to establish by November 1, 2016, through rulemaking and following an RFI, criteria for PFPMs, including models for specialist physicians, that could be used by the Committee for making comments and recommendations to the Secretary. We intend to establish criteria that promote robust and well-developed proposals to facilitate implementation of PFPMs. To assist us with establishing criteria, this RFI requests information on the following fundamental issues.
- What criteria should be used by the Committee for assessing PFPM proposals submitted by stakeholders? We are interested in hearing suggestions related to the criteria discussed in this RFI as well as other criteria.
- Are there additional or different criteria that the Committee should use for assessing PFPMs that are specialist models? What criteria would promote development of new specialist models?
- What existing criteria, procedures, or standards are currently used by private or public insurance plans in testing or establishing new payment models? Should any of these criteria be used by the Committee for assessing PFPM proposals? Why or why not?
c. Required Information on Context of Model Within Delivery System Reform
This RFI seeks feedback on the information that stakeholders proposing models could be required to provide for the Committee's consideration.
We are considering the following specific criteria for the Committee to use to make comments and recommendations related to model proposals submitted to the Committee. We are seeking feedback on whether these criteria should be included and, if so, whether they should be modified, and whether other criteria should be considered. Each of these criteria is considered for all models tested through the Center for Medicare and Medicaid Innovation (Innovation Center) during internal development. For a list of the factors considered in the Innovation Center's model selection process, see http://innovation.cms.gov/Files/x/rfi-websitepreamble.pdf. We seek comment on the following possible criteria:
- We are considering whether proposed PFPMs should be focused primarily on including, in their design, participants who have not had the opportunity to participate in another PFPM with CMS because no such model has been designed to include their specialty.
- Proposals would state why the proposed model should be given priority, and why a model is needed to test the approach.
- Proposals would include a framework for the proposed payment methodology, how it differs from the current Medicare payment methodology, and how it promotes delivery system reforms.
- If a similar model has been tested or researched previously, either by CMS or in the private sector, the stakeholder would include background information and assessments on the performance of the similar model.
- Proposed models would aim to directly solve a current issue in payment policy that CMS is not already addressing in another model or program.
d. Required Information on Model Design
For the Committee to comment and make recommendations on the merits of PFPMs proposed by stakeholders, we are considering a requirement that proposals include the same information that would be required for any model tested through the Innovation Center. For a list of the factors considered in the Innovation Center's model selection process, see http://innovation.cms.gov/Files/x/rfi-websitepreamble.pdf. This RFI requests comments on the usefulness of this information, which of the suggested information is appropriate to consider as criteria, and whether other criteria should be considered. The provision of information would not require particular answers in order for a PFPM to meet the criteria. Instead, a proposal would be incomplete if it did not include this information.
- Definition of the target population, how the target population differs from the non-target population, and the number of Medicare beneficiaries that would be affected by the model.
- Ways in which the model would impact the quality and efficiency of care for Medicare beneficiaries.
- Whether the model would provide for payment for covered professional services based on quality measures, and if so, whether the measures are comparable to quality measures under the MIPS quality performance category.
- Specific proposed quality measures in the model, their prior validation, and how they would further the model's goals, including measures of beneficiary experience of care, quality of life, and functional status that could be used.
- How the model would affect access to care for Medicare and Medicaid beneficiaries.
- How the model would affect disparities among beneficiaries by race, ethnicity, and gender, and among beneficiaries with disabilities, and how the applicant intends to monitor changes in disparities during model implementation.
- Proposed geographical location(s) of the model.
- Scope of EP participants for the model, including information about which specialty or specialties the EP participants in the model would represent.
- The number of EPs expected to participate in the model, information about whether EP participants for the model have expressed interest in participating, and relevant stakeholder support for the model.
- To what extent participants in the model would be required to use certified EHR technology.
- An assessment of financial opportunities for model participants including a business case for their participation.
- How the model would fit into existing Medicare payment systems, or replace them in part or in whole, and how it would interact with or complement existing alternative payment models.
- What payment mechanisms would be used in the model, such as incentive payments, performance-based payments, shared savings, or other forms of payment.
- Whether the model would include financial risk for participants for monetary losses in excess of a nominal amount, and the type and amount of financial performance risk assumed by model participants.
- Method for attributing beneficiaries to participants.
- Estimated percentage of Medicare spending impacted by the model and expected amount of any new Medicare/Medicaid payments to model participants.
- Mechanism and amount of anticipated savings to Medicare and Medicaid from the model, and any incentive payments, performance-based payments, shared savings, or other payments made from Medicare to model participants.
- Information about any similar models used by private payers, how the current proposal is similar to or different from those private models, and whether and how the model could include additional payers other than Medicare, including Medicaid.
- Whether the model engages payers other than Medicare, including Medicaid and/or private payers. If not, why not? If so, what proportion of the model's beneficiaries is covered by Medicare as compared to other payers?
- Potential approaches for CMS to evaluate the proposed model (study design, comparison groups, and key outcome measures).
- Opportunities for potential model expansion if successful.
C. Technical Assistance to Small Practices and Practices in Health Professional Shortage Areas
Section 1848(q)(11) of the Act provides for technical assistance to small practices and practices in HPSAs. In general, under section 1848(q)(11) of the Act, the Secretary is required to enter into contracts or agreements with entities such as quality improvement organizations, regional extension centers, and regional health collaboratives, beginning in Fiscal Year 2016, to offer guidance and assistance to MIPS EPs in practices of 15 or fewer professionals. Priority is to be given to small practices located in rural areas, HPSAs, and medically underserved areas, and to practices with low composite scores. The technical assistance is to focus on the performance categories under the MIPS or on how to transition to implementation of and participation in an APM.
For section 1848(q)(11) of the Act—
- What should CMS consider when organizing a program of technical assistance to support clinical practices as they prepare for effective participation in the MIPS and APMs?
- What existing educational and assistance efforts might be examples of “best in class” performance in spreading the tools and resources needed for small practices and practices in HPSAs? What evidence and evaluation results support these efforts?
- What are the most significant clinician challenges and lessons learned related to spreading quality measurement, leveraging CEHRT to make practice improvements, value-based payment, and APMs in small practices and practices in health professional shortage areas, and what solutions have been successful in addressing these issues?
- What kind of support should CMS offer in helping providers understand the requirements of MIPS?
- Should such assistance require a multi-year technical assistance commitment from providers, or should it be provided on a one-time basis?
- Should there be conditions of participation and/or exclusions for the providers eligible to receive such assistance, such as participation in delivery system reform initiatives like the Transforming Clinical Practice Initiative (TCPI; http://innovation.cms.gov/initiatives/Transforming-Clinical-Practices/), or a certain identified level of need?
III. Response to Comments
Because of the large number of public comments we normally receive on Federal Register documents, we are not able to acknowledge or respond to them individually. We will consider all comments we receive by the date and time specified in the DATES section of this document.
Dated: September 10, 2015.
Andrew M. Slavitt,
Acting Administrator, Centers for Medicare & Medicaid Services.
[FR Doc. 2015-24906 Filed 9-28-15; 11:15 am]
BILLING CODE 4120-01-P