Risk Adjustment Audits Are Here!!! Watch Out MAOs!
Risk adjustment is hugely important in Medicare Advantage (MA). It is intended to adjust payments to account for the underlying severity of beneficiaries’ health conditions and to appropriately compensate private insurers whose expected expenditures vary widely. Each year, plans receive higher payments in direct proportion to documented risk: a 5 percent increase in documented risk leads to a 5 percent increase in payment. Yet, because MAOs have considerable control over the documentation, it is common for insurers to erroneously document patient risk and receive inflated payments from CMS, at least according to several CMS and OIG reports.
Enter Risk Adjustment Data Validation (RADV) audits.
RADV audits are the main corrective action for overpayments made to Medicare Advantage organizations (MAOs) when the medical record lacks documentation to support the diagnoses reported for risk adjustment.
CMS has conducted contract-level RADV audits by selecting about 30 contracts for audit annually (roughly 5 percent of MA contracts). CMS then selects a sample of up to 201 beneficiaries from each contract, divided into three equal strata (low, average, and high risk). Auditors then comb through each beneficiary’s medical records to determine whether the diagnoses that the MA plan submitted are supported by documentation in the medical record. From this process, auditors can calculate an error rate for the sample, which can then be extrapolated to the rest of the contract. For instance, if auditors determine that an insurer overcoded a sample’s risk by 5 percent, auditors could infer that plans under that contract were overpaid by 5 percent. Historically, however, CMS has only sought to collect the overpayments identified for the sample of audited beneficiaries. Not anymore!
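To make the arithmetic concrete, here is a minimal sketch of how a sample’s error rate translates into a contract-level recoupment. All dollar figures are invented for illustration; CMS’s actual methodology is more involved (stratified sampling, confidence-interval adjustments, and so on).

```python
# Hypothetical illustration of contract-level RADV extrapolation.
# The dollar figures are invented; CMS's real method also stratifies
# the sample and applies confidence-interval adjustments.

def extrapolate_overpayment(sample_paid, sample_supported, contract_paid):
    """Project the sample's payment error rate onto the whole contract."""
    error_rate = (sample_paid - sample_supported) / sample_paid
    return error_rate * contract_paid

# Suppose a 201-beneficiary sample was paid $2,000,000 in risk-adjusted
# payments, but the medical records support only $1,900,000 of that
# (a 5% error rate).
overpayment = extrapolate_overpayment(
    sample_paid=2_000_000,
    sample_supported=1_900_000,
    contract_paid=50_000_000,   # total payments under the audited contract
)
print(f"${overpayment:,.0f}")   # 5% of $50M -> $2,500,000
```

Historically, CMS would have collected only the $100,000 identified in the sample itself; under extrapolation, the demand becomes the full projected amount.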
A CMS Final Rule, published February 1, 2023, addresses extrapolation, CMS’ decision not to apply a fee-for-service adjuster (FFS Adjuster) in RADV audits, and the payment years in which these policies will apply. Once it goes into effect on April 3, 2023, CMS estimates it will result in the recoupment of $4.7 billion in overpayments from MA insurers over the next decade.
As for extrapolations, CMS will not extrapolate RADV audit findings for PY 2011-2017 and will begin collection of extrapolated overpayment findings for any CMS and OIG audits conducted in PY 2018 and any subsequent payment year.
The improper payment measurements conducted each year by CMS that are included in the HHS Agency Financial Report, as well as audits conducted by the HHS-OIG, have demonstrated that the MA program is at high risk of improper payments. In fiscal year (FY) 2021 (based on calendar year 2019 payments), OIG calculated that CMS made over $15 billion in Part C overpayments, a figure representing nearly 7 percent of total Part C payments.
The HHS-OIG has also released several reports over the past few years that demonstrate a high risk of improper payments in the MA program.
Looking forward – Expect more MAO audits.
P.S. I will be presenting a webinar on Monday, March 20, 2023, via the Assent platform regarding:
FTC ELIMINATING NON-COMPETE AGREEMENTS HOW THAT WILL AFFECT HOSPITALS AND LTC
DATE : MARCH 20, 2023 | EST : 01:00 PM | PST : 10:00 AM | DURATION : 60 MINUTES
Feel free to sign up and listen!!
Family Practice Doctors: Is It CPT 1995 or 1997 Guidance?
Right now, CMS allows physicians to choose between the 1995 and 1997 guidelines for determining whether an evaluation and management (“E/M”) visit qualifies for a 99214 versus a 99213. The biggest difference between the two policies is that the 1995 guideline allows you to examine by body systems, rather than individual organ systems. Many revisions take effect January 1, 2023, including use of the 2021 guidance. But for dates of service before 2021, physicians can pick between the 1995 and 1997 guidance.
Why is this an issue?
If you are a family practitioner and get audited by Medicare, Medicaid, or private pay, you better be sure that your auditor audits with the right policy.
According to CPT, 99214 is indicated for an “office or other outpatient visit for the evaluation and management of an established patient, which requires at least two of these three key components: a detailed history, a detailed examination and medical decision making of moderate complexity.”
Think 99214 in any of the following situations:
- If the patient has a new complaint with a potential for significant morbidity if untreated or misdiagnosed,
- If the patient has three or more old problems,
- If the patient has a new problem that requires a prescription,
- If the patient has three stable problems that require medication refills, or one stable problem and one inadequately controlled problem that requires medication refills or adjustments.
The above is simplified and shorthand, so read the 1995 and 1997 guidance carefully.
An insurance company audited a client of mine and clearly used the 1997 guidance. On the audit report, the 1997 guidance was checked as being used. In fact, according to the audit report, the auditors used BOTH the 1997 and 1995 guidance, which, logically, creates a harder, more stringent standard for a 99214 than applying either policy alone.
Now the insurance company claims my client owes money. However, if the insurance company had applied the 1995 guidance only, then, we believe, he wouldn’t owe a dime. Now he has to hire me, defend himself to the insurance company, and possibly litigate if the insurance company stands its ground.
Sadly, the above story is not an anomaly. I see auditors misapply policies by using the wrong years all the time, almost daily. Always appeal. Never roll over.
Sometimes it is a smart decision to hire an independent expert to verify that the physician is right, and the auditors are wrong. If the audit is extrapolated, then it is wise to hire an expert statistician. See blog. And blog. The extrapolation rules were recently revised…well, in the last two or three years, so be sure you know the rules, as well. See blog.
District Court Upholds ALJ’s Decision that Extrapolation Was Conducted in Error
Today, I am going to write about a hospital in Tennessee that underwent an audit, after which the MAC determined that the hospital owed over $5 million. The hospital challenged both the OIG contractor’s sampling methodology and its determinations on specific claims by requesting a hearing before an ALJ. The District Court decision was published in September 2022. The reason I want to make you aware of this case is that there have been numerous Medicare provider appeals unsuccessfully challenging extrapolations, with the ALJs upholding them. In this case, the ALJ found the extrapolation in error, the Council reversed the ALJ on its own motion, and the district court reaffirmed the ALJ, holding that the extrapolation was faulty. Whenever good case law is published, we want to analyze the Court’s reasoning so we, as attorneys, can replicate the winning arguments.
One of the main reasons the district court agreed that the extrapolation was faulty was that no testimony supporting the OIG contractor’s extrapolation process or the implementation of its statistical sampling methodology was submitted at the hearing on June 11, 2020, and the contractor did not appear. It’s the familiar scene at an ALJ-level appeal: the auditor fails to appear to prove the audit’s veracity. See blog.
In addition to finding that additional claims satisfied Medicare coverage & payment requirements, the ALJ also found that OIG’s statistical extrapolation process did not comply with § 1893 of the Social Security Act, nor with the MPIM’s guidance on statistical extrapolation.
The ALJ held that HHS policy requires that the OIG’s audit be capable of being recreated, and that because the audit’s sampling frame utilized data from outside the audit, the audit could not be recreated.
The Council subsequently reviewed the ALJ’s decision on its own motion and reversed that decision in part, finding that the ALJ’s determination that the sampling process was invalid was an error of law. The Council then concluded that the OIG contractor’s statistical extrapolation met all applicable Medicare legal and regulatory requirements.
The hospital appealed to the federal district court. The district court’s review consists of determining whether, in light of the record as a whole, the Secretary’s determination is supported by “substantial evidence.”
According to the Court, the hospital amply demonstrated that the Council did not have the authority to overturn the decision of the ALJ on own-motion review. Accordingly, the hospital’s Motion for Summary Judgment was GRANTED, and the extrapolation was thrown out.
The Ugly Truth about Medicare Provider Appeals
Extrapolated audits are the worst.
These audits undersample and over-extrapolate – almost to the point that some audits allege that you owe more than you were paid. How is that fair in our judicial system? I mean, our country was founded on “due process.” That means you cannot be deprived of life, liberty, or property without it. If the government attempts to pursue your reimbursements at all, much less a greater amount than what you received, you are entitled to notice and a hearing.
Not to mention that OIG issued a Report back in 2020 that identified numerous mistakes in the extrapolations. The Report stated: “CMS did not always provide sufficient guidance and oversight to ensure that these reviews were performed in a consistent manner.” I don’t know about you, but that is disconcerting to me. It also stated that “The test was associated with at least $42 million in extrapolated overpayments that were overturned in fiscal years 2017 and 2018. If CMS did not intend that the contractors use this procedure, these extrapolations should not have been overturned. Conversely, if CMS intended that contractors use this procedure, it is possible that other extrapolations should have been overturned but were not.”
I have undergone hundreds of Medicare and Medicaid audits with extrapolations. You defend against these audits twofold: 1) by hiring an expert statistician to debunk the extrapolation; and 2) by using the provider as an expert clinician to discredit the denials. However, I am always dismayed…maybe that’s not the right word…flabbergasted that no one ever shows up on the other side. It is as if CMS via whatever contractor conducted the extrapolated audit believes that their audit needs no one to prove its veracity. As if we attorneys and providers should just accept their findings as truth, and they get the benefit of NOT hiring a lawyer and NOT showing up to ALJ trials.
Picture a scale of justice: the side weighed down with money is CMS. The empty side is the provider.
In normal trials, as you know, there are two opposing sides: a Plaintiff and a Defendant, although in administrative law it’s called a Petitioner and a Respondent. Medicaid provider appeals also have two opponents. However, in Medicare provider appeals, there is only one side: YOU. An ALJ will appear, but no auditor to defend the merits of the alleged overpayment that you, as a provider, are accused of owing.
In normal trials, if a party fails to appear, the Judge will almost automatically rule against the non-appearing party. Why isn’t it the same for Medicare provider appeals? If a Medicare provider appears to dispute an alleged audit, the Judge does not automatically rule in favor of the provider. Quite the opposite, frankly. The CMS Rules, which apply to all venues under the purview of CMS, including the ALJ level and the Medicare Appeals Council level, are crafted against providers, it seems. Regardless, the Rules create a procedure in which providers, not the auditors, are forced to retain counsel, which costs money; retain a statistician in cases of extrapolations, which costs money; and go through years of appeals across 5 levels, to all of which the CMS Rules apply. Real law doesn’t apply until the district court level, which is a 6th level – and 8 years later.
Any providers reading this who retain lobbyists: this Medicare appeal process needs to change legislatively.
Always Challenge the Extrapolation in Medicare Provider Audits!
Always challenge the extrapolation! It is my personal opinion that extrapolation is used too loosely. What I mean is that sample sizes are usually too small to constitute a valid representation of the provider’s claims. Say a provider bills 10,000 claims. Is a sample of 50 adequate?
In a 2020 case, Palmetto audited .0051% of claims by Palm Valley, and Palm Valley challenged CMS’ sample and extrapolation method. Palm Valley Health Care, Inc. v. Azar, No. 18-41067, 2020 BL 14097 (5th Cir., Jan. 15, 2020). As an aside, I had 2 back-to-back extrapolation cases recently. The provider, however, did not hire me until the ALJ level – the 3rd level of Medicare provider appeals. Unfortunately, no one argued that the extrapolation was faulty at the first 2 levels. We had 2 different ALJs, but both ruled that the provider could not raise new arguments (i.e., that the extrapolation was erroneous) at the 3rd level. They decided that all arguments should be raised from the beginning. This is just a reminder to: (a) raise all defenses immediately; and (b) not try the first two levels without an attorney.
Going back to Palm Valley.
The 5th Circuit held that while the statistical sampling methodology may not be the most precise methodology available, CMS’ selection methodology did represent a valid “complex balance of interests.” Principally, the court noted, quoting the Medicare Appeals Council, that CMS’ methodology was justified by the “real world constraints imposed by conflicting demands on limited public funds” and that Congress clearly envisioned extrapolation being applied to calculate overpayments in instances like this. I disagree with this result. I find it infuriating that auditors, like Palmetto, can scrutinize providers’ claims, yet circumvent similar accountability. They are being allowed to conduct a “hack” job at extrapolating to the financial detriment of the provider.
Interestingly, Palm Valley’s 5th Circuit decision was rendered in 2020. The dates of service of the claims Palmetto audited were July 2006 – January 2009. It just shows how long the legal battle can be in Medicare audits. Also, Palm Valley’s error rate was 53.7%. Remember, in 2019, CMS revised the extrapolation rules to allow extrapolations when the error rate is 50% or higher. If you want to read the extrapolation rules, you can find them in Chapter 8 of the Medicare Program Integrity Manual (“MPIM”).
On RACMonitor, health care attorney David Glaser mentioned that there is a difference between arguments and evidence. While you cannot admit new evidence at the ALJ level, you can make new arguments. He and I agreed, however, that even if you can dispute the extrapolation legally, a statistical report would not be allowed as new evidence – and those reports are important to submit early.
Lastly, 42 CFR 405.1014(a)(3) requires the provider to assert the reasons the provider disagrees with the extrapolation in the request for ALJ hearing.
CMS Rulings Are Not Law; Yet Followed By ALJs
Lack of medical necessity is one of the leading reasons for denials during RAC, MAC, TPE, and UPIC audits. However, case law dictates that the treating physician should be afforded deference in determining whether medical necessity exists, because the Medicare and/or Medicaid auditor never had the privilege of seeing the recipient.
However, recent ALJ decisions have gone against case law. How is that possible? CMS creates “Rules” – I say that in air quotes – that are not promulgated but are binding on anyone under CMS’ umbrella. Guess what? That includes the ALJs for Medicare appeals. As an example, the “treating physician” rule is law based on case law. Juxtapose that with CMS Ruling 93-1. It states that no presumptive weight should be assigned to a treating physician’s medical opinion in determining the medical necessity of inpatient hospital and skilled nursing facility services. The Ruling adds parenthetically that it does not “by omission or implication” endorse the application of the treating physician rule to services not addressed in the Ruling. So, we get a decision from an ALJ that dismisses the treating physician rule.
The ALJ decision actually said: Accordingly, I find that the treating physician rule, standing alone, provides no basis for coverage.
This ALJ went against the law but followed CMS Rulings.
CMS Rulings, however, are not binding on courts. CMS Rulings aren’t even law. Yet the CMS Rulings, according to CMS, are binding on the entities under the CMS umbrella. This means that the Medicare appeals process – the redeterminations, the reconsiderations, the ALJ decisions, and the Medicare Appeals Council’s decisions – is dictated by these non-law CMS Rulings, which fly in the face of actual law. ALJs uphold extrapolations based on CMS Rulings because they have to. But once you get to a federal district court judge, who is not bound by CMS’ non-law rulings, you get a real Judge’s decision, and most extrapolations are thrown out if the error rate is under 50%.
Basically, if you are a Medicare provider, you have to jump through the hoops of 4 levels of appeals that are dictated not by law, but by an administration that is rewarded for taking money from providers on the pretense of FWA. Most providers do not have the financial means to make it to the 5th level of appeal. So, CMS wins by default.
Folks, create a legal fund for your provider entity. You have got to appeal and be able to afford it. That is the only way that we can change the disproportionately unfair Medicare appeal process that providers must endure now.
CMS Rulings Can Devastate a Provider, But Should It?
If you could light a torch to a Molotov Cocktail and a bunch of newspapers, you could not make a bigger explosion in my head than a recent Decision from a Medicare administrative law judge (“ALJ”). The extrapolation was upheld, despite an expert statistician citing its shortcomings, based on a CMS Ruling, which is neither law nor precedent. The Decision reminded me of the new Firestarter movie because everything is up in flames. Drew Barrymore would be proud.
I find it very lazy of the government to rely on sampling and extrapolations, especially in light that no witness testifies to its accuracy.
Because this ALJ relied so heavily on CMS Rulings, I wanted to do a little detective work as to whether CMS Rulings are binding or even law. First, I logged onto Westlaw to search for “CMS Ruling” in any case in any jurisdiction in America. Nothing. Not one case ever mentioned “CMS Ruling.” Ever. (Nor did my law school).
What Is a CMS Ruling?
A CMS Ruling is defined as, “decisions of the Administrator that serve as precedent final opinions and orders and statements of policy and interpretation. They provide clarification and interpretation of complex or ambiguous provisions of the law or regulations relating to Medicare, Medicaid, Utilization and Quality Control Peer Review, private health insurance, and related matters.”
But Are CMS Rulings Law?
No. CMS Rulings are not law. CMS Rulings are not binding on district court judges because district court judges are not part of HHS or CMS. However, the Medicare ALJs are considered part of HHS and CMS; thus the CMS Rulings are binding on Medicare ALJs.
This creates a dichotomy between the “real law” and agency rules. When you read CMS Ruling 86-1, it reads as if there were two parties with opposing views who both presented their arguments, after which the Administrator made a ruling. But the Administrator is not a Judge, even though the Ruling reads like a court case. CMS Rulings are not binding on:
- The Supreme Court
- Appellate Courts
- The real world outside of CMS
- District Courts
- The Department of Transportation
- Civil Jurisprudence
- The Department of Education
- Etc. – You get the point.
So why are Medicare providers held subject to penalties based on CMS Rulings, when, after the providers appeal their case to district court, the “rule” that was applied against them (saying they owe $7 million) is rendered moot? Can we say: not fair, not equitable, not Constitutional, and flying in the face of due process?
The future does not look bright for providers going forward in defending overzealous, erroneous, and misplaced audits. These audits aren’t even backed up by witnesses – seriously, at the ALJ Medicare appeals, there is no statistician testifying to verify the results. Yet some of the ALJs are still upholding these audits.
In the “court case,” which resulted in CMS Ruling 86-1, the provider argued that:
- There is no legal authority in the Medicare statute or regulations for HCFA or its intermediaries to determine overpayments by projecting the findings of a sample of specific claims onto a universe of unspecified beneficiaries and claims.
- Section 1879 of the Social Security Act, 42 U.S.C. 1395pp, contemplates that medical necessity and custodial care coverage determinations will be made only by means of a case-by-case review.
- When sampling is used, providers are not able to bill individual beneficiaries not in the sample group for the services determined to be noncovered.
- Use of a sampling procedure violates the rights of providers to appeal adverse determinations.
- The use of sampling and extrapolation to determine overpayments deprives the provider of due process.
CMS Ruling 86-1 was decided in 1986 by Mr. Henry R. Desmarais, Acting Administrator, Health Care Financing Administration.
Think it should be upheld?
Post-COVID (ish) RAC Audits – Temporary Restrictions
2020 was an odd year for recovery audit contractor (“RAC”) and Medicare Administrative Contractor (“MAC”) audits. Well, it was an odd year for everyone. After trying five virtual trials, each one with up to 23 witnesses, it seems that, slowly but surely, we are getting back to normalcy. A tell-tale sign of fresh normalcy is an in-person defense of health care regulatory audits. I am defending a RAC audit of a pediatric facility in Georgia in a couple of weeks, and the clerk of court said – “The hearing is in person.” Well, that’s new. Even when we specifically requested a virtual trial, we were denied with the explanation that GA is open now. Virtual trials are cheaper and more convenient; clients don’t have to pay for hotels and airfare.
In-person hearings are back – at least in most states. We have similar players and new restrictions.
On March 16, 2021, CMS announced that it will temporarily restrict audits to claims with dates of service (“DOS”) of March 1, 2020, and before. Medicare audits are not yet dipping their metaphoric toes into the shark-infested waters of auditing claims with DOS from March 1, 2020, to today. This leaves a year-and-a-half period untouched. Once the temporary hold is lifted, audits of 2020 DOS will abound. On March 26, 2021, CMS awarded Performant Recovery, Inc., the incumbent, the new RAC Region 1 contract.
RACs review claims on a post-payment and/or pre-payment basis. (FYI – you would rather have a post-payment review than a pre-payment one – I promise.)
The RACs were created to detect fraud, waste, and abuse (“FWA”) by reviewing medical records. Any health care provider – no matter how big or small – is subject to audits at the whim of the government. CMS, RACs, MCOs, MACs, TPEs, UPICs, and every other auditing company can implement actions that will prevent future improper payments, as well. As we all know, RACs are paid on a contingency basis – approximately 13%. When the RACs were first created, they were compensated based on accusations of overpayments, not the amounts truly owed after an independent tribunal. As any human could surmise, the contingency payment creates an overzealousness that can only be demonstrated by my favorite case in my 21 years – in New Mexico against Public Consulting Group (“PCG”). A behavioral health care (“BH”) provider was accused of an over $12 million overpayment. After we presented before the administrative law judge (“ALJ”) in NM Administrative Court, the ALJ determined that we owed $896.35. The more-than-99.9% reduction was because of the following:
- Faulty Extrapolation: NM HSD’s contractor PCG reviewed approximately 150 claims out of 15,000 claims between 2009 and 2013. With the error rate set as high as 92%, the base overpayment equaled $9,812.08; however, the extrapolated amount equaled over $12 million. Our expert statistician rebutted the error rate being so high. Once the extrapolation is thrown out, you are dealing with a much more reasonable amount – only about $9k.
- Attack the Clinical Denials: The underlying, alleged overpayment of $9,812.08 was based on 150 claims. We walked through the 150 claims that PCG claimed were denials and proved PCG wrong. Examples of their errors include denials based on lack of staff credentialing when, in reality, the auditor simply could not read the signature. Other claims were erroneously denied based on the application of the wrong policy year.
The upshot is that we convinced the judge that PCG was wrong in almost every denial PCG made. In the end, the Judge found we owed $896.35, not $12 million. Little bit of a difference! We appealed.
A Study of Contractor Consistency in Reviewing Extrapolated Overpayments
By Frank Cohen, MPA, MBB – my colleague from RACMonitor. He wrote a great article and has permitted me to share it with you. See below.
CMS levies billions of dollars in overpayments a year against healthcare providers, based on the use of extrapolation audits.
The use of extrapolation in Medicare and private payer audits has been around for quite some time now. And lest you be of the opinion that extrapolation is not appropriate for claims-based audits, there are many, many court cases that have supported its use, both specifically and in general. Arguing that extrapolation should not have been used in a given audit, unless that argument is supported by specific statistical challenges, is mostly a waste of time.
For background purposes, extrapolation, as it is used in statistics, is a “statistical technique aimed at inferring the unknown from the known. It attempts to predict future data by relying on historical data, such as estimating the size of a population a few years in the future on the basis of the current population size and its rate of growth,” according to a definition created by Eurostat, a component of the European Union. For our purposes, extrapolation is used to estimate what the actual overpayment amount might likely be for a population of claims, based on auditing a smaller sample of that population. For example, say a Uniform Program Integrity Contractor (UPIC) pulls 30 claims from a medical practice from a population of 10,000 claims. The audit finds that 10 of those claims had some type of coding error, resulting in an overpayment of $500. To extrapolate this to the entire population of claims, one might take the average overpayment, which is the $500 divided by the 30 claims ($16.67 per claim) and multiply this by the total number of claims in the population. In this case, we would multiply the $16.67 per claim by 10,000 for an extrapolated overpayment estimate of $166,667.
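The UPIC example above can be reproduced in a few lines; the numbers come straight from the text.

```python
# Worked version of the hypothetical UPIC extrapolation described above.
population_size = 10_000        # claims in the practice's total population
sample_size = 30                # claims actually audited
sample_overpayment = 500.00     # overpayment dollars found across the sample

mean_overpayment = sample_overpayment / sample_size    # about $16.67 per claim
extrapolated = mean_overpayment * population_size      # about $166,667

print(f"${mean_overpayment:.2f} per claim -> ${extrapolated:,.0f} extrapolated")
```

Note how the multiplier works: a $500 finding on 30 claims becomes a six-figure demand once projected across 10,000 claims, which is why the quality of the sample matters so much.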
The big question that normally crops up around extrapolation is this: how accurate are the estimates? And the answer is (wait for it …), it depends. It depends on just how well the sample was created, meaning: was the sample size appropriate, were the units pulled properly from the population, was the sample truly random, and was it representative of the population? The last point is particularly important, because if the sample is not representative of the population (in other words, if the sample data does not look like the population data), then it is likely that the extrapolated estimate will be anything but accurate.
To account for this issue, referred to as “sample error,” statisticians will calculate something called a confidence interval (CI), which is a range within which there is some acceptable amount of error. The higher the confidence value, the larger the potential range of error. For example, in the hypothetical audit outlined above, maybe the real average for a 90-percent confidence interval is somewhere between $15 and $18, while, for a 95-percent confidence interval, the true average is somewhere between $14 and $19. And if we were to calculate for a 99-percent confidence interval, the range might be somewhere between $12 and $21. So, the greater the range, the more confident I feel about my average estimate. Some express the confidence interval as a sense of true confidence, like “I am 90 percent confident the real average is somewhere between $15 and $18,” and while this is not necessarily wrong, per se, it does not communicate the real value of the CI. I have found that the best way to define it would be more like “if I were to pull 100 random samples of 30 claims and audit all of them, 90 percent would have a true average of somewhere between $15 and $18,” meaning that the true average for some 1 out of 10 would fall outside of that range – either below the lower boundary or above the upper boundary. The main reason that auditors use this technique is to avoid challenges based on sample error.
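Here is a rough sketch of the confidence-interval calculation described above, using a normal approximation. The per-claim figures are invented; a real audit computes this from the audited sample’s actual dollar values, and contractors may use other interval methods.

```python
import math
import statistics

# Hypothetical sample of 30 audited claims: 20 fully supported,
# 10 with a $50 overpayment each ($500 total, mean about $16.67/claim).
overpayments = [0.0] * 20 + [50.0] * 10

n = len(overpayments)
mean = statistics.mean(overpayments)
se = statistics.stdev(overpayments) / math.sqrt(n)   # standard error of the mean

# Higher confidence levels produce wider intervals (normal z-approximation)
for label, z in [("90%", 1.645), ("95%", 1.960), ("99%", 2.576)]:
    print(f"{label} CI: ${mean - z * se:.2f} to ${mean + z * se:.2f}")
```

As I understand Chapter 8 of the MPIM, contractors generally demand the lower limit of a 90-percent confidence interval rather than the point estimate, a choice that partially offsets sample error in the provider’s favor.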
To the crux of the issue, the Centers for Medicare & Medicaid Services (CMS) levies billions of dollars in overpayments a year against healthcare providers, based on the use of extrapolation audits. And while the use of extrapolation is well-established and well-accepted, its use in an audit is not an automatic, and depends upon the creation of a statistically valid and representative sample. Thousands of extrapolation audits are completed each year, and for many of these, the targeted provider or organization will appeal the use of extrapolation. In most cases, the appeal is focused on one or more flaws in the methodology used to create the sample and calculate the extrapolated overpayment estimate. For government audits, such as with UPICs, there is a specific appeal process, as outlined in their Medical Learning Network booklet, titled “Medicare Parts A & B Appeals Process.”
On Aug. 20, 2020, the U.S. Department of Health and Human Services Office of Inspector General (HHS OIG) released a report titled “Medicare Contractors Were Not Consistent in How They Reviewed Extrapolated Overpayments in the Provider Appeals Process.” This report opens with the following statement: “although MACs (Medicare Administrative Contractors) and QICs (Qualified Independent Contractors) generally reviewed appealed extrapolated overpayments in a manner that conforms with existing CMS requirements, CMS did not always provide sufficient guidance and oversight to ensure that these reviews were performed in a consistent manner.” These inconsistencies were associated with $42 million in extrapolated payments from fiscal years 2017 and 2018 that were overturned in favor of the provider. It’s important to note that at this point, we are only talking about appeal determinations at the first and second level, known as redetermination and reconsideration, respectively.
Redetermination is the first level of appeal, and is adjudicated by the MAC. And while the staff that review the appeals at this level are supposed to have not been involved in the initial claim determination, I believe that most would agree that this step is mostly a rubber stamp of approval for the extrapolation results. In fact, of the hundreds of post-audit extrapolation mitigation cases in which I have been the statistical expert, not a single one was ever overturned at redetermination.
The second level of appeal, reconsideration, is handled by a QIC. In theory, the QIC is supposed to independently review the administrative records, including the appeal results of redetermination. Continuing with the prior paragraph, I have to date had only several extrapolation appeals reversed at reconsideration; however, all were due to the fact that the auditor failed to provide the practice with the requisite data, and not due to any specific issues with the statistical methodology. In two of those cases, the QIC notified the auditor that if they were to get the required information to them, they would reconsider their decision. And in two other cases, the auditor appealed the decision, and it was reversed again. Only the fifth case held without objection and was adjudicated in favor of the provider.
Maybe this is a good place to note that the entire process for conducting extrapolations in government audits is covered under Chapter 8 of the Medicare Program Integrity Manual (PIM). Altogether, there are only 12 pages within the entire Manual that actually deal with the statistical methodology behind sampling and extrapolation; this is certainly not enough to provide the degree of guidance required to ensure consistency among the different government contractors that perform such audits. And this is what the OIG report is talking about.
Back to the $42 million that was overturned at either redetermination or reconsideration: the OIG report found that this was due to a “type of simulation testing that was performed only by a subset of contractors.” The report goes on to say that “CMS did not intend that the contractors use this procedure, (so) these extrapolations should not have been overturned. Conversely, if CMS intended that contractors use this procedure, it is possible that other extrapolations should have been overturned but were not.” This was quite confusing for me at first, because this “simulation” testing was not well-defined, and also because it seemed to say that if this procedure was appropriate to use, then more contractors should have used it, which would have resulted in more reversals in favor of the provider.
Interestingly, CMS seems to have written itself an out in Chapter 8, Section 8.4.1.1 of the PIM, which states that “[f]ailure by a contractor to follow one or more of the requirements contained herein does not necessarily affect the validity of the statistical sampling that was conducted or the projection of the overpayment.” The phrase “does not necessarily” leaves wide open the possibility that a contractor's failure to follow one or more of the requirements may affect the validity of the statistical sample, which in turn affects the validity of the extrapolated overpayment estimate.
Regarding the simulation testing, the report stated that “one MAC performed this type of simulation testing for all extrapolation reviews, and two MACs recently changed their policies to include simulation testing for sample designs that are not well-supported by the program integrity contractor. In contrast, both QICs and three MACs did not perform simulation testing and had no plans to start using it in the future.” And even though it was referenced some 20 times, with the exception of an example given as Figure 2 on page 10, the report never did describe in any detail the type of simulation testing that went on. From the example, it was evident to me that the MACs and QICs involved were using what is known as a Monte Carlo simulation. In statistics, simulation is used to assess the performance of a method, typically when there is a lack of theoretical background. With simulations, the statistician knows and controls the truth. Simulation is used advantageously in a number of situations, including providing the empirical estimation of sampling distributions. Footnote 10 in the report stated that “reviewers used the specific simulation test referenced here to provide information about whether the lower limit for a given sampling design was likely to achieve the target confidence level.” If you are really interested in learning more about it, there is a great paper called “The design of simulation studies in medical statistics” by Burton et al. (2006).
Its application in these types of audits is to “simulate” the audit many thousands of times to see whether the results fall within the expected confidence interval range at the advertised rate – in other words, to check whether the audit design actually behaves the way the Central Limit Theorem (CLT) assumes.
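The coverage check described in Footnote 10 can be sketched in a few lines of Python. Everything here is illustrative: the lognormal “overpayment per claim” population, the sample size of 30, and the `lower_limit_coverage` helper are my assumptions for demonstration, not the contractors' actual procedure.

```python
import random
import statistics

def lower_limit_coverage(population, n=30, z=1.282, trials=5000, seed=1):
    """Simulate the audit many times and report how often the one-sided
    90% lower confidence limit on the extrapolated total overpayment
    actually falls at or below the true total (target: 0.90)."""
    N = len(population)
    true_total = sum(population)
    rng = random.Random(seed)
    covered = 0
    for _ in range(trials):
        sample = rng.sample(population, n)
        mean = statistics.mean(sample)
        se = statistics.stdev(sample) / n ** 0.5
        se *= ((N - n) / (N - 1)) ** 0.5       # finite-population correction
        if N * (mean - z * se) <= true_total:  # lower limit vs. the truth
            covered += 1
    return covered / trials

# Hypothetical right-skewed population of per-claim overpayment amounts.
rng = random.Random(42)
population = [rng.lognormvariate(4, 1.5) for _ in range(2000)]

cov = lower_limit_coverage(population)
print(f"empirical coverage of the lower limit: {cov:.3f}")
```

If the empirical coverage falls materially short of the 90-percent target, the sampling design does not deliver the confidence it advertises – which appears to be exactly the kind of defect these reviewers were screening for.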
Often, the sample sizes used in recoupment-type audits are too small, and this is usually due to a conflict between the sample size calculations and the distributions of the data. For example, in RAT-STATS, the statistical program maintained by the OIG, and a favorite of government auditors, sample size estimates are based on an assumption that the data are normally (or near normally) distributed. A normal distribution is defined by the mean and the standard deviation, and includes a bunch of characteristics that make sample size calculations relatively straightforward. But the truth is, because most auditors use the paid amount as the variable of interest, population data are rarely, if ever, normally distributed. Unfortunately, there is simply not enough room or time to get into the details of distributions, but suffice it to say that, because paid data are bounded on the left with zero (meaning that payments are never less than zero), paid data sets are almost always right-skewed. This means that the distribution tail continues on to the right for a very long distance.
In these types of skewed situations, the sample size normally has to be much larger in order for the CLT approximation to hold. So, what one can do is simulate the random sample over and over again to see whether the resulting distribution of sample estimates ever approximates a normal distribution – and if not, it means that the results of that sample should not be used for extrapolation. This seems to be what the OIG was describing in its report. Basically, some but not all of the appeals entities (MACs and QICs) performed this type of simulation testing. For those that did, the report stated that $41.5 million of the $42 million in reversed extrapolations was attributable to the use of this simulation testing. The OIG seems to be saying this: if this was an unintended consequence, meaning that there wasn't any guidance in place authorizing this type of testing, then it should not have been done, and those extrapolations should not have been overturned. But if it should have been done, meaning that there should have been written guidance authorizing that type of testing, then there are likely many other extrapolations that should have been reversed in favor of the provider. A sticky wicket, at best.
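That repeated-sampling idea can be sketched directly. The lognormal “paid amount” population below is a made-up illustration, and `sampling_dist_skewness` is a hypothetical helper; the point is simply that the skewness of the simulated sampling distribution shrinks as the sample size grows, which is what the CLT promises – and why an undersized sample from skewed paid data can fail the test.

```python
import random

def sampling_dist_skewness(population, n, trials=2000, seed=7):
    """Skewness of the simulated distribution of sample means; values
    near zero suggest the CLT approximation is adequate at this n."""
    rng = random.Random(seed)
    means = [sum(rng.sample(population, n)) / n for _ in range(trials)]
    m = sum(means) / trials
    sd = (sum((x - m) ** 2 for x in means) / (trials - 1)) ** 0.5
    return sum(((x - m) / sd) ** 3 for x in means) / trials

# Hypothetical right-skewed paid amounts: bounded at zero, long right tail.
rng = random.Random(0)
paid = [rng.lognormvariate(5, 1.2) for _ in range(5000)]

for n in (30, 100, 400):
    print(n, round(sampling_dist_skewness(paid, n), 2))
```

With a sample of 30, the simulated sampling distribution remains visibly right-skewed; only at much larger sample sizes does it settle toward the symmetric, normal shape that the extrapolation math assumes.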
Under the heading “Opportunity To Improve Contractor Understanding of Policy Updates,” the report also stated that “the MACs and QICs have interpreted these requirements differently. The MAC that previously used simulation testing to identify the coverage of the lower limit stated that it planned to continue to use that approach. Two MACs that previously did not perform simulation testing indicated that they would start using such testing if they had concerns about a program integrity contractor’s sample design. Two other MACs, which did not use simulation testing, did not plan to change their review procedures.” One QIC indicated that it would defer to the administrative QIC (AdQIC, the central manager for all Medicare fee-for-service claim case files appealed to the QIC) regarding any changes. But it ended this paragraph by stating that “AdQIC did not plan to change the QIC Manual in response to the updated PIM.”
With respect to this issue and this issue alone, the OIG submitted two specific recommendations, as follows:
- Provide additional guidance to MACs and QICs to ensure reasonable consistency in procedures used to review extrapolated overpayments during the first two levels of the Medicare Parts A and B appeals process; and
- Take steps to identify and resolve discrepancies in the procedures that MACs and QICs use to review extrapolations during the appeals process.
In the end, I am not encouraged that we will see any degree of consistency between and within the QIC and MAC appeals in the near future.
Basically, it would appear that the OIG, while having some oversight in the area of recommendations, doesn’t really have any teeth when it comes to enforcing change. I expect that while some reviewers may respond appropriately to the use of simulation testing, most will not, if it means a reversal of the extrapolated findings. In these cases, it is incumbent upon the provider to ensure that these issues are brought up during the Administrative Law Judge (ALJ) appeal.
Programming Note: Listen to Frank Cohen report this story live during the next edition of Monitor Mondays, 10 a.m. Eastern.
CMS Clarifying Medicare Overpayment Rules: The Bar Is Raised (Yet Again) for Health Care Providers
Have you ever watched athletes compete in the high jump? Each time an athlete successfully clears the bar, the bar gets raised…again…and again…until the athlete can no longer make it over. Similarly, the Centers for Medicare & Medicaid Services (CMS) continues to raise the bar on health care providers who accept Medicare and Medicaid.
In February, CMS finalized the rule requiring providers to proactively investigate themselves and report any overpayments to CMS for Medicare Parts A and B. (The rule for Medicare Parts C and D was finalized in 2014, and the rule for Medicaid has not yet been promulgated.) The rule makes it very clear that CMS expects providers and suppliers to enact robust self-auditing policies.
We all know that the Affordable Care Act (ACA) was intended to be self-funding. Who is funding it? Doctors, psychiatrists, home care agencies, hospitals, long term care facilities, dentists…anyone who accepts Medicare and Medicaid. The self-funding portion of the ACA is strict; it is infallible, and its fraud, waste, and abuse (FWA) detection tools…oh, how wide that net is cast!
Section 6402 of the ACA added Subsection 1128J(d) to the Social Security Act, which requires that providers report overpayments to CMS “by the later of – (A) the date which is 60 days after the date on which the overpayment was identified; or (B) the date any corresponding cost report is due, if applicable.”
An overpayment is “identified” when the person has determined, or reasonably should have determined through the exercise of reasonable diligence, that the person received an overpayment. Overpayments include payments resulting from referrals that violate the Anti-Kickback Statute.
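The “later of” deadline is simple date arithmetic. The `report_deadline` helper and the dates below are hypothetical illustrations, not anything CMS publishes:

```python
from datetime import date, timedelta

def report_deadline(identified, cost_report_due=None):
    """Later of (identification date + 60 days) or the corresponding
    cost report's due date, if one applies."""
    sixty_day = identified + timedelta(days=60)
    return max(sixty_day, cost_report_due) if cost_report_due else sixty_day

# An overpayment identified March 1 with no cost report must be reported within 60 days.
print(report_deadline(date(2016, 3, 1)))                     # 2016-04-30
# With a cost report due June 30, the later date controls.
print(report_deadline(date(2016, 3, 1), date(2016, 6, 30)))  # 2016-06-30
```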
CMS allows providers to extrapolate their findings, but what provider in their right mind would do so?
There is a six-year lookback period, so you do not have to report overpayments for claims older than six years.
You can get an extension of the 60-day deadline if:
• Office of Inspector General (OIG) acknowledges receipt of a submission to the OIG Self-Disclosure Protocol
• CMS acknowledges receipt of a submission to the CMS Voluntary Self-Referral Disclosure Protocol
• Provider requests an extension under 42 CFR §401.603
My recommendation? Strap on your pole vaulting shoes and get to jumping!