Monthly Archives: January 2021
A Study of Contractor Consistency in Reviewing Extrapolated Overpayments
By Frank Cohen, MPA, MBB – my colleague from RACMonitor. He wrote a great article and has permitted me to share it with you. See below.
CMS levies billions of dollars in overpayments a year against healthcare providers, based on the use of extrapolation audits.
The use of extrapolation in Medicare and private payer audits has been around for quite some time now. And lest you be of the opinion that extrapolation is not appropriate for claims-based audits, there are many, many court cases that have supported its use, both specifically and in general. Arguing that extrapolation should not have been used in a given audit, unless that argument is supported by specific statistical challenges, is mostly a waste of time.
For background purposes, extrapolation, as it is used in statistics, is a “statistical technique aimed at inferring the unknown from the known. It attempts to predict future data by relying on historical data, such as estimating the size of a population a few years in the future on the basis of the current population size and its rate of growth,” according to a definition created by Eurostat, a component of the European Union. For our purposes, extrapolation is used to estimate what the actual overpayment amount is likely to be for a population of claims, based on auditing a smaller sample of that population. For example, say a Uniform Program Integrity Contractor (UPIC) pulls 30 claims from a medical practice from a population of 10,000 claims. The audit finds that 10 of those claims had some type of coding error, resulting in an overpayment of $500. To extrapolate this to the entire population of claims, one might take the average overpayment, which is the $500 divided by the 30 claims ($16.67 per claim), and multiply this by the total number of claims in the population. In this case, we would multiply the $16.67 per claim by 10,000 for an extrapolated overpayment estimate of $166,667.
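The arithmetic above is simple enough to sketch in a few lines of Python. The figures are the article's hypothetical 30-claim UPIC sample, not real audit data:

```python
# Point-estimate extrapolation, using the article's hypothetical numbers.
sample_size = 30          # claims audited
sample_overpayment = 500  # total overpayment found in the sample, in dollars
population_size = 10_000  # claims in the population

# Average overpayment per sampled claim, projected across the population.
mean_overpayment = sample_overpayment / sample_size   # about $16.67 per claim
extrapolated = mean_overpayment * population_size     # about $166,667

print(f"mean per claim: ${mean_overpayment:.2f}")
print(f"extrapolated overpayment: ${extrapolated:,.0f}")
```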
The big question that normally crops up around extrapolation is this: how accurate are the estimates? And the answer is (wait for it …), it depends. It depends on just how well the sample was created, meaning: was the sample size appropriate, were the units pulled properly from the population, was the sample truly random, and was it representative of the population? The last point is particularly important, because if the sample is not representative of the population (in other words, if the sample data does not look like the population data), then it is likely that the extrapolated estimate will be anything but accurate.
To account for this issue, referred to as “sampling error,” statisticians will calculate something called a confidence interval (CI), which is a range within which there is some acceptable amount of error. The higher the confidence level, the wider the interval. For example, in the hypothetical audit outlined above, maybe the 90-percent confidence interval for the average is somewhere between $15 and $18, while the 95-percent confidence interval is somewhere between $14 and $19. And if we were to calculate a 99-percent confidence interval, the range might be somewhere between $12 and $21. So, the wider the range, the more confident I can be that it contains the true average. Some express the confidence interval as a sense of true confidence, like “I am 90 percent confident the real average is somewhere between $15 and $18,” and while this is not necessarily wrong, per se, it does not communicate the real value of the CI. I have found that the best way to define it would be more like “if I were to pull 100 random samples of 30 claims, audit each of them, and calculate a confidence interval for each, about 90 of those intervals would contain the true average,” meaning that roughly 1 interval in 10 would miss the true average entirely, either below its lower boundary or above its upper boundary. The main reason that auditors use this technique is to avoid challenges based on sampling error.
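To make the mechanics concrete, here is a minimal Python sketch of a t-based confidence interval around a sample mean. The 30 overpayment amounts are invented for illustration, and the critical t values are standard table values for 29 degrees of freedom; this is a generic textbook calculation, not the method any particular contractor uses:

```python
import math
import statistics

# Hypothetical overpayment amounts for 30 sampled claims (zeros are claims
# with no error found). These numbers are made up for illustration only.
overpayments = [0, 0, 25, 0, 50, 10, 0, 0, 75, 0,
                0, 30, 0, 0, 0, 60, 0, 0, 0, 40,
                0, 0, 90, 0, 0, 20, 0, 0, 100, 0]

n = len(overpayments)
mean = statistics.mean(overpayments)
se = statistics.stdev(overpayments) / math.sqrt(n)  # standard error of the mean

# Two-sided critical t values for n - 1 = 29 degrees of freedom (table values).
for label, t_crit in [("90%", 1.699), ("95%", 2.045), ("99%", 2.756)]:
    lo, hi = mean - t_crit * se, mean + t_crit * se
    print(f"{label} CI for the mean: ${lo:.2f} to ${hi:.2f}")
```

Note how the interval widens as the confidence level rises, just as in the $15-$18, $14-$19, and $12-$21 ranges in the example above. In government audits, the demand amount is typically based on the lower limit of the interval rather than the point estimate, which works in the provider's favor.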
To the crux of the issue, the Centers for Medicare & Medicaid Services (CMS) levies billions of dollars in overpayments a year against healthcare providers, based on the use of extrapolation audits. And while the use of extrapolation is well-established and well-accepted, its use in an audit is not an automatic, and depends upon the creation of a statistically valid and representative sample. Thousands of extrapolation audits are completed each year, and for many of these, the targeted provider or organization will appeal the use of extrapolation. In most cases, the appeal is focused on one or more flaws in the methodology used to create the sample and calculate the extrapolated overpayment estimate. For government audits, such as with UPICs, there is a specific appeal process, as outlined in their Medical Learning Network booklet, titled “Medicare Parts A & B Appeals Process.”
On Aug. 20, 2020, the U.S. Department of Health and Human Services Office of Inspector General (HHS OIG) released a report titled “Medicare Contractors Were Not Consistent in How They Reviewed Extrapolated Overpayments in the Provider Appeals Process.” This report opens with the following statement: “although MACs (Medicare Administrative Contractors) and QICs (Qualified Independent Contractors) generally reviewed appealed extrapolated overpayments in a manner that conforms with existing CMS requirements, CMS did not always provide sufficient guidance and oversight to ensure that these reviews were performed in a consistent manner.” These inconsistencies were associated with $42 million in extrapolated payments from fiscal years 2017 and 2018 that were overturned in favor of the provider. It’s important to note that at this point, we are only talking about appeal determinations at the first and second level, known as redetermination and reconsideration, respectively.
Redetermination is the first level of appeal, and is adjudicated by the MAC. And while the staff who review appeals at this level are not supposed to have been involved in the initial claim determination, I believe most would agree that this step is mostly a rubber stamp of approval for the extrapolation results. In fact, of the hundreds of post-audit extrapolation mitigation cases in which I have been the statistical expert, not a single one was ever overturned at redetermination.
The second level of appeal, reconsideration, is handled by a QIC. In theory, the QIC is supposed to independently review the administrative record, including the results of the redetermination. Continuing from the prior paragraph: to date, I have had only a handful of extrapolation appeals reversed at reconsideration, and all were reversed because the auditor failed to provide the practice with the requisite data, not because of any specific issues with the statistical methodology. In two of those cases, the QIC notified the auditor that if it supplied the required information, the QIC would reconsider its decision. In two other cases, the auditor contested the decision, and the reversal was itself reversed. Only the fifth case held without objection and was adjudicated in favor of the provider.
Maybe this is a good place to note that the entire process for conducting extrapolations in government audits is covered under Chapter 8 of the Medicare Program Integrity Manual (PIM). Altogether, there are only 12 pages within the entire Manual that actually deal with the statistical methodology behind sampling and extrapolation; this is certainly not enough to provide the degree of guidance required to ensure consistency among the different government contractors that perform such audits. And this is what the OIG report is talking about.
Back to the $42 million that was overturned at either redetermination or reconsideration: the OIG report found that this was due to a “type of simulation testing that was performed only by a subset of contractors.” The report goes on to say that “CMS did not intend that the contractors use this procedure, (so) these extrapolations should not have been overturned. Conversely, if CMS intended that contractors use this procedure, it is possible that other extrapolations should have been overturned but were not.” This was quite confusing for me at first, because this “simulation” testing was not well-defined, and also because it seemed to say that if this procedure was appropriate to use, then more contractors should have used it, which would have resulted in more reversals in favor of the provider.
Interestingly, CMS seems to have written itself an out in Chapter 8 of the PIM (section 8.4.1.1), which states that “[f]ailure by a contractor to follow one or more of the requirements contained herein does not necessarily affect the validity of the statistical sampling that was conducted or the projection of the overpayment.” The use of the term “does not necessarily” leaves wide open the fact that the failure by a contractor to follow one or more of the requirements may affect the validity of the statistical sample, which will affect the validity of the extrapolated overpayment estimate.
Regarding the simulation testing, the report stated that “one MAC performed this type of simulation testing for all extrapolation reviews, and two MACs recently changed their policies to include simulation testing for sample designs that are not well-supported by the program integrity contractor. In contrast, both QICs and three MACs did not perform simulation testing and had no plans to start using it in the future.” And even though it was referenced some 20 times, with the exception of an example given as Figure 2 on page 10, the report never did describe in any detail the type of simulation testing that went on. From the example, it was evident to me that the MACs and QICs involved were using what is known as a Monte Carlo simulation. In statistics, simulation is used to assess the performance of a method, typically when there is a lack of theoretical background. With simulations, the statistician knows and controls the truth. Simulation is used advantageously in a number of situations, including providing the empirical estimation of sampling distributions. Footnote 10 in the report stated that “reviewers used the specific simulation test referenced here to provide information about whether the lower limit for a given sampling design was likely to achieve the target confidence level.” If you are really interested in learning more about it, there is a great paper by Burton et al. (2006) called “The design of simulation studies in medical statistics.”
Its application in these types of audits is to “simulate” the audit many thousands of times to see if the mean audit results fall within the expected confidence interval range, thereby validating the audit results within what is known as the Central Limit Theorem (CLT).
Often, the sample sizes used in recoupment-type audits are too small, and this is usually due to a conflict between the sample size calculations and the distributions of the data. For example, in RAT-STATS, the statistical program maintained by the OIG, and a favorite of government auditors, sample size estimates are based on an assumption that the data are normally (or near normally) distributed. A normal distribution is defined by the mean and the standard deviation, and includes a bunch of characteristics that make sample size calculations relatively straightforward. But the truth is, because most auditors use the paid amount as the variable of interest, population data are rarely, if ever, normally distributed. Unfortunately, there is simply not enough room or time to get into the details of distributions, but suffice it to say that, because paid data are bounded on the left with zero (meaning that payments are never less than zero), paid data sets are almost always right-skewed. This means that the distribution tail continues on to the right for a very long distance.
In these types of skewed situations, the sample size normally has to be much larger in order to meet the CLT requirements. So, what one can do is simulate the random sample over and over again to see whether the sampling results ever end up reporting a normal distribution – and if not, it means that the results of that sample should not be used for extrapolation. And this seems to be what the OIG was talking about in this report. Basically, some of the appeals entities (MACs and QICs) did this type of simulation testing, and others did not. But for those that did perform the tests, the report stated that $41.5 million of the $42 million involved in the reversals of the extrapolations was due to the use of this simulation testing. The OIG seems to be saying this: if this was an unintended consequence, meaning that there wasn’t any guidance in place authorizing this type of testing, then it should not have been done, and those extrapolations should not have been overturned. But if it should have been done, meaning that there should have been some written guidance to authorize that type of testing, then it means that there are likely many other extrapolations that should have been reversed in favor of the provider. A sticky wicket, at best.
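A rough Python sketch of this kind of Monte Carlo check follows. The lognormal “paid amount” population is synthetic, chosen only because it is right-skewed and bounded at zero, as described above; this is my illustration of the general technique, not the contractors' actual test:

```python
import math
import random
import statistics

random.seed(42)  # fixed seed so the sketch is reproducible

# Synthetic right-skewed population of paid amounts (bounded below by zero).
population = [random.lognormvariate(3, 1.2) for _ in range(10_000)]
true_mean = statistics.mean(population)

def ci_covers_truth(sample, t_crit=1.699):
    """Does a two-sided 90% t-interval (df = 29) contain the true mean?"""
    m = statistics.mean(sample)
    se = statistics.stdev(sample) / math.sqrt(len(sample))
    return m - t_crit * se <= true_mean <= m + t_crit * se

# Simulate the 30-claim audit thousands of times and count how often the
# nominal 90% interval actually captures the true population mean.
trials = 2_000
hits = sum(ci_covers_truth(random.sample(population, 30)) for _ in range(trials))
print(f"empirical coverage with n=30: {hits / trials:.1%} (nominal: 90%)")
```

With a population this skewed, the empirical coverage of a nominal 90-percent interval at n = 30 typically falls short of 90 percent, which is precisely the kind of result that would call an extrapolation based on that sample design into question.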
Under the heading “Opportunity To Improve Contractor Understanding of Policy Updates,” the report also stated that “the MACs and QICs have interpreted these requirements differently. The MAC that previously used simulation testing to identify the coverage of the lower limit stated that it planned to continue to use that approach. Two MACs that previously did not perform simulation testing indicated that they would start using such testing if they had concerns about a program integrity contractor’s sample design. Two other MACs, which did not use simulation testing, did not plan to change their review procedures.” One QIC indicated that it would defer to the administrative QIC (AdQIC, the central manager for all Medicare fee-for-service claim case files appealed to the QIC) regarding any changes. But it ended this paragraph by stating that “AdQIC did not plan to change the QIC Manual in response to the updated PIM.”
With respect to this issue and this issue alone, the OIG submitted two specific recommendations, as follows:
- Provide additional guidance to MACs and QICs to ensure reasonable consistency in procedures used to review extrapolated overpayments during the first two levels of the Medicare Parts A and B appeals process; and
- Take steps to identify and resolve discrepancies in the procedures that MACs and QICs use to review extrapolations during the appeals process.
In the end, I am not encouraged that we will see any degree of consistency between and within the QIC and MAC appeals in the near future.
Basically, it would appear that the OIG, while having some oversight in the area of recommendations, doesn’t really have any teeth when it comes to enforcing change. I expect that while some reviewers may respond appropriately to the use of simulation testing, most will not, if it means a reversal of the extrapolated findings. In these cases, it is incumbent upon the provider to ensure that these issues are brought up during the Administrative Law Judge (ALJ) appeal.
Programming Note: Listen to Frank Cohen report this story live during the next edition of Monitor Mondays, 10 a.m. Eastern.
RAC Report: PET Scans, Helicopter Transportation, and Hospice, Oh My!
The RACs are on the attack! The “COVID Pause Button” on RAC audits has been lifted, and has been since August 2020. But never have I seen CMS spew out so many new RAC topics in one month of a new year. Happy 2021.
Recovery audit contractors (“RACs”) will soon be auditing positron emission tomography (PET) scans for initial treatment strategy in oncologic conditions for compliance with medical necessity and documentation requirements.
PET scans detect early signs of cancer, heart disease, and brain disorders. An injectable radioactive tracer highlights diseased cells, and a combination PET-CT scan produces 3D images for a more accurate diagnosis.
According to CMS’ RAC audit topics, “(PET) for Initial Treatment Strategy in Oncologic Conditions: Medical Necessity and Documentation Requirements” will be reviewed as of January 5, 2021. The PET scan audits will cover outpatient hospital and professional service reviews. CMS also added the following 2021 audit targets to the approved list:
- Air Ambulance: Medical Necessity and Documentation Requirements. This complex review will examine rotary-wing (helicopter) aircraft claims to determine whether air ambulance transport was reasonable and medically necessary, as well as whether documentation requirements have been met;
- Hospice Continuous Home Care: Medical Necessity and Documentation Requirements, and
- Ambulance Transport Subject to SNF Consolidated Billing.
Incoming HHS Secretary Xavier Becerra plans to get his new tenure underway quickly.
In False Claims Act (“FCA”) news, Medicare audits of P-Stim have ramped up across the country. A Spinal Clinic in Texas agreed to pay $330,898 to settle FCA allegations for allegedly billing Medicare improperly for electro-acupuncture device neurostimulators. CMS claims that “Medicare does not reimburse for acupuncture or for acupuncture devices such as P-Stim, nor does Medicare reimburse for P-Stim as a neurostimulator or as implantation of neurostimulator electrodes.”
Finally, is your staff getting medical records to consumers who request them quickly enough? The right of access to health records is yet another potential risk area for all providers, especially hospitals, due to their size. A hospital system agreed to pay $200,000 to settle potential violations of the HIPAA Privacy Rule’s right of access standard. This is the HHS Office for Civil Rights’ 14th settlement under its Right of Access Initiative. In the first complaint, a person alleged that she requested medical records in December 2017 and did not receive them until May 2018. In the second complaint, the person asked for an electronic copy of his records in September 2019, and they were not sent until February 2020.
Beware: slow document production can lead to penalties. And be on the lookout for the next RAC Report.
Remember, never accept the results of a Medicare or Medicaid audit; the amount is always too high. Believe me, after 21 years of legal practice, I have yet to agree with the findings of a Tentative Notice of Overpayment by any government-contracted auditor, whether it is PCG, NGS, the MACs, MCOs, or Program Integrity, in any of our 50 states. That is quite a statement about the general quality of auditors’ work. Remember Teambuilders? How did $12 million become $896.35? See blog.
1 CMS, “0200-Air Ambulance: Medical Necessity and Documentation Requirements,” proposed RAC topic, January 5, 2021, http://go.cms.gov/35Jx1co.
2 CMS, “0201-Hospice Continuous Home Care: Medical Necessity and Documentation Requirements,” proposed RAC topic, January 5, 2021, http://go.cms.gov/3oRUyiY.
3 CMS, “0202-Ambulance Transport Subject to SNF Consolidated Billing,” proposed RAC topic, January 5, 2021, http://go.cms.gov/2LOMEbw.
Medicaid Suspension Lifted Because No Evidence of Intent!
Happy 2021! I bring great news and good tidings. I’m fairly sure that everyone reading knows what a preliminary injunction is, and how important one can be for a health care provider falsely accused of credible allegations of fraud who needs to lift the mandatory suspension of reimbursements. Finally, over the holidays, a Judge found that an indication of intent is required for an accusation of credible allegations of fraud, unlike past cases in which a mere accusation resulted in suspension. 42 CFR §455.23 mandates that a health care provider’s reimbursements be suspended based on “credible allegations of fraud,” which is a low bar. My client, an oral surgeon, had a disgruntled-employee complaint and a baseless PCG audit alleging a $6k overpayment. A double threat.
For those who are not in the know: an injunction is an extraordinary legal tool that allows a judge to suspend whatever bad action the government, or one of its auditors, is taking.
You have to prove:
- Likelihood of success on the merits
- Irreparable harm
- Balance of equities
- Public interest
I would guesstimate that only 10-20% of requests for TROs and PIs are granted. Last week, we won for the oral surgeon, and everyone can learn from his success. This is how we won. Let me set the stage: we have an oral surgeon who underwent an infamous PCG audit resulting in an alleged $6k overpayment. PCG concurrently sent his data to program integrity, and one month later, without any notice, his reimbursements were suspended based on a “credible allegation of fraud.” Concurrently, he had a disgruntled employee threatening him.
Remember that the bar to demonstrate a “credible allegation of fraud” is amazingly low; all that is required is an “indicia of reliability.” An inaccurate PCG audit and a disgruntled employee, in this case, were the catalyst for the suspension of the oral surgeon’s Medicaid reimbursements. His practice was 80% Medicaid, so the suspension would cause irreparable harm to the practice.
We filed a TRO, PI, and Motion to Stay. The day before Christmas, we went to trial.
The Judge ruled that the Department cannot just blindly rely on an anonymous accusation. There has to be some sort of investigation. It is not OK to accept accusations at face value without any sort of independent fact-checking. The Judge created an additional burden for the Department in cases of accusations of fraud that is not present in the regulation. But it is logical and reasonable to expect the Department to explore the accusations. The Judge emphasized that fraud requires intent. He also pointed out that fraud is not defined in the regulations. He emphasized that billing errors are not intentional acts.
The Judge held that, “[i]n light of the large number of Medicaid beneficiaries treated by the Petitioner’s practice, the rarity of the physician’s skills, and the apparent demand for those services, the relatively small amount of money now or formally in controversy, the lack of evidence of actual fraud and the contrary indications, the high probability that good cause exists for not suspending Petitioner’s Medicaid payments, and the near certainty of irreparable harm to the Petitioner if the relief is not granted, a TRO should be granted.”
Even better, the Judge ordered that the surgeon did not have to put up a bond, which is normally required by law. By the stroke of the Judge’s pen, the surgeon could go back to work providing medically necessary services to Medicaid recipients. (It is rare, by the way, for an oral surgeon to accept Medicaid.) This is a success for health care providers. Accusations of fraud should require independent corroboration and evidence of intent.
KNICOLE EMANUEL TO HOST JANUARY WEBCAST ON PRFS AND RAC AUDITS
For healthcare providers looking to avoid any of the traps stemming from PRF (Provider Relief Funds) compliance, RACmonitor is inviting you to sign up for Knicole Emanuel’s upcoming webcast on January 21st, 2021. It is titled: COVID-19 Provider Relief Funds: How to Avoid Audits. You can visit RACmonitor to download the order form for the webcast and save yourself a spot.
If your facility accepted Provider Relief Funds (PRFs) as a consequence of the coronavirus pandemic, you need to be aware of the myriad of rules and regulations that are associated with this funding or else face penalties and takebacks. A word of caution: expect to be audited. In Medicare and Medicaid, regulatory audits are as certain as death and taxes. That is why your facility needs to arm itself with the knowledge of how to address documentation requests from the government, especially while the Public Health Emergency (PHE) is in effect.
This exclusive RACmonitor webcast, led by healthcare attorney Knicole Emanuel, discusses the PRF rules that providers must follow and how to prove that funds were appropriately used. There are strict regulations dictating why, how, and how much PRFs can be spent due to the catastrophic, financial impact of COVID-19. Register now to learn how to avoid penalties and takebacks related to PRFs.
- Rules and regulations relative to receiving and spending funds provided by the COVID-19 PRF
- Exceptions to COVID-19 PRF and relevant effective dates
- PRF documentation and reporting requirements
- The importance of the legal dates of PHE
- How to prove your facility’s use of funds is germane to COVID-19
Who Should Attend:
- RAC and appeals specialists
- RAC coordinators
- Compliance officers
- Directors and managers
About Knicole C. Emanuel, Esq.
Healthcare industry expert and Practus partner, Knicole Emanuel, is a regular contributor to the healthcare industry podcast, Monitor Mondays, by RACmonitor. For more than 20 years, Knicole Emanuel has maintained a health care litigation practice, concentrating on Medicare and Medicaid litigation, health care regulatory compliance, administrative law and regulatory law. Knicole has tried over 2,000 administrative cases in over 30 states and has appeared before multiple states’ medical boards.
She has successfully obtained federal injunctions in numerous states, allowing health care providers to remain in business despite state or federal allegations of health care fraud, aberrant billing, and data mining. A wealth of knowledge in her industry, Knicole frequently lectures across the country on health care law, including the impact of the Affordable Care Act and regulatory compliance for providers, including physicians, home health and hospice, dentists, chiropractors, hospitals, and durable medical equipment providers.