Blog Archives

A Study of Contractor Consistency in Reviewing Extrapolated Overpayments

By Frank Cohen, MPA, MBB – my colleague from RACMonitor. He wrote a great article and has permitted me to share it with you. See below.

CMS levies billions of dollars in overpayments a year against healthcare providers, based on the use of extrapolation audits.

The use of extrapolation in Medicare and private payer audits has been around for quite some time now. And lest you be of the opinion that extrapolation is not appropriate for claims-based audits, there are many, many court cases that have supported its use, both specifically and in general. Arguing that extrapolation should not have been used in a given audit, unless that argument is supported by specific statistical challenges, is mostly a waste of time. 

For background purposes, extrapolation, as it is used in statistics, is a “statistical technique aimed at inferring the unknown from the known. It attempts to predict future data by relying on historical data, such as estimating the size of a population a few years in the future on the basis of the current population size and its rate of growth,” according to a definition from Eurostat, the statistical office of the European Union. For our purposes, extrapolation is used to estimate what the actual overpayment amount is likely to be for a population of claims, based on auditing a smaller sample of that population. For example, say a Uniform Program Integrity Contractor (UPIC) pulls 30 claims from a medical practice from a population of 10,000 claims. The audit finds that 10 of those claims had some type of coding error, resulting in an overpayment of $500. To extrapolate this to the entire population of claims, one might take the average overpayment, which is the $500 divided by the 30 claims ($16.67 per claim), and multiply this by the total number of claims in the population. In this case, we would multiply the $16.67 per claim by 10,000 for an extrapolated overpayment estimate of $166,667.
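The arithmetic above can be sketched in a few lines of Python. This is only a toy illustration of the simple mean-per-unit projection described here, using the hypothetical figures from the example, not any real audit methodology:

```python
# Hypothetical UPIC audit figures from the example above.
population_size = 10_000    # total claims in the universe
sample_size = 30            # claims actually audited
sample_overpayment = 500.0  # total overpayment found in the sample ($)

# Point estimate: average overpayment per sampled claim,
# projected across the entire claims population.
mean_overpayment = sample_overpayment / sample_size
extrapolated = mean_overpayment * population_size

print(f"Mean overpayment per claim: ${mean_overpayment:,.2f}")
print(f"Extrapolated overpayment:   ${extrapolated:,.0f}")
```

Note that the 10 claims found in error do not enter this particular calculation directly; the projection here is driven by the total dollars in error across the whole sample.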

The big question that normally crops up around extrapolation is this: how accurate are the estimates? And the answer is (wait for it …), it depends. It depends on just how well the sample was created, meaning: was the sample size appropriate, were the units pulled properly from the population, was the sample truly random, and was it representative of the population? The last point is particularly important, because if the sample is not representative of the population (in other words, if the sample data does not look like the population data), then it is likely that the extrapolated estimate will be anything but accurate.

To account for this issue, referred to as “sampling error,” statisticians will calculate something called a confidence interval (CI), which is a range within which there is some acceptable amount of error. The higher the confidence level, the wider the interval. For example, in the hypothetical audit outlined above, maybe the 90-percent confidence interval for the average runs from $15 to $18, while the 95-percent confidence interval runs from $14 to $19. And if we were to calculate a 99-percent confidence interval, the range might be somewhere between $12 and $21. So, the greater the range, the more confident I feel that it captures the true average. Some express the confidence interval as a sense of true confidence, like “I am 90 percent confident the real average is somewhere between $15 and $18,” and while this is not necessarily wrong, per se, it does not communicate the real value of the CI. I have found that the best way to define it would be more like “if I were to pull 100 random samples of 30 claims, audit all of them, and compute an interval from each, about 90 of those intervals would contain the true average,” meaning that for roughly 1 sample out of 10, the interval would miss the true average entirely – falling either wholly below it or wholly above it. The main reason that auditors use this technique is to avoid challenges based on sampling error.
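One way to see how the interval widens as the confidence level rises is to compute a simple normal-approximation interval around the sample mean. The figures below are invented for illustration; real audit CIs would be built from the actual sample data, typically with a t-distribution rather than the normal approximation used in this sketch:

```python
from math import sqrt
from statistics import NormalDist

# Hypothetical sample statistics (invented for illustration).
n = 30        # sampled claims
xbar = 16.67  # sample mean overpayment per claim ($)
sd = 5.0      # sample standard deviation ($)

margins = {}
for conf in (0.90, 0.95, 0.99):
    z = NormalDist().inv_cdf((1 + conf) / 2)  # two-sided critical value
    margins[conf] = z * sd / sqrt(n)          # margin of error
    lo, hi = xbar - margins[conf], xbar + margins[conf]
    print(f"{conf:.0%} CI: ${lo:.2f} to ${hi:.2f}")
```

Running this shows exactly the pattern described in the text: each step up in confidence level produces a strictly wider interval around the same $16.67 estimate.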

To the crux of the issue, the Centers for Medicare & Medicaid Services (CMS) levies billions of dollars in overpayments a year against healthcare providers, based on the use of extrapolation audits. And while the use of extrapolation is well-established and well-accepted, its use in an audit is not an automatic, and depends upon the creation of a statistically valid and representative sample. Thousands of extrapolation audits are completed each year, and for many of these, the targeted provider or organization will appeal the use of extrapolation. In most cases, the appeal is focused on one or more flaws in the methodology used to create the sample and calculate the extrapolated overpayment estimate. For government audits, such as with UPICs, there is a specific appeal process, as outlined in their Medical Learning Network booklet, titled “Medicare Parts A & B Appeals Process.”

On Aug. 20, 2020, the U.S. Department of Health and Human Services Office of Inspector General (HHS OIG) released a report titled “Medicare Contractors Were Not Consistent in How They Reviewed Extrapolated Overpayments in the Provider Appeals Process.” This report opens with the following statement: “although MACs (Medicare Administrative Contractors) and QICs (Qualified Independent Contractors) generally reviewed appealed extrapolated overpayments in a manner that conforms with existing CMS requirements, CMS did not always provide sufficient guidance and oversight to ensure that these reviews were performed in a consistent manner.” These inconsistencies were associated with $42 million in extrapolated payments from fiscal years 2017 and 2018 that were overturned in favor of the provider. It’s important to note that at this point, we are only talking about appeal determinations at the first and second level, known as redetermination and reconsideration, respectively.

Redetermination is the first level of appeal, and is adjudicated by the MAC. And while the staff that review the appeals at this level are supposed to have not been involved in the initial claim determination, I believe that most would agree that this step is mostly a rubber stamp of approval for the extrapolation results. In fact, of the hundreds of post-audit extrapolation mitigation cases in which I have been the statistical expert, not a single one was ever overturned at redetermination.

The second level of appeal, reconsideration, is handled by a QIC. In theory, the QIC is supposed to independently review the administrative record, including the results of the redetermination. Continuing from the prior paragraph: to date, I have had only a handful of extrapolation appeals reversed at reconsideration, and all of them were due to the fact that the auditor failed to provide the practice with the requisite data, not to any specific issues with the statistical methodology. In two of those cases, the QIC notified the auditor that if the auditor got the required information to the QIC, it would reconsider its decision. And in two other cases, the auditor appealed the decision, and it was reversed again. Only the fifth case held without objection and was adjudicated in favor of the provider.

Maybe this is a good place to note that the entire process for conducting extrapolations in government audits is covered under Chapter 8 of the Medicare Program Integrity Manual (PIM). Altogether, there are only 12 pages within the entire Manual that actually deal with the statistical methodology behind sampling and extrapolation; this is certainly not enough to provide the degree of guidance required to ensure consistency among the different government contractors that perform such audits. And this is what the OIG report is talking about.

Back to the $42 million that was overturned at either redetermination or reconsideration: the OIG report found that this was due to a “type of simulation testing that was performed only by a subset of contractors.” The report goes on to say that “CMS did not intend that the contractors use this procedure, (so) these extrapolations should not have been overturned. Conversely, if CMS intended that contractors use this procedure, it is possible that other extrapolations should have been overturned but were not.” This was quite confusing for me at first, because this “simulation” testing was not well-defined, and also because it seemed to say that if this procedure was appropriate to use, then more contractors should have used it, which would have resulted in more reversals in favor of the provider.   

Interestingly, CMS seems to have written itself an out in Chapter 8, section 8.4.1.1 of the PIM, which states that “[f]ailure by a contractor to follow one or more of the requirements contained herein does not necessarily affect the validity of the statistical sampling that was conducted or the projection of the overpayment.” The use of the term “does not necessarily” leaves wide open the fact that the failure by a contractor to follow one or more of the requirements may affect the validity of the statistical sample, which will affect the validity of the extrapolated overpayment estimate. 

Regarding the simulation testing, the report stated that “one MAC performed this type of simulation testing for all extrapolation reviews, and two MACs recently changed their policies to include simulation testing for sample designs that are not well-supported by the program integrity contractor. In contrast, both QICs and three MACs did not perform simulation testing and had no plans to start using it in the future.” And even though it was referenced some 20 times, with the exception of an example given as Figure 2 on page 10, the report never described in any detail the type of simulation testing that went on. From the example, it was evident to me that the MACs and QICs involved were using what is known as a Monte Carlo simulation. In statistics, simulation is used to assess the performance of a method, typically when there is a lack of theoretical background. With simulations, the statistician knows and controls the truth. Simulation is used advantageously in a number of situations, including providing empirical estimates of sampling distributions. Footnote 10 in the report stated that “reviewers used the specific simulation test referenced here to provide information about whether the lower limit for a given sampling design was likely to achieve the target confidence level.” If you are really interested in learning more about it, there is a great paper called “The design of simulation studies in medical statistics” by Burton et al. (2006).

Its application in these types of audits is to “simulate” the audit many thousands of times to see whether the audit results fall within the expected confidence interval range as often as advertised, thereby testing whether the sample design satisfies the assumptions behind what is known as the Central Limit Theorem (CLT).

Often, the sample sizes used in recoupment-type audits are too small, and this is usually due to a conflict between the sample size calculations and the distributions of the data. For example, in RAT-STATS, the statistical program maintained by the OIG, and a favorite of government auditors, sample size estimates are based on an assumption that the data are normally (or near normally) distributed. A normal distribution is defined by the mean and the standard deviation, and includes a bunch of characteristics that make sample size calculations relatively straightforward. But the truth is, because most auditors use the paid amount as the variable of interest, population data are rarely, if ever, normally distributed. Unfortunately, there is simply not enough room or time to get into the details of distributions, but suffice it to say that, because paid data are bounded on the left at zero (meaning that payments are never less than zero), paid data sets are almost always right-skewed. This means that the distribution tail continues on to the right for a very long distance.

In these types of skewed situations, the sample size normally has to be much larger in order to meet the CLT requirements. So, what one can do is simulate the random sample over and over again to see whether the distribution of the sample results ever approximates a normal distribution – and if not, it means that the results of that sample should not be used for extrapolation. And this seems to be what the OIG was talking about in this report. Basically, it said that some but not all of the appeals entities (MACs and QICs) did this type of simulation testing, and others did not. But for those that did perform the tests, the report stated that $41.5 million of the $42 million involved in the reversals of the extrapolations was due to the use of this simulation testing. The OIG seems to be saying this: if this was an unintended consequence, meaning that there wasn’t any guidance in place authorizing this type of testing, then it should not have been done, and those extrapolations should not have been overturned. But if it should have been done, meaning that there should have been some written guidance to authorize that type of testing, then it means that there are likely many other extrapolations that should have been reversed in favor of the provider. A sticky wicket, at best.
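The simulation idea can be sketched concretely: draw many random samples from a right-skewed “paid amount” population, build a confidence interval from each, and count how often the interval actually contains the true population mean. If realized coverage falls well short of the target, the sample design is too small for data this skewed. This is only an illustrative sketch – lognormal toy data and normal-approximation intervals of my own choosing, not the contractors’ actual procedure:

```python
import random
from math import sqrt
from statistics import NormalDist, mean, stdev

random.seed(42)  # reproducible toy run

# Toy right-skewed "paid amount" population (lognormal), standing in
# for real claims data; all figures are invented for illustration.
population = [random.lognormvariate(4.0, 1.5) for _ in range(10_000)]
true_mean = mean(population)

n, target, trials = 30, 0.90, 2_000
z = NormalDist().inv_cdf((1 + target) / 2)  # two-sided critical value

hits = 0
for _ in range(trials):
    sample = random.sample(population, n)   # re-run the "audit"
    m, s = mean(sample), stdev(sample)
    margin = z * s / sqrt(n)
    if m - margin <= true_mean <= m + margin:
        hits += 1  # this interval captured the true mean

coverage = hits / trials
print(f"Target coverage: {target:.0%}, realized: {coverage:.1%}")
```

With data this skewed and a sample of only 30, the realized coverage comes in noticeably below the 90-percent target, which is precisely the kind of shortfall that would call the sample design, and hence the extrapolation, into question.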

Under the heading “Opportunity To Improve Contractor Understanding of Policy Updates,” the report also stated that “the MACs and QICs have interpreted these requirements differently. The MAC that previously used simulation testing to identify the coverage of the lower limit stated that it planned to continue to use that approach. Two MACs that previously did not perform simulation testing indicated that they would start using such testing if they had concerns about a program integrity contractor’s sample design. Two other MACs, which did not use simulation testing, did not plan to change their review procedures.” One QIC indicated that it would defer to the administrative QIC (AdQIC, the central manager for all Medicare fee-for-service claim case files appealed to the QIC) regarding any changes. But the report ended this paragraph by stating that “AdQIC did not plan to change the QIC Manual in response to the updated PIM.”

With respect to this issue and this issue alone, the OIG submitted two specific recommendations, as follows:

  • Provide additional guidance to MACs and QICs to ensure reasonable consistency in procedures used to review extrapolated overpayments during the first two levels of the Medicare Parts A and B appeals process; and
  • Take steps to identify and resolve discrepancies in the procedures that MACs and QICs use to review extrapolations during the appeals process.

In the end, I am not encouraged that we will see any degree of consistency between and within the QIC and MAC appeals in the near future.

Basically, it would appear that the OIG, while having some oversight in the area of recommendations, doesn’t really have any teeth when it comes to enforcing change. I expect that while some reviewers may respond appropriately to the use of simulation testing, most will not, if it means a reversal of the extrapolated findings. In these cases, it is incumbent upon the provider to ensure that these issues are brought up during the Administrative Law Judge (ALJ) appeal.

Programming Note: Listen to Frank Cohen report this story live during the next edition of Monitor Mondays, 10 a.m. Eastern.

Why Auditors Can’t be Unbiased

Last week on Monitor Mondays, Knicole Emanuel, Esq. reported on the case of Commonwealth v. Pediatric Specialist, PLLC, wherein the Recovery Audit Contractors’ (RACs’) experts were prohibited from testifying because they were paid on contingency. This means that the auditor (or the company for which they work) is paid some percentage of the overpayment findings it reports.

In this case, as in most nowadays, the overpayment estimate was based upon extrapolation, which means that the auditor extended the overpayment amount found in the sample to that of all claims within the universe from which the sample was drawn. I have written about this process before, but basically, it can turn a $1,500 overpayment on the sample into a $1.5 million overpayment demand.

The key to an effective extrapolation is that the statistical process is appropriate, proper, and accurate. In many audits, this is not the case, and so what happens is, if the provider believes that the extrapolation is not appropriate, they may choose to challenge the results in their appeal. Many times, this is when they will hire a statistician, like me, to review the statistical sampling and overpayment estimate (SSOE), including data and documentation to assist with the appeal. I have worked on hundreds of these post-audit extrapolation mitigation appeals over the years, and even though I am employed by the provider, I maintain a position as an independent fact-finder.  My reports are based on facts and figures, and my opinion is based on those findings. Period.

So, what is it that allows me to remain independent? To perform my job without undue influence or bias? Is it my incredibly high ethical standards? Check! My commitment to upholding the standards of my industry? Check!  Maybe my good looks? Well, not check! It is the fact that my fees are fixed, and are not contingent on the outcome. I mean, it would be great if I could do what the RACs do and cash in on the outcomes of a case, but alas, no such luck.

In one large class-action case in which I was the statistical expert, the defendant settled for $122 million. The law firm got something like a quarter or a third of that, and the class members all received some remuneration as well. Me? I got my hourly rate, and after the case was done, a bottle of Maker’s Mark whiskey as a thank you. And I’m not even sure that was appropriate, so I sent it back. I would love to be paid a percentage of what I am able to save a client in this type of appeal. I worked on a case a couple of years ago for which we were able to get the extrapolation thrown out, which reduced the payment demand from $5.9 million to $3,300. Imagine if I got paid even 2 percent of that; it would be nearly $120,000. But that can’t happen, because the moment my work product is tied to the results, I am no longer independent, nor unbiased. I don’t care how honest or ethical you are, contingency deals change the landscape – and that is as true for me, as an expert, as it is for the auditor.

In the pediatric case referenced above, the RAC that performed the audit is paid on a contingency, although I like to refer to it as a “bounty.” As such, the judge ruled, as Ms. Emanuel reported, that their experts could not testify on behalf of the RAC. Why not? Because the judge, unlike the RAC, is an independent arbiter, and having no skin in the game, is unbiased in their adjudication. But you can’t say that about the RAC. If they are being paid a “bounty” (something like 10 percent), then how in the world could they be considered independent and unbiased?

The short answer is, they can’t. And this isn’t just based on standards of statistical practice; it is steeped in common sense. Look at the appeal statistics; some 50 percent of all RAC findings are eventually reversed in favor of the provider. If that isn’t evidence of an overzealous, biased, bounty-hunting process, I don’t know what is. Basically, as Knicole reported, having their experts prohibited from testifying, the RAC was unable to contest the provider’s arguments, and the judge ruled in favor of the provider.

But, in my opinion, it should not stop here. This is one of those cases that exemplifies the “fruit of the poisonous tree” defense, meaning that if this case passes muster, then every other case for which the RAC did testify and the extrapolation held should be challenged and overturned. Heck, I wouldn’t be surprised if there was a class-action lawsuit filed on behalf of all of those affected by RAC extrapolated audits. And if there is one, I would love to be the statistical expert – but for a flat fee, of course, and not contingent upon the outcome.

And that’s the world according to Frank.

Frank Cohen is a frequent panelist with me on RACMonitor. I love his perspective on expert statistician witnesses. He drafted this piece based on a Monitor Mondays report of mine. Do not miss both Frank and me on RACMonitor, every Monday.

CMS Revises and Details Extrapolation Rules

Effective Jan. 2, 2019, the Centers for Medicare & Medicaid Services (CMS) radically changed its guidance on the use of extrapolation in audits by Recovery Audit Contractors (RACs), Medicare Administrative Contractors (MACs), Unified Program Integrity Contractors (UPICs), and the Supplemental Medical Review Contractor (SMRC).

Extrapolation is a veritable tsunami in Medicare/Medicaid audits. The auditor collects a small sample of claims to review for compliance, then determines the “error rate” of the sample. For example, if 500 claims are reviewed and one is found to be noncompliant for a total of $100, then the error rate is set at 20 percent. That error rate is applied to the universe, which is generally a three-year time period. It is assumed that the random sample is indicative of all your billings, regardless of whether you changed your billing system during that time period or maybe hired a different biller. In order to extrapolate an error rate, contractors must use a “statistically valid random sample” and then apply that error rate on a broader universe of claims, using “statistically valid methods.”

With extrapolated results, auditors allege millions of dollars of overpayments against healthcare providers – sometimes a sum of more than the provider even made during the relevant time period. It is an overwhelming impact that can put a provider and its company out of business.

Prior to this recent change to extrapolation procedure, the Program Integrity Manual (PIM) offered little guidance regarding the proper method for extrapolation.

Prior to 2019, CMS offered broad strokes with few details. Its guidance was limited to generally identifying the steps contractors should take: “a) selecting the provider or supplier; b) selecting the period to be reviewed; c) defining the universe, the sampling unit, and the sampling frame; d) designing the sampling plan and selecting the sample; e) reviewing each of the sampling units and determining if there was an overpayment or an underpayment; and, as applicable, f) estimating the overpayment.”

Well, Change Request 10067 overhauled extrapolation in a huge way.

The first modification to the extrapolation rules is that the PIM now dictates when extrapolation should be used.

Under the new guidance, a contractor “shall use statistical sampling when it has been determined that a sustained or high level of payment error exists. The use of statistical sampling may be used after a documented educational intervention has failed to correct the payment error.” This guidance now creates a three-tier structure:

  1. Extrapolation shall be used when a sustained or high level of payment error exists.
  2. Extrapolation may be used after documented educational intervention (such as in the Targeted Probe-and-Educate (TPE) program).
  3. It follows that extrapolation should not be used if there is not a sustained or high level of payment error or evidence that documented educational intervention has failed.

“High level of payment error” is defined as 50 percent or greater. The PIM also states that the contractor may review the provider’s past noncompliance for the same or similar billing issues or a historical pattern of noncompliant billing practice. This is critical because so many times providers simply pay the alleged overpayment amount if the amount is low or moderate in order to avoid costly litigation. Now, those past times that you simply paid the alleged amounts will be held against you.
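Taken together, the three-tier structure and the 50-percent threshold amount to a simple decision rule, which might be sketched like this. The function name and inputs are my own illustration, not anything that appears in the PIM, and the sketch covers only the numeric threshold and the education tier (the PIM’s “sustained” payment error condition is not defined numerically):

```python
def may_extrapolate(error_rate: float, education_failed: bool) -> bool:
    """Rough sketch of the revised PIM's three-tier test (illustrative only).

    error_rate: payment error rate found in the sample (0.0 to 1.0).
    education_failed: a documented educational intervention (e.g., TPE)
        has already failed to correct the payment error.
    """
    HIGH_ERROR = 0.50  # "high level of payment error" per the revised PIM
    if error_rate >= HIGH_ERROR:
        return True    # tier 1: extrapolation shall be used
    if education_failed:
        return True    # tier 2: extrapolation may be used
    return False       # tier 3: extrapolation should not be used

print(may_extrapolate(0.62, False))  # high error rate alone suffices
print(may_extrapolate(0.30, False))  # neither condition met
```

The practical takeaway for providers is the third branch: if your audited error rate is below 50 percent and no documented educational intervention has failed, the contractor should not be extrapolating at all.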

Another monumental modification to RAC audits is that the RAC auditor now must receive authorization from CMS to go forward in recovering from the provider if the alleged overpayment exceeds $500,000 or is an amount that is greater than 25 percent of the provider’s Medicare revenue received within the previous 12 months.

The identification of the claims universe was also redefined. Even CMS admitted in the change request that, on occasion, “the universe may include items that are not utilized in the construction of the sample frame. This can happen for a number of reasons, including, but not limited to: a) some claims/claim lines are discovered to have been subject to a prior review; b) the definitions of the sample unit necessitate eliminating some claims/claim lines; or c) some claims/claim lines are attributed to sample units for which there was no payment.”

How many of you have been involved in an alleged overpayment in which the auditor misplaced or lost documents? I know I have. The new rule also states that the auditors must be able to recreate the sample and maintain all documentation pertinent to the calculation of an alleged overpayment.

High-volume providers should face a lower risk of extrapolation if their audited error rate is less than 50 percent and they do not have a history of noncompliance for the same or similar billing issues, or a historical pattern of noncompliant billing practice.

CMS Revises and Details Extrapolation Rules: Part II

Biggest RACs Changes Are Here: Learn to Avoid Denied Claims

See Part I: Medicare Audits: Huge Overhaul on Extrapolation Rules

Part II continues to explain the nuances in the changes made by CMS to its statistical sampling methodology. Originally published on RACMonitor.

The Centers for Medicare & Medicaid Services (CMS) recently made significant changes in its statistical sampling methodology for overpayment estimation. Effective Jan. 2, 2019, CMS radically changed its guidance on the use of extrapolation in audits by Recovery Audit Contractors (RACs), Medicare Administrative Contractors (MACs), Unified Program Integrity Contractors (UPICs), and the Supplemental Medical Review Contractor (SMRC).

The RAC program was created through the Medicare Modernization Act of 2003 (MMA) to identify and recover improper Medicare payments paid to healthcare providers under fee-for-service (FFS) Medicare plans. The RAC auditors review a small sample of claims, usually 150, and determine an error rate. That error rate is attributed to the universe, which is normally three years, and extrapolated to that universe. Extrapolation is similar to political polling – a Gallup poll will ask the opinions of a tiny fraction of the U.S. population (a typical national poll surveys roughly 1,000 people), yet will extrapolate those opinions to the entire country.

First, I would like to address a listener’s question regarding the dollar amount’s factor in extrapolation cases. I recently wrote, “for example, if 500 claims are reviewed and one is found to be noncompliant for a total of $100, then the error rate is set at 20 percent.”

I need to explain that the math here is not “straight math.” The dollar amount of the alleged noncompliant claims factors into the extrapolation amount. If the dollar amount did not factor into the extrapolation, then a review of 500 claims with one noncompliant claim would yield an error rate of 0.2 percent. The fact that, in my hypothetical, the one claim’s dollar amount equals $100 changes the error rate from 0.2 percent to 20 percent, because the rate is measured against the dollars paid for the reviewed claims rather than against the claim count.
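The distinction can be made concrete: a count-based rate divides noncompliant claims by claims reviewed, while a dollar-weighted rate divides the dollars found in error by the dollars paid for the reviewed claims. The original hypothetical leaves the total paid amount implicit, so the $500 figure below is my own assumption, chosen only so the dollar-weighted rate works out to the 20 percent in the example:

```python
claims_reviewed = 500
noncompliant_claims = 1
noncompliant_dollars = 100.0  # overpayment found on the one bad claim
total_paid = 500.0            # ASSUMED total paid for the 500 claims;
                              # the original hypothetical leaves this implicit

# Two very different "error rates" from the same audit findings.
count_rate = noncompliant_claims / claims_reviewed   # claims in error / claims reviewed
dollar_rate = noncompliant_dollars / total_paid      # dollars in error / dollars paid

print(f"Count-based error rate:     {count_rate:.1%}")
print(f"Dollar-weighted error rate: {dollar_rate:.0%}")
```

The same single bad claim produces a 0.2 percent rate under one definition and a 20 percent rate under the other, which is why the dollar amounts matter so much in extrapolation cases.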

Secondly, the new rule includes provisions implementing the additional Medicare Advantage telehealth benefit added by the Bipartisan Budget Act of 2018. Prior to the new rule, plans were limited in the telehealth services they could include in their basic benefit packages, because they could only cover the telehealth services available under the FFS Medicare program. Under the new rule, telehealth becomes more prominent in basic services: it may now be included in the basic benefit package for any Part B benefit that the plan identifies as “clinically appropriate” to be furnished electronically by a remote physician or practitioner.

The pre-Jan. 2, 2019 approach to extrapolation employed by RACs was inconsistent, and often statistically invalid. This often resulted in drastically overstated overpayment findings that could bankrupt a physician practice. The method of extrapolation is often a major issue in appeals, and the new rules address many providers’ frustrations and complaints about the extrapolation process. This is not to say that the post-Jan. 2, 2019 extrapolation approach is perfect; far from it. But the more detailed guidance from CMS provides more ways to defend against an extrapolation if the RAC auditor veers from instruction.

Thirdly, hiring an expert is a key component in debunking an extrapolation. Your attorney should have a relationship with a statistical expert. Keep in mind the following factors when choosing an expert:

  • Price (more expensive is not always better, but expect the hourly rate to increase for trial testimony).
  • Intelligence (his/her CV should tout a prestigious educational background).
  • Report (even though he/she drafts a report, the report is not a substitute for testimony).
  • Clusters (watch out for a sample that has a significant number of higher-reimbursed claims. For example, if you generally use three CPT codes at an equal rate and the sample has an abnormal number of the higher-reimbursed claims, then you have an argument that the sample is an invalid representation of your claims.)
  • Sample (the sample must be random and must not contain claims that were not paid by Medicare).
  • Oral skills (can he/she make statistics understandable to the average person?)

Fourthly, the new revised rule redefines the universe. In the past, suppliers have argued that some of the claims (or claim lines) included in the universe were improperly used for purposes of extrapolation. However, the pre-Jan. 2, 2019 Medicare Manual provided little to no additional guidance regarding the inclusion or exclusion of claims when conducting the statistical analysis. By contrast, the revised Medicare Manual specifically states:

“The universe includes all claim lines that meet the selection criteria. The sampling frame is the listing of sample units, derived from the universe, from which the sample is selected. However, in some cases, the universe may include items that are not utilized in the construction of the sample frame. This can happen for a number of reasons, including but not limited to:

  • Some claims/claim lines are discovered to have been subject to a prior review;
  • The definitions of the sample unit necessitate eliminating some claims/claim lines; or
  • Some claims/claim lines are attributed to sample units for which there was no payment.”

By providing detailed criteria with which contractors should exclude certain claims from the universe or sample frame, the revised Medicare Manual will also provide suppliers another means to argue against the validity of the extrapolation.

Lastly, the revised rules explicitly instruct the auditors to retain an expert statistician when changes occur due to appeals and legal arguments.

As a challenge to an extrapolated overpayment determination works its way through the administrative appeals process, often, a certain number of claims may be reversed from the initial claim determination. When this happens, the statistical extrapolation must be revised, and the extrapolated overpayment amount must be adjusted. This requirement remains unchanged in the revised PIM; however, the Medicare contractors will now be required to consult with a statistical expert in reviewing the methodology and adjusting the extrapolated overpayment amount.
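The required adjustment is mechanically straightforward: when sampled claims are reversed on appeal, their overpayments come out of the sample findings and the projection is recomputed. A toy sketch of that step, using a simple mean-per-unit projection and invented figures (actual contractors must follow the methodology in the original sampling plan, reviewed with a statistical expert as the revised PIM now requires):

```python
# Invented figures for illustration only.
population_size = 10_000
sample_size = 30

# Overpayment found on each sampled claim at the initial determination
# (claims that passed review contribute $0 and are omitted here).
sample_findings = {101: 120.0, 207: 90.0, 345: 150.0, 412: 140.0}
reversed_on_appeal = {207, 412}  # sample units overturned on appeal

# Drop the reversed claims' overpayments, then re-project.
adjusted_total = sum(amount for claim, amount in sample_findings.items()
                     if claim not in reversed_on_appeal)
adjusted_mean = adjusted_total / sample_size
adjusted_demand = adjusted_mean * population_size

print(f"Adjusted sample overpayment:  ${adjusted_total:,.2f}")
print(f"Adjusted extrapolated demand: ${adjusted_demand:,.0f}")
```

Because every sampled dollar is multiplied across the whole universe, reversing even a couple of sampled claims can shave tens of thousands of dollars off the extrapolated demand, which is why providers should insist on the recalculation.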

Between my first article on extrapolation, “CMS Revises and Details Extrapolation Rules,” and this follow-up, you should have a decent understanding of the revised extrapolation rules that became effective Jan. 2, 2019. But my two articles are not exhaustive. Please see Change Request 10067 for the full and comprehensive revisions.

Medicare Audits: Huge Overhaul on Extrapolation Rules

Effective January 2, 2019, the Centers for Medicare & Medicaid Services (CMS) radically changed its guidance on the use of extrapolation in audits by recovery audit contractors (RACs), Medicare administrative contractors (MACs), Unified Program Integrity Contractors (UPICs), and the Supplemental Medical Review Contractor (SMRC).

Extrapolation is the tsunami in Medicare/caid audits. The auditor collects a small sample of claims to review for compliance, then determines the “error rate” of the sample. For example, if 50 claims are reviewed and 10 are found to be noncompliant, then the error rate is set at 20%. That error rate is applied to the universe, which generally spans a three-year period. It is assumed that the random sample is indicative of all your billings, regardless of whether you changed your billing system during the universe period or hired a different biller.
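As a sketch of that arithmetic, with hypothetical figures (none of these numbers come from a real audit):

```python
# Hypothetical illustration of how an extrapolated overpayment is computed.
sample_size = 50               # claims pulled for review
noncompliant = 10              # claims found to be in error
sample_overpayment = 750.00    # total dollars overpaid in the sample

error_rate = noncompliant / sample_size              # 0.20, i.e. 20%
avg_overpayment_per_claim = sample_overpayment / sample_size

universe_size = 10_000         # all claims in the (typically three-year) universe
extrapolated = avg_overpayment_per_claim * universe_size

print(f"Error rate: {error_rate:.0%}")                     # Error rate: 20%
print(f"Extrapolated overpayment: ${extrapolated:,.2f}")   # $150,000.00
```

A $750 finding on 50 claims thus becomes a $150,000 demand across the universe, which is why the validity of the sample matters so much.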

With extrapolated results, auditors allege millions of dollars of overpayments against health care providers…sometimes more than the provider even made during that time period. It is an overwhelming wave that many times drowns the provider and the company.

Prior to this recent change to extrapolation procedure, the Program Integrity Manual (PIM) offered little guidance on the proper method for extrapolation.

Well, Change Request 10067 overhauled extrapolation in a HUGE way.

The first modification to the extrapolation rules is that the PIM now dictates when extrapolation should be used.

Determining When a Statistical Sampling May Be Used. Under the new guidance, a contractor “shall use statistical sampling when it has been determined that a sustained or high level of payment error exists. The use of statistical sampling may be used after documented educational intervention has failed to correct the payment error.” This guidance now creates a three-tier structure:

  1. Extrapolation shall be used when a sustained or high level of payment error exists.
  2. Extrapolation may be used after documented educational intervention (such as in the Targeted Probe and Educate (TPE) program).
  3. It follows that extrapolation should not be used if there is not a sustained or high level of payment error or evidence that documented educational intervention has failed.

“High level of payment error” is defined as 50% or greater. The PIM also states that the contractor may review the provider’s past noncompliance for the same or similar billing issues, or a historical pattern of noncompliant billing practice. This is HUGE, because so many times providers simply pay the alleged overpayment if the amount is low or moderate, in order to avoid costly litigation. Now those past instances in which you simply paid the alleged amounts will be held against you.
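The three-tier structure above can be sketched as simple logic (this is my reading of the guidance, not any contractor’s actual code):

```python
# Sketch of the three-tier extrapolation rule in the revised PIM.
# "High level of payment error" is defined as 50% or greater.
def may_extrapolate(error_rate: float, education_failed: bool) -> bool:
    if error_rate >= 0.50:      # sustained or high level of payment error
        return True             # tier 1: extrapolation SHALL be used
    if education_failed:        # documented educational intervention failed
        return True             # tier 2: extrapolation MAY be used
    return False                # tier 3: extrapolation should not be used

print(may_extrapolate(0.60, False))   # True
print(may_extrapolate(0.20, True))    # True
print(may_extrapolate(0.20, False))   # False
```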

Another monumental modification to RAC audits is that the RAC auditor must receive authorization from CMS to go forward in recovering from the provider if the alleged overpayment exceeds $500,000 or is an amount that is greater than 25% of the provider’s Medicare revenue received within the previous 12 months.
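That dollar-and-percentage trigger can be expressed as a simple check (illustrative only; the function and parameter names are mine):

```python
# Sketch of the CMS-authorization trigger for RAC recoveries described above.
def needs_cms_authorization(alleged_overpayment: float,
                            medicare_revenue_last_12mo: float) -> bool:
    # Authorization is required if the alleged overpayment exceeds $500,000
    # OR exceeds 25% of the provider's Medicare revenue in the prior 12 months.
    return (alleged_overpayment > 500_000
            or alleged_overpayment > 0.25 * medicare_revenue_last_12mo)

print(needs_cms_authorization(600_000, 5_000_000))  # True: exceeds $500,000
print(needs_cms_authorization(300_000, 1_000_000))  # True: exceeds 25% of revenue
print(needs_cms_authorization(100_000, 1_000_000))  # False
```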

The identification of the claims universe was also re-defined. Even CMS admitted in the change request that, on occasion, “the universe may include items that are not utilized in the construction of the sample frame. This can happen for a number of reasons, including, but not limited to: (1) Some claims/claim lines are discovered to have been subject to a prior review, (2) The definitions of the sample unit necessitate eliminating some claims/claim lines, or (3) Some claims/claim lines are attributed to sample units for which there was no payment.”

There are many more changes to discuss, but I have been asked to appear on RACMonitor to present the details on February 19, 2019. So sign up to listen!!!

5th Circuit Finds Subject Matter Jurisdiction For Medicare and Medicaid Providers – Why Collards Matter

“I’d like some spaghetti, please, and a side of meatballs.” This sentence is illogical, because meatballs are integral to spaghetti and meatballs. If you order spaghetti-and-meatballs, you are ordering “spaghetti and meatballs.” Meatballs on the side is not a thing.

Juxtapose a healthcare provider defending itself from an alleged overpayment that, during the appeal process, undergoes a different penalty: the state or federal government begins to recoup future funds prior to any decision that the recoupment is authorized, legal, or warranted. When a completely new issue unrelated to the allegation of overpayment inserts itself into the mix, you have spaghetti and meatballs with a side of collard greens. Collard greens need to be appealed in a completely different manner than spaghetti and meatballs, especially when the collard greens could put the company out of business through premature and unwarranted recoupments without due process.

I have been arguing this for years, based not only on a 1976 Supreme Court case, but also on multiple state cases, successes I have had in federal and administrative courts, and, BTW, logic.

On March 27, 2018, I was confirmed again, when the Fifth Circuit Court of Appeals decided a landmark case for Medicare and Medicaid providers across the country. The case, Family Rehab., Inc. v. Azar, 2018 U.S. App. LEXIS 7668, involved a Medicare home health service provider that was assessed approximately $7.8 million in Medicare overpayments. Family Rehab, the plaintiff, derived 88% to 94% of its revenue from Medicare. The company had timely appealed the alleged overpayment and was at the third level of the five-level Medicare appeals process. See blog. But there is a 3- to 5-year backlog at the third level, and the government began to recoup the $7.8 million despite the ongoing appeal. If no action were taken, the company would be out of business well before any ALJ could rule on the merits of the case, i.e., whether the recoupment was warranted. How is that fair? The provider may not owe $7.8 million, but before an objective tribunal decides what is actually owed, if anything, the government goes ahead and takes the money, reaping the benefit of any interest accrued during the time it takes the provider to get a hearing.

The backlog for Medicare appeals at the ALJ level is unacceptably long. See blog and blog. However, the federal regulations prevent recoupment only during the first and second levels of appeal. This is absolutely asinine and should be changed, considering we do have a clause in the Constitution called “due process.” Purported criminals receive due process, but healthcare providers who accept Medicare or Medicaid, at times, do not.

At the third level of appeal, Family Rehab underwent recoupments even though it was still appealing the decision, which immediately stifled its income. Because of the premature recoupments, Family Rehab was at risk of losing everything: going bankrupt, firing its staff, and no longer providing medically necessary home health services for the elderly. This situation mimics one in which I represented a client in northern Indiana that was losing its Medicaid contract; there, I successfully obtained a preliminary injunction preventing the termination. See blog.

It is important to note that in this case the ZPIC had audited only 43 claims. Then it used a statistical method to extrapolate the alleged over-billings and concluded that the alleged overpayment was $7,885,803.23. I cannot tell you how many times I have disputed an extrapolation and won. See blog.

42 USC 1395ff(d)(1)(A) states that the ALJ shall conduct and conclude the hearing and render a decision no later than 90 days after a timely request. Yet the Fifth Circuit Court of Appeals found that an ALJ hearing would not be forthcoming within 90 days, or even 900 days; the judge noted in his decision that the Medicare appeal backlog for an ALJ hearing was 3 to 5 years. The District Court had held that it lacked subject matter jurisdiction because Family Rehab had not exhausted its administrative remedies. Family Rehab appealed.

On appeal, Family Rehab argued the same arguments that I have made in the past: (1) its procedural due process and ultra vires claims are collateral to the agency’s appellate process; and (2) going through the appellate process would mean no review at all because the provider would be out of business by the time it would be heard by an ALJ.

What does collateral mean? Collard greens are collateral. When you think collateral, think collards. Collard greens do not normally come with spaghetti and meatballs. A collateral issue is one that is entirely separate from the substantive agency decision and would not be decided through the administrative appeal process. In other words, even if Family Rehab pursued the $7.8 million overpayment issue through the administrative process, the issue of having money recouped, and the damage the recoupment was causing the company, would never be heard by the ALJ, because those “collateral” issues are outside the ALJ’s purview. The premature recoupment issue could not be remedied by an ALJ. The Fifth Circuit Court of Appeals agreed.

The collateral argument also applies to terminations of Medicare and Medicaid contracts without due process. In an analogous case (Affiliated Professional), the provider argued that the termination of its Medicare contract without due process violated its right to due process and the Equal Protection Clause and was successful.

The upshot is obvious: if the Court must examine the merits of the underlying dispute, delve into the statute and regulations, or make independent judgments as to the plaintiff’s eligibility under a statute, the claim is not collateral.

The importance of this case is that it verifies my contention that if a provider is undergoing a recoupment or termination without due process, there is relief for that provider – an injunction stopping the premature recoupments or termination until due process has been completed.

Medicare and Medicaid RAC Audits: How Auditors Get It Wrong

Here is an article that I wrote that was first published on RACMonitor on March 15, 2018:

All audits are questionable, contends the author, so appeal all audit results.

Providers ask me all the time – how will you legally prove that an alleged overpayment is erroneous? When I explain some examples of mistakes that Recovery Audit Contractors (RACs) and other health care auditors make, they ask, how do these auditors get it so wrong?

First, let’s debunk the notion that the government is always right. In my experience, the government is rarely right. Auditors are not always healthcare providers. Some have gone to college. Many have not. I googled the education criteria for a clinical compliance reviewer. The job application requires the clinical reviewer to “understand Medicare and Medicaid regulations,” but the education requirement was to have an RN. Another company required a college degree…in anything.

Let’s go over the most common mistakes auditors make that I have seen. I call them “oops, I did it again.” And I am not a fan of reruns.

  1. Using the Wrong Clinical Coverage Policy/Manual/Regulation

Before an on-site visit, auditors are given a checklist, which, theoretically, is based on the pertinent rules and regulations germane to the type of healthcare service being audited. The checklists are written by a government employee who most likely is not an attorney. There is no formal mechanism in place to compare the Medicare policies, rules, and manuals to the checklist. If the checklist is erroneous, then the audit results are erroneous. The Centers for Medicare & Medicaid Services (CMS) frequently revises final rules, changing requirements for certain healthcare services. State agencies amend small technicalities in the Medicaid policies constantly. These audit checklists are not updated every time CMS issues a new final rule or a state agency revises a clinical coverage policy.

For example, for hospital-based services, there is a different reimbursement rate depending on whether the patient is an inpatient or outpatient. Over the last few years there have been many modifications to the benchmarks for inpatient services. Another example is in behavioral outpatient therapy: while many states allow 32 unmanaged visits, others have decreased the number of unmanaged visits to 16, or, in some places, eight. Over and over, I have seen auditors apply the wrong policy or regulation. They apply the Medicare Manual from 2018 for dates of service performed in 2016, for example. In many cases, the more recent policies are more stringent than those of two or three years ago.

  2. A Flawed Sample Equals a Flawed Extrapolation

The second common blunder auditors often make is producing a flawed sample. Two common mishaps in creating a sample are: a) including non-government-paid claims in the sample and b) failing to pick the sample randomly. Either mistake can render a sample invalid, and therefore the extrapolation invalid. Auditors cast their metaphorical fishing nets wide in order to collect multiple types of services, and accidentally include dates of service for claims that were paid by third-party payors instead of Medicare/Medicaid. You’ve heard of the “fruit of the poisonous tree?” This makes the extrapolation the fruit of the poisonous sample. The same argument goes for samples that are not random, as required by the U.S. Department of Health and Human Services (HHS) Office of Inspector General (OIG). A nonrandom sample is not acceptable and would also render any extrapolation invalid.
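A minimal sketch of those two sampling requirements, with hypothetical claim data (the field names and seed are mine, purely for illustration):

```python
# Sketch of the two sampling checks described above: every sampled claim must
# be government-paid, and the sample must be drawn by a documented random method.
import random

claims = [
    {"id": 1, "payer": "Medicare"},
    {"id": 2, "payer": "Medicare"},
    {"id": 3, "payer": "BlueCross"},   # third-party claim: must be excluded
    {"id": 4, "payer": "Medicaid"},
]

# Build the sample frame from government-paid claims only...
frame = [c for c in claims if c["payer"] in ("Medicare", "Medicaid")]

# ...and draw the sample randomly, not by hand-picking "bad-looking" claims.
random.seed(42)                        # a recorded seed makes the draw replicable
sample = random.sample(frame, k=2)

assert all(c["payer"] in ("Medicare", "Medicaid") for c in sample)
print([c["id"] for c in sample])
```

If either check fails (a commercial claim slips into the frame, or the draw was not random), the sample, and any extrapolation built on it, is open to challenge.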

  3. A Simple Misunderstanding

A third common blooper found with RAC auditors is simple misunderstandings based on lack of communication between the auditor and provider. Say an auditor asks for a chart for date of service X. The provider gives the auditor the chart for date of service X, but what the auditor is really looking for is the physician’s order or prescription that was dated the day prior. The provider did not give the auditor the pertinent document because the auditor did not request it. These issues cause complications later, because inevitably, the auditor will argue that if the provider had the document all along, then why was the document not presented? Sometimes inaccurate accusations of fraud and fabrication are averred.

  4. The Erroneous Extrapolation

Auditors use a computer program called RAT-STATS to extrapolate the sample error rate across a universe of claims. There are many variables that can render an extrapolation invalid. The confidence level can be set too low: the OIG requires a 90 percent confidence level at 25 percent precision for the “point estimate.” The size and validity of the sample matter to the validity of the extrapolation. The RAT-STATS outcome must be reviewed by a statistician or a person with equal expertise. An appropriate statistical formula for variable sampling must be used. Any deviation from these directives and other mandates renders the extrapolation invalid. (This is not an exhaustive list of requirements for extrapolations.)
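To illustrate why the confidence requirement matters, here is a minimal sketch of a point estimate versus a one-sided 90 percent lower confidence limit. The numbers are hypothetical, and I use a normal approximation for brevity where RAT-STATS would apply the appropriate t-statistic:

```python
# Sketch: point estimate vs. lower confidence limit of an extrapolation.
# Hypothetical per-claim overpayment findings from a 10-claim sample.
from statistics import NormalDist, mean, stdev

overpayments = [0, 0, 0, 25.0, 40.0, 0, 15.0, 0, 60.0, 10.0]
n = len(overpayments)
universe_size = 10_000

point_estimate = mean(overpayments) * universe_size

z = NormalDist().inv_cdf(0.90)          # one-sided 90% confidence factor
std_err = stdev(overpayments) / n ** 0.5
lower_bound = (mean(overpayments) - z * std_err) * universe_size

print(f"Point estimate: ${point_estimate:,.2f}")
print(f"Lower confidence limit: ${lower_bound:,.2f}")
```

Note how much the lower limit falls below the point estimate when the sample is small and the per-claim findings are highly variable; this is exactly where a too-low confidence level or an invalid sample skews the demanded amount.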

  5. That Darn Purple Ink!

A fifth reason auditors get it wrong is nitpicky, nonsensical details, such as using purple ink instead of blue. Yes, this actually happened to one of my clients. Or the amount of time spent with the patient is not denoted in the medical record, even though the duration is either irrelevant or already defined by the CPT code. Electronic signatures, when printed, are sometimes left off, but the document was signed. A date on the service note is transposed. Because there is little communication between the auditor and the provider, mistakes happen.

The moral of the story — appeal all audit results.

Medicare Audits: DRG Downcoding in Hospitals: Algorithms Substituting for Medical Judgment, Part 1

This article is written by our good friend, Ed Roche. He is the founder of Barraclough NY, LLC, which is a litigation support firm that helps us fight against extrapolations.


The number of Medicare audits is increasing. In the last five years, audits have grown by 936 percent. As reported previously in RACmonitor, this increase is overwhelming the appeals system. Less than 3 percent of appeal decisions are being rendered on time, within the statutory framework.

It is peculiar that the number of audits has grown rapidly, but without a corresponding growth in the number of employees for Recovery Audit Contractors (RACs). How can this be? Have the RAC workers become more than 900 percent more efficient? Well, in a way, they have. They have learned to harness the power of big data.

Since 1986, the ability to store digital data has grown from 0.02 exabytes to 500 exabytes. An exabyte is one quintillion bytes. Every day, the equivalent of 30,000 Libraries of Congress is put into storage. That’s lots of data.

Auditing by RACs has morphed into using computerized techniques to pick targets for audits. An entire industry has emerged that specializes in processing Medicare claims data and finding “sweet spots” on which the RACs can focus their attention. In a recent audit, the provider was told that a “focused provider analysis report” had been obtained from a subcontractor. Based on that report, the auditor was able to target the provider.

A number of hospitals have been hit with a slew of diagnosis-related group (DRG) downgrades from internal hospital RAC teams camping out in their offices, continually combing through their claims data. The DRG system constitutes a framework that classifies any inpatient stay into groups for purposes of payment.

The question then becomes: how is this work done? How is so much data analyzed? Obviously, these audits are not being performed manually. They are cyber audits. But again, how?

An examination of patent data sheds light on the answer. For example, Optum, Inc. of Minnesota (associated with UnitedHealthcare) has applied for a patent on “computer-implemented systems and methods of healthcare claim analysis.” These are complex processes, but what they do is analyze claims based on DRGs.

The information system envisaged in this patent appears to be specifically designed to downgrade codes. It works by running a simulation that switches out billed codes with cheaper codes, then measures if the resulting code configuration is within the statistical range averaged from other claims.

If it is, then the DRG can be downcoded so that the revenue for the hospital is reduced correspondingly. This same algorithm can be applied to hundreds of thousands of claims in only minutes. And the same algorithm can be adjusted to work with different DRGs. This is only one of many patents in this area.
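Based purely on the description above, a toy version of such a code-swapping heuristic might look like this. The data, threshold, and function names are entirely hypothetical; this is my illustration of the general idea, not Optum’s actual method:

```python
# Toy sketch of the described downcoding heuristic: swap the billed code for a
# cheaper one, and flag the claim if the cheaper payment still falls within the
# statistical range of comparable claims.
from statistics import mean, stdev

# Payments observed for "comparable" claims (the peer average the algorithm uses).
peer_payments = [5200.0, 4800.0, 5100.0, 4900.0, 5000.0]
peer_mean, peer_sd = mean(peer_payments), stdev(peer_payments)

def flag_for_downcode(billed_payment: float, cheaper_payment: float) -> bool:
    # Propose the downgrade only if the cheaper code actually pays less AND its
    # payment sits within ~2 standard deviations of the peer average.
    return (cheaper_payment < billed_payment
            and abs(cheaper_payment - peer_mean) <= 2 * peer_sd)

print(flag_for_downcode(billed_payment=7500.0, cheaper_payment=5050.0))  # True
print(flag_for_downcode(billed_payment=7500.0, cheaper_payment=2000.0))  # False
```

Nothing in this logic examines the patient, the diagnosis, or the chart; it is pure statistics, which is precisely the author’s complaint in the paragraphs that follow.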

When this happens, the hospital may face many thousands of downgraded claims. If it doesn’t like it, then it must appeal.

Here there is a severe danger for any hospital: running the audit costs the RAC thousands of times less than what the hospital must spend to refute the DRG coding downgrade.

This is the nature of asymmetric warfare. In military terms, the cost of your enemy’s offense is always much smaller than the cost of your defense. That is why guerrilla warfare is successful against nation states. That is why the Soviet Union and United States decided to stop building anti-ballistic missile (ABM) systems — the cost of defense was disproportionately greater than the cost of offense.

Hospitals face the same problem. Their claims data files are a giant forest in which these big data algorithms can wander around downcoding and picking up substantial revenue streams.

By using artificial intelligence (advanced statistical) methods of reviewing Medicare claims, the RACs can bombard hospitals with so many DRG downgrades (or other claim rejections) that it quickly will overwhelm their defenses.

We should note that the use of these algorithms is not really an “audit.” It is a statistical analysis, but not done by any doctor or healthcare professional. The algorithm could just as well be counting how many bags of potato chips are sold with cans of beer.

If the patient is not an average patient, and the disease is not an average disease, and the treatment is not an average treatment, and if everything else is not “average,” then the algorithm will try to throw out the claim for the hospital to defend. This has everything to do with statistics and correlation of variables and very little to do with understanding whether the patient was treated properly.

And that is the essence of the problem with big data audits. They are not what they say they are, because they substitute mathematical algorithms for medical judgment.

EDITOR’S NOTE: In Part II of this series, Edward Roche will examine the changing appeals landscape and what big data will mean for defense against these audits. In Part III, he will look at future scenarios for the auditing industry and the corresponding public policy agenda that will involve lawmakers.