Blog Archives

The Ugly Truth about Medicare Provider Appeals

Extrapolated audits are the worst.

These audits undersample and over-extrapolate – almost to the point that some audits allege you owe more than you were ever paid. How is that fair in our judicial system? Our country was founded on “due process” – the guarantee that the government cannot deprive you of life, liberty, or property without notice and a hearing. If the government attempts to pursue your reimbursements at all, much less a greater amount than what you actually received, you are entitled to that notice and a hearing.

Not to mention that OIG issued a report back in 2020 that identified numerous mistakes in the extrapolations. The report stated: “CMS did not always provide sufficient guidance and oversight to ensure that these reviews were performed in a consistent manner.” I don’t know about you, but that is disconcerting to me. It also stated that “The test was associated with at least $42 million in extrapolated overpayments that were overturned in fiscal years 2017 and 2018. If CMS did not intend that the contractors use this procedure, these extrapolations should not have been overturned. Conversely, if CMS intended that contractors use this procedure, it is possible that other extrapolations should have been overturned but were not.”

I have defended hundreds of Medicare and Medicaid audits with extrapolations. You defend against these audits in two ways: 1) by hiring an expert statistician to debunk the extrapolation; and 2) by using the provider as an expert clinician to discredit the denials. However, I am always dismayed…maybe that’s not the right word…flabbergasted that no one ever shows up on the other side. It is as if CMS – via whatever contractor conducted the extrapolated audit – believes that its audit needs no one to prove its veracity. As if we attorneys and providers should just accept their findings as truth, while they get the benefit of NOT hiring a lawyer and NOT showing up to ALJ trials.

In the above picture, the side with the money is CMS. The empty side is the provider.

In normal trials, as you know, there are two opposing sides: a Plaintiff and a Defendant (in administrative law, a Petitioner and a Respondent). Medicaid provider appeals also have two opponents. However, in Medicare provider appeals, there is only one side: YOU. An ALJ will appear, but no auditor shows up to defend the merits of the alleged overpayment that you, as a provider, are accused of owing.

In normal trials, if a party fails to appear, the Judge will almost automatically rule against the non-appearing party. Why isn’t it the same for Medicare provider appeals? If a Medicare provider appears to dispute an alleged audit, the Judge does not rule automatically in favor of the provider. Quite the opposite, frankly. The CMS Rules, which apply to all venues under the purview of CMS – including the ALJ level and the Medicare Appeals Council level – are crafted against providers, it seems. Regardless, the Rules create a procedure in which providers, not the auditors, are forced to retain counsel, which costs money; retain a statistician in cases of extrapolations, which costs money; and go through years of appeals through 5 levels, to all of which the CMS Rules apply. Real law doesn’t apply until the district court level, which is a 6th level – and 8 years later.

Any providers reading this who retain lobbyists: this Medicare appeal process needs to change legislatively.

Always Challenge the Extrapolation in Medicare Provider Audits!

Always challenge the extrapolation! It is my personal opinion that extrapolation is used too loosely. What I mean is that sample sizes are usually too small to constitute a valid representation of the provider’s claims. Say a provider bills 10,000 claims. Is a sample of 50 adequate?

In a 2020 case, Palmetto audited .0051% of Palm Valley’s claims, and Palm Valley challenged CMS’ sample and extrapolation method. Palm Valley Health Care, Inc. v. Azar, No. 18-41067, 2020 BL 14097 (5th Cir., Jan. 15, 2020). As an aside, I had 2 back-to-back extrapolation cases recently. The provider, however, did not hire me until the ALJ level – the 3rd level of Medicare provider appeals. Unfortunately, no one argued at the first 2 levels that the extrapolation was faulty. We had 2 different ALJs, but both ruled that the provider could not raise a new argument – i.e., that the extrapolation was erroneous – at the 3rd level. They decided that all arguments should be raised from the beginning. This is just a reminder to: (a) raise all defenses immediately; and (b) not try the first two levels without an attorney.

Going back to Palm Valley.

The 5th Circuit held that while the statistical sampling methodology may not be the most precise methodology available, CMS’ selection methodology did represent a valid “complex balance of interests.” Principally, the court noted, quoting the Medicare Appeals Council, that CMS’ methodology was justified by the “real world constraints imposed by conflicting demands on limited public funds” and that Congress clearly envisioned extrapolation being applied to calculate overpayments in instances like this. I disagree with this result. I find it infuriating that auditors, like Palmetto, can scrutinize providers’ claims, yet circumvent similar accountability. They are being allowed to conduct a “hack” job at extrapolating to the financial detriment of the provider.

Interestingly, Palm Valley’s 5th Circuit decision was rendered in 2020, yet the dates of service of the claims Palmetto audited were July 2006 – January 2009. It just shows how long the legal battle can be in Medicare audits. Also, Palm Valley’s error rate was 53.7%. Remember, in 2019, CMS revised the extrapolation rules to permit extrapolation when the error rate is 50% or higher. If you want to read the extrapolation rules, you can find them in Chapter 8 of the Medicare Program Integrity Manual (“MPIM”).

On RACMonitor, health care attorney David Glaser mentioned that there is a difference between arguments and evidence. While you cannot admit new evidence at the ALJ level, you can make new arguments. He and I agreed, however, that even if you can legally dispute the extrapolation at the ALJ level, a statistician’s report would not be allowed as new evidence – and those reports are important to submit.

Lastly, 42 CFR 405.1014(a)(3) requires the provider to assert the reasons the provider disagrees with the extrapolation in the request for ALJ hearing.

CMS Rulings Are Not Law; Yet Followed By ALJs

Lack of medical necessity is one of the leading reasons for denials during RAC, MAC, TPE, and UPIC audits. However, case law dictates that the treating physician should be afforded deference in determining whether medical necessity exists, because the Medicare and/or Medicaid auditor never had the privilege of seeing the recipient.

However, recent ALJ decisions have gone against case law. How is that possible? CMS creates “Rules” – I say that in air quotes – that are not promulgated but are binding on anyone under CMS’ umbrella. Guess what? That includes the ALJs for Medicare appeals. As an example, the “treating physician” rule is law based on case law. Juxtapose that with CMS Ruling 93-1, which states that no presumptive weight should be assigned to a treating physician’s medical opinion in determining the medical necessity of inpatient hospital and skilled nursing facility services. The Ruling adds parenthetically that it does not “by omission or implication” endorse the application of the treating physician rule to services not addressed in the Ruling. So, we get a decision from an ALJ that dismisses the treating physician rule.

The ALJ decision actually said: “Accordingly, I find that the treating physician rule, standing alone, provides no basis for coverage.”

This ALJ went against the law but followed CMS Rulings.

CMS Rulings, however, are not binding. CMS Rulings aren’t even law. Yet the CMS Rulings, according to CMS, are binding on the entities that are under the CMS umbrella. This means that the Medicare appeals process – which includes redeterminations, reconsiderations, ALJ decisions, and Medicare Appeals Council decisions – is dictated by these non-law CMS Rulings, which fly in the face of actual law. ALJs uphold extrapolations based on CMS Rulings because they have to. But once you get to a federal district court judge, who is not bound by CMS’ non-law rulings, you get a real Judge’s decision, and most extrapolations are thrown out if the error rate is under 50%.

Basically, if you are a Medicare provider, you have to jump through the hoops of 4 levels of appeals that are dictated not by law, but by an administration that is rewarded for taking money from providers on the pretense of fraud, waste, and abuse (“FWA”). Most providers do not have the financial means to make it to the 5th level of appeal. So, CMS wins by default.

Folks, create a legal fund for your provider entity. You have got to appeal and be able to afford it. That is the only way that we can change the disproportionately unfair Medicare appeal process that providers must endure now.

CMS Rulings Can Devastate a Provider – But Should They?

If you could light a torch to a Molotov Cocktail and a bunch of newspapers, you could not make a bigger explosion in my head than a recent Decision from a Medicare administrative law judge (“ALJ”). The extrapolation was upheld, despite an expert statistician citing its shortcomings, based on a CMS Ruling, which is neither law nor precedent. The Decision reminded me of the new Firestarter movie because everything is up in flames. Drew Barrymore would be proud.

I find it very lazy of the government to rely on sampling and extrapolations, especially in light that no witness testifies to its accuracy.

Because this ALJ relied so heavily on CMS Rulings, I wanted to do a little detective work as to whether CMS Rulings are binding or even law. First, I logged onto Westlaw to search for “CMS Ruling” in any case in any jurisdiction in America. Nothing. Not one case ever mentioned “CMS Ruling.” Ever. (Nor did my law school).

What Is a CMS Ruling?

CMS Rulings are defined as “decisions of the Administrator that serve as precedent final opinions and orders and statements of policy and interpretation. They provide clarification and interpretation of complex or ambiguous provisions of the law or regulations relating to Medicare, Medicaid, Utilization and Quality Control Peer Review, private health insurance, and related matters.”

But Are CMS Rulings Law?

No. CMS Rulings are not law. CMS Rulings are not binding on district court judges because district court judges are not part of HHS or CMS. However, the Medicare ALJs are considered part of HHS and CMS; thus the CMS Rulings are binding on Medicare ALJs.

This creates a dichotomy between the “real law” and agency rules. When you read CMS Ruling 86-1, it reads as if there were two parties with opposing views who both presented their arguments, after which the Administrator made a ruling. But the Administrator is not a Judge, even though the Ruling reads like a court case. CMS Rulings are not binding on:

  1. The Supreme Court
  2. Appellate Courts
  3. The real world outside of CMS
  4. District Courts
  5. The Department of Transportation
  6. Civil Jurisprudence
  7. The Department of Education
  8. Etc. – You get the point.

So why are Medicare providers held subject to penalties based on CMS Rulings, when, after the providers appeal their case to district court, the “rule” that was used against them (saying they owe $7 million) is rendered moot? Can we say: not fair, not equitable, not Constitutional, and flying in the face of due process?

The future does not look bright for providers going forward in defending overzealous, erroneous, and misplaced audits. These audits aren’t even backed up by witnesses – seriously, at the ALJ Medicare appeals, there is no statistician testifying to verify the results. Yet some of the ALJs are still upholding these audits.

In the “court case,” which resulted in CMS Ruling 86-1, the provider argued that:

  1. There is no legal authority in the Medicare statute or regulations for HCFA or its intermediaries to determine overpayments by projecting the findings of a sample of specific claims onto a universe of unspecified beneficiaries and claims.
  2. Section 1879 of the Social Security Act, 42 U.S.C. 1395pp, contemplates that medical necessity and custodial care coverage determinations will be made only by means of a case-by-case review.
  3. When sampling is used, providers are not able to bill individual beneficiaries not in the sample group for the services determined to be noncovered.
  4. Use of a sampling procedure violates the rights of providers to appeal adverse determinations.
  5. The use of sampling and extrapolation to determine overpayments deprives the provider of due process.

CMS Ruling 86-1 was decided by Henry R. Desmarais, Acting Administrator of the Health Care Financing Administration, in 1986.

Think it should be upheld?

Post-COVID (ish) RAC Audits – Temporary Restrictions

2020 was an odd year for recovery audit contractor (“RAC”) and Medicare Administrative Contractor (“MAC”) audits. Well, it was an odd year for everyone. After trying five virtual trials, each with up to 23 witnesses, it seems that, slowly but surely, we are getting back to normalcy. A tell-tale sign of fresh normalcy is an in-person defense of health care regulatory audits. I am defending a RAC audit of a pediatric facility in Georgia in a couple of weeks, and the clerk of court said, “The hearing is in person.” Well, that’s new. Even when we specifically requested a virtual trial, we were denied, with the explanation that GA is open now. Virtual trials are cheaper and more convenient; clients don’t have to pay for hotels and airfare.

In-person hearings are back – at least in most states. We have similar players and new restrictions.

On March 16, 2021, CMS announced that it will temporarily restrict audits to claims with dates of service (“DOS”) of March 1, 2020, and before. Medicare auditors are not yet dipping their metaphoric toes into the shark-infested waters of auditing claims with DOS from March 1, 2020, to today. This leaves a year-and-a-half time period untouched. Once the temporary hold is lifted, audits of 2020 DOS will abound. On March 26, 2021, CMS awarded Performant Recovery, Inc., the incumbent, the new RAC Region 1 contract.

RACs review claims on a post-payment and/or pre-payment basis. (FYI – you would rather have a post-payment review than a pre-payment review – I promise.)

The RACs were created to detect fraud, waste, and abuse (“FWA”) by reviewing medical records. Any health care provider – no matter how big or small – is subject to audits at the whim of the government. CMS, RACs, MCOs, MACs, TPEs, UPICs, and every other auditing company can implement actions that will prevent future improper payments, as well. As we all know, RACs are paid on a contingency basis – approximately 13%. When the RACs were first created, they were compensated based on accusations of overpayments, not the amounts that were truly owed after an independent tribunal. As any human could surmise, the contingency payment creates an overzealousness that can only be demonstrated by my favorite case in my 21 years – in New Mexico against Public Consulting Group (“PCG”). A behavioral health care (“BH”) provider was accused of an over-$12 million overpayment. After we presented before the administrative law judge (“ALJ”) in NM Administrative Court, the ALJ determined that we owed $896.35. The 99.99% reduction was because of the following:

  1. Faulty Extrapolation: NM HSD’s contractor, PCG, reviewed approximately 150 claims out of 15,000 claims between 2009 and 2013. The error rate was alleged to be as high as 92%; the base overpayment equaled $9,812.08, yet the extrapolated amount exceeded $12 million. Our expert statistician rebutted the inflated error rate. Once the extrapolation is thrown out, you are dealing with a much more reasonable amount – only about $9k.
  2. Attack the Clinical Denials: The underlying, alleged overpayment of $9,812.08 was based on 150 claims. We walked through the 150 claims that PCG claimed were denials and proved PCG wrong. Examples of their errors include denials based on lack of staff credentialing when, in reality, the auditor simply could not read the signature. Other claims were erroneously denied based on the application of the wrong policy year.

The upshot is that we convinced the judge that PCG was wrong in almost every denial PCG made. In the end, the Judge found we owed $896.35, not $12 million. Little bit of a difference! We appealed.

A Study of Contractor Consistency in Reviewing Extrapolated Overpayments

By Frank Cohen, MPA, MBB – my colleague from RACMonitor. He wrote a great article and has permitted me to share it with you. See below.

CMS levies billions of dollars in overpayments a year against healthcare providers, based on the use of extrapolation audits.

The use of extrapolation in Medicare and private payer audits has been around for quite some time now. And lest you be of the opinion that extrapolation is not appropriate for claims-based audits, there are many, many court cases that have supported its use, both specifically and in general. Arguing that extrapolation should not have been used in a given audit, unless that argument is supported by specific statistical challenges, is mostly a waste of time. 

For background purposes, extrapolation, as it is used in statistics, is a “statistical technique aimed at inferring the unknown from the known. It attempts to predict future data by relying on historical data, such as estimating the size of a population a few years in the future on the basis of the current population size and its rate of growth,” according to a definition created by Eurostat, a component of the European Union. For our purposes, extrapolation is used to estimate what the actual overpayment amount might likely be for a population of claims, based on auditing a smaller sample of that population. For example, say a Uniform Program Integrity Contractor (UPIC) pulls 30 claims from a medical practice from a population of 10,000 claims. The audit finds that 10 of those claims had some type of coding error, resulting in an overpayment of $500. To extrapolate this to the entire population of claims, one might take the average overpayment, which is the $500 divided by the 30 claims ($16.67 per claim) and multiply this by the total number of claims in the population. In this case, we would multiply the $16.67 per claim by 10,000 for an extrapolated overpayment estimate of $166,667. 
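The arithmetic in that hypothetical is easy to sketch. The numbers below are the illustrative figures from the paragraph above (30 sampled claims, $500 in findings, 10,000 claims in the universe), not from any real audit:

```python
# Illustrative figures from the UPIC hypothetical above -- not a real audit.
sample_size = 30             # claims actually reviewed
sample_overpayment = 500.00  # dollars of overpayment found in those 30 claims
population_size = 10_000     # total claims in the universe

# Point estimate: mean overpayment per sampled claim, projected to the universe.
mean_per_claim = sample_overpayment / sample_size  # about $16.67
extrapolated = mean_per_claim * population_size

print(f"Mean overpayment per claim: ${mean_per_claim:,.2f}")
print(f"Extrapolated overpayment:   ${extrapolated:,.2f}")  # about $166,666.67
```

Note that only 10 of the 30 claims had errors; the point estimate spreads the $500 across all 30 sampled claims, errors or not, before projecting it onto the universe.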

The big question that normally crops up around extrapolation is this: how accurate are the estimates? And the answer is (wait for it …), it depends. It depends on just how well the sample was created, meaning: was the sample size appropriate, were the units pulled properly from the population, was the sample truly random, and was it representative of the population? The last point is particularly important, because if the sample is not representative of the population (in other words, if the sample data does not look like the population data), then it is likely that the extrapolated estimate will be anything but accurate.

To account for this issue, referred to as “sample error,” statisticians will calculate something called a confidence interval (CI), which is a range within which there is some acceptable amount of error. The higher the confidence value, the larger the potential range of error. For example, in the hypothetical audit outlined above, maybe the real average for a 90-percent confidence interval is somewhere between $15 and $18, while, for a 95-percent confidence interval, the true average is somewhere between $14 and $19. And if we were to calculate for a 99-percent confidence interval, the range might be somewhere between $12 and $21. So, the greater the range, the more confident I feel about my average estimate. Some express the confidence interval as a sense of true confidence, like “I am 90 percent confident the real average is somewhere between $15 and $18,” and while this is not necessarily wrong, per se, it does not communicate the real value of the CI. I have found that the best way to define it would be more like “if I were to pull 100 random samples of 30 claims and audit all of them, 90 percent would have a true average of somewhere between $15 and $18,” meaning that the true average for some 1 out of 10 would fall outside of that range – either below the lower boundary or above the upper boundary. The main reason that auditors use this technique is to avoid challenges based on sample error.
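To make the widening intervals concrete, here is a minimal sketch using made-up per-claim overpayments and a plain normal approximation (real audits use tools like RAT-STATS and more careful methodology; all dollar figures here are invented for illustration):

```python
import math
import random
import statistics

random.seed(7)

# Made-up per-claim overpayments for a 30-claim sample, centered near the
# $16.67 average from the running example (illustrative only).
sample = [max(0.0, random.gauss(16.67, 5.0)) for _ in range(30)]

mean = statistics.mean(sample)
sem = statistics.stdev(sample) / math.sqrt(len(sample))  # standard error of the mean

# Two-sided z-values under a normal approximation.
for conf, z in [(90, 1.645), (95, 1.960), (99, 2.576)]:
    lo, hi = mean - z * sem, mean + z * sem
    print(f"{conf}% CI for the mean overpayment: ${lo:.2f} to ${hi:.2f}")
```

The interval widens as the confidence level rises, mirroring the $15–$18, $14–$19, and $12–$21 ranges in the paragraph above.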

To the crux of the issue, the Centers for Medicare & Medicaid Services (CMS) levies billions of dollars in overpayments a year against healthcare providers, based on the use of extrapolation audits. And while the use of extrapolation is well-established and well-accepted, its use in an audit is not an automatic, and depends upon the creation of a statistically valid and representative sample. Thousands of extrapolation audits are completed each year, and for many of these, the targeted provider or organization will appeal the use of extrapolation. In most cases, the appeal is focused on one or more flaws in the methodology used to create the sample and calculate the extrapolated overpayment estimate. For government audits, such as with UPICs, there is a specific appeal process, as outlined in their Medical Learning Network booklet, titled “Medicare Parts A & B Appeals Process.”

On Aug. 20, 2020, the U.S. Department of Health and Human Services Office of Inspector General (HHS OIG) released a report titled “Medicare Contractors Were Not Consistent in How They Reviewed Extrapolated Overpayments in the Provider Appeals Process.” This report opens with the following statement: “although MACs (Medicare Administrative Contractors) and QICs (Qualified Independent Contractors) generally reviewed appealed extrapolated overpayments in a manner that conforms with existing CMS requirements, CMS did not always provide sufficient guidance and oversight to ensure that these reviews were performed in a consistent manner.” These inconsistencies were associated with $42 million in extrapolated payments from fiscal years 2017 and 2018 that were overturned in favor of the provider. It’s important to note that at this point, we are only talking about appeal determinations at the first and second level, known as redetermination and reconsideration, respectively.

Redetermination is the first level of appeal, and is adjudicated by the MAC. And while the staff that review the appeals at this level are supposed to have not been involved in the initial claim determination, I believe that most would agree that this step is mostly a rubber stamp of approval for the extrapolation results. In fact, of the hundreds of post-audit extrapolation mitigation cases in which I have been the statistical expert, not a single one was ever overturned at redetermination.

The second level of appeal, reconsideration, is handled by a QIC. In theory, the QIC is supposed to independently review the administrative records, including the appeal results of redetermination. Continuing with the prior paragraph, I have to date had only several extrapolation appeals reversed at reconsideration; however, all were due to the fact that the auditor failed to provide the practice with the requisite data, and not due to any specific issues with the statistical methodology. In two of those cases, the QIC notified the auditor that if they were to get the required information to them, they would reconsider their decision. And in two other cases, the auditor appealed the decision, and it was reversed again. Only the fifth case held without objection and was adjudicated in favor of the provider.

Maybe this is a good place to note that the entire process for conducting extrapolations in government audits is covered under Chapter 8 of the Medicare Program Integrity Manual (PIM). Altogether, there are only 12 pages within the entire Manual that actually deal with the statistical methodology behind sampling and extrapolation; this is certainly not enough to provide the degree of guidance required to ensure consistency among the different government contractors that perform such audits. And this is what the OIG report is talking about.

Back to the $42 million that was overturned at either redetermination or reconsideration: the OIG report found that this was due to a “type of simulation testing that was performed only by a subset of contractors.” The report goes on to say that “CMS did not intend that the contractors use this procedure, (so) these extrapolations should not have been overturned. Conversely, if CMS intended that contractors use this procedure, it is possible that other extrapolations should have been overturned but were not.” This was quite confusing for me at first, because this “simulation” testing was not well-defined, and also because it seemed to say that if this procedure was appropriate to use, then more contractors should have used it, which would have resulted in more reversals in favor of the provider.   

Interestingly, CMS seems to have written itself an out in Chapter 8, section 8.4.1.1 of the PIM, which states that “[f]ailure by a contractor to follow one or more of the requirements contained herein does not necessarily affect the validity of the statistical sampling that was conducted or the projection of the overpayment.” The use of the term “does not necessarily” leaves wide open the fact that the failure by a contractor to follow one or more of the requirements may affect the validity of the statistical sample, which will affect the validity of the extrapolated overpayment estimate. 

Regarding the simulation testing, the report stated that “one MAC performed this type of simulation testing for all extrapolation reviews, and two MACs recently changed their policies to include simulation testing for sample designs that are not well-supported by the program integrity contractor. In contrast, both QICs and three MACs did not perform simulation testing and had no plans to start using it in the future.” And even though it was referenced some 20 times, with the exception of an example given as Figure 2 on page 10, the report never did describe in any detail the type of simulation testing that went on. From the example, it was evident to me that the MACs and QICs involved were using what is known as a Monte Carlo simulation. In statistics, simulation is used to assess the performance of a method, typically when there is a lack of theoretical background. With simulations, the statistician knows and controls the truth. Simulation is used advantageously in a number of situations, including providing the empirical estimation of sampling distributions. Footnote 10 in the report stated that “reviewers used the specific simulation test referenced here to provide information about whether the lower limit for a given sampling design was likely to achieve the target confidence level.” If you are really interested in learning more about it, there is a great paper called “The design of simulation studies in medical statistics” by Burton et al. (2006).

Its application in these types of audits is to “simulate” the audit many thousands of times to see if the mean audit results fall within the expected confidence interval range, thereby validating the audit results within what is known as the Central Limit Theorem (CLT).

Often, the sample sizes used in recoupment-type audits are too small, and this is usually due to a conflict between the sample size calculations and the distributions of the data. For example, in RAT-STATS, the statistical program maintained by the OIG, and a favorite of government auditors, sample size estimates are based on an assumption that the data are normally (or near normally) distributed. A normal distribution is defined by the mean and the standard deviation, and includes a bunch of characteristics that make sample size calculations relatively straightforward. But the truth is, because most auditors use the paid amount as the variable of interest, population data are rarely, if ever, normally distributed. Unfortunately, there is simply not enough room or time to get into the details of distributions, but suffice it to say that, because paid data are bounded on the left with zero (meaning that payments are never less than zero), paid data sets are almost always right-skewed. This means that the distribution tail continues on to the right for a very long distance.  

In these types of skewed situations, sample size normally has to be much larger in order to meet the CLT requirements. So, what one can do is simulate the random sample over and over again to see whether the sampling results ever end up reporting a normal distribution – and if not, it means that the results of that sample should not be used for extrapolation. And this seems to be what the OIG was talking about in this report. Basically, they said that some but not all of the appeals entities (MACs and QICs) did this type of simulation testing, and others did not. But for those that did perform the tests, the report stated that $41.5 million of the $42 million involved in the reversals of the extrapolations were due to the use of this simulation testing. The OIG seems to be saying this: if this was an unintended consequence, meaning that there wasn’t any guidance in place authorizing this type of testing, then it should not have been done, and those extrapolations should not have been overturned. But if it should have been done, meaning that there should have been some written guidance to authorize that type of testing, then it means that there are likely many other extrapolations that should have been reversed in favor of the provider. A sticky wicket, at best.
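The report does not spell out the contractors’ procedure, but a simulation test of the kind described might look roughly like this (my reconstruction, with an invented right-skewed “paid amount” population): draw many random samples, and measure how often the nominal 90% confidence interval actually covers the true population mean.

```python
import random
import statistics

random.seed(0)

# Invented right-skewed "paid amount" population (lognormal), standing in
# for the skewed claims data described above -- not real claims data.
population = [random.lognormvariate(3.0, 1.2) for _ in range(10_000)]
true_mean = statistics.mean(population)

n, trials, z = 30, 2_000, 1.645  # small sample; z for a two-sided 90% CI
covered = 0
for _ in range(trials):
    sample = random.sample(population, n)
    m = statistics.mean(sample)
    sem = statistics.stdev(sample) / n ** 0.5
    if m - z * sem <= true_mean <= m + z * sem:
        covered += 1

coverage = covered / trials
print(f"Empirical coverage of the nominal 90% CI: {coverage:.1%}")
# With heavily skewed data and n = 30, coverage typically falls short of the
# nominal 90% -- the kind of shortfall simulation testing is meant to expose.
```

When the empirical coverage falls materially below the nominal level, the sample was too small (or too unrepresentative) for the CLT to rescue the extrapolation – which is precisely the argument a provider’s statistician wants to make.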

Under the heading “Opportunity To Improve Contractor Understanding of Policy Updates,” the report also stated that “the MACs and QICs have interpreted these requirements differently. The MAC that previously used simulation testing to identify the coverage of the lower limit stated that it planned to continue to use that approach. Two MACs that previously did not perform simulation testing indicated that they would start using such testing if they had concerns about a program integrity contractor’s sample design. Two other MACs, which did not use simulation testing, did not plan to change their review procedures.” One QIC indicated that it would defer to the administrative QIC (AdQIC, the central manager for all Medicare fee-for-service claim case files appealed to the QIC) regarding any changes. But it ended this paragraph by stating that “AdQIC did not plan to change the QIC Manual in response to the updated PIM.”

With respect to this issue and this issue alone, the OIG submitted two specific recommendations, as follows:

  • Provide additional guidance to MACs and QICs to ensure reasonable consistency in procedures used to review extrapolated overpayments during the first two levels of the Medicare Parts A and B appeals process; and
  • Take steps to identify and resolve discrepancies in the procedures that MACs and QICs use to review extrapolations during the appeals process.

In the end, I am not encouraged that we will see any degree of consistency between and within the QIC and MAC appeals in the near future.

Basically, it would appear that the OIG, while having some oversight in the area of recommendations, doesn’t really have any teeth when it comes to enforcing change. I expect that while some reviewers may respond appropriately to the use of simulation testing, most will not, if it means a reversal of the extrapolated findings. In these cases, it is incumbent upon the provider to ensure that these issues are brought up during the Administrative Law Judge (ALJ) appeal.

Programming Note: Listen to Frank Cohen report this story live during the next edition of Monitor Mondays, 10 a.m. Eastern.

CMS Clarifying Medicare Overpayment Rules: The Bar Is Raised (Yet Again) for Health Care Providers

Have you ever watched athletes compete in the pole vault? Each time an athlete clears the bar, the bar gets raised…again…and again…until the athlete can no longer clear it. Similarly, the Centers for Medicare & Medicaid Services (CMS) continues to raise the bar on health care providers who accept Medicare and Medicaid.

In February, CMS finalized the rule requiring providers to proactively investigate themselves and report any overpayments to CMS for Medicare Parts A and B. (The rule for Medicare Parts C and D was finalized in 2014, and the rule for Medicaid has not yet been promulgated.) The rule makes it very clear that CMS expects providers and suppliers to enact robust self-auditing policies.

We all know that the Affordable Care Act (ACA) was intended to be self-funding. Who is funding it? Doctors, psychiatrists, home care agencies, hospitals, long term care facilities, dentists…anyone who accepts Medicare and Medicaid. The self-funding portion of the ACA is strict and unforgiving, and its fraud, waste, and abuse (FWA) detection tools…oh, how wide that net is cast!

Section 6402 of the ACA added Subsection 1128J(d) to the Social Security Act, which requires that providers report overpayments to CMS “by the later of – (A) the date which is 60 days after the date on which the overpayment was identified; or (B) the date any corresponding cost report is due, if applicable.”

An overpayment is “identified” when the person has determined, or reasonably should have determined through the exercise of reasonable diligence, that the person received an overpayment. Overpayments include payments arising from referrals that violate the Anti-Kickback Statute.
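The “later of” deadline rule is simple date arithmetic. A sketch, with hypothetical dates:

```python
from datetime import date, timedelta

def report_deadline(identified_on, cost_report_due=None):
    """Deadline to report an overpayment: the later of 60 days after
    identification or the corresponding cost report due date, if any."""
    sixty_days = identified_on + timedelta(days=60)
    if cost_report_due is None:
        return sixty_days
    return max(sixty_days, cost_report_due)

# Overpayment identified March 1, no cost report applies -> due April 30
print(report_deadline(date(2016, 3, 1)))  # 2016-04-30
```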

CMS allows providers to extrapolate their findings, but what provider in their right mind would do so?

There is a six-year look-back period, so you do not have to report overpayments on claims older than six years.

You can get an extension of the 60-day deadline if:

• Office of Inspector General (OIG) acknowledges receipt of a submission to the OIG Self-Disclosure Protocol
• CMS acknowledges receipt of a submission to the CMS Voluntary Self-Referral Disclosure Protocol
• Provider requests an extension under 42 CFR §401.603

My recommendation? Strap on your pole vaulting shoes and get to jumping!

Audits “Breaking Bad” in New Mexico: Part II

By: Edward M. Roche, the founder of Barraclough NY LLC, a litigation support firm that helps healthcare providers fight against statistical extrapolations.

In the first article in this series, we covered how a new governor recently came into power in New Mexico and how, shortly thereafter, all 15 of the state’s nonprofit providers of behavioral health services were accused of fraud and replaced with companies owned by UnitedHealthcare.

When a new team is brought in to take over a crisis, one might expect things to improve: the replacement companies might be presumed to bring newer, more efficient methods of working to New Mexico, and patient services would get better. Out with the old, in with the new. The problem in New Mexico is that this didn’t happen – not at all.

The corporate structure in New Mexico is byzantine. UnitedHealth Group, Inc. is a Minnesota corporation that works through subsidiaries, operating companies and joint ventures to provide managed healthcare throughout the United States. In New Mexico, UnitedHealth worked through Optum Behavioral Health Solutions and United Behavioral Health, Inc. OptumHealth New Mexico is a joint venture between UnitedHealthcare Insurance Company and United Behavioral Health, according to the professional services contract signed with the State of New Mexico.

And that’s not all. OptumHealth is not the company providing the services. According to the contract, it was set up to act as a bridge between the actual providers of health services and a legal entity called the State of New Mexico Interagency Behavioral Health Purchasing Collaborative. This Collaborative brings together 16 agencies within the state government.

OptumHealth works by using subcontractors to actually deliver healthcare under both Medicaid and Medicare. Its job is to make sure that all claims from the subcontractors are compliant with state and federal law. It takes payment for the claims submitted and then pays out to the subcontractors. But for this service, OptumHealth takes a 28-percent commission, according to court papers.

This is a nice margin. A complaint filed by whistleblower Karen Clark, an internal auditor with OptumHealth, indicated that from October 2011 until April 2012, OptumHealth paid out about $88.25 million in Medicaid funds and got a commission of $24.7 million. The payments went out to nine subcontractors. Clark claimed that from Oct. 1, 2011 until April 22, 2013, the overall payouts were about $529.5 million, and the 28-percent commission was about $148.3 million.
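The commission figures in the complaint are easy to sanity-check. This is just arithmetic on the amounts as alleged (in millions of dollars), not an audit finding:

```python
def commission(paid_out, rate=0.28):
    """Commission implied by a payout total at the alleged 28% rate."""
    return paid_out * rate

# Oct. 2011 - Apr. 2012: ~$88.25M paid out
print(round(commission(88.25), 2))   # 24.71 -- consistent with the $24.7M alleged

# Oct. 1, 2011 - Apr. 22, 2013: ~$529.5M paid out
print(round(commission(529.5), 2))   # 148.26 -- consistent with the $148.3M alleged
```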

In spite of the liberal flow of taxpayer money, things did not go well. Clark’s whistleblower suit, filed in the U.S. District Court for the District of New Mexico, claimed that OptumHealth knew of massive fraud but refused to investigate. Clark says she was eventually fired after she uncovered the malfeasance. It appears that even after learning of problems, OptumHealth kept billing away, eager to continue collecting that 28-percent commission.

Clark’s complaint details a number of problems in New Mexico’s behavioral health sector. It is a list of horrors: there were falsified records, services provided by unlicensed providers, use of improper billing codes, claims for services that never were provided, and many other problems. Allegedly, many client files contained no treatment plans, no treatment notes, or even records of what treatments had been provided, and services were billed for times when offices were closed. The suit also claims that some services were provided by probationers instead of licensed providers, and that a number of bills were submitted for a person who was outside the United States at the time.

The complaint further alleges that one provider received $300,000 in payments, but had submitted only $200,000 worth of claims. When Clark discovered this, she allegedly was told by her supervisor at OptumHealth that it was “too small to be concerned about.” It also is alleged that a) insight-oriented psychotherapy was billed when actually the client was being taught how to brush their teeth; b) the same services were billed to the same patient several times per month, and files were falsified to satisfy Medicaid rules; c) interactive therapy sessions were billed for patients who were non-verbal and unable to participate; d) individual therapy was claimed when group therapy was given; e) apart from Medicaid, other sources allegedly were billed for exactly the same services; and f) developmentally disabled patients were used to bill for group therapy from which they had no capacity to benefit. Clark also stated that investigations of one provider for false billing were suspended because the provider was “a big player in the state.”

Other alleged abuse included a provider that submitted claims for 15-20 hours per day of group therapy for 20 to 40 children at a time, and for numerous psychotherapy services never provided. The complaint also describes one individual provider who supposedly worked three days per week, routinely billing Medicaid each day for twelve 30-minute individual psychotherapy sessions; 12 family psychotherapy sessions; 23 children in group therapy; and 32 children in group interactive psychotherapy.

A number of other abuses are detailed in the complaint: a) some providers had secretaries prescribing medication; b) one provider claimed that it saw 30 patients per day for 90 minutes each of psychotherapeutic treatment; c) some individuals allegedly submitted claims for 30 hours per day of treatment; and d) some providers had no credentialed psychotherapist at any of their facilities. Remember that all of these subcontractors were providing behavioral (psychiatric and psychological) services. Clark found that others submitted bills claiming the services were performed by a medical doctor, but there was none at their facility.
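Several of these allegations — 30 hours of treatment billed in a single day, for instance — are the kind of thing a trivially simple screen catches. A hypothetical sketch (the field names, data, and threshold are illustrative, not from the complaint):

```python
from collections import defaultdict

def impossible_days(claims, max_hours=24.0):
    """Flag (provider, date) pairs whose total billed hours exceed the
    hours in a day. `claims` is a list of dicts with provider, date, hours."""
    totals = defaultdict(float)
    for c in claims:
        totals[(c["provider"], c["date"])] += c["hours"]
    return {key: hrs for key, hrs in totals.items() if hrs > max_hours}

claims = [
    {"provider": "A", "date": "2012-03-01", "hours": 15.0},
    {"provider": "A", "date": "2012-03-01", "hours": 15.0},  # 30 hrs total
    {"provider": "B", "date": "2012-03-01", "hours": 6.0},
]
flags = impossible_days(claims)
# flags holds the one provider-day exceeding 24 billed hours
```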

And in one of the most stunning abuses imaginable, one provider allegedly diagnosed all of their patients as having autism. Clark believes this was done because it allowed billing under both medical and mental health billing codes.

These are only a few of the apparent problems we see in New Mexico’s behavioral services.

You would think that once all of this had been brought to light, then public authorities such as the state’s Attorney General’s office would be eager to investigate and begin to root out the abusers. But that isn’t what happened.

James Hallinan, a spokesman for that office, stated that “based on its investigation, the Office of the Attorney General determined it would be in the best interest of the State to decline to intervene in the case.”

While it was making this decision, Clark’s allegations remained under court seal. But now they can be shown.

Note:

(*) Hallinan, James, spokesman for Attorney General’s office, quoted by Peters, J. and Lyman, A. Lawsuit: $14 million in new Medicaid fraud ignored in botched behavioral health audits, January 8, 2016, NM Political Report, URL: http://nmpoliticalreport.com/26519/lawsuit-optumhealth-botched-audits-of-nm-providers/ accessed March 22, 2016.

This article is based on US ex rel. Karen Clark and State of New Mexico ex rel. Karen Clark and Karen Clark, individually vs. UnitedHealth Group, Inc., United Healthcare Insurance Company, United Behavioral Health, Inc., and OptumHealth New Mexico, Complaint for Damages and Penalties, United States District Court for the District of New Mexico, No. 13-CV-372, April 22, 2013, held under court seal until a few weeks ago.

Audits “Breaking Bad” in New Mexico

By: Ed Roche, founder of Barraclough NY LLC, a litigation support firm that helps healthcare providers fight against statistical extrapolations

This article was originally published in RACMonitor.

Healthcare providers sometimes can get caught up in a political storm. When this happens, audits can be used as a weapon to help preferred providers muscle into a market. This appears to have happened recently in New Mexico.

Let’s go back in time.

On Sept. 14, 2010, Susana Martinez was in Washington, D.C. She was looking for campaign contributions to run for the governorship of New Mexico. She visited the office of the government lobbying division of UnitedHealth Group and picked up a check for $25,000.

The next day, Martinez published an editorial claiming that Bill Richardson’s administration in New Mexico was tolerating much “waste, fraud and abuse” in its Medicaid program. Eventually, she was elected as the 31st governor of New Mexico and took office Jan. 1, 2011.

According to an email trail, by the fall of 2012, Martinez’s administration was busy exchanging emails with members of the boards of directors of several healthcare companies in Arizona. During this same period, the Arizonans made a number of contributions to a political action committee (PAC) set up to support Martinez. At the same time, officers from New Mexico’s Human Services Department (HSD) made a number of unannounced visits to Arizona.

The lobbying continued in earnest. The head of HSD visited Utah’s premier ski resort, with the bill paid by an organization financed in part by UnitedHealth. The governor’s chief of staff was treated to dinner at an expensive steakhouse in Las Vegas. There is suspicion of other contacts, but these have not been identified. All of these meetings were confidential.

The governor continued to publicly criticize health services in New Mexico. She focused on 15 mental health providers who had been in business for 40 years. They were serving 87 percent of the mental health population in New Mexico and had developed an extensive delivery system that reached all corners of the state.

Martinez homed in on one mental health provider because its CEO used a private aircraft. He was accused of using Medicaid funds to finance a lavish lifestyle. None of this was true. It turned out that the owner had operations all over the state and used the plane for commuting, but it made for good sound bites to feed the press.

The state decided to raise the pressure against the providers. Public Consulting Group (PCG), a Boston-based contractor, was called in to perform an audit of mental health services. In addition to taking samples and performing analyses of claims, PCG was asked to look for “credible allegations of fraud.”

In legal terms, the phrase “credible allegations of fraud” carries much weight. Under the Patient Protection and Affordable Care Act, it can be used to justify punitive actions against a provider. It is surprising that only “allegations” are necessary, not demonstrated proof. The reality is that in practical terms, a provider can be shut down based on allegations alone.

In a letter regarding its work, PCG stated that “there are no credible allegations of fraud.” Evidently, that was the wrong answer. PCG was kicked out of New Mexico and not allowed to complete its audit. HSD took over.

The PCG letter had been supplied to HSD in a Microsoft Word format. In a stunning act, HSD removed the statement concluding that there were “no credible allegations of fraud.” HSD continued to use the PCG letter, but only in this altered form.

HSD continued to insist publicly that there were credible allegations of fraud. Since PCG had been kicked out before completing the audit, a HSD staff attorney took the liberty of performing several statistical extrapolations that generated a repayment demand of more than $36 million. During testimony, the attorney admitted that the extent of his experience with statistics was an introductory course he had taken years earlier in college.

Two years later, statistical experts from Barraclough NY LLC who are elected fellows of the American Statistical Association examined HSD’s work and concluded that it was faulty and unreliable. They concluded there was zero credibility in the extrapolations.

But for the time being, the extrapolations and audits were powerful tools. On June 24, 2013, all of the aforementioned 15 nonprofits were called into a meeting with HSD. All were accused of massive fraud. They were informed that their Medicaid payments were to be impounded. The money needed to service 87 percent of New Mexico’s mental health population was being cut off.

The next day, UnitedHealth announced a $22 million investment in Santa Fe. We have not been able to track down the direct beneficiaries of these investments. However, we do know that the governor’s office immediately issued a press release on their behalf.

The 15 New Mexico providers were being driven out of business. This had been planned well in advance. Shortly thereafter, the government of New Mexico, through HSD, issued $18 million in no-bid contracts to five Arizona-based providers affiliated with UnitedHealth. These are the same companies that had been contributing to the governor’s PAC.

These five Arizona companies then took over all mental health services for New Mexico. Their first step was to begin cutting back services. To give one example: patients with two hours of therapy per week were cut back to 10 fifteen-minute sessions per year. It was the beginning of a mental health crisis in New Mexico.

As of today, two of the Arizona providers have abandoned their work in New Mexico. A third is in the process of leaving. What is the result? Thousands of New Mexico mental health patients have been left with no services. Entire communities have been completely cut off. The most vulnerable communities have been hit the hardest.

Through litigation, the 15 original providers forced the New Mexico Attorney General to examine the situation. It took a long time. All of the providers now are out of business. The Attorney General reported a few weeks ago that there were never any credible allegations of fraud.

This should mean that the impounded money would be returned to the 15 providers. After all, the legal reason why it was impounded in the first place has been shown to be false. One would think that the situation could return to normal.

The original 15 should be able to continue their business, and hire back the more than 1,500 persons they had been forced to lay off. Once the impounded monies are returned to the providers, they will be able to pay their legal bills, which now add up to hundreds of thousands of dollars.

Unfortunately, that is not happening. HSD still is claiming that the $36 million extrapolation is due, and that actually, the providers owe the state money. The New Mexico government is not budging from its position. The litigation continues.

Meanwhile, New Mexico now is tied with Montana in having the highest suicide rate in the continental United States.

Alphabet Soup: RACs, MICs, MFCUs, CERTs, ZPICs, PERMs and Their Respective Look Back Periods

I have a dental client who was subjected to a post-payment review by Public Consulting Group (PCG). During the audit, PCG reviewed claims that were five years old. In communication with the state, I pointed out that PCG had surpassed its allowable look-back period of three years. The Assistant Attorney General (AG) responded, “This was not a RAC audit.” I said, “Huh. Then what type of audit is it? MIC? ZPIC? CERT?” The audit has to fall under one of the known acronyms; otherwise, where is PCG’s authority to conduct it?

There has to be a federal and state regulation applicable to every audit.  If there is not, the audit is not allowable.

So, with the state claiming that this post payment review is not a RAC audit, I looked into what it could be.

In order to address health care fraud, waste, and abuse (FWA), Congress and CMS have developed a variety of approaches over the past several years to audit Medicare and Medicaid claims, and for each approach the feds created different rules and different acronyms. A ZPIC audit varies from a CERT audit, which differs from a RAC audit, and so on. The rules governing each audit differ vastly and greatly impact the provider’s audit results. The audits are as different as hockey and football: both have the same purpose of scoring points, but the equipment, the method of scoring, and the ways to defend against an opponent are entirely different. It can be confusing and overwhelming to figure out which entity follows which rules and which entity has exceeded its scope in an audit.

It can seem that we are caught swimming in a bowl of alphabet soup. We have RACs, ZPICs, MICs, CERTs, and PERMs!!


What are these acronyms??

This blog will shed some light on the different types of agencies auditing your Medicare and Medicaid claims and what restrictions are imposed on such agencies, as well as provide you with useful tips while undergoing an audit and defending the results.

First, what do the acronyms stand for?

  • Medicare Recovery Audit Contractors (RACs)
  • Medicaid RACs
  • Medicaid Integrity Contractors (MICs)
  • Zone Program Integrity Contractors (ZPICs)
  • State Medicaid Fraud Control Units (MFCUs)
  • Comprehensive Error Rate Testing (CERT)
  • Payment Error Rate Measurement (PERM)

Second, what are the allowable scope, players, and look-back periods for each type of audit? I have compiled the following chart as a quick “cheat sheet” on the various types of audits. When an auditor knocks on your door, ask, “What type of audit is this?” The answer can be invaluable when it comes to defending the alleged overpayment.

SCOPE, AUDITOR, AND LOOK-BACK PERIOD

Medicare RACs

Focus: Medicare overpayments and underpayments

Scope: Medicare RACs are nationwide. The companies bid for federal contracts. They use post-payment reviews to seek over- and underpayments and are paid on a contingency basis.

Auditors:

  • Region A: Performant Recovery
  • Region B: CGI Federal, Inc.
  • Region C: Connolly, Inc.
  • Region D: HealthDataInsights, Inc.

Look-back period: Three years after the date the claim was filed.

Medicaid RACs

Focus: Medicaid overpayments and underpayments

Scope: Medicaid RACs operate nationwide on a state-by-state basis. States choose the companies to perform RAC functions, determine the areas to target without informing the public, and pay on a contingency fee basis.

Auditors: Each state contracts with a private company that operates as a Medicaid RAC. In NC, we use PCG and HMS.

Look-back period: Three years after the date the claim was filed, unless the Medicaid RAC has approval from the state.

MICs

Focus: Medicaid overpayments and education

Scope: MICs review all Medicaid providers to identify high-risk areas, overpayments, and areas for improvement. CMS divided the U.S. into five MIC jurisdictions. MICs are not paid on a contingency fee basis.

Auditors:

  • New York (CMS Regions I & II) – Thomson Reuters (R) and IPRO (A)
  • Atlanta (CMS Regions III & IV) – Thomson Reuters (R) and Health Integrity (A)
  • Chicago (CMS Regions V & VII) – AdvanceMed (R) and Health Integrity (A)
  • Dallas (CMS Regions VI & VIII) – AdvanceMed (R) and HMS (A)
  • San Francisco (CMS Regions IX & X) – AdvanceMed (R) and HMS (A)

Look-back period: MICs may review a claim as far back as permitted under the laws of the respective states (generally a five-year look-back period).

ZPICs

Focus: Medicare fraud, waste, and abuse

Scope: ZPICs investigate potential Medicare FWA and refer these cases to other entities. Their audits are not random; they only investigate potential fraud. ZPICs are not paid on a contingency fee basis.

Auditors: CMS has divided the U.S. into seven ZPIC jurisdictions.

Look-back period: ZPICs have no specified look-back period.

MFCUs

Focus: Medicaid fraud, waste, and abuse

Scope: MFCUs investigate and prosecute (or refer for prosecution) criminal and civil Medicaid fraud cases. Each state, except North Dakota, has an MFCU.

Contact info for NC’s:

Medicaid Fraud Control Unit of North Carolina
Office of the Attorney General
5505 Creedmoor Rd, Suite 300
Raleigh, NC 27612
Phone: (919) 881-2320

Look-back period: MFCUs have no stated look-back period.

CERT

Focus: Medicare improper payment rate

Scope: CERT contractors report the rate of improper payments in the Medicare program in an annual report. CMS runs the CERT program using two private contractors (which I have yet to track down, but I will).

Look-back period: The current fiscal year (October 1 to September 30).

PERM

Focus: Medicaid improper payment rate

Scope: PERM contractors research improper payments in Medicaid and the Children’s Health Insurance Program and extrapolate a national error rate. CMS runs the PERM program using two private contractors (which I have yet to track down, but I will).

Look-back period: The current fiscal year (the complete measurement cycle is 22 to 28 months).
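Putting the chart to work: once an auditor names the audit type, you can immediately check whether the reviewed claims fall inside the allowable window. A rough sketch using the look-back periods summarized above (the limits are approximations of the chart, and the function is my illustration, not an official rule):

```python
from datetime import date

# Approximate look-back limits in years, per the chart above
# (None = no stated limit)
LOOKBACK_YEARS = {
    "Medicare RAC": 3,
    "Medicaid RAC": 3,   # absent state-approved extension
    "MIC": 5,            # generally, per state law
    "ZPIC": None,
    "MFCU": None,
}

def out_of_scope(audit_type, claim_date, audit_date):
    """True if the claim is older than the audit type's look-back limit."""
    years = LOOKBACK_YEARS.get(audit_type)
    if years is None:
        return False
    cutoff = audit_date.replace(year=audit_date.year - years)
    return claim_date < cutoff

# A five-year-old claim reviewed in a RAC-style audit is outside
# the three-year window
print(out_of_scope("Medicare RAC", date(2011, 1, 15), date(2016, 1, 15)))
```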

As you can see, the soup is flooded with letters of the alphabet. But which letters are attached to which audit company determines which rules are followed.

It is imperative to know, when audited, exactly which acronym those auditors are operating under.

Which brings me back to my original story of my dental provider, who was audited by a “non-RAC” entity for claims 5 years old.

What entity could be performing this audit, since PCG was not acting in its capacity as a RAC auditor? Let’s review:

  • RAC: The AG claims no.
  • MIC: This is a state audit, not federal. No.
  • MFCU: No prosecutor involved. No.
  • ZPIC: This is a state audit, not federal, and there is no allegation of fraud. No.
  • CERT: This is a state audit, not federal. No.
  • PERM: This is a state audit, not federal. No.

Hmmmm….

If it walks like a duck, talks like a duck, and acts like a duck, it must be a duck, right?

Or, in this case, a RAC.