Blog Archives

Why Auditors Can’t be Unbiased

Last week on Monitor Mondays, Knicole Emanuel, Esq. reported on the case of Commonwealth v. Pediatric Specialist, PLLC, wherein the Recovery Audit Contractors’ (RACs’) experts were prohibited from testifying because they were paid on contingency. This means that the auditor (or the company for which they work) is paid some percentage of the overpayment findings it reports.

In this case, as in most nowadays, the overpayment estimate was based upon extrapolation, meaning the auditor projected the overpayment found in the sample across all of the claims in the universe from which the sample was drawn. I have written about this process before, but basically, it can turn a $1,500 overpayment in the sample into a $1.5 million overpayment demand.
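To make that arithmetic concrete, here is a minimal sketch of a simple mean-per-unit projection. The claim counts and dollar figures are invented, chosen only to mirror the $1,500-to-$1.5-million illustration above; this is not any particular auditor's methodology.

```python
# Hypothetical illustration of how a small sample finding becomes a large
# extrapolated demand. All figures are made up to mirror the example above.

sample_size = 30              # claims the auditor actually reviewed
sample_overpayment = 1_500    # dollars of overpayment found in those claims
universe_size = 30_000        # all claims in the universe the sample came from

# Project the average overpayment per sampled claim across the whole universe.
mean_overpayment_per_claim = sample_overpayment / sample_size     # $50.00
extrapolated_demand = mean_overpayment_per_claim * universe_size  # $1,500,000.00

print(f"Found in sample:     ${sample_overpayment:,.2f}")
print(f"Extrapolated demand: ${extrapolated_demand:,.2f}")
```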

The key to an effective extrapolation is that the statistical process is appropriate, proper, and accurate. In many audits, it is not, and when a provider believes the extrapolation is flawed, they may choose to challenge the results on appeal. This is often when they hire a statistician, like me, to review the statistical sampling and overpayment estimate (SSOE), including the underlying data and documentation, to assist with the appeal. I have worked on hundreds of these post-audit extrapolation mitigation appeals over the years, and even though I am employed by the provider, I maintain a position as an independent fact-finder. My reports are based on facts and figures, and my opinion is based on those findings. Period.

So, what is it that allows me to remain independent? To perform my job without undue influence or bias? Is it my incredibly high ethical standards? Check! My commitment to upholding the standards of my industry? Check!  Maybe my good looks? Well, not check! It is the fact that my fees are fixed, and are not contingent on the outcome. I mean, it would be great if I could do what the RACs do and cash in on the outcomes of a case, but alas, no such luck.

In one large class-action case in which I was the statistical expert, the defendant settled for $122 million. The law firm got something like a quarter or a third of that, and the class members all received some remuneration as well. Me? I got my hourly rate, and after the case was done, a bottle of Maker’s Mark whiskey as a thank you. And I’m not even sure that was appropriate, so I sent it back. I would love to be paid a percentage of what I am able to save a client in this type of appeal. I worked on a case a couple of years ago for which we were able to get the extrapolation thrown out, which reduced the payment demand from $5.9 million to $3,300. Imagine if I got paid even 2 percent of that; it would be nearly $120,000. But that can’t happen, because the moment my work product is tied to the results, I am no longer independent, nor unbiased. I don’t care how honest or ethical you are, contingency deals change the landscape – and that is as true for me, as an expert, as it is for the auditor.

In the pediatric case referenced above, the RAC that performed the audit is paid on a contingency, although I like to refer to it as a “bounty.” As such, the judge ruled, as Ms. Emanuel reported, that their experts could not testify on behalf of the RAC. Why not? Because the judge, unlike the RAC, is an independent arbiter, and having no skin in the game, is unbiased in their adjudication. But you can’t say that about the RAC. If they are being paid a “bounty” (something like 10 percent), then how in the world could they be considered independent and unbiased?

The short answer is, they can’t. And this isn’t just based on standards of statistical practice; it is steeped in common sense. Look at the appeal statistics; some 50 percent of all RAC findings are eventually reversed in favor of the provider. If that isn’t evidence of an overzealous, biased, bounty-hunting process, I don’t know what is. Basically, as Knicole reported, with its experts prohibited from testifying, the RAC was unable to contest the provider’s arguments, and the judge ruled in favor of the provider.

But, in my opinion, it should not stop here. This is one of those cases that exemplifies the “fruit of the poisonous tree” defense, meaning that if this ruling stands, then every other case in which the RAC’s experts did testify and the extrapolation was upheld should be challenged and overturned. Heck, I wouldn’t be surprised if there was a class-action lawsuit filed on behalf of all of those affected by RAC extrapolated audits. And if there is one, I would love to be the statistical expert – but for a flat fee, of course, and not contingent upon the outcome.

And that’s the world according to Frank.

Frank Cohen is a frequent panelist with me on RACMonitor. I love his perspective on expert statistician witnesses. He drafted this piece based on a Monitor Mondays report of mine. Do not miss both Frank and me on RACMonitor every Monday.

Medicare and Medicaid RAC Audits: How Auditors Get It Wrong

Here is an article that I wrote that was first published on RACMonitor on March 15, 2018:

All audits are questionable, contends the author, so appeal all audit results.

Providers ask me all the time – how will you legally prove that an alleged overpayment is erroneous? When I explain some examples of mistakes that Recovery Audit Contractors (RACs) and other health care auditors make, they ask, how do these auditors get it so wrong?

First, let’s debunk the notion that the government is always right. In my experience, the government is rarely right. Auditors are not always healthcare providers. Some have gone to college. Many have not. I googled the education criteria for a clinical compliance reviewer. The job application requires the clinical reviewer to “understand Medicare and Medicaid regulations,” but the education requirement was to have an RN. Another company required a college degree…in anything.

Let’s go over the most common mistakes auditors make that I have seen. I call them “oops, I did it again.” And I am not a fan of reruns.

  1. Using the Wrong Clinical Coverage Policy/Manual/Regulation

Before an on-site visit, auditors are given a checklist, which, theoretically, is based on the pertinent rules and regulations germane to the type of healthcare service being audited. The checklists are written by a government employee who most likely is not an attorney. There is no formal mechanism in place to compare the Medicare policies, rules, and manuals to the checklist. If the checklist is erroneous, then the audit results are erroneous. The Centers for Medicare & Medicaid Services (CMS) frequently revises final rules, changing requirements for certain healthcare services. State agencies amend small technicalities in the Medicaid policies constantly. These audit checklists are not updated every time CMS issues a new final rule or a state agency revises a clinical coverage policy.

For example, for hospital-based services, there is a different reimbursement rate depending on whether the patient is an inpatient or outpatient. Over the last few years, there have been many modifications to the benchmarks for inpatient services. Another example is in behavioral outpatient therapy; while many states allow 32 unmanaged visits, others have decreased the number of unmanaged visits to 16, or, in some places, eight. Over and over, I have seen auditors apply the wrong policy or regulation. They apply the Medicare Manual from 2018 for dates of service performed in 2016, for example. In many cases, the more recent policies are more stringent than those of two or three years ago.

  2. A Flawed Sample Equals a Flawed Extrapolation

The second common blunder auditors make is producing a flawed sample. Two common mishaps in creating a sample are: a) including non-government-paid claims in the sample and b) failing to pick the sample randomly. Either mistake can render the sample invalid, and therefore the extrapolation invalid. Auditors cast their metaphorical fishing nets wide in order to collect multiple types of services, and in doing so they accidentally include dates of service for claims that were paid by third-party payors instead of Medicare/Medicaid. You’ve heard of the “fruit of the poisonous tree?” This makes the extrapolation the fruit of a poisonous sample. The same argument goes for samples that are not random, as required by the U.S. Department of Health and Human Services (HHS) Office of Inspector General (OIG). A nonrandom sample is not acceptable and would also render any extrapolation invalid.
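As a rough sketch (not any particular RAC’s procedure), a defensible sampling step looks something like the following: the frame is first limited to government-paid claims, and the sample is then drawn with a documented random seed so the selection can be reproduced and verified on appeal. The field names, claims, and seed below are all hypothetical.

```python
import random

# Hypothetical claims universe: each record notes who actually paid the claim.
claims = [
    {"claim_id": "C001", "payer": "Medicare",  "paid": 210.00},
    {"claim_id": "C002", "payer": "BlueCross", "paid": 180.00},  # third-party payor
    {"claim_id": "C003", "payer": "Medicaid",  "paid": 95.00},
    # ... thousands more in a real audit
]

# Step 1: restrict the sampling frame to government-paid claims only.
# Including third-party-paid claims (like C002) is exactly the frame
# error described above and can invalidate the extrapolation.
frame = [c for c in claims if c["payer"] in ("Medicare", "Medicaid")]

# Step 2: draw the sample with a documented, reproducible random seed,
# so the "random" selection can be verified later.
rng = random.Random(20180315)  # the seed value is hypothetical; the point is that it is recorded
sample_size = min(2, len(frame))
sample = rng.sample(frame, sample_size)

print([c["claim_id"] for c in sample])
```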

  3. A Simple Misunderstanding

A third common blooper found with RAC auditors is simple misunderstandings based on lack of communication between the auditor and provider. Say an auditor asks for a chart for date of service X. The provider gives the auditor the chart for date of service X, but what the auditor is really looking for is the physician’s order or prescription that was dated the day prior. The provider did not give the auditor the pertinent document because the auditor did not request it. These issues cause complications later, because inevitably, the auditor will argue that if the provider had the document all along, then why was the document not presented? Sometimes inaccurate accusations of fraud and fabrication are averred.

  4. The Erroneous Extrapolation

Auditors use a computer program called RAT-STATS to extrapolate the sample error rate across a universe of claims. There are many variables that can render an extrapolation invalid. Auditors can use too low a confidence level. The OIG requires a 90 percent confidence level at 25 percent precision for the “point estimate.” The size and validity of the sample matter to the validity of the extrapolation. The RAT-STATS outcome must be reviewed by a statistician or a person with equal expertise. An appropriate statistical formula for variable sampling must be used. Any deviations from these directives and other mandates render the extrapolation invalid. (This is not an exhaustive list of requirements for extrapolations.)
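To give a sense of what a reviewer checks, here is a rough sketch of textbook mean-per-unit estimation and its 90 percent confidence interval. This is not RAT-STATS output, the per-claim overpayment figures are invented, and it assumes SciPy is available for the t critical value.

```python
# Rough sketch of mean-per-unit estimation for an extrapolated overpayment
# and its 90% confidence interval. NOT RAT-STATS; all figures are invented.
import statistics
from scipy import stats  # used only for the t critical value

universe_size = 10_000                       # claims in the universe
sample_overpayments = [0, 0, 42.50, 0, 110.00, 0, 0, 37.25, 0, 0,
                       250.00, 0, 0, 18.00, 0, 0, 0, 95.00, 0, 0]  # per sampled claim
n = len(sample_overpayments)

mean = statistics.mean(sample_overpayments)
sd = statistics.stdev(sample_overpayments)   # sample standard deviation
se = sd / n ** 0.5                           # standard error of the mean

point_estimate = universe_size * mean        # the "point estimate"

# Two-sided 90% interval (t distribution, n - 1 degrees of freedom).
t_crit = stats.t.ppf(0.95, df=n - 1)
lower = universe_size * (mean - t_crit * se)
upper = universe_size * (mean + t_crit * se)
precision = t_crit * se / mean               # half-width relative to the point estimate

print(f"Point estimate: ${point_estimate:,.2f}")
print(f"90% CI: ${lower:,.2f} to ${upper:,.2f} (precision ~ {precision:.0%})")
# In practice, the demand is often set at the lower bound of the interval;
# a precision much worse than 25% is one sign the sample may be too small
# to support the extrapolation.
```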

  5. That Darn Purple Ink!

A fifth way auditors get it wrong is nitpicky, nonsensical denials, such as flagging a record for using purple ink instead of blue. Yes, this actually happened to one of my clients. Or the amount of time spent with the patient is not denoted on the medical record, even though the duration is either not relevant or is already defined in the CPT code. Electronic signatures, when printed, sometimes are left off – but the document was signed. A date on the service note is transposed. Because there is little communication between the auditor and the provider, mistakes happen.

The moral of the story — appeal all audit results.