Blog Archives

Medicare and Medicaid RAC Audits: How Auditors Get It Wrong

Here is an article I wrote that was first published on RACmonitor on March 15, 2018:

All audits are questionable, contends the author, so appeal all audit results.

Providers ask me all the time – how will you legally prove that an alleged overpayment is erroneous? When I explain some examples of mistakes that Recovery Audit Contractors (RACs) and other health care auditors make, they ask, how do these auditors get it so wrong?

First, let’s debunk the notion that the government is always right. In my experience, the government is rarely right. Auditors are not always healthcare providers. Some have gone to college. Many have not. I googled the education criteria for a clinical compliance reviewer. One job posting required the clinical reviewer to “understand Medicare and Medicaid regulations,” but the only education requirement was an RN license. Another company required a college degree…in anything.

Let’s go over the most common mistakes auditors make that I have seen. I call them “oops, I did it again.” And I am not a fan of reruns.

  1. Using the Wrong Clinical Coverage Policy/Manual/Regulation

Before an on-site visit, auditors are given a checklist, which, theoretically, is based on the pertinent rules and regulations germane to the type of healthcare service being audited. The checklists are written by a government employee who most likely is not an attorney. There is no formal mechanism in place to compare the Medicare policies, rules, and manuals to the checklist. If the checklist is erroneous, then the audit results are erroneous. The Centers for Medicare & Medicaid Services (CMS) frequently revises final rules, changing requirements for certain healthcare services. State agencies amend small technicalities in the Medicaid policies constantly. These audit checklists are not updated every time CMS issues a new final rule or a state agency revises a clinical coverage policy.

For example, for hospital-based services, there is a different reimbursement rate depending on whether the patient is an inpatient or an outpatient. Over the last few years, there have been many modifications to the benchmarks for inpatient services. Another example is in behavioral outpatient therapy: while many states allow 32 unmanaged visits, others have decreased the number of unmanaged visits to 16, or, in some places, eight. Over and over, I have seen auditors apply the wrong policy or regulation. They apply the Medicare manual from 2018 to dates of service performed in 2016, for example. In many cases, the more recent policies are more stringent than those of two or three years ago.
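The safeguard is simple to illustrate in code: every checklist item should be keyed to the policy version that was in effect on the date of service. Below is a minimal Python sketch of that lookup; the effective dates and visit limits are hypothetical, invented only to mirror the unmanaged-visits example above.

```python
from datetime import date

# Hypothetical policy versions and effective dates. In practice these
# would track CMS final rules or a state's clinical coverage policy archive.
POLICY_VERSIONS = [
    (date(2015, 1, 1), {"unmanaged_visits": 32}),
    (date(2016, 7, 1), {"unmanaged_visits": 16}),
    (date(2018, 1, 1), {"unmanaged_visits": 8}),
]

def policy_for(date_of_service):
    """Return the policy version in effect on the date of service."""
    applicable = None
    for effective, rules in POLICY_VERSIONS:  # sorted by effective date
        if effective <= date_of_service:
            applicable = rules
    if applicable is None:
        raise ValueError("no policy version covers this date of service")
    return applicable

# A 2016 date of service must be judged under the rules in force in 2016,
# not under the stricter 2018 manual.
print(policy_for(date(2016, 3, 15)))   # {'unmanaged_visits': 16}
```

An auditor’s checklist that skips this lookup, and instead hard-codes whatever manual is current on the day of the audit, bakes the error into every claim it reviews.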

  2. A Flawed Sample Equals a Flawed Extrapolation

The second common blunder auditors make is producing a flawed sample. Two common mishaps in creating a sample are: a) including non-government-paid claims in the sample, and b) failing to pick the sample randomly. Either mistake can render the sample invalid, and therefore the extrapolation invalid. Auditors cast their metaphorical fishing nets wide in order to collect multiple types of services, and in doing so they accidentally include dates of service for claims that were paid by third-party payors instead of Medicare/Medicaid. You’ve heard of the “fruit of the poisonous tree”? This makes the extrapolation the fruit of the poisonous sample. The same argument goes for samples that are not random, as required by the U.S. Department of Health and Human Services (HHS) Office of Inspector General (OIG). A nonrandom sample is not acceptable and would also render any extrapolation invalid.
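Both defects are easy to test for on appeal, and easy to avoid in the first place. Here is a minimal Python sketch, assuming a simple list of claims tagged by payor (the claims and amounts are invented), of how a defensible sample would be drawn: government-paid claims only, selected randomly with a documented seed so the draw can be replicated and verified.

```python
import random

# Hypothetical claims universe; in a real audit this would be the
# paid-claims data for the lookback period.
claims = [
    {"id": "C001", "payor": "Medicaid", "paid": 412.50},
    {"id": "C002", "payor": "BCBS",     "paid": 310.00},  # third-party payor
    {"id": "C003", "payor": "Medicare", "paid": 187.25},
    {"id": "C004", "payor": "Medicare", "paid": 522.10},
]

# Rule 1: the sampling frame may contain ONLY government-paid claims.
frame = [c for c in claims if c["payor"] in ("Medicare", "Medicaid")]

# Rule 2: the sample must be drawn randomly, with a recorded seed so the
# selection can be reproduced when it is challenged.
rng = random.Random(20180315)          # documented seed
sample = rng.sample(frame, k=2)

print([c["id"] for c in sample])       # C002 can never appear here
```

If the third-party claim shows up in the sample, or if the auditor cannot reproduce the draw, the sample is tainted and the extrapolation falls with it.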

  3. A Simple Misunderstanding

A third common blooper by RAC auditors is the simple misunderstanding, born of a lack of communication between the auditor and the provider. Say an auditor asks for a chart for date of service X. The provider gives the auditor the chart for date of service X, but what the auditor is really looking for is the physician’s order or prescription dated the day prior. The provider did not produce the pertinent document because the auditor never requested it. These issues cause complications later, because inevitably, the auditor will argue: if the provider had the document all along, why was it not presented? Sometimes this escalates into inaccurate accusations of fraud and fabrication.

  4. The Erroneous Extrapolation

Auditors use a computer program called RAT-STATS to extrapolate the sample error rate across a universe of claims. Many variables can render an extrapolation invalid. Auditors may run the program at too low a confidence level; the OIG requires a 90 percent confidence level at 25 percent precision for the “point estimate.” The size and validity of the sample bear directly on the validity of the extrapolation. The RAT-STATS output must be reviewed by a statistician or a person with equivalent expertise, and an appropriate statistical formula for variable sampling must be used. Any deviation from these directives and other mandates renders the extrapolation invalid. (This is not an exhaustive list of the requirements for extrapolations.)
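To see why these variables matter, consider a simplified Python sketch of the arithmetic behind a variable-sample extrapolation, with invented numbers. It uses a normal approximation where RAT-STATS uses Student’s t for small samples, but the moving parts are the same: point estimate, confidence interval, and precision.

```python
import math
from statistics import mean, stdev

# Hypothetical per-claim overpayments found in a sample of 30 claims,
# drawn from a universe of 10,000 claims.
sample_overpayments = [0.0, 125.0, 0.0, 88.5, 240.0, 0.0] * 5   # n = 30
N = 10_000                                # universe size
n = len(sample_overpayments)              # sample size

xbar, s = mean(sample_overpayments), stdev(sample_overpayments)

point_estimate = N * xbar                          # extrapolated overpayment
se = (s / math.sqrt(n)) * math.sqrt(1 - n / N)     # finite population correction
z = 1.645                                          # two-sided 90% (normal approx.)
half_width = z * N * se

precision = half_width / point_estimate            # OIG: must be <= 25%
lower_limit = point_estimate - half_width          # demands are typically based
                                                   # on this lower limit

print(f"point estimate:  ${point_estimate:,.2f}")
print(f"precision:       {precision:.1%}")
print(f"90% lower limit: ${lower_limit:,.2f}")
```

With these invented numbers, the precision comes out around 36 percent, well outside the 25 percent requirement, which is precisely the kind of defect that can sink an extrapolation on appeal.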

  5. That Darn Purple Ink!

A fifth reason that auditors get it wrong is nitpicky, nonsensical objections, such as using purple ink instead of blue. Yes, this actually happened to one of my clients. Or a claim is denied because the amount of time spent with the patient is not denoted on the medical record, even though the duration is either not relevant or is already defined by the CPT code. Electronic signatures sometimes fail to appear on a printed copy, even though the document was in fact signed. A date on a service note is transposed. Because there is little communication between the auditor and the provider, mistakes happen.

The moral of the story — appeal all audit results.

Medicare Audits: DRG Downcoding in Hospitals: Algorithms Substituting for Medical Judgment, Part 1

This article was written by our good friend Ed Roche, the founder of Barraclough NY, LLC, a litigation support firm that helps us fight extrapolations.


The number of Medicare audits is increasing. In the last five years, audits have grown by 936 percent. As reported previously in RACmonitor, this increase is overwhelming the appeals system. Less than 3 percent of appeal decisions are being rendered on time, within the statutory framework.

It is peculiar that the number of audits has grown rapidly, but without a corresponding growth in the number of employees for Recovery Audit Contractors (RACs). How can this be? Have the RAC workers become more than 900 percent more efficient? Well, in a way, they have. They have learned to harness the power of big data.

Since 1986, the ability to store digital data has grown from 0.02 exabytes to 500 exabytes. An exabyte is one quintillion bytes. Every day, the equivalent of 30,000 Libraries of Congress is put into storage. That’s a lot of data.

Auditing by RACs has morphed into using computerized techniques to pick targets for audits. An entire industry has emerged that specializes in processing Medicare claims data and finding “sweet spots” on which the RACs can focus their attention. In a recent audit, the provider was told that a “focused provider analysis report” had been obtained from a subcontractor. Based on that report, the auditor was able to target the provider.

A number of hospitals have been hit with a slew of diagnosis-related group (DRG) downgrades from RAC teams camping out in their offices, continually combing through their claims data. The DRG system is a framework that classifies any inpatient stay into a group for purposes of payment.

The question then becomes: how is this work done? How is so much data analyzed? Obviously, these audits are not being performed manually. They are cyber audits. But again, how?

An examination of patent data sheds light on the answer. For example, Optum, Inc. of Minnesota (associated with UnitedHealthcare) has applied for a patent on “computer-implemented systems and methods of healthcare claim analysis.” These are complex processes, but what they do is analyze claims based on DRGs.

The information system envisaged in this patent appears to be specifically designed to downgrade codes. It works by running a simulation that swaps billed codes for cheaper codes, then measures whether the resulting code configuration falls within the statistical range averaged from other claims.

If it is, then the DRG can be downcoded so that the revenue for the hospital is reduced correspondingly. This same algorithm can be applied to hundreds of thousands of claims in only minutes. And the same algorithm can be adjusted to work with different DRGs. This is only one of many patents in this area.
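The patent’s actual methods are more elaborate, but the core screen described above can be caricatured in a few lines of Python. The DRG pair and the charge figures below are hypothetical; the point is that the test is purely statistical, with no chart review anywhere in the loop.

```python
from statistics import mean, stdev

# Hypothetical reference data: charges observed for each DRG across a
# large pool of comparable claims.
REFERENCE = {
    "DRG-870": [52_000, 48_500, 61_000, 55_300],  # sepsis w/ ventilation >96 hrs
    "DRG-871": [18_200, 21_400, 19_800, 17_600],  # sepsis w/o ventilation, w/ MCC
}
CHEAPER = {"DRG-870": "DRG-871"}   # candidate downgrade pairs

def flag_for_downcode(billed_drg, claim_charge, z_cutoff=2.0):
    """Flag a claim if its charges look 'typical' for a cheaper DRG."""
    cheaper = CHEAPER.get(billed_drg)
    if cheaper is None:
        return False
    ref = REFERENCE[cheaper]
    mu, sigma = mean(ref), stdev(ref)
    # Pure correlation: does this claim sit within the statistical range
    # of claims billed under the cheaper code? No clinician is consulted.
    return abs(claim_charge - mu) <= z_cutoff * sigma

print(flag_for_downcode("DRG-870", 20_500))   # True -> queued for downgrade
```

Run against a claims warehouse, a loop like this can queue thousands of downgrades in minutes, which sets up the scale problem described below.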

When this happens, the hospital may face many thousands of downgraded claims. If it doesn’t like it, then it must appeal.

Here there is a severe danger for any hospital. The problem is that the cost the RAC incurs in running the audit is thousands of times smaller than what the hospital must spend to refute the DRG coding downgrade.

This is the nature of asymmetric warfare. In military terms, the cost of your enemy’s offense is always much smaller than the cost of your defense. That is why guerrilla warfare succeeds against nation-states, and why the Soviet Union and the United States decided to stop building anti-ballistic missile (ABM) systems: the cost of defense was disproportionately greater than the cost of offense.

Hospitals face the same problem. Their claims data files are a giant forest in which these big data algorithms can wander around downcoding and picking up substantial revenue streams.

By using artificial intelligence (advanced statistical) methods to review Medicare claims, the RACs can bombard hospitals with so many DRG downgrades (or other claim rejections) that they quickly overwhelm their defenses.

We should note that the use of these algorithms is not really an “audit.” It is a statistical analysis, but not done by any doctor or healthcare professional. The algorithm could just as well be counting how many bags of potato chips are sold with cans of beer.

If the patient is not an average patient, and the disease is not an average disease, and the treatment is not an average treatment, and if everything else is not “average,” then the algorithm will flag the claim and leave the hospital to defend it. This has everything to do with statistics and the correlation of variables, and very little to do with understanding whether the patient was treated properly.
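Reduced to its essentials, the screen is an outlier test: anything far enough from the mean gets flagged, whatever the clinical reality. A toy illustration in Python, with invented lengths of stay:

```python
from statistics import mean, stdev

# Hypothetical lengths of stay, in days, for one DRG across ten claims.
lengths_of_stay = [3, 4, 3, 5, 4, 4, 3, 5, 4, 12]

mu, sigma = mean(lengths_of_stay), stdev(lengths_of_stay)

for days in lengths_of_stay:
    z = (days - mu) / sigma
    if abs(z) > 2:
        # The 12-day stay is flagged purely because it is atypical. The
        # algorithm cannot know that a complication justified every day of it.
        print(f"{days}-day stay flagged (z = {z:.1f})")
```

The flag says nothing about whether the care was appropriate; it says only that the claim is statistically unusual.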

And that is the essence of the problem with big data audits. They are not what they say they are, because they substitute mathematical algorithms for medical judgment.

EDITOR’S NOTE: In Part II of this series, Edward Roche will examine the changing appeals landscape and what big data will mean for defense against these audits. In Part III, he will look at future scenarios for the auditing industry and the corresponding public policy agenda that will involve lawmakers.