Auditors are not lawyers. Some do not even possess the clinical background of the services they are auditing. In this blog, I am concentrating on the lack of legal licenses, because the standards to which auditors must hold providers are found not only in the Medicare Provider Manuals, regulations, NCDs, and LCDs. Oh, no… To add even more spice to the spice cabinet, common law court cases also create and amend Medicare and Medicaid policies.
For example, the Jimmo v. Sebelius settlement agreement dictates the standards for skilled nursing and skilled therapy in skilled nursing facility, home health, and outpatient therapy settings and, importantly, holds that coverage does not turn on the presence or absence of a beneficiary’s potential for improvement.
The Jimmo settlement dictates that:
“Specifically, in accordance with the settlement agreement, the manual revisions clarify that coverage of skilled nursing and skilled therapy services in the skilled nursing facility (SNF), home health (HH), and outpatient therapy (OPT) settings “…does not turn on the presence or absence of a beneficiary’s potential for improvement, but rather on the beneficiary’s need for skilled care.” Skilled care may be necessary to improve a patient’s current condition, to maintain the patient’s current condition, or to prevent or slow further deterioration of the patient’s condition.”
This Jimmo standard – not requiring a potential for improvement – is essential for diseases that are lifelong and debilitating, like Multiple Sclerosis (“MS”). For beneficiaries suffering from MS, skilled therapy is essential to prevent regression.
I have reviewed numerous audits – by UPICs, in particular – that failed to follow the Jimmo settlement standard and denied 100% of my provider-client’s claims. 100%. All for failure to demonstrate potential for improvement for MS patients. It’s ludicrous until you stop and remember that auditors are not lawyers. The Jimmo standard is found in a settlement agreement from January 2013. While we will win on appeal, it costs providers valuable money when auditors apply the wrong standards.
The amounts in controversy are generally high due to extrapolations, which is when the UPIC samples a low number of claims, determines an error rate and extrapolates that error rate across the universe. When the error rate is falsely 100%, the extrapolation tends to be high.
While an expectation of improvement could be a reasonable criterion to consider when evaluating, for example, a claim in which the goal of treatment is restoring a prior capability, Medicare policy has long recognized that there may also be specific instances where no improvement is expected but skilled care is, nevertheless, required in order to prevent or slow deterioration and maintain a beneficiary at the maximum practicable level of function. For example, in the regulations at 42 CFR 409.32(c), the level of care criteria for SNF coverage specify that the “. . . restoration potential of a patient is not the deciding factor in determining whether skilled services are needed. Even if full recovery or medical improvement is not possible, a patient may need skilled services to prevent further deterioration or preserve current capabilities.” The auditors should understand this and be trained on the proper standards. The Medicare statute and regulations have never supported the imposition of an “Improvement Standard” rule-of-thumb in determining whether skilled care is required to prevent or slow deterioration in a patient’s condition.
When you are audited by an auditor whether it be a RAC, MAC or UPIC, make sure the auditors are applying the correct standards. Remember, the auditors aren’t attorneys or doctors.
Few know that – regardless of your innocence – the government can and will recoup your funds before you ever reach the third level of Medicare appeals. This flies in the face of the elements of due process. However, courts have ruled that the redetermination and reconsideration levels afford providers enough due process, which entails notice and an opportunity to be heard. I am here to tell you – that is horse manure. The first two levels of a Medicare appeal are hoops to jump through in order to get to an independent tribunal – the administrative law judge (“ALJ”). The odds of winning at the 1st or 2nd level of a Medicare appeal are next to zilch, although you can often get the alleged amount reduced. The first level is before the same entity that found you owe the money. Auditors are normally not keen on overturning themselves. The second level is little better. The first time you present to an independent tribunal is at the third level.
Between 2009 and 2014, the number of ALJ appeals increased more than 1,200 percent. And the government recoups all alleged overpayments before you ever get before an ALJ.
In a recent case, Sahara Health Care, Inc. v. Azar, 975 F.3d 523 (5th Cir. 2020), a home health care provider brought an action against the Secretary of the Department of Health and Human Services (“HHS”) and the Administrator of the Centers for Medicare and Medicaid Services (“CMS”), asserting that its statutory and due process rights were violated and that the defendants acted ultra vires by recouping approximately $2.4 million in Medicare overpayments without providing a timely ALJ hearing. HHS moved to dismiss, and the provider moved to amend, for a temporary restraining order (“TRO”) and preliminary injunction, and for an expedited hearing.
The district court threw the case out, concluding that adequate process had been provided and that the defendants had not exceeded their statutory authority, and denied the provider’s motions for injunctive relief and to amend. The provider appealed and lost again.
What’s the law?
Congress prohibited HHS from recouping payments during the first two stages of administrative review. 42 U.S.C. § 1395ff(f)(2)(A).
If repayment of an overpayment would constitute an “extreme hardship, as determined by the Secretary,” the agency “shall enter into a plan with the provider” for repayment “over a period of at least 60 months but … not longer than 5 years.” 42 U.S.C. § 1395ddd(f)(1)(A). That hardship safety valve has some exceptions that work against insolvent providers. If “the Secretary has reason to believe that the provider of services or supplier may file for bankruptcy or otherwise cease to do business or discontinue participation” in the Medicare program, then the extended repayment plan is off the table. 42 U.S.C. § 1395ddd(f)(1)(C)(i). A provider that ultimately succeeds in overturning an overpayment determination receives the wrongfully recouped payments with interest. 42 U.S.C. § 1395ddd(f)(2)(B). The government’s interest rate is high. If you do have to pay back the alleged overpayment prematurely, the silver lining is that you may receive extra money for your troubles.
The years-long backlog, however, may dwindle. The agency has received a funding increase and currently expects to clear the backlog by 2022. In fact, the Secretary is under a Mandamus Order requiring such a timetable.
A caveat regarding this grim news. This was in the Fifth Circuit. Other Courts disagree. The Fourth Circuit has held that providers do have property interests in Medicare reimbursements owed for services rendered, which is the correct holding. Of course, you have a property interest in your own money. An allegation of wrongdoing does not erase that property interest. The Fourth Circuit agrees with me.
The use of extrapolation in Medicare and private payer audits has been around for quite some time now. And lest you be of the opinion that extrapolation is not appropriate for claims-based audits, there are many, many court cases that have supported its use, both specifically and in general. Arguing that extrapolation should not have been used in a given audit, unless that argument is supported by specific statistical challenges, is mostly a waste of time.
For background purposes, extrapolation, as it is used in statistics, is a “statistical technique aimed at inferring the unknown from the known. It attempts to predict future data by relying on historical data, such as estimating the size of a population a few years in the future on the basis of the current population size and its rate of growth,” according to a definition created by Eurostat, a component of the European Union. For our purposes, extrapolation is used to estimate what the actual overpayment amount might likely be for a population of claims, based on auditing a smaller sample of that population. For example, say a Uniform Program Integrity Contractor (UPIC) pulls 30 claims from a medical practice from a population of 10,000 claims. The audit finds that 10 of those claims had some type of coding error, resulting in an overpayment of $500. To extrapolate this to the entire population of claims, one might take the average overpayment, which is the $500 divided by the 30 claims ($16.67 per claim) and multiply this by the total number of claims in the population. In this case, we would multiply the $16.67 per claim by 10,000 for an extrapolated overpayment estimate of $166,667.
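For readers who want to check the arithmetic, the hypothetical above reduces to a few lines (all figures are the illustrative numbers from this example, not real audit data):

```python
# Hypothetical UPIC extrapolation: point estimate only.
sample_size = 30             # claims audited
population_size = 10_000     # claims in the universe
sample_overpayment = 500.00  # total overpayment found in the sample ($)

# Mean overpayment per sampled claim, projected across the universe.
mean_overpayment = sample_overpayment / sample_size      # ≈ $16.67
extrapolated_total = mean_overpayment * population_size  # ≈ $166,667

print(f"Per-claim average: ${mean_overpayment:.2f}")
print(f"Extrapolated overpayment: ${extrapolated_total:,.0f}")
```

Note that the overpayment found in 30 claims is only $500, yet the demand letter arrives citing six figures. That leverage is why the sampling methodology deserves such close scrutiny.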
The big question that normally crops up around extrapolation is this: how accurate are the estimates? And the answer is (wait for it …), it depends. It depends on just how well the sample was created, meaning: was the sample size appropriate, were the units pulled properly from the population, was the sample truly random, and was it representative of the population? The last point is particularly important, because if the sample is not representative of the population (in other words, if the sample data does not look like the population data), then it is likely that the extrapolated estimate will be anything but accurate.
To account for this issue, referred to as “sample error,” statisticians will calculate something called a confidence interval (CI), which is a range within which there is some acceptable amount of error. The higher the confidence value, the larger the potential range of error. For example, in the hypothetical audit outlined above, maybe the real average for a 90-percent confidence interval is somewhere between $15 and $18, while, for a 95-percent confidence interval, the true average is somewhere between $14 and $19. And if we were to calculate for a 99-percent confidence interval, the range might be somewhere between $12 and $21. So, the greater the range, the more confident I feel about my average estimate. Some express the confidence interval as a sense of true confidence, like “I am 90 percent confident the real average is somewhere between $15 and $18,” and while this is not necessarily wrong, per se, it does not communicate the real value of the CI. I have found that the best way to define it would be more like “if I were to pull 100 random samples of 30 claims and audit all of them, 90 percent would have a true average of somewhere between $15 and $18,” meaning that the true average for some 1 out of 10 would fall outside of that range – either below the lower boundary or above the upper boundary. The main reason that auditors use this technique is to avoid challenges based on sample error.
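A rough sketch of how such an interval is computed, using a standard two-sided t-interval over made-up per-claim overpayments (the dollar figures and sample below are invented for illustration and will not match the hypothetical ranges above):

```python
import random
import statistics

random.seed(7)
# Hypothetical per-claim overpayments from a 30-claim audit sample ($).
sample = [round(random.uniform(0, 40), 2) for _ in range(30)]

n = len(sample)
mean = statistics.mean(sample)
sem = statistics.stdev(sample) / n ** 0.5  # standard error of the mean

# Two-sided 90% t-interval; critical t value for df = 29 is about 1.699.
t_crit = 1.699
lower, upper = mean - t_crit * sem, mean + t_crit * sem
print(f"90% CI for mean per-claim overpayment: (${lower:.2f}, ${upper:.2f})")

# Scaling the lower limit to the claim universe shows why the
# interval, not just the point estimate, drives the demand amount.
print(f"Lower-limit extrapolation over 10,000 claims: ${lower * 10_000:,.0f}")
```

The wider the interval, the less precise the point estimate – which is exactly why a sample that is too small or unrepresentative becomes a fruitful line of attack on appeal.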
To the crux of the issue, the Centers for Medicare & Medicaid Services (CMS) levies billions of dollars in overpayments a year against healthcare providers, based on the use of extrapolation audits. And while the use of extrapolation is well-established and well-accepted, its use in an audit is not an automatic, and depends upon the creation of a statistically valid and representative sample. Thousands of extrapolation audits are completed each year, and for many of these, the targeted provider or organization will appeal the use of extrapolation. In most cases, the appeal is focused on one or more flaws in the methodology used to create the sample and calculate the extrapolated overpayment estimate. For government audits, such as with UPICs, there is a specific appeal process, as outlined in their Medical Learning Network booklet, titled “Medicare Parts A & B Appeals Process.”
On Aug. 20, 2020, the U.S. Department of Health and Human Services Office of Inspector General (HHS OIG) released a report titled “Medicare Contractors Were Not Consistent in How They Reviewed Extrapolated Overpayments in the Provider Appeals Process.” This report opens with the following statement: “although MACs (Medicare Administrative Contractors) and QICs (Qualified Independent Contractors) generally reviewed appealed extrapolated overpayments in a manner that conforms with existing CMS requirements, CMS did not always provide sufficient guidance and oversight to ensure that these reviews were performed in a consistent manner.” These inconsistencies were associated with $42 million in extrapolated payments from fiscal years 2017 and 2018 that were overturned in favor of the provider. It’s important to note that at this point, we are only talking about appeal determinations at the first and second level, known as redetermination and reconsideration, respectively.
Redetermination is the first level of appeal, and is adjudicated by the MAC. And while the staff that review the appeals at this level are supposed to have not been involved in the initial claim determination, I believe that most would agree that this step is mostly a rubber stamp of approval for the extrapolation results. In fact, of the hundreds of post-audit extrapolation mitigation cases in which I have been the statistical expert, not a single one was ever overturned at redetermination.
The second level of appeal, reconsideration, is handled by a QIC. In theory, the QIC is supposed to independently review the administrative record, including the appeal results of redetermination. Continuing from the prior paragraph, I have to date had only five extrapolation appeals reversed at reconsideration; however, all were because the auditor failed to provide the practice with the requisite data, not because of any specific issues with the statistical methodology. In two of those cases, the QIC notified the auditor that if it supplied the required information, the QIC would reconsider its decision. In two other cases, the auditor appealed the decision, and it was reversed again. Only the fifth case held without objection and was adjudicated in favor of the provider.
Maybe this is a good place to note that the entire process for conducting extrapolations in government audits is covered under Chapter 8 of the Medicare Program Integrity Manual (PIM). Altogether, there are only 12 pages within the entire Manual that actually deal with the statistical methodology behind sampling and extrapolation; this is certainly not enough to provide the degree of guidance required to ensure consistency among the different government contractors that perform such audits. And this is what the OIG report is talking about.
Back to the $42 million that was overturned at either redetermination or reconsideration: the OIG report found that this was due to a “type of simulation testing that was performed only by a subset of contractors.” The report goes on to say that “CMS did not intend that the contractors use this procedure, (so) these extrapolations should not have been overturned. Conversely, if CMS intended that contractors use this procedure, it is possible that other extrapolations should have been overturned but were not.” This was quite confusing for me at first, because this “simulation” testing was not well-defined, and also because it seemed to say that if this procedure was appropriate to use, then more contractors should have used it, which would have resulted in more reversals in favor of the provider.
Interestingly, CMS seems to have written itself an out in Chapter 8 of the PIM, which states that “[f]ailure by a contractor to follow one or more of the requirements contained herein does not necessarily affect the validity of the statistical sampling that was conducted or the projection of the overpayment.” The use of the term “does not necessarily” leaves wide open the fact that the failure by a contractor to follow one or more of the requirements may affect the validity of the statistical sample, which will affect the validity of the extrapolated overpayment estimate.
Regarding the simulation testing, the report stated that “one MAC performed this type of simulation testing for all extrapolation reviews, and two MACs recently changed their policies to include simulation testing for sample designs that are not well-supported by the program integrity contractor. In contrast, both QICs and three MACs did not perform simulation testing and had no plans to start using it in the future.” And even though it was referenced some 20 times, with the exception of an example given as Figure 2 on page 10, the report never did describe in any detail the type of simulation testing that went on. From the example, it was evident to me that the MACs and QICs involved were using what is known as a Monte Carlo simulation. In statistics, simulation is used to assess the performance of a method, typically when there is a lack of theoretical background. With simulations, the statistician knows and controls the truth. Simulation is used advantageously in a number of situations, including providing the empirical estimation of sampling distributions. Footnote 10 in the report stated that “reviewers used the specific simulation test referenced here to provide information about whether the lower limit for a given sampling design was likely to achieve the target confidence level.” If you are really interested in learning more about it, there is a great paper, “The design of simulation studies in medical statistics,” by Burton et al. (2006).
Its application in these types of audits is to “simulate” the audit many thousands of times to see whether the mean audit results fall within the expected confidence interval range, thereby validating the audit results under the Central Limit Theorem (CLT).
Often, the sample sizes used in recoupment-type audits are too small, and this is usually due to a conflict between the sample size calculations and the distributions of the data. For example, in RAT-STATS, the statistical program maintained by the OIG, and a favorite of government auditors, sample size estimates are based on an assumption that the data are normally (or near normally) distributed. A normal distribution is defined by the mean and the standard deviation, and includes a bunch of characteristics that make sample size calculations relatively straightforward. But the truth is, because most auditors use the paid amount as the variable of interest, population data are rarely, if ever, normally distributed. Unfortunately, there is simply not enough room or time to get into the details of distributions, but suffice it to say that, because paid data are bounded on the left with zero (meaning that payments are never less than zero), paid data sets are almost always right-skewed. This means that the distribution tail continues on to the right for a very long distance.
In these types of skewed situations, sample size normally has to be much larger in order to meet the CLT requirements. So, what one can do is simulate the random sample over and over again to see whether the sampling results ever end up reporting a normal distribution – and if not, it means that the results of that sample should not be used for extrapolation. And this seems to be what the OIG was talking about in this report. Basically, they said that some but not all of the appeals entities (MACs and QICs) did this type of simulation testing, and others did not. But for those that did perform the tests, the report stated that $41.5 million of the $42 million involved in the reversals of the extrapolations were due to the use of this simulation testing. The OIG seems to be saying this: if this was an unintended consequence, meaning that there wasn’t any guidance in place authorizing this type of testing, then it should not have been done, and those extrapolations should not have been overturned. But if it should have been done, meaning that there should have been some written guidance to authorize that type of testing, then it means that there are likely many other extrapolations that should have been reversed in favor of the provider. A sticky wicket, at best.
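A minimal sketch of this kind of simulation testing, assuming a synthetic right-skewed “paid amount” population and a one-sided 90-percent lower limit (the lognormal distribution, sample size, and trial count are all assumptions for illustration, not the contractors’ actual procedure):

```python
import random
import statistics

random.seed(42)

# Synthetic right-skewed "paid amount" population (lognormal-like),
# standing in for real claims data, which we do not have.
population = [random.lognormvariate(3.0, 1.2) for _ in range(10_000)]
true_mean = statistics.mean(population)

n, trials, t_crit = 30, 2_000, 1.311  # one-sided 90% t value, df = 29
covered = 0
for _ in range(trials):
    sample = random.sample(population, n)
    mean = statistics.mean(sample)
    sem = statistics.stdev(sample) / n ** 0.5
    lower = mean - t_crit * sem       # one-sided 90% lower limit
    if lower <= true_mean:            # did the limit actually cover the truth?
        covered += 1

coverage = covered / trials
print(f"Lower-limit coverage: {coverage:.1%} (target: 90%)")
# With heavily skewed data and n = 30, coverage can miss the target --
# the signal that the sample is too small to support extrapolation.
```

If the simulated coverage falls materially short of the stated confidence level, the sample design has not delivered the statistical guarantee the extrapolation claims, which appears to be the basis on which those reversals were granted.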
Under the heading “Opportunity To Improve Contractor Understanding of Policy Updates,” the report also stated that “the MACs and QICs have interpreted these requirements differently. The MAC that previously used simulation testing to identify the coverage of the lower limit stated that it planned to continue to use that approach. Two MACs that previously did not perform simulation testing indicated that they would start using such testing if they had concerns about a program integrity contractor’s sample design. Two other MACs, which did not use simulation testing, did not plan to change their review procedures.” One QIC indicated that it would defer to the administrative QIC (AdQIC, the central manager for all Medicare fee-for-service claim case files appealed to the QIC) regarding any changes. But it ended this paragraph by stating that “AdQIC did not plan to change the QIC Manual in response to the updated PIM.”
With respect to this issue and this issue alone, the OIG submitted two specific recommendations, as follows:
- Provide additional guidance to MACs and QICs to ensure reasonable consistency in procedures used to review extrapolated overpayments during the first two levels of the Medicare Parts A and B appeals process; and
- Take steps to identify and resolve discrepancies in the procedures that MACs and QICs use to review extrapolations during the appeals process.
In the end, I am not encouraged that we will see any degree of consistency between and within the QIC and MAC appeals in the near future.
Basically, it would appear that the OIG, while having some oversight in the area of recommendations, doesn’t really have any teeth when it comes to enforcing change. I expect that while some reviewers may respond appropriately to the use of simulation testing, most will not, if it means a reversal of the extrapolated findings. In these cases, it is incumbent upon the provider to ensure that these issues are brought up during the Administrative Law Judge (ALJ) appeal.
Programming Note: Listen to Frank Cohen report this story live during the next edition of Monitor Mondays, 10 a.m. Eastern.
As 2020 ends and we look forward to starting a new chapter in 2021, we offer you this little nugget of advice – a resolution that sounds deceptively easy – read your mail. Yes, friends, you heard it here first… the best thing you can do to protect yourself, your business, your patients, and your loved ones is to read the dang mail. Email, text messages, real mail, carrier pigeon, or messages in a bottle. READ THEM!
2020 brought us a lot of curveballs and unexpected events, but some of those events could have been avoided had mail been opened and read.
CMS and its third party contractors hold a lot of power in the healthcare world and can cause your practice to come crashing down by hitting send or putting a forever stamp on a letter. A regular practice of reading your mail can avoid that CMS avalanche of doom. 
You may be reading this and thinking, “You’ve got to be crazy; I always read my mail.” Or perhaps you are thinking, “This is the easiest new year’s resolution yet – all I have to do is read the mail.”
Don’t be too hasty with your self-confidence. This is a hard practice to establish and an even harder one to maintain.
First, you have to actually read the mail. All of the mail. Even the mail you think will contain bad news. Constitutional due process requires only notice, NOT successful notice. If successful notice were required, “then people could evade knowledge, and avoid responsibility for their conduct, by burning notices on receipt—or just leaving them unopened.” See Ho v. Donovan, 569 F.3d 677, 680 (7th Cir. 2009). “Conscious avoidance of information is a form of knowledge.” Id.
Second, you need a policy or procedure regarding the opening and reading of mail. One client we worked with did not have a system for logging mail once it was received in the office. Mail was lost. Deadlines were missed. Payments from the largest payer were suspended. The cost – too much to print.
It’s like that old Mastercard ad, yes, I’m talking to those of you out there who were around in the late 90s.
The cost of establishing a policy for logging in mail. . . zero.
The cost of reading mail. . . zero.
The cost of neglecting your mail, missing deadlines, and losing your practice. . . priceless.
So, as this year ends and you contemplate ways to improve your practice in 2021, please, please, please take our advice and READ YOUR MAIL.
It’s not just CMS that holds the mailbox power. Just ask the City of North Charleston, SC. A motorist’s emailed complaint to the city over injuries sustained in an accident was not forwarded to the insurance carrier, resulting in a multi-million-dollar default judgment against the city. See Campbell v. City of North Charleston, 431 S.C. 454, 459 (S.C. Ct. App. 2020) (holding that “the failure to forward an email did not amount to good cause shown for failure to timely file an answer”).
For those of you who have no idea what we are talking about, see https://www.aaaa.org/timeline-event/mastercard-mccann-erickson-campaign-never-got-old-priceless/
Ashley Thomson brings 20 years of extensive in-house, hospital counsel and law firm experience to our team. Well-versed in a variety of disciplines, her emphasis is in health care, insurance and compliance, specifically medical malpractice, employment, healthcare and privacy law compliance and defense, including matters involving HIPAA. Ashley has also been heavily involved in risk management, patient safety, corporate governance, contract and policy drafting, negotiations and healthcare management. Prior to joining Practus, Ashley served as Associate General Counsel for Truman Medical Center (TMC) where she oversaw litigation, managed all aspects of their corporate compliance matters, including governmental audits and investigations, cybersecurity issues, HIPAA enforcement, 340B compliance and provider-based billing. As their Staff Litigation Counsel, she defended and litigated medical malpractice and general liability matters on behalf of the hospital, its employees, physician group and residents. Prior to joining TMC, Ashley was an Associate Attorney for Husch Blackwell.
Ashley is an outdoors woman at heart. When she’s not working, she’s hiking, walking, working in her yard, or playing with her kids. She’s also an avid reader and a football fan especially when she’s watching her favorite team, the Kansas City Chiefs!
Before the informative article below, I have two announcements!
(1) My blog has been “in publication” for over eight (8) years, this September 2020. Yay! I truly hope that my articles have been educational for the thousands of readers of my blog. Thank you to everyone who follows my blog. And…
(2) Click here for my new bio and contact information.
Ok – Back to the informative news about the most recent Executive Orders…
My co-panelist on RACMonitor, Matthew Albright, gave a fascinating and informative summary of the recent flurry of Executive Orders – and, he says, we should expect many more to come in the near future. He presented the following article on RACMonitor’s Monitor Mondays on August 10, 2020. I found his article important enough to share on my blog. Enjoy!!
By Matthew Albright
Original story posted on: August 12, 2020
Presidential Executive Order No. 1 was issued on Oct. 20, 1862 by President Lincoln; it established a wartime court in Louisiana. The most famous executive order was also issued by Lincoln a few years later – the Emancipation Proclamation.
Executive orders are derived from the Constitution, which gives the president the authority to determine how to carry out the laws passed by Congress. The trick here is that executive orders can’t make new laws; they can only establish new – and perhaps creative – approaches to implementing existing laws.
President Trump has signed 18 executive orders and presidential memorandums in the past seven days. That sample of orders and memos are a good illustration of the authority – and the constraints – of presidential powers.
An executive order and a presidential memorandum are basically the same thing; the difference is that a memorandum doesn’t have to cite the specific law passed by Congress that the president is implementing, and a memorandum isn’t published in the Federal Register. In other words, an executive order says “this is what the President is going to do,” and a memorandum says “the President is going to do this too, but it shouldn’t be taken as seriously.”
Executive orders and memorandums often give instructions to federal agencies on what elements of a broader law they should focus on. One good example of this is the executive order signed a week ago by President Trump that provides new support and access to healthcare for rural communities. In that executive order, the President cited the Patient Protection and Affordable Care Act as the broad law he was using to improve access to rural communities.
Executive orders also often illustrate the limits of presidential authority, a good example being the series of executive orders and memorandums that the president signed this past Saturday, intended to provide Americans financial relief during the pandemic.
One of the memorandums signed on Saturday delayed the due date for employers to submit payroll taxes. The idea was that companies would in turn decide to stop taking those taxes out of employees’ paychecks, at least until December.
By looking at the language in the memorandum and seeing what it does not try to do, we can learn a lot about presidential limits.
The memorandum does not give employers or employees a tax break. That power rests unquestionably with Congress. The order only delays when the taxes will be collected. Like the grim reaper, the tax man will come to your door someday, even if you can delay when that “someday” is.
Also, the tax delay is only for employers, and – again, another illustration of the limits of presidential power – it doesn’t tell employers how they should manage this extra time they have to pay the tax. That is, companies could decide to continue to take taxes out of people’s paychecks, knowing that the taxes will still have to be paid someday.
Another memorandum that the president signed on Saturday concerned unemployment benefits. That order illustrates the division in powers between the federal Executive Branch and the authority of the states.
The memorandum provides an extra $400 in unemployment benefits, but in order for it to work, the states would have to put up one-fourth of the money. The memorandum doesn’t require states to put up the money; it “calls on” them to do it, because the President, unless authorized by Congress, can’t make states pay for something they don’t want.
Executive orders and memorandums are reflective of my current position as the father of two pre-teen girls. I can declare the direction the household should go, I can “call on them” to play less Fortnite and eat more fruit, but my orders and their subsequent implementation often just serve to illustrate the limits – both perceived and real –of my paternal power.
Programming Note: Matthew Albright is a permanent panelist on Monitor Mondays (with me:) ). Listen to his legislative update sponsored by Zelis, Mondays at 10 a.m. EST.
We have had parity laws between mental and physical health care services on the books for years. Regardless of the black letter law, mental health care services have been treated with stigma and embarrassment, and as of lesser importance than physical health care services. A broken leg is easily proven by an X-ray; a broken mind is less obvious.
In an unprecedented Decision rife with scathing remarks about Optum/United Behavioral Health’s (UBH) actions, a Court recently ruled that UBH improperly denied mental health services to insureds and that those improper denials were financially driven. A slap on the wrist, this Decision was not. More of a public whipping.
In a 106-page opinion, the U.S. District Court for the Northern District of California slammed UBH in a blistering decision, finding that UBH purposely and improperly denied behavioral health care benefits to thousands of mentally ill insureds by utilizing overly restrictive guidelines. This is a HUGE win for the mental health community, which often does not receive the parity with physical health services to which it is legally entitled. U.S. Chief Magistrate Judge Joseph Spero spared no political correctness in his mordacious written opinion, which is a rarity in today’s vitriolic world.
The Plaintiffs filed a lawsuit under the Employee Retirement Income Security Act of 1974 (ERISA), alleging that the insurer denied benefits in violation of the terms of their insurance plans and state law. The Plaintiffs were participants in UBH health care plans who were denied mental health care services.
Judge Spero found that United Behavioral’s guidelines were influenced by financial incentives concerning fully insured and self-funded ERISA plans:
“While the incentives related to fully insured and self-funded plans are not identical, with respect to both types of plan UBH has a financial interest in keeping benefit expense down … [A]ny resulting shortcomings in its Guideline development process taints its decision-making as to both categories of plan because UBH maintains a uniform set of Guidelines for fully insured and self-funded plans … Instead of insulating its Guideline developers from these financial pressures, UBH has placed representatives of its Finance and Affordability Departments in key roles in the Guidelines development process throughout the class period.”
Surprisingly, this decision came out of California, which is notoriously socially driven. Attorneys generally avert their eyes when opinions come from the Ninth Circuit.
Judge Spero found that UBH violated “generally accepted standards of care” in administering requests for benefits.
The Court found that “many mental health and substance use disorders are long-term and chronic.” It also found that, in questionable instances, the insurance company should err on the side of caution by placing the patient in a higher level of care. The Court basically cited the old adage – “Better safe than sorry” – which seems a pretty darn good idea when you are talking about mental health. Just ask Ted Bundy.
Even though the Wit Decision involved private pay insurance, the Court repeatedly cited the Centers for Medicare & Medicaid Services’ (CMS) Manual. For example, the Court stated that “the CMS Manual explains, [f]or many . . . psychiatric patients, particularly those with long-term, chronic conditions, control of symptoms and maintenance of a functional level to avoid further deterioration or hospitalization is an acceptable expectation of improvement.” It also quoted the ASAM criteria as generally accepted standards, as well as LOCUS, which tells me that the law interprets the CMS Manual, ASAM criteria, and LOCUS as “generally accepted standards” – and not UBH’s, or any other private pay insurer’s, arbitrary standards. In fact, the Court stated that its decision was influenced by the fact that UBH adopted many portions of the CMS Manual but drafted the language more narrowly to ensure more denials of mental health benefits.
The Court emphasized the importance of ongoing care instead of acute care that ceases upon the end of the acute crisis. The denial of ongoing care was categorized as a financial decision. The Court found that UBH’s health care policy “drove members to lower levels of care even when treatment of the member’s overall and/or co-occurring conditions would have been more effective at the higher level of care.”
The Wit decision will impact us in many ways. For one, if a State Medicaid program limits mental health services beyond what the CMS Manual, ASAM criteria, or LOCUS determine, then providers (and beneficiaries) have a strong legal argument that the State Medicaid criteria do not meet generally accepted standards. Even more importantly, if the State Medicaid policies do NOT limit mental health care services beyond what the CMS Manual, ASAM criteria, and LOCUS define, but an agent of the State Medicaid Division – e.g., a managed care organization (MCO) – denies mental health care services that would be considered appropriate under the generally accepted standards, then, again, both providers and beneficiaries would have strong legal arguments for overturning those denials.
I, for one, hope this is a slippery slope…in the right direction.