Blog Archives

Knicole Partners-Up with Nelson Mullins and Questions NC Partial Hospitalization!

I have an announcement! I have the pleasure of joining Nelson Mullins as a partner. You may have heard of Nelson Mullins; it is a nationwide firm, and its health care team is “spot on.” Instead of spinning my own wheels trying to figure out the health care law, I will now be able to collaborate with colleagues and like-minded health care geeks. Yes, I will be doing the same thing – Medicare and Medicaid provider appeals and fighting terminations, suspensions, and penalties for long-term care facilities, home health, DME, hospitals, dentists…basically anyone who receives an adverse decision from any state or the federal government or a contracted vendor, such as a RAC, MAC, TPE, or UPIC.

Now to my blog… Today I want to talk about partial hospitalization and billing to Medicare and Medicaid. One of my clients has not been getting paid for services rendered, which is always a problem. The third-party payor claims that substance abuse treatment is not partial hospitalization. Forty-nine States consider substance abuse intensive outpatient services (“SAIOP”) and substance abuse comprehensive outpatient treatment (“SACOT”) partial hospitalization. Do you agree? Because, apparently, NC is the sole State that refuses to identify SAIOP and SACOT as partial hospitalization.

Partial hospitalization is defined as a structured mental health treatment program that runs for several hours each day, three to five days per week. Clients participate in the scheduled treatment sessions during the day and return home at night. This program is a step down from 24-hour care in a psychiatric hospital setting (inpatient treatment). It can also be used to prevent the need for an inpatient hospital stay. In reality, partial hospitalization saves massive amounts of tax dollars by not taking up a bed in an actual hospital.

In NC, partial hospitalization is codified in 10A NCAC 27G.1101, which states “A partial hospitalization facility is a day/night facility which provides a broad range of intensive and therapeutic approaches which may include group, individual, occupational, activity and recreational therapies, training in community living and specific coping skills, and medical services as needed primarily for acutely mentally-ill individuals. This facility provides services to: (1) prevent hospitalization; or (2) to serve as an interim step for those leaving an inpatient hospital. This facility provides a medical component in a less restrictive setting than a hospital or a rehabilitation facility.”

So, why does this third-party payor believe that SAIOP and SACOT are not partial hospitalization? I believe the payor’s stance is wrong. I spoke about it on RACMonitor, and I hope that gives me some “sway.”

Partial hospitalization is considered a short-term treatment. It is supposed to last 2-3 weeks. However, as many of you know, substance abuse is not wiped away in 2-3 weeks. Overcoming substance abuse issues is a long-term process. States’ Medicaid programs will question why consumers bounce in and out of SAIOP and SACOT over and over. In fact, another one of my clients is being investigated by the Medicaid Investigative Division (“MID”) for having consumers in SAIOP and SACOT too long or too many times.

Substance abuse services are audited a lot. In fact, Medicare and Medicaid audits occur most often in behavioral health care, home health, and hospice. On January 24, 2023, the New York State Comptroller announced it found $22 million in alleged improper payments. I say alleged because, I would say, 90% of alleged overpayment accusations are inaccurate. The poor provider receives a letter saying it owes $12 million, and its heart drops. The provider imagines going out of business. Then it hires a lawyer, and it turns out that it owes $896.36. I give that as a real-life example. I actually had a client accused of owing $12 million, and after a 2-week trial, the judge decided that the company owed $896.36. A big difference, right? We appealed nonetheless. 🙂

Always Challenge the Extrapolation in Medicare Provider Audits!

Always challenge the extrapolation! It is my personal opinion that extrapolation is used too loosely. What I mean is that sample sizes are usually too small to constitute a valid representation of the provider’s claims. Say a provider bills 10,000 claims. Is a sample of 50 adequate?
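
To make that question concrete, below is a minimal, hypothetical sketch in Python. The population of 10,000 “paid amounts” is invented (a skewed lognormal distribution), so nothing here reflects a real audit; it simply shows how much wider the uncertainty around an extrapolated total is when only 50 claims are sampled versus larger samples.

```python
# Hypothetical illustration only: invented lognormal "overpayment per claim"
# amounts, not data from any actual provider or audit.
import numpy as np

rng = np.random.default_rng(seed=1)
population = rng.lognormal(mean=3.0, sigma=1.2, size=10_000)  # 10,000 claims
true_total = population.sum()

for n in (50, 200, 1_000):
    sample = rng.choice(population, size=n, replace=False)
    mean = sample.mean()
    se = sample.std(ddof=1) / np.sqrt(n)   # standard error of the sample mean
    half_width = 1.645 * se * 10_000       # ~90% confidence half-width, scaled to the population
    print(f"n={n:5d}  extrapolated total ≈ ${mean * 10_000:,.0f} "
          f"± ${half_width:,.0f}  (true total ≈ ${true_total:,.0f})")
```

The exact numbers do not matter; the point is that the uncertainty band around a 50-claim extrapolation is usually far wider than providers (and, frankly, some auditors) appreciate, which is exactly why the sample design deserves scrutiny.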

In a 2020 case, Palmetto audited .0051% of Palm Valley’s claims, and Palm Valley challenged CMS’ sample and extrapolation method. Palm Valley Health Care, Inc. v. Azar, No. 18-41067, 2020 BL 14097 (5th Cir., Jan. 15, 2020). As an aside, I had 2 back-to-back extrapolation cases recently. The providers, however, did not hire me until the ALJ level – the 3rd level of Medicare provider appeals. Unfortunately, no one argued that the extrapolation was faulty at the first 2 levels. We had 2 different ALJs, but both ruled that the providers could not raise new arguments, such as that the extrapolation was erroneous, at the 3rd level. They decided that all arguments should be raised from the beginning. This is just a reminder to: (a) raise all defenses immediately; and (b) not try the first two levels without an attorney.

Going back to Palm Valley.

The 5th Circuit held that while the statistical sampling methodology may not be the most precise methodology available, CMS’ selection methodology did represent a valid “complex balance of interests.” Principally, the court noted, quoting the Medicare Appeals Council, that CMS’ methodology was justified by the “real world constraints imposed by conflicting demands on limited public funds” and that Congress clearly envisioned extrapolation being applied to calculate overpayments in instances like this. I disagree with this result. I find it infuriating that auditors, like Palmetto, can scrutinize providers’ claims, yet circumvent similar accountability. They are being allowed to conduct a “hack” job at extrapolating to the financial detriment of the provider.

Interestingly, Palm Valley’s 5th Circuit decision was rendered in 2020. The dates of service of the claims Palmetto audited were July 2006 through January 2009. It just shows how long the legal battle can be in Medicare audits. Also, Palm Valley’s error rate was 53.7%. Remember, in 2019, CMS revised the extrapolation rules to allow extrapolation when the error rate is 50% or higher. If you want to read the extrapolation rules, you can find them in Chapter 8 of the Medicare Program Integrity Manual (“MPIM”).

On RACMonitor, health care attorney David Glaser mentioned that there is a difference between arguments and evidence. While you cannot admit new evidence at the ALJ level, you can make new arguments. He and I agreed, however, that even if you can legally dispute the extrapolation at that stage, a statistical expert report would not be allowed as new evidence, and those reports are important to submit.

Lastly, 42 CFR 405.1014(a)(3) requires the provider to assert the reasons the provider disagrees with the extrapolation in the request for ALJ hearing.

Licensure Penalties, Plans of Correction, and Summary Suspensions, Oh My!!

Most of you know that I also appear on RACMonitor every Monday morning at 10:00 a.m. Eastern. I present a 3-minute segment on RACMonitor, which is a national, syndicated podcast that focuses on RAC audits and the casualties they leave in their wakes. I am joined on that podcast by national Medicare and Medicaid experts, such as Dr. Ronald Hirsch; health care attorney David Glaser; Tiffany Ferguson, who speaks on the social determinants of health; and Matthew Albright, who presents on legislative matters. Other experts join in a rotating fashion, such as Mary Inman, a whistleblower attorney who resides in London, England, and Ed Roche, an attorney and statistical wizard who debunks extrapolations. The podcast is hosted by my friend and producer Chuck Buck, along with Clark Anthony, Chyann, and others.

But audits other than RAC, TPE, MAC, and ZPIC audits can yield similarly dire results. Licensure audits, for example, can cause monetary penalties, plans of correction, or even summary suspensions…OH MY!!! (A reference to The Wizard of Oz, obviously).

For hospitals and other health facilities, the licensure laws typically cover issues such as professional and non-professional staffing; physical plant requirements; required clinical services; administrative capabilities; and a vast array of other requirements. In most states, in addition to hospital licensure, full-service hospitals require other licenses and permits, such as laboratory permits, permits relating to hazardous wastes, food service permits, and transportation licenses for hospital-affiliated ambulances. Other residential healthcare facilities, such as nursing homes or behavioral health homes, are typically subject to similar requirements.

Penalties are brandished once audits ensue. Licensure audits do not carry the same financial incentives as RAC audits. In NC, the entity that conducts licensure audits is DHSR, the Division of Health Service Regulation. DHSR falls under the umbrella of DHHS, which is the single state entity charged with managing Medicaid. Every State has a DHHS, although it may be named something else. In New Mexico, the single state entity is called HSD, the Human Services Department. In CA, the single state entity is called DHCS, the Department of Health Care Services.

The entity in your State that conducts licensure audits will be under the umbrella of your State’s single State entity that manages Medicaid.

Penalties can be severe.

Summary suspensions occur in all 50 States. A summary suspension is an action in administrative law in which a judge suspends a provider’s license upon the receipt of allegations and prior to a full hearing on the matter. In general, the summary suspension is based on a finding that the suspension is necessary, given the allegations, to protect safety or public health. The summary suspension is a temporary, emergency ruling pending a full hearing on the allegations. For example, in Washington State, WAC 170-03-0300(1)(a) permits summary suspension of a child care license by the Department where “conditions in the licensed facility constitute an imminent danger to a child or children in care.”

Imminent dangers can be alleged in hospitals, nursing homes, or residential facilities. I say “alleged” because an allegation is all it takes for a summary suspension to be imposed. Allegations, unfortunately, must be defended.

Appeal! Appeal! Appeal! Be like Dorothy and get to the Wizard of Oz – no matter what, even if she has to defeat the Wicked Witch of the West!

Last year I had two residential facilities receive summary suspensions at the same time. What do you do if your facility receives a summary suspension?

PANIC.

Kidding. Do not panic. Contact your Medicaid attorney immediately.

Ultimately, we went to trial and defended these two facilities successfully.

Can Medicare/caid Auditors Double-Dip?

The issue today is whether health care auditors can double-dip. In other words, if a provider has two concurrent audits, can the audits overlap? Can two audits scrutinize one date of service (“DOS”) for the same consumer? It certainly doesn’t seem fair. Five years ago, CMS first compiled a list of services for the newly implemented RAC program to audit; it has been 5 years with the RAC program. What is it about the RAC program that stands out from the other auditor abbreviations?

We’re talking about Cotiviti and Performant Recovery; you know the players. The Recovery Audit Program’s mission is to reduce Medicare improper payments through the efficient detection and collection of overpayments, the identification of underpayments and the implementation of actions that will prevent future improper payments.

RACs review claims on a post-payment basis. The RACs detect and correct past improper payments so that CMS, Carriers, and MACs can implement actions that will prevent future improper payments.

RACs are also held to different regulations than the other audit abbreviations. 42 CFR Subpart F governs the Medicaid RACs, whereas the Medicare program is governed by 42 CFR Subchapter B.

The auditors themselves are usually certified coders or LPNs.

As most of you know, I present on RACMonitor every week with a distinguished panel of experts. Last week, a listener asked whether 2 separate auditors could audit the same record. Dr. Ronald Hirsch’s response was: yes, a CERT can audit a chart that another reviewer is auditing if it is part of a random sample. I agree with Dr. Hirsch. When a random sample is taken, the auditors, by definition, have no idea what claims will be pulled, nor would the CERT have any knowledge of other contemporaneous and overlapping audits. But what about multiple RAC audits? I do believe that the RACs should not overlap their own audits. Personally, I don’t like the idea of one claim being audited more than once. What if the two auditing companies make differing determinations? What if the CERT calls a claim compliant and the RAC denies the claim? The provider surely should not pay back a claim twice.

I believe Ed Roche presented on this issue a few weeks ago, and he called it double-dipping.

This doesn’t seem fair. What Dr. Hirsch did not address in his response to the listener was that, even if a CERT is allowed to double-dip via the rules or policies, there could be case law saying otherwise.

I did a quick search on Westlaw to see if there were any cases where the auditor was accused of double-dipping. It was not a comprehensive search by any means, but I did not see any cases where auditors were accused of double-dipping. I did see a few cases where hospitals were accused of double-dipping by collecting DSH payments to cover costs that had already been reimbursed, which seems like a topic for another day.

Increased Medicare Reimbursements and Nursing Home Audits

HEAR YE, HEAR YE: Medicare reimbursement rate increase!!

On April 27th, CMS proposed a rule to increase Medicare fee-for-service payment rates and policies for inpatient hospitals and long-term care hospitals for fiscal year (FY) 2022. The proposed rule would update Medicare payment policies and rates for operating and capital-related costs of acute care hospitals and certain other hospitals. The proposed increase in operating payment rates for general acute care hospitals paid under the IPPS that successfully participate in the Hospital Inpatient Quality Reporting (“IQR”) Program and are meaningful electronic health record (“EHR”) users is approximately 2.8%. This reflects the projected hospital market basket update of 2.5%, reduced by a 0.2 percentage point productivity adjustment and increased by a 0.5 percentage point adjustment required by legislation.

Secondly, a sample audit of nursing homes conducted by CMS will lead to more scrutiny of nursing homes and long-term care facilities. The sample audit showed that two-thirds of Massachusetts’s nursing homes that receive federal Medicaid and Medicare funding are lagging in required annual inspections — and MA is representative of the country.

237 nursing homes and long-term care facilities in the state, or 63.7% of the total, are behind on their federal health and safety inspections by at least 18 months. The national average is 51.3%.

We cannot blame COVID for everything. Those inspections lagged even before the pandemic, the data shows, but they ground to a halt last year when the federal agency discontinued in-person visits to nursing homes, which were closed off to the public to help prevent the spread of COVID-19.

Lastly, on April 29, 2021, CMS issued a final rule to extend and make changes to the Comprehensive Care for Joint Replacement (“CJR”) model. You’ve probably heard Dr. Ron Hirsch reporting on the joint replacement model on RACMonitor. The CJR model aims to pay providers based on total episodes of care for hip and knee replacements to curb costs and improve quality. Hospitals in the model that meet spending and quality thresholds can get an additional Medicare payment. But hospitals that don’t meet targets must repay Medicare for a portion of their spending.

This final rule revises the episode definition, payment methodology, and makes other modifications to the model to adapt the CJR model to changes in practice and fee-for-service payment occurring over the past several years. The changes in practice and payment are expected to limit or reverse early evaluation results demonstrating the CJR model’s ability to achieve savings while sustaining quality. This rule provides the time needed to test modifications to the model by extending the CJR model for an additional three performance years through December 31, 2024 for certain participant hospitals.

The CJR model, which began in 2016, has proven successful, according to CMS. Hospitals in the model had a “statistically significant decrease” in average payments for all hip and knee replacements relative to a control group, saving $61.6 million (a savings of 2% of the baseline).

A Study of Contractor Consistency in Reviewing Extrapolated Overpayments

By Frank Cohen, MPA, MBB – my colleague from RACMonitor. He wrote a great article and has permitted me to share it with you. See below.

CMS levies billions of dollars in overpayments a year against healthcare providers, based on the use of extrapolation audits.

The use of extrapolation in Medicare and private payer audits has been around for quite some time now. And lest you be of the opinion that extrapolation is not appropriate for claims-based audits, there are many, many court cases that have supported its use, both specifically and in general. Arguing that extrapolation should not have been used in a given audit, unless that argument is supported by specific statistical challenges, is mostly a waste of time. 

For background purposes, extrapolation, as it is used in statistics, is a “statistical technique aimed at inferring the unknown from the known. It attempts to predict future data by relying on historical data, such as estimating the size of a population a few years in the future on the basis of the current population size and its rate of growth,” according to a definition created by Eurostat, a component of the European Union. For our purposes, extrapolation is used to estimate what the actual overpayment amount might likely be for a population of claims, based on auditing a smaller sample of that population. For example, say a Uniform Program Integrity Contractor (UPIC) pulls 30 claims from a medical practice from a population of 10,000 claims. The audit finds that 10 of those claims had some type of coding error, resulting in an overpayment of $500. To extrapolate this to the entire population of claims, one might take the average overpayment, which is the $500 divided by the 30 claims ($16.67 per claim) and multiply this by the total number of claims in the population. In this case, we would multiply the $16.67 per claim by 10,000 for an extrapolated overpayment estimate of $166,667. 
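
For readers who like to see the arithmetic laid out, the same hypothetical UPIC calculation can be written in a few lines of Python, using only the numbers from the paragraph above (illustration only, not any real audit):

```python
# Frank Cohen's hypothetical example: 30 sampled claims, $500 in overpayments
# found in the sample, 10,000 claims in the population.
sample_size = 30
sample_overpayment = 500.00
population_size = 10_000

avg_per_claim = sample_overpayment / sample_size   # ≈ $16.67
extrapolated = avg_per_claim * population_size     # ≈ $166,667

print(f"${avg_per_claim:.2f} per claim  ->  ${extrapolated:,.0f} extrapolated")
```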

The big question that normally crops up around extrapolation is this: how accurate are the estimates? And the answer is (wait for it …), it depends. It depends on just how well the sample was created, meaning: was the sample size appropriate, were the units pulled properly from the population, was the sample truly random, and was it representative of the population? The last point is particularly important, because if the sample is not representative of the population (in other words, if the sample data does not look like the population data), then it is likely that the extrapolated estimate will be anything but accurate.

To account for this issue, referred to as “sample error,” statisticians will calculate something called a confidence interval (CI), which is a range within which there is some acceptable amount of error. The higher the confidence value, the larger the potential range of error. For example, in the hypothetical audit outlined above, maybe the real average for a 90-percent confidence interval is somewhere between $15 and $18, while, for a 95-percent confidence interval, the true average is somewhere between $14 and $19. And if we were to calculate for a 99-percent confidence interval, the range might be somewhere between $12 and $21. So, the greater the range, the more confident I feel about my average estimate. Some express the confidence interval as a sense of true confidence, like “I am 90 percent confident the real average is somewhere between $15 and $18,” and while this is not necessarily wrong, per se, it does not communicate the real value of the CI. I have found that the best way to define it would be more like “if I were to pull 100 random samples of 30 claims and audit all of them, 90 percent would have a true average of somewhere between $15 and $18,” meaning that the true average for some 1 out of 10 would fall outside of that range – either below the lower boundary or above the upper boundary. The main reason that auditors use this technique is to avoid challenges based on sample error.
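
A short simulation makes the “repeated samples” reading of a confidence interval tangible. This is a sketch under invented assumptions (a lognormal population of paid amounts, 30-claim samples, a two-sided 90-percent t-interval); it is not a reconstruction of any contractor’s methodology:

```python
# Draw many 30-claim samples from a hypothetical population and count how
# often each sample's 90% t-interval covers the true population mean.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=7)
population = rng.lognormal(mean=2.5, sigma=1.0, size=10_000)  # invented paid amounts
true_mean = population.mean()

trials, covered = 1_000, 0
for _ in range(trials):
    sample = rng.choice(population, size=30, replace=False)
    lo, hi = stats.t.interval(0.90, df=29, loc=sample.mean(), scale=stats.sem(sample))
    covered += (lo <= true_mean <= hi)

print(f"{covered / trials:.1%} of the 90% intervals covered the true mean")
```

With data this skewed and only 30 claims per sample, the printed coverage will often come in somewhat below 90 percent, which previews the sample-size problem discussed further below.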

To the crux of the issue, the Centers for Medicare & Medicaid Services (CMS) levies billions of dollars in overpayments a year against healthcare providers, based on the use of extrapolation audits. And while the use of extrapolation is well-established and well-accepted, its use in an audit is not an automatic, and depends upon the creation of a statistically valid and representative sample. Thousands of extrapolation audits are completed each year, and for many of these, the targeted provider or organization will appeal the use of extrapolation. In most cases, the appeal is focused on one or more flaws in the methodology used to create the sample and calculate the extrapolated overpayment estimate. For government audits, such as with UPICs, there is a specific appeal process, as outlined in their Medical Learning Network booklet, titled “Medicare Parts A & B Appeals Process.”

On Aug. 20, 2020, the U.S. Department of Health and Human Services Office of Inspector General (HHS OIG) released a report titled “Medicare Contractors Were Not Consistent in How They Reviewed Extrapolated Overpayments in the Provider Appeals Process.” This report opens with the following statement: “although MACs (Medicare Administrative Contractors) and QICs (Qualified Independent Contractors) generally reviewed appealed extrapolated overpayments in a manner that conforms with existing CMS requirements, CMS did not always provide sufficient guidance and oversight to ensure that these reviews were performed in a consistent manner.” These inconsistencies were associated with $42 million in extrapolated payments from fiscal years 2017 and 2018 that were overturned in favor of the provider. It’s important to note that at this point, we are only talking about appeal determinations at the first and second level, known as redetermination and reconsideration, respectively.

Redetermination is the first level of appeal, and is adjudicated by the MAC. And while the staff that review the appeals at this level are supposed to have not been involved in the initial claim determination, I believe that most would agree that this step is mostly a rubber stamp of approval for the extrapolation results. In fact, of the hundreds of post-audit extrapolation mitigation cases in which I have been the statistical expert, not a single one was ever overturned at redetermination.

The second level of appeal, reconsideration, is handled by a QIC. In theory, the QIC is supposed to independently review the administrative records, including the appeal results of redetermination. Continuing with the prior paragraph, I have to date had only several extrapolation appeals reversed at reconsideration; however, all were due to the fact that the auditor failed to provide the practice with the requisite data, and not due to any specific issues with the statistical methodology. In two of those cases, the QIC notified the auditor that if they were to get the required information to them, they would reconsider their decision. And in two other cases, the auditor appealed the decision, and it was reversed again. Only the fifth case held without objection and was adjudicated in favor of the provider.

Maybe this is a good place to note that the entire process for conducting extrapolations in government audits is covered under Chapter 8 of the Medicare Program Integrity Manual (PIM). Altogether, there are only 12 pages within the entire Manual that actually deal with the statistical methodology behind sampling and extrapolation; this is certainly not enough to provide the degree of guidance required to ensure consistency among the different government contractors that perform such audits. And this is what the OIG report is talking about.

Back to the $42 million that was overturned at either redetermination or reconsideration: the OIG report found that this was due to a “type of simulation testing that was performed only by a subset of contractors.” The report goes on to say that “CMS did not intend that the contractors use this procedure, (so) these extrapolations should not have been overturned. Conversely, if CMS intended that contractors use this procedure, it is possible that other extrapolations should have been overturned but were not.” This was quite confusing for me at first, because this “simulation” testing was not well-defined, and also because it seemed to say that if this procedure was appropriate to use, then more contractors should have used it, which would have resulted in more reversals in favor of the provider.   

Interestingly, CMS seems to have written itself an out in Chapter 8, section 8.4.1.1 of the PIM, which states that “[f]ailure by a contractor to follow one or more of the requirements contained herein does not necessarily affect the validity of the statistical sampling that was conducted or the projection of the overpayment.” The use of the term “does not necessarily” leaves wide open the fact that the failure by a contractor to follow one or more of the requirements may affect the validity of the statistical sample, which will affect the validity of the extrapolated overpayment estimate. 

Regarding the simulation testing, the report stated that “one MAC performed this type of simulation testing for all extrapolation reviews, and two MACs recently changed their policies to include simulation testing for sample designs that are not well-supported by the program integrity contractor. In contrast, both QICs and three MACs did not perform simulation testing and had no plans to start using it in the future.” And even though it was referenced some 20 times, with the exception of an example given as Figure 2 on page 10, the report never did describe in any detail the type of simulation testing that went on. From the example, it was evident to me that the MACs and QICs involved were using what is known as a Monte Carlo simulation. In statistics, simulation is used to assess the performance of a method, typically when there is a lack of theoretical background. With simulations, the statistician knows and controls the truth. Simulation is used advantageously in a number of situations, including providing the empirical estimation of sampling distributions. Footnote 10 in the report stated that ”reviewers used the specific simulation test referenced here to provide information about whether the lower limit for a given sampling design was likely to achieve the target confidence level.” If you are really interested in learning more about it, there is a great paper called
“The design of simulation studies in medical statistics” by Burton et al. (2006). 

Its application in these types of audits is to “simulate” the audit many thousands of times to see if the mean audit results fall within the expected confidence interval range, thereby validating the audit results within what is known as the Central Limit Theorem (CLT).

Often, the sample sizes used in recoupment-type audits are too small, and this is usually due to a conflict between the sample size calculations and the distributions of the data. For example, in RAT-STATS, the statistical program maintained by the OIG, and a favorite of government auditors, sample size estimates are based on an assumption that the data are normally (or near normally) distributed. A normal distribution is defined by the mean and the standard deviation, and includes a bunch of characteristics that make sample size calculations relatively straightforward. But the truth is, because most auditors use the paid amount as the variable of interest, population data are rarely, if ever, normally distributed. Unfortunately, there is simply not enough room or time to get into the details of distributions, but suffice it to say that, because paid data are bounded on the left with zero (meaning that payments are never less than zero), paid data sets are almost always right-skewed. This means that the distribution tail continues on to the right for a very long distance.  

In these types of skewed situations, sample size normally has to be much larger in order to meet the CLT requirements. So, what one can do is simulate the random sample over and over again to see whether the sampling results ever end up reporting a normal distribution – and if not, it means that the results of that sample should not be used for extrapolation. And this seems to be what the OIG was talking about in this report. Basically, they said that some but not all of the appeals entities (MACs and QICs) did this type of simulation testing, and others did not. But for those that did perform the tests, the report stated that $41.5 million of the $42 million involved in the reversals of the extrapolations were due to the use of this simulation testing. The OIG seems to be saying this: if this was an unintended consequence, meaning that there wasn’t any guidance in place authorizing this type of testing, then it should not have been done, and those extrapolations should not have been overturned. But if it should have been done, meaning that there should have been some written guidance to authorize that type of testing, then it means that there are likely many other extrapolations that should have been reversed in favor of the provider. A sticky wicket, at best.
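
Below is a minimal sketch of the kind of simulation testing being described, under invented assumptions (a right-skewed lognormal population of paid amounts and a 30-claim design). It is not the RAT-STATS procedure or any contractor’s actual protocol, just an illustration of the idea:

```python
# Re-draw the sample design many times and check (a) whether the simulated
# sample means look approximately normal and (b) how often the one-sided 90%
# lower limit falls at or below the true mean. All numbers are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=42)
population = rng.lognormal(mean=4.0, sigma=1.5, size=50_000)  # right-skewed paid data
true_mean = population.mean()
n, trials = 30, 10_000

means, lower_ok = [], 0
for _ in range(trials):
    s = rng.choice(population, size=n, replace=False)
    m, se = s.mean(), stats.sem(s)
    means.append(m)
    lower_ok += (m - stats.t.ppf(0.90, df=n - 1) * se) <= true_mean

print(f"skewness of simulated sample means: {stats.skew(means):.2f} (near 0 if the CLT has kicked in)")
print(f"lower limit at or below the true mean in {lower_ok / trials:.1%} of runs (target: 90%)")
```

If the simulated means are still visibly skewed and the lower limit misses its target confidence level, that is a strong sign the sample was too small for the distribution, which is precisely the kind of finding a provider would want a statistician to raise as early in the appeal as possible.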

Under the heading “Opportunity To Improve Contractor Understanding of Policy Updates,” the report also stated that “the MACs and QICs have interpreted these requirements differently. The MAC that previously used simulation testing to identify the coverage of the lower limit stated that it planned to continue to use that approach. Two MACs that previously did not perform simulation testing indicated that they would start using such testing if they had concerns about a program integrity contractor’s sample design. Two other MACs, which did not use simulation testing, did not plan to change their review procedures.” One QIC indicated that it would defer to the administrative QIC (AdQIC, the central manager for all Medicare fee-for-service claim case files appealed to the QIC) regarding any changes. But it ended this paragraph by stating that “AdQIC did not plan to change the QIC Manual in response to the updated PIM.”

With respect to this issue and this issue alone, the OIG submitted two specific recommendations, as follows:

  • Provide additional guidance to MACs and QICs to ensure reasonable consistency in procedures used to review extrapolated overpayments during the first two levels of the Medicare Parts A and B appeals process; and
  • Take steps to identify and resolve discrepancies in the procedures that MACs and QICs use to review extrapolations during the appeals process.

In the end, I am not encouraged that we will see any degree of consistency between and within the QIC and MAC appeals in the near future.

Basically, it would appear that the OIG, while having some oversight in the area of recommendations, doesn’t really have any teeth when it comes to enforcing change. I expect that while some reviewers may respond appropriately to the use of simulation testing, most will not, if it means a reversal of the extrapolated findings. In these cases, it is incumbent upon the provider to ensure that these issues are brought up during the Administrative Law Judge (ALJ) appeal.

Programming Note: Listen to Frank Cohen report this story live during the next edition of Monitor Mondays, 10 a.m. Eastern.

KNICOLE EMANUEL TO HOST JANUARY WEBCAST ON PRFS AND RAC AUDITS

For healthcare providers looking to avoid any of the traps stemming from PRF (Provider Relief Funds) compliance, RACmonitor is inviting you to sign up for Knicole Emanuel’s upcoming webcast on January 21st, 2021. It is titled: COVID-19 Provider Relief Funds: How to Avoid Audits. You can visit RACmonitor to download the order form for the webcast and save yourself a spot.

Webcast Description: 

If your facility accepted Provider Relief Funds (PRFs) as a consequence of the coronavirus pandemic, you need to be aware of the myriad of rules and regulations that are associated with this funding or else face penalties and takebacks. A word of caution: expect to be audited. In Medicare and Medicaid, regulatory audits are as certain as death and taxes. That is why your facility needs to arm itself with the knowledge of how to address documentation requests from the government, especially while the Public Health Emergency (PHE) is in effect.

This exclusive RACmonitor webcast, led by healthcare attorney Knicole Emanuel, discusses the PRF rules that providers must follow and how to prove that funds were appropriately used. There are strict regulations dictating why, how, and how much PRFs can be spent, due to the catastrophic financial impact of COVID-19. Register now to learn how to avoid penalties and takebacks related to PRFs.

Learning Objectives:

  • Rules and regulations relative to receiving and spending funds provided by the COVID-19 PRF
  • Exceptions to COVID-19 PRF and relevant effective dates
  • PRF documentation and reporting requirements
  • The importance of the legal dates of PHE
  • How to prove your facility’s use of funds is germane to COVID-19

Who Should Attend:

  • CFOs
  • RAC and appeals specialists
  • RAC coordinators
  • Compliance officers
  • Directors and managers

About Knicole C. Emanuel, Esq.

Healthcare industry expert and Practus partner Knicole Emanuel is a regular contributor to the healthcare industry podcast Monitor Mondays, by RACmonitor. For more than 20 years, Knicole Emanuel has maintained a health care litigation practice, concentrating on Medicare and Medicaid litigation, health care regulatory compliance, administrative law, and regulatory law. Knicole has tried over 2,000 administrative cases in over 30 states and has appeared before multiple states’ medical boards.

She has successfully obtained federal injunctions in numerous states, allowing health care providers to remain in business despite state or federal allegations of health care fraud, aberrant billings, and data mining. A wealth of knowledge in her industry, Knicole frequently lectures across the country on health care law, including the impact of the Affordable Care Act and regulatory compliance for providers, including physicians, home health and hospice, dentists, chiropractors, hospitals, and durable medical equipment providers.

Executive Orders and Presidential Memorandums: A Civics Lesson

Before the informative article below, I have two announcements!

(1) My blog has been “in publication” for over eight (8) years as of this September 2020. Yay! I truly hope that my articles have been educational for the thousands of readers of my blog. Thank you to everyone who follows my blog. And…

(2) Knicole Emanuel and her legal team have moved law firms!!! We are now at PractUS, LLP. See the video interview of John Lively, who started my new law firm: here. It’s a pretty cool concept.

Click here for my new bio and contact information.

Ok – Back to the informative news about the most recent Executive Orders…

My co-panelist on RACMonitor, Matthew Albright, gave a fascinating and informative summary of the recent flurry of Executive Orders, and, he says, we should expect many more to come in the near future. He presented the following article on RACMonitor’s Monitor Mondays on August 10, 2020. I found his article important enough to be shared on my blog. Enjoy!!

By Matthew Albright
Original story posted on: August 12, 2020

Presidential Executive Order No. 1 was issued on Oct. 20, 1862 by President Lincoln; it established a wartime court in Louisiana. The most famous executive order was also issued by Lincoln a few years later – the Emancipation Proclamation.

Executive orders are derived from the Constitution, which gives the president the authority to determine how to carry out the laws passed by Congress. The trick here is that executive orders can’t make new laws; they can only establish new – and perhaps creative – approaches to implementing existing laws.

President Trump has signed 18 executive orders and presidential memorandums in the past seven days. That sample of orders and memos is a good illustration of the authority – and the constraints – of presidential powers.

An executive order and a presidential memorandum are basically the same thing; the difference is that a memorandum doesn’t have to cite the specific law passed by Congress that the president is implementing, and a memorandum isn’t published in the Federal Register. In other words, an executive order says “this is what the President is going to do,” and a memorandum says “the President is going to do this too, but it shouldn’t be taken as seriously.”  

Executive orders and memorandums often give instructions to federal agencies on what elements of a broader law they should focus on. One good example of this is the executive order signed a week ago by President Trump that provides new support and access to healthcare for rural communities. In that executive order, the President cited the Patient Protection and Affordable Care Act as the broad law he was using to improve access to rural communities.

Executive orders also often illustrate the limits of presidential authority, a good example being the series of executive orders and memorandums that the president signed this past Saturday, intended to provide Americans financial relief during the pandemic.

One of the memorandums signed on Saturday delayed the due date for employers to submit payroll taxes. The idea was that companies would in turn decide to stop taking those taxes out of employees’ paychecks, at least until December.  

By looking at the language in the memorandum and seeing what it does not try to do, we can learn a lot about presidential limits.

The memorandum does not give employers or employees a tax break. That power rests unquestionably with Congress. The order only delays when the taxes will be collected. Like the grim reaper, the tax man will come to your door someday, even if you can delay when that “someday” is.  

Also, the tax delay is only for employers, and – again, another illustration of the limits of presidential power – it doesn’t tell employers how they should manage this extra time they have to pay the tax. That is, companies could decide to continue to take taxes out of people’s paychecks, knowing that the taxes will still have to be paid someday.

Another memorandum that the president signed on Saturday concerned unemployment benefits. That order illustrates the division in powers between the federal Executive Branch and the authority of the states.

The memorandum provides an extra $400 in unemployment benefits, but in order for it to work, the states would have to put up one-fourth of the money. The memorandum doesn’t require states to put up the money; it “calls on” them to do it, because the President, unless authorized by Congress, can’t make states pay for something they don’t want.

Executive orders and memorandums are reflective of my current position as the father of two pre-teen girls. I can declare the direction the household should go, I can “call on them” to play less Fortnite and eat more fruit, but my orders and their subsequent implementation often just serve to illustrate the limits – both perceived and real – of my paternal power.

Programming Note: Matthew Albright is a permanent panelist on Monitor Mondays (with me:) ). Listen to his legislative update sponsored by Zelis, Mondays at 10 a.m. EST.

A Court Case in the Time of COVID: The Judge Forgot to Swear in the Witnesses

Since COVID-19, courts across the country have been closed. Judges have been relaxing at home.

As an attorney, I have not been able to relax. No sunbathing for me. Work has increased since COVID-19 (me being a healthcare attorney). I never thought of myself as an essential worker. I still don’t think that I am essential.

On Friday, May 8, my legal team had to appear in court.

“How in the world are we going to do this?” I thought.

My law partner lives in Philadelphia. Our client lives in Charlotte, N.C. I live on a horse farm in Apex, N.C. Who knows where the judge lives, or opposing counsel or their witnesses? How were we going to question a witness? Or exchange documents?

Despite COVID-19, we had to have court, so I needed to buck up, stop whining, and figure it out. “Pull up your bootstraps, girl,” I thought.

First, we practiced on Microsoft Teams. Multiple times. It is not a user-friendly interface. The Microsoft Teams app was the judge’s choice, not mine. I had never heard of it. It turns out that it does have some cool features. For example, my paralegal had 100-percent control of the documents. If we needed a document up on the screen, he made it pop up, at my direction. If I wanted “control” of the document, I simply placed my mouse cursor over it. But then my paralegal did not have control. In other words, two people cannot fight over a document on this new “TV Court.”

The judge forgot to swear in the witnesses. That was the first mess-up “on the record.” I didn’t want to call her out in front of people, so I went with it. She remembered later and did swear everyone in. These are new times.

Then we had to discuss HIPAA, because this was a health care provider asking for immediate relief because of COVID-19. We were sharing protected health information (PHI) over all of our computers and in space. We asked the judge to seal the record before we even got started. All of a sudden, our court case made us all “essentials.” Besides my client, the healthcare provider, no one else involved in this court case was an “essential.” We were all on the computer trying to get this provider back to work during COVID-19. That is what made us essentials!

Interestingly, we had 10 people participating on the Microsoft Team “TV Court” case. The person that I kept forgetting was there was Mr. Carr (because Mr. Carr works at the courthouse and I have never seen him). Also, another woman stepped in for a while, so even though the “name” of the masked attendee was Mr. Carr, for a while Patricia was in charge. A.K.A. Mr. Carr.

You cannot see all 10 people on the Teams app. We discovered that whoever spoke, their face would pop up on the screen. I could only see three people at a time on the screen. Automatically, the app chose the three people to be visible based on who had spoken most recently. We were able to hold this hearing because of the mysterious Mr. Carr.

The witnesses stayed on the application the whole time. In real life, witnesses listen to others’ testimony all the time, but with this, you had to remember that everyone could hear everything. You can elect not to video-record yourself and to mute yourself. When I asked my client to step away and have a private conversation, my paralegal, my partner, and the client would log off the link and log back on to an 8 a.m. link that we had used to practice earlier that day. That was our private chat room.

The judge wore no robe. She looked like she was sitting on the back porch of her house. Birds were whistling in the background. It was a pretty day, and there was a bright blue sky…wherever she was. No one wore suits except for me. I wore a nice suit. I wore no shoes, but a nice suit. Everyone else wore jeans and a shirt.

I didn’t have to drive to the courthouse and find parking. I didn’t even have to wear high heels and walk around in them all day. I didn’t have to tell my paralegal to carry all 1,500 pages of exhibits to the courthouse, or bring him Advil for when he complains that his job is making his back ache.

Whenever I wanted to get a refill of sweet tea or go to the bathroom, I did so quietly. I turned off my video and muted myself and carried my laptop to the bathroom. Although, now, I completely understand why the Supreme Court had its “Supreme Flush.”

All in all, it went as smoothly as one could hope in such an awkward platform.

Oh, and happily, we won the injunction, and now a home healthcare provider can go back to work during COVID-19. All of her aides have PPE. All of her aides want to go to work to earn money. They are willing to take the risk. My client should get back-paid for all her services rendered prior to the injunction. She hadn’t been getting paid for months. However, this provider is still on prepayment review due to N.C. Gen. Stat. 108C-7(e), which legislators should really review. This statute does not work. Especially in the time of COVID. See blog.

I may be among the first civil attorneys to go to court in the time of COVID-19. If I’m honest, I kind of liked it better. I can go to the bathroom whenever I need to, as long as I turn off my audio. Interestingly, Monday, Texas began holding its first jury trial – virtually. I cannot wait to see that cluster! It is streaming live.

Being on RACMonitor for so long definitely helped me prepare for my first remote lawsuit. My next lawsuit will be in New York City, where adult day care centers are not getting properly reimbursed.

RACMonitor Programming Note:

Healthcare attorney Knicole Emanuel is a permanent panelist on Monitor Mondays, and you can hear her reporting every Monday, 10-10:30 a.m. EST.

How Coronavirus Has Affected Me as a Teenage Girl – by Madison Allen

RACMonitor published my daughter’s essay on living through the Coronavirus. Madison would like to share it here, on my blog, as well. She is a fifteen-year-old in North Carolina and attends high school at Thales Academy.

EDITOR’S NOTE: Coping with the COVID-19 pandemic has been difficult for just about everyone nationwide, but uniquely so for America’s young students, some of whom have been robbed of the opportunity to play their favorite spring sports, attend the junior or senior prom, or even enjoy a proper graduation ceremony. As such, we at RACMonitor have asked the children of several of our key contributors to pen essays describing their personal experiences amid these life-changing times.

My name is Madison Allen. I am a 15-year-old girl who loves spending her time outdoors or hanging out with her friends. If neither of those options are available, then I don’t really know what else to do to cure boredom.

I love technology, don’t get me wrong, but I would much rather be active and enjoy nature. I have been raised in a household that doesn’t tolerate being lazy, so sitting in my room and binge-watching Netflix all day is not an option. Despite the fact that I can’t have fun in the normal ways that I am used to, I have come up with three good ways that have kept me busy during this time. Before we get into that, I feel that it is necessary to talk about when COVID appeared in my life and my first impressions of the disease.

It was a very normal Saturday afternoon. I was out hanging with my friends Nicole and Ariana when their phones went off, saying that school had been cancelled for the next two weeks. I was so happy, because from there my dad told me that all schools were doing the same due to the growing concerns about coronavirus. I only had one week of the third quarter left anyway, so no schoolwork was going to be issued to do at home or virtually. About an hour after Governor Cooper announced that school was cancelled for two weeks, my school, Thales Academy, finally sent an email out to us that read, “due to the order from Governor Cooper as of 4:30 p.m. today, all Thales Academy locations will be closing on Monday, March 16th. On Tuesday, March 17th, school will be open from 8:00 a.m. to 3:00 p.m. for students to drop in and gather any items they need to from their lockers. Report cards will be issued to students on Tuesday, March 17th at noon. Students will return to campus for fourth quarter on April 13th.”

Me, being the child I am, thought that this was awesome, because I didn’t have to take my history test anymore. Yes, that was great, but I didn’t realize the harm the virus was doing to the world. I wasn’t thinking about others’ lives, because I never thought that something bad would happen to me. I was really selfish when I thought about not taking the history test, because I only thought of how I was benefitting while other people were, and still are, suffering.

Anyway, I went through spring break, and it all got worse. I wasn’t allowed to see any of my friends, and trips, special events, and even celebrations got cancelled. When spring break was over, we were told to do online school on a website called Canvas and were given certain times to log onto Zoom to talk with our teachers. I am fortunate enough to live in a house with ample space and Internet to do schoolwork. I am also fortunate that I go to such a great school that will do their best to provide great education, no matter the circumstance.

While I have been in quarantine, I have thought of three ways to cure boredom without help from a phone. The first way is that I have been taking up a new hobby called “cleaning my room.” I haven’t made very much progress with that, though. Another way I have cured boredom is by decorating a secret room in my house and making it the ideal hangout spot. Lastly, I have been going outside and taking up hobbies that I once loved, such as bow and arrow, knitting, hiking, horseback riding, and basketball.

I am now in the third week of online school, and won’t be stopping until the end of the year. Summer break is just five weeks away, and it doesn’t look like quarantine will be ending soon. I will do my best to see the good out of this troubling time, but for now I am taking life day by day.