The future of Medicare

Healthcare for the elderly in the United States is in an untenable financial situation. This is the result of a convergence of factors: an aging population with increasing life expectancy; the pharmaceutical and device industries devising new and very costly techniques and applying them to this population; and increasingly sophisticated patients demanding them. Together, these factors lead to healthcare expenses for the elderly that consume an unsustainable portion of the GDP.

Although this problem exists in other countries as well, it is particularly onerous in the US because of the malpractice industry, coupled with the notion that if something goes wrong, someone should pay. In other countries, there is a greater acceptance of the inevitability of things going wrong, a greater reliance on the paternalistic doctor-patient relationship and much lower malpractice rates and awards. This is not to say that paternalism and low malpractice awards are necessarily better or fairer or right, but it does explain the ability of the healthcare system in other countries to significantly limit outlays.

The future of Medicare, in my opinion, lies in some combination of the following approaches:

1. Conventional fee-for-service Medicare. This is what got Medicare into financial difficulty in the first place, and is unsustainable in its present form. However, the current trend, which will certainly accelerate, is towards drastic ratcheting down of Medicare payments to providers in order to keep costs manageable. This will continue and will accelerate the flight of doctors and other providers out of fee-for-service Medicare entirely. Provider flight would make fee reductions alone politically infeasible, were it not for the other alternatives open to patients. As a result, fee-for-service Medicare will become the refuge of a greatly reduced number of highly efficient, low-overhead practitioners as well as those willing to accept much lower incomes.

2. Medicare managed care. This involves Medicare paying a fixed sum to an insurance company, in return for total care of the patient. Insurance companies are highly skilled at gaming the system and figuring out how to enroll the healthiest patients, so Medicare will end up paying more for the care of healthy patients than it would have had they been in fee-for-service. Not a good solution.

3. The new Accountable Care Organization (ACO) approach. This is basically the same as Medicare managed care, but this time, the organization assuming responsibility for the patient’s care is not an insurance company but a healthcare organization (an ACO), presumably organized around a hospital system. Medicare thinks it can get a better deal out of the ACO’s than it could out of insurance companies. And hospitals and providers suffer from the delusion that they can provide quality care at low rates (something nobody else has been able to do). There is much enthusiasm for this approach right now, and everyone seems to be scrambling to get on the bandwagon. But there are numerous pitfalls. What happens when a patient wants or needs to get care for an expensive problem outside the ACO? When a patient ends up with a month-long ICU stay at another hospital? Who pays? If the original ACO has to pay, it will be ruinous. If Medicare foots the bill, there will be a mad rush to dump the sickest patients onto outside institutions. And again, why should an ACO be able to solve the problems that are inherent in the costly healthcare of the elderly?

4. Finally, there is the voucher approach, where Medicare gives patients vouchers to pay for insurance on the private market. Of course, insurance that provides real quality care for a large population of sick, elderly patients will be extremely expensive, not affordable without supplementing the vouchers substantially.

[As an aside, what about single payer, in all this? I used to think that single payer was desirable but not achievable in this country. Medicare for all seemed to me to be a laudable if illusory goal. But single payer doesn’t have the answer to the cost problem any more than Medicare does. Given the state of Medicare, I no longer think single payer is even particularly desirable.]

In my opinion, all four of the approaches listed above are likely to be adopted, to varying degrees: a poorly reimbursed fee-for-service system with relatively few providers, managed care and ACO’s, and a voucher system. But none of these approaches solves the underlying problem of the spiraling cost of medical care in an aging, litigious and sophisticated population.

And so, the end result will be characteristically American: you gets what you pays for. The affluent will pay for expensive insurance, supplementing the Medicare vouchers with their own cash. The rest will have to deal with sparse fee-for-service and cost-shifting managed care and ACO’s (assuming these don’t eventually all fold).

Michael Jacobson, MD
New York, December 21, 2011

The zoster vaccine

The lead article in yesterday’s NEJM, A vaccine to prevent herpes zoster and postherpetic neuralgia in older adults, presents the results of a VA Cooperative Study looking at the efficacy of a high potency, live attenuated VZV vaccine developed by Merck from the Oka/Merck strain. The results are encouraging, with a reduction in the incidence of herpes zoster of 51.3% during 3 years of follow-up.

The authors state that the incidence of post-herpetic neuralgia was reduced by 66.5%. They are referring to the incidence in the overall study population: there were 27 cases of PHN among the 19,254 subjects who received the vaccine, vs. 80 cases among the 19,247 subjects who did not receive the vaccine. If you look at the number of cases of PHN among patients with zoster, the numbers are 27/315, vs. 80/642, a reduction of about 31%. Both of these numbers are important to judge the vaccine.

In other words, the vaccine reduced the incidence of zoster by about a half, the overall incidence of PHN by about 2/3 and the incidence of PHN among patients with zoster by about a third.
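For readers who want to check the arithmetic, all three figures can be recomputed directly from the case counts quoted above (a quick sketch; the published figures are based on person-years of follow-up, so they differ slightly from these crude rates):

```python
# Case counts from the Shingles Prevention Study, as quoted above
vaccine_n, placebo_n = 19254, 19247   # subjects per arm
zoster_v, zoster_p = 315, 642         # cases of herpes zoster
phn_v, phn_p = 27, 80                 # cases of postherpetic neuralgia

def rrr(cases_a, n_a, cases_b, n_b):
    """Relative risk reduction of group A versus group B."""
    return 1 - (cases_a / n_a) / (cases_b / n_b)

print(f"Zoster incidence reduced by {rrr(zoster_v, vaccine_n, zoster_p, placebo_n):.0%}")
print(f"Overall PHN incidence reduced by {rrr(phn_v, vaccine_n, phn_p, placebo_n):.0%}")
print(f"PHN among zoster cases reduced by {rrr(phn_v, zoster_v, phn_p, zoster_p):.0%}")
```

Run as written, this reproduces the roughly 51%, 66%, and 31% reductions discussed above.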

What about the choice of vaccine? The study used a live attenuated vaccine that was of higher potency than the standard vaccine given to children. Why not study the vaccine that is already available in the United States? Two reasons, one medical, one economic:

  • Medical: a higher potency vaccine may be necessary to boost the immune response in older patients sufficiently to prevent zoster.
  • Economic: a new vaccine formulation for this specific purpose can be priced much higher than the already available childhood vaccine.

Turning to the economic consideration first, in the accompanying editorial, Gilden states:

“To nonindigent recipients of the currently used childhood VZV vaccine (Varivax), the price of vaccination is between $50 and $100 (the sum of the cost of vaccine plus the visit or facility fee). An adult vaccine might cost more, given its greater potency. Nevertheless, the zoster vaccine appears to have been highly cost-effective in the Shingles Prevention Study (i.e., in the range of $2,000 per quality-adjusted life-year gained, even assuming a vaccine cost of $500).”

Clearly, Merck stands to make a lot of money if the vaccine used is a new one, costing several hundred dollars, rather than the existing vaccine which costs under $100 per dose.

What about the possibility of using the currently available vaccine, possibly with a booster dose, rather than a new, higher potency one? The vaccine used in this study contained between 18,700 and 60,000 plaque-forming units of virus, versus about 1,350 pfu’s in the Oka/Merck vaccine that is commercially available and is administered to children. In the concluding paragraph of the current study, the authors state:

“The minimum potency of the zoster vaccine administered to subjects in the study was at least 14 times greater than the minimum potency of Varivax (Merck), the vaccine currently licensed to prevent varicella. A preliminary study indicated that potencies of this magnitude are required to elicit a significant increase in the cell-mediated immunity to VZV among older adults — hence, the need to formulate a high-potency vaccine for this study. We know of no data to suggest that the licensed varicella vaccine would be efficacious in protecting older adults from herpes zoster or postherpetic neuralgia. Thus, we do not recommend the use of the current varicella vaccine in an attempt to protect against herpes zoster and postherpetic neuralgia. “

The authors provide no references to back up the results of their “preliminary study” indicating that such high potencies are necessary. In fact, a 1992 article published by some of the same authors of the current study, Immune response of elderly individuals to a live attenuated varicella vaccine, seems to indicate that such high doses may not be necessary. From the abstract of that article:

“The Oka strain live attenuated varicella-zoster virus (VZV) vaccine was administered subcutaneously to 202 VZV-immune individuals who were 55 to greater than 87 years old. The dose administered varied from 1100 to 12,000 pfu… Most significantly, VZV-specific proliferating T cells in PBMC of vaccinees were increased in frequency from 1 in 68,000 to 1 in 40,000… Dose and age of the vaccinees did not significantly influence the magnitude of the mean cell-mediated immune response…”

I understand that pharmaceutical companies may reformulate a drug before launching a large, expensive trial for a new indication, in order to maximize their profit. That’s how the health care market works. And it may well be that a higher potency vaccine is necessary to achieve adequate protection.

Still, it would have been nice if the authors of this trial had justified their use of a new vaccine with a published reference.

Why don’t we just put statins in the water supply and be done with it?

Statins and the risk of colorectal cancer in last week’s NEJM is a case-control study from Israel that looked at about 2000 patients with colorectal cancer and a similar number of controls, and found that “the use of statins for at least five years (vs. the nonuse of statins) was associated with a significantly reduced relative risk of colorectal cancer”. The odds ratio was about 0.50.

At the risk of sounding like a broken record, this is yet another case-control study, useful for generating hypotheses, but not much else. The authors performed their analysis adjusting for multiple covariates, such as aspirin use, vegetable consumption, and red-meat consumption, but there is no way they can sufficiently adjust for all variables to convince me.

For one thing, they don’t mention adjusting for the low-saturated-fat diets that patients who take statins are likely to adhere to. And there are sure to be other confounders associated with statin treatment.

The one fact that almost convinced me was that the authors found no protective effect from non-statin cholesterol lowering agents (fibrates). These are likely to be associated with most of the same confounders as statins. BUT, there were only 20 patients taking these drugs, too few to be statistically reassuring, and the reason for prescribing a fibrate rather than a statin is more likely to be hypertriglyceridemia than hypercholesterolemia, implying a different population and perhaps a different diet as well.

Medpundit is also critical of this study. I have to disagree with her main argument against it, however. She feels that the biggest flaw is that the two groups were not matched for ethnicity, with a higher percentage of Ashkenazi Jews in the cancer group. However, in their adjusted analysis, the authors specifically state that they adjusted for ethnicity. In my opinion, the biggest problem with case-control studies is not that they do not adequately adjust for known confounders, but rather that they don’t take into account confounders they have not thought of.

Multidetector spiral CT for PE

Last week’s NEJM has an article, from Switzerland and France, on multidetector-row computed tomography in suspected pulmonary embolism.

Rationale

The authors state that first-generation, single-detector, spiral CT scanning is quite specific (90%) but not very sensitive (70%) for detecting pulmonary emboli. In two previous studies by their group, negative CT scans were contradicted by positive lower extremity venous duplex scans in about 8% of cases.

This has led to their suggestion that CT scanning should be accompanied by ultrasound of the legs to improve sensitivity. They quote two other studies they performed which found that patients with low or moderate clinical suspicion for PE who had negative CT and duplex scanning and who were not anticoagulated fared about as well as patients who were untreated after negative pulmonary angiography (1 to 2 percent thromboembolic events in 3 months).

The main question addressed by the authors here is: If newer generation, multidetector CT scanners are more sensitive in picking up smaller emboli (which they seem to be), then might these newer scanners obviate any need for duplex scanning? A second question they address is the role of negative d-dimer testing in excluding patients from the need for further tests.

Methods

The basic approach was to evaluate patients with suspected PE, and classify them as clinically high, intermediate or low probability for PE.

  • Those who were low or intermediate had D-dimer levels drawn. If the D-dimer was normal (<500 mcg/l), no further investigation was performed, no anticoagulation was given and patients were followed up. If the D-dimer was high, both CT scanning and US were performed, and patients were treated accordingly and followed up.
  • Patients with a high clinical index of suspicion did not have D-dimer levels drawn but were evaluated by CT and US. If both of these studies were negative, these patients underwent pulmonary angiography.

The results were then analyzed to see how many patients had negative CT scans but positive US exams, and also to see how patients who were not anticoagulated fared.

Results

756 patients were included in the study. Of these, 674 had low or intermediate clinical probability of PE, and 82 had a high probability.

  • Of the 674 who had low or intermediate probability
    • 232 had negative D-dimers, and were not anticoagulated. There were no subsequent venous thromboembolic events in this group.
    • 442 had positive D-dimers and were evaluated with CT and US.
      • In this group, there were only 2 patients with negative CT scans but positive US examinations.
      • Both tests were negative in 318 patients, and they were not anticoagulated. At three-month follow-up, there were 3 non-fatal thromboembolic events and 2 deaths, possibly from PE.
      • 109 patients had a positive CT scan (with or without positive US), and were anticoagulated.
      • 13 had inconclusive CT scans (most underwent VQ scanning).
  • Of the 82 with high probability
    • 78 had a positive CT scan
    • 3 had negative CT scan and negative US; all three had negative angiograms
    • 1 patient had negative CT scan but positive US
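As a sanity check, the patient counts above can be tallied to confirm that they sum to the reported totals and to reproduce the rate of positive ultrasound despite a negative CT (a small verification sketch using the numbers as reported):

```python
# Patient flow as reported in the Results section above
low_int_neg_dimer = 232   # negative D-dimer, no imaging performed
low_int_imaged = {"neg_ct_neg_us": 318, "neg_ct_pos_us": 2,
                  "pos_ct": 109, "inconclusive_ct": 13}
high_prob = {"pos_ct": 78, "neg_ct_neg_us": 3, "neg_ct_pos_us": 1}

# Totals should match the reported 674 low/intermediate and 82 high-probability patients
assert low_int_neg_dimer + sum(low_int_imaged.values()) == 674
assert sum(high_prob.values()) == 82

# Rate of positive ultrasound among patients with a negative CT scan
neg_ct = (low_int_imaged["neg_ct_neg_us"] + low_int_imaged["neg_ct_pos_us"]
          + high_prob["neg_ct_neg_us"] + high_prob["neg_ct_pos_us"])
pos_us = low_int_imaged["neg_ct_pos_us"] + high_prob["neg_ct_pos_us"]
print(f"{pos_us}/{neg_ct} = {pos_us / neg_ct:.1%}")
```

This yields 3 positive ultrasounds among 324 negative CT scans, the 0.9% figure discussed below.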

Comments

The narrower question addressed here is whether or not multidetector spiral CT is sufficiently sensitive to obviate a postulated need for duplex ultrasound scanning in the diagnosis of suspected PE. In this study, out of a total of 324 patients with negative CT scans, there were 3 positive ultrasound examinations (0.9%), which is much lower than the 6-9% which the authors report for conventional spiral CT (from previous studies by their group).

It is important to note that more sensitive diagnosis of PE does not necessarily translate into markedly better clinical outcomes, since the clinical course of untreated small, peripheral emboli will not be as poor as that of more easily detected, central emboli.

Interestingly, a meta-analysis just published (in last week’s JAMA) looking at outcome studies using CT scanning in suspected PE failed to find a benefit of multidetector vs. single detector, and also failed to find a benefit to the addition of other modalities (such as duplex scanning), in terms of clinical outcome. Both of these points are in contradiction to the main arguments of the current study, but of course all the caveats of meta-analyses apply here.

Having said this, it does seem reasonable that the addition of ultrasound to multidetector CT scanning adds little to the diagnosis and is probably overkill. Whether it is necessary to replace single detector scanning with multidetector scanning in order to safely rule out PE and eliminate a postulated need for duplex ultrasound is much less clear to me.

The broader question addressed by this study is the overall safety of using the approach outlined here, including d-dimer testing in patients who have low or intermediate clinical probability of PE and performing CT scans only in those with positive d-dimers. In this study, had the d-dimer assay and multidetector CT scanning without duplex US been performed, the overall rate of thromboemboli in patients not anticoagulated would have been 1.5%, comparable to the rate for patients with negative pulmonary angiograms. The authors suggest that this strategy should be prospectively evaluated.

Cardiac resynchronization in heart failure

Patients with heart failure often have intraventricular conduction delays (such as bundle branch blocks), which cause the ventricles to contract dyssynchronously, in an inefficient manner. This is the rationale behind the implantation of biventricular pacing devices to restore synchrony.

The effect of cardiac resynchronization on morbidity and mortality in heart failure, in this week’s NEJM, looked at biventricular pacing plus medical therapy (409 patients) vs. medical therapy alone (404 patients) in patients with class III or IV heart failure.

Patients in the European multicenter CARE-HF study had a QRS interval of at least 150 msec, or 120-149 msec plus echocardiographic evidence of ventricular dyssynchrony, in addition to heart failure and sinus rhythm. Enrolled patients were then randomized to implantation of a device or no implantation, in a non-blinded fashion.

After mean follow-up of 2.5 years, the number of deaths (mainly cardiovascular) in the device group was significantly lower (20% vs. 30%).

Patients who were hospitalized for worsening heart failure comprised 18% of the device group vs. 33% of the non-device group.

Ejection fraction and indices of symptom status were also improved in the device group.

These results were fairly consistent across a number of subgroups, and the improvements occurred rather gradually and progressively over time.
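Taking the rounded event rates quoted above at face value, the absolute risk reductions and numbers needed to treat work out roughly as follows (a back-of-the-envelope illustration over the mean 2.5 years of follow-up, not figures reported by the trial):

```python
# Rounded event rates from CARE-HF as quoted above (mean 2.5-year follow-up)
death_device, death_control = 0.20, 0.30
hosp_device, hosp_control = 0.18, 0.33

def nnt(rate_treated, rate_control):
    """Number needed to treat = 1 / absolute risk reduction."""
    return 1 / (rate_control - rate_treated)

print(f"NNT to prevent one death: ~{nnt(death_device, death_control):.0f}")
print(f"NNT to prevent one HF hospitalization: ~{nnt(hosp_device, hosp_control):.0f}")
```

That is, roughly one death prevented per 10 devices implanted, and one heart-failure hospitalization prevented per 7.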

An editorial by Jarcho points out that device implantation is not always easy, since pacing of the left ventricle via the coronary sinus is technically a bit tricky. He also points out that this study looked at the benefit of bi-ventricular pacing without ICD placement. There is a suggestion that bi-V pacing may reduce the additional benefit of ICD placement, but it is unlikely that this hypothesis will be tested by a clinical trial, so most patients will end up with a dual-purpose device.

Areas of uncertainty remain: the role of resynchronization in patients with atrial fibrillation, and the utility of basing the criteria for device implantation on echocardiographic indices of ventricular dyssynchrony.

Intensive lipid-lowering therapy

[Back after a two-month hiatus, due to busy practice, vacation, and time spent/wasted playing with video editing software and a new camcorder…]

In this week’s NEJM, Intensive lipid lowering with atorvastatin in patients with stable coronary disease makes an argument in favor of reducing LDL cholesterol levels for secondary prevention to lower than the current guidelines of 100 mg/dl. Although this may well be justified, the data presented here do not quite make the case but are spun with great skill.

15,464 patients with clinically evident coronary disease and LDL levels between 130 and 250 mg/dl off statin therapy were treated for 8 weeks with 10 mg of atorvastatin. At the end of this period, 10,003 patients whose LDL levels were lowered to under 130 mg/dl were randomized to continue the atorvastatin at 10 mg daily or to 80 mg of atorvastatin daily.

Patients on the 10 mg dose of atorvastatin achieved a mean LDL level of 101 mg/dl; those randomized to 80 mg had their LDL levels reduced to 77 mg/dl. During median follow-up of 4.9 years, there was a significant reduction in cardiovascular events, although not in overall mortality, on the higher dose of atorvastatin.

The problem with the interpretation of these results is that an average LDL level of 100 mg/dl achieved with a fixed, 10 mg dose of atorvastatin is not the same thing as targeting an LDL level of 100 mg/dl (with adjusted statin doses). Since patients in the 10 mg group were only required to have LDL levels under 130 mg/dl, the average level of 100 implies that about half of the patients had levels between 100 and 130. Targeting an LDL level of 100 mg/dl with adjusted doses of a statin would certainly narrow the distribution of values obtained, and would most likely lower the number of patients whose LDL’s were closer to 130 mg/dl.

So what does this study actually show? It demonstrates that CHD patients who achieve LDL levels of less than 130 mg/dl on a dose of 10 mg of atorvastatin have more cardiovascular events than those who are assigned to 80 mg of the drug. Not very surprising. Once again, even though the 10 mg patients studied here achieved an average LDL level of 100 mg/dl, this is a less aggressive approach than actually targeting 100 mg/dl. The LDL target of 100 mg/dl is not given a fair chance in this study.
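To see why a mean of 100 mg/dl under a fixed dose implies that roughly half the cohort sat above 100, consider a rough simulation (purely illustrative; the spread of 18 mg/dl is my own assumption, not a figure from the trial):

```python
import random

random.seed(0)
# Hypothetical cohort: LDL values roughly normal around the reported mean
# of 101 mg/dl. The SD of 18 mg/dl is an assumed value for illustration.
ldl = [random.gauss(101, 18) for _ in range(100_000)]
eligible = [x for x in ldl if x < 130]   # entry criterion: LDL under 130 mg/dl

above_target = sum(x > 100 for x in eligible) / len(eligible)
print(f"Fraction of eligible patients with LDL above 100 mg/dl: {above_target:.0%}")
```

Under these assumptions, close to half of the "eligible" patients have LDL levels between 100 and 130 mg/dl, which is precisely the group that titration to a 100 mg/dl target would have treated more aggressively.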

What do the authors say about their study? In the abstract, they state:

“Intensive lipid-lowering therapy with 80 mg of atorvastatin per day in patients with stable CHD provides significant clinical benefit beyond that afforded by treatment with 10 mg of atorvastatin per day.”

This statement is justified by the data. But in the Methods section, they state:

“The occurrence of major cardiovascular outcomes was compared in two groups of patients: one group received 10 mg of atorvastatin daily with the goal of an average LDL cholesterol level of 100 mg per deciliter, and the other group received 80 mg of atorvastatin daily with the goal of an average LDL cholesterol level of 75 mg per deciliter.”

This is very fine tap-dancing. The authors carefully avoid saying that the goal LDL level in the 10 mg group was 100 mg/dl, since it wasn’t; rather, they state that the goal in that group was an average LDL cholesterol level of 100 mg/dl. Subtle but significant difference, as noted above. They further blur the distinction between average LDL level achieved and actual targetted LDL level in the final sentences of their paper:

“In summary, our findings demonstrate that the use of an 80-mg dose of atorvastatin to reduce LDL cholesterol levels to 77 mg per deciliter provides additional clinical benefit in patients with stable CHD that is perceived to be well controlled at an LDL level of approximately 100 mg per deciliter. These data confirm and extend the growing body of evidence indicating that lowering LDL cholesterol levels well below currently recommended levels can have clinical benefit.”

It would be very interesting to see how the subgroup that actually achieved an LDL level of 100 or less on the 10 mg atorvastatin dose (presumably half the study group) fared in this trial. I would bet that the difference in event rates is markedly reduced or nil in this subgroup. And I would bet that this information will not be published.

 


Take two ximelagatran and call me in the morning

[Addendum, 2/14/05: as has been pointed out by commenters, ximelagatran was turned down for approval by an FDA panel in September, 2004, because of concerns about hepato-toxicity. Whether it will ever be marketed in the US is very much in doubt. The following post has been slightly edited with this in mind.]

Coumadin is such a source of headaches for patients and physicians. Patients have their lives medicalized and mildly disrupted by the frequent blood tests necessary to monitor anticoagulation. Physicians have to deal with the constant, low-level anxiety inherent in steering between hemorrhage and thrombosis.

Enter ximelagatran, an oral thrombin inhibitor whose anticoagulant effect is not significantly influenced by diet, body-weight and drug interactions. A drug that can be given at a fixed dose without monitoring anticoagulant effect. In last week’s JAMA are three articles looking at this medication.

Ximelagatran vs low-molecular-weight heparin and warfarin for the treatment of deep vein thrombosis reports on the results of the THRIVE study of 2,489 patients with documented DVT, one third of whom also had pulmonary emboli. Patients were double-blind randomized to either ximelagatran or LMWH/coumadin and were followed for six months. There was no difference in the rates of recurrent thrombosis; major bleeding was also not statistically different (trend towards lower bleeding in the ximelagatran group). Liver enzymes exceeded 3 times normal in about 10% of ximelagatran patients, vs. 2% on warfarin.

Ximelagatran vs warfarin for stroke prevention in patients with nonvalvular atrial fibrillation reported on the results of the SPORTIF V trial. Here, patients with nonvalvular atrial fibrillation were randomized to ximelagatran or warfarin. Mean follow-up was 20 months. In this trial too, there was no significant difference between the two therapies, although the primary endpoint, all-cause stroke per year, was 1.6% with ximelagatran vs 1.2% with coumadin. Elevated liver enzymes occurred in 6% of ximelagatran patients. Among the 1,960 patients assigned to ximelagatran, there was one death that seemed to be the result of drug-related liver toxicity and one that was the result of gastrointestinal hemorrhage in a patient who had developed elevated liver enzymes.

Finally, Costs and effectiveness of ximelagatran for stroke prophylaxis in chronic atrial fibrillation looked at a computer model of the problem and concluded that ximelagatran only marginally improves QALY over warfarin, at a fairly high cost per QALY.

Several points occur to me in looking at these articles:

  • If it were not for the liver toxicity, ximelagatran would be a real winner. But liver toxicity, including a fatality rate on the order of one per 5,000 patients, is a very definite limiting factor. Because of this, I don’t think ximelagatran is likely to become a substitute for coumadin.
  • The coumadin monitoring that goes on in trials such as these is clearly better than that which occurs in real life, which skews the results slightly against ximelagatran.
  • Some patients are clearly problematic coumadin patients, such as those who are frequently non-compliant or have to spend prolonged periods of time away from medical care. For these patients, ximelagatran, or a similar drug, may prove to be extremely useful.
  • The cost-effectiveness article is a computer simulation of the problem and, as such, is of very limited value. Articles of this sort make multiple assumptions about the problem at hand and then plug them into a computer model. Numbers go in, the computer grinds and grinds, and out come some neatly packaged QALY’s. Common sense beats the Markov model any day, in my opinion.