CTCA May Be Better Than Stress Test in Some Patients

Diagnostic Accuracy and Clinical Utility of Noninvasive Testing for Coronary Artery Disease.

Weustink AC, Mollet NR, et al:

Ann Intern Med 2010; 152 (May 18): 630-639

CTCA may have higher clinical utility than exercise ECG testing in patients with intermediate pretest probability.

Background: The diagnostic evaluation of patients with chest pain is not always straightforward. The ultimate goal is to identify patients with high-risk coronary lesions and consider appropriate individualized treatment. But which patients should undergo invasive coronary angiography (ICA) and its inherent risks? Many noninvasive diagnostic tests are available. These tests vary in many ways, including cost, availability, and exposure to radiation and contrast dye. Computed tomography coronary angiography (CTCA) is a newer test that has become popular in some communities. Objective: To determine the accuracy and utility of exercise electrocardiography (ECG) and CTCA in predicting which patients with chest pain should go on to coronary angiography. Design: Observational study. Methods: Patients with chest symptoms were recruited from a referral center in the Netherlands. In the first phase, all patients underwent exercise ECG, CTCA, and ICA. In the second phase, patients with a negative exercise ECG and CTCA did not receive further testing. Patients with acute coronary syndrome or prior myocardial infarction, stent placement, or coronary artery bypass graft (CABG) were not eligible for the study. The primary outcome was the diagnostic accuracy of the noninvasive tests compared to coronary angiography. Patients were classified as having low (<20%), intermediate (20% to 80%), or high (>80%) pretest probability of obstructive coronary artery disease (CAD) based on their Duke score. Clinical utility was defined as a posttest probability of ≤5% or ≥90%, values at which no further testing would be needed or ICA would be clearly indicated. Results: 517 patients (mean age, 59 years) participated in the study. CTCA had better sensitivity, specificity, and positive and negative predictive values compared to ECG stress testing overall and in each of the pretest probability categories. The CTCA negative likelihood ratios were low in all groups, and the CTCA positive likelihood ratio was highest in the group with intermediate pretest probability. Clinical utility was equivalent for both noninvasive forms of testing in patients with a low pretest probability. The clinical utility of CTCA was better than that of exercise ECG in patients with intermediate pretest probability. Conclusions: Either exercise ECG or CTCA can be useful in determining the need for further testing in patients with chest symptoms and a low pretest probability of obstructive coronary lesions based on the clinical Duke score. CTCA has better clinical utility in intermediate-risk patients because there are fewer false-positive tests. Patients who are clinically high risk should be considered for ICA as the initial test. Reviewer's Comments: This study was done in a referral population, and almost half of the patients had an intermediate pretest probability based on symptoms and patient characteristics. This does not represent the type of patients seen in typical outpatient practices. The study does help better define a subset of patients with chest pain who may benefit from CTCA. (Reviewer-Deborah L. Greenberg, MD). © 2010, Oakstone Medical Publishing
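
The ≤5% and ≥90% utility thresholds come from converting pretest probability to posttest probability with a test's likelihood ratio (Bayes' theorem in odds form). A minimal sketch of that calculation; the function name and the pretest and likelihood-ratio values below are illustrative, not figures from the study:

```python
def posttest_probability(pretest_prob: float, likelihood_ratio: float) -> float:
    """Convert a pretest probability to a posttest probability (Bayes, odds form)."""
    pretest_odds = pretest_prob / (1 - pretest_prob)
    posttest_odds = pretest_odds * likelihood_ratio
    return posttest_odds / (1 + posttest_odds)

# Illustrative values only (not from the study): a 50% pretest probability and
# a hypothetical negative likelihood ratio of 0.05.
print(f"{posttest_probability(0.50, 0.05):.1%}")  # ~4.8%, below the 5% no-further-testing threshold
```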

Keywords: Coronary Artery Disease, Noninvasive Testing, Accuracy

Print Tag: Refer to original journal article

No Clear Benefit to Aspirin for Asymptomatic Atherosclerosis

Aspirin for Prevention of Cardiovascular Events in a General Population Screened for a Low Ankle Brachial Index: A Randomized Controlled Trial.

Fowkes FGR, Price JF, et al:

JAMA 2010; 303 (March 3): 841-848

Aspirin is no better than placebo at preventing vascular events among general-population patients with a low ABI.

Background: Early identification of patients with subclinical atherosclerosis could save lives. A low ankle brachial index (ABI) has been shown in prior studies to be related to atherosclerosis as well as cardiovascular and cerebrovascular events. Screening the general population for a low ABI might identify a higher-risk group who may benefit from preventive measures. Objective: To assess the effectiveness of aspirin in preventing vascular events among patients from the general population who have a low ABI upon screening. Design: Intention-to-treat, double-blind, randomized controlled trial from 1998 to 2008. Participants: 28,980 men and women (age range, 50 to 75 years) in Scotland were involved in the Aspirin for Asymptomatic Atherosclerosis trial. Subjects had no known cardiovascular disease and were not actively on antiplatelet agents. Methods: Potential subjects were recruited from a community health registry and underwent voluntary ABI screening. Those with a low ABI (≤0.95) were entered into the study (n=3,350). Subjects were randomized to receive enteric-coated aspirin 100 mg daily or placebo. The primary end point was coronary event, stroke, or revascularization. Secondary end points were all-cause mortality and other initial vascular events (including angina, claudication, and transient ischemic attack). Results: The mean follow-up time was 8.2 years. A total of 357 participants had a primary end point event, with no statistically significant difference in the likelihood of events between the 2 groups (13.7 events/1000 person-years in the aspirin arm vs 13.3 in the placebo group; HR, 1.03). Secondary end point events occurred in 578 participants, again with no difference between groups (22.8 events/1000 person-years in the aspirin arm vs 22.9 in the placebo group; HR, 1.00). No difference in all-cause mortality was noted between groups. Major bleeding requiring hospital admission occurred in 34 participants (2.5/1000 person-years) in the aspirin group versus 20 (1.5/1000 person-years) in the placebo group (HR, 1.71). Conclusions: Among patients with a low ABI found by screening in the general population, aspirin was no better than placebo at reducing vascular events. Reviewer's Comments: This large study was well designed, with a robust follow-up time of >8 years. Unfortunately, subjects identified to have a low ABI did not appear to benefit from aspirin for the prevention of vascular events. What is worse is that the risk for major hemorrhage (including intracranial) was substantially (though not quite significantly) higher in the aspirin group (2.0% vs 1.2%). The search continues for better methods of screening for and protecting against underlying vascular disease in the general population. (Reviewer-Molly Blackley Jackson, MD). © 2010, Oakstone Medical Publishing
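
The event rates quoted above are person-time incidence rates. A minimal sketch of how such a rate and the crude rate ratio are computed; the event counts and person-year totals below are illustrative values chosen only to reproduce the reported rates, not the trial's raw data:

```python
def rate_per_1000_person_years(events: int, person_years: float) -> float:
    """Crude incidence rate per 1000 person-years of follow-up."""
    return 1000 * events / person_years

# Illustrative counts, not raw trial data:
aspirin_rate = rate_per_1000_person_years(176, 12_850)   # ~13.7 events/1000 person-years
placebo_rate = rate_per_1000_person_years(181, 13_610)   # ~13.3 events/1000 person-years
print(round(aspirin_rate / placebo_rate, 2))  # crude rate ratio ~1.03, consistent with a null result
```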

Keywords: Cardiovascular Events, Ankle Brachial Index, Prevention, Aspirin

Print Tag: Refer to original journal article

Talk With Patients About CIED Management as End of Life Nears

HRS Expert Consensus Statement on the Management of Cardiovascular Implantable Electronic Devices (CIEDs) in Patients Nearing End of Life or Requesting Withdrawal of Therapy.

Lampert R, Hayes DL, et al:

Heart Rhythm 2010; May 13: epub ahead of print

Consensus recommendations exist to help clinicians hold conversations with patients about CIED therapy near the EOL.

Background: Implantable cardioverter-defibrillators (ICDs), pacemakers, and cardiac resynchronization therapy devices can save lives. What happens, however, when patients with these cardiovascular implantable electronic devices (CIEDs) near the end of life (EOL)? Currently, 20% of patients with ICDs receive device-generated shocks in the last weeks of life. These shocks can be uncomfortable to patients and distressing to family/caregivers. Physicians and other clinicians are often uneasy with device management near EOL, and few patients discuss device deactivation with providers even if a do-not-resuscitate order exists. Objective: To make clinicians aware of legal, ethical, and religious principles underlying decisions to withdraw life-sustaining therapies (including device deactivation), to highlight the importance of proactive communication by clinicians to minimize suffering near EOL, and to provide a guide for clinicians helping patients consider CIED therapy withdrawal. Methods: A consensus statement was created by experts in electrophysiology, geriatrics, palliative care, psychiatry, pediatrics, nursing, law, ethics, and divinity, with additional input from patients and industry. Recommendations were confirmed through the Heart Rhythm Society's established consensus statement process. Agreement was >90% on all recommendations. Results/Recommendations: Patients or surrogate decision-makers have the right to refuse or withdraw any medical treatment, regardless of whether the patient is terminally ill and regardless of whether the refusal/withdrawal results in death. Ethically and legally, there is no difference between refusing CIED therapy and withdrawing it. Carrying out a request to deactivate a CIED is neither physician-assisted suicide nor euthanasia. A clinician cannot be compelled to carry out CIED deactivation if this violates his/her personal values, but that clinician also cannot abandon the patient and should find a willing clinician to perform the deactivation. Deactivation rather than surgical removal of CIEDs is recommended. Discussions about potential future deactivation of CIEDs should begin when device implantation is considered and should be part of a bigger, ongoing conversation about patients' overall goals of care as their health changes. Clinicians should help patients determine how the benefits and burdens of device therapy align with desired health goals. Advance care planning (eg, health-care durable power of attorney or a living will) can make decisions near EOL easier. Deactivation should be accompanied by an order documenting the goals-of-care discussion held with the patient or surrogate decision-maker. The deactivation process should involve preplanning for management of any resultant symptoms. Multiple levels of CIED deactivation are possible, including disabling only specific features such as shock therapy. Antitachycardia therapies may be deactivated by reprogramming or magnet application. Pacing therapies can be deactivated by reprogramming to specific modes or subthreshold outputs. Any uncertainties should be clarified with an electrophysiology specialist. Care systems should develop clear policies surrounding deactivation of CIEDs. Reviewer's Comments: This consensus statement includes helpful sample phrases on how to begin conversations with patients about CIED therapy at the EOL. (Reviewer-Melissa Hagman, MD). © 2010, Oakstone Medical Publishing

Keywords: End-of-Life Care, Cardiovascular Implantable Electronic Devices, Management

Print Tag: Refer to original journal article

How Risky Is Abdominal Surgery After Age 85?

How Do Patients Aged 85 and Older Fare With Abdominal Surgery?

Mirbagheri N, Dark JG, Watters DAK:

J Am Geriatr Soc 2010; 58 (January): 104-108

Abdominal surgery in the very old (≥85 years of age) carries substantial but probably not prohibitive risk.

Background: Individuals ≥85 years of age are the fastest growing segment of the population in developed countries. It is likely that the frequency of surgical procedures in the very old will grow at a similar rate. Objective: To explore outcomes after abdominal surgery in people aged ≥85 years. Design: Retrospective cohort study. Participants/Methods: 179 patients ≥85 years of age who underwent abdominal surgery in a tertiary care hospital in Victoria, Australia, between 1998 and 2008 were followed for complications, in-hospital mortality, and change in residential status after their surgery. American Society of Anesthesiologists (ASA) categories were used to assess patient health status prior to surgery: I - healthy (2% of study subjects); II - mild systemic disease (18%); III - severe systemic disease (46%); IV - severe disease that is a constant threat to life (31%); and V - moribund and not expected to survive without surgery (2%). Results: The 107 women and 72 men had a mean age of 88.6 years. Prior to hospital admission, 83.2% lived at home, 11.2% were in assisted living care, and 5.6% were in nursing homes. Seventy-eight percent of the patients had an ASA score of 3 or 4. Cholecystectomy (11.2%), colorectal surgery (31.3%), laparotomy (25.1%), and hernia repair (13.4%) comprised the majority of surgeries; 64% of surgeries were emergent and carried a 22.6% inpatient mortality rate. The mortality rate for elective abdominal surgery was 7.8%. The complication rate was 62.8%, and <50% of these complications were serious in nature. Most patients were discharged to their prior residential status; 28.4% required a rehabilitation stay, and 3.4% were discharged to nursing homes. ASA score was the most consistent predictor of complications, mortality, and discharge to a higher level of care. Emergency surgery and premorbid residential status were associated with complications and discharge location. Age per se was not a predictor of complications or mortality. Conclusions: Overall, the inpatient mortality rate among patients ≥85 years old undergoing abdominal surgery was 17%. ASA score and premorbid residential status are more important factors than age in predicting postoperative morbidity and mortality. Reviewer's Comments: This study suggests that advanced age alone is not a contraindication to abdominal surgery, and that the ASA score is a useful predictor of postoperative morbidity and mortality. The primary limitation of this retrospective study is that it could only include patients who were deemed suitable for and underwent surgery. It is possible that the denominator of very old patients in need of abdominal surgery was much greater, and that many were treated conservatively. Still, a mortality rate of roughly 20%, even among 89-year-olds in need of emergency surgery, indicates that, after due consideration of preoperative risk factors, it is often reasonable to proceed with surgical interventions, even in the very old. (Reviewer-Jeff Wallace, MD, MPH). © 2010, Oakstone Medical Publishing

Keywords: Age 85+, Abdominal Surgery, Outcomes

Print Tag: Refer to original journal article

Caring for Spouse With Dementia May Increase One's Own Risk of Dementia

Greater Risk of Dementia When Spouse Has Dementia? The Cache County Study.

Norton MC, Smith KR, et al:

J Am Geriatr Soc 2010; 58 (May): 895-900

The risk of dementia is substantially increased among spousal caregivers of dementia patients.

Background: Although the psychological and physical adverse effects of being a caregiver for a spouse with dementia are well described, potential adverse effects on caregiver cognition are not well known. Objective: To explore whether being a caregiver for a spouse with dementia increases the risk of dementia in the caregiver. Design/Participants: Population-based, longitudinal study of incident dementia among 1200 married couples ≥65 years of age residing in a rural county in northern Utah (Cache County). Methods: Married couples without dementia at baseline were evaluated every 3 years for up to 12 years to detect new-onset dementia based on DSM-III-R criteria. Spouses of patients with newly diagnosed dementia were similarly evaluated to detect incident dementia after they became caregivers. Regression analyses were used to adjust for potential confounders (age, sex, apolipoprotein E genotypes, and measures of socioeconomic status) and to test the effect of time-dependent exposure of caring for a spouse with dementia. Results: The mean baseline age of husbands and wives was 76 and 73 years, respectively. The risk of incident dementia was associated with having a spouse with dementia. Spouses of patients with dementia had a 6-fold increase in risk of developing dementia relative to spouses of patients who remained free of dementia (HR, 6.0; 95% CI, 2.2 to 16.2). This effect was greater in husbands of affected spouses (HR, 11.9; 95% CI, 1.7 to 85.5) than in wives of affected patients (HR, 3.7; 95% CI, 1.2 to 11.6), although this difference by gender was not statistically significant. Conclusions: Being a caregiver for a spouse with dementia may substantially increase the risk of developing dementia. Reviewer's Comments: Dementia caregiving has been associated with increased rates of depression, physical health problems, and mortality. This study adds a potential increase in the likelihood of developing dementia oneself as another potential hazard of caring for an impaired spouse. The authors offer several potential mechanisms that range from true adverse effects of chronic physiological stress to potential confounding due to shared environmental factors. The authors attempted to adjust for such confounding, but it is possible that increased dementia rates among caregiver spouses are due to similar dietary, smoking, alcohol, or other environmental exposures. If the association between being a caregiver for a spouse with dementia and subsequent development of dementia is causal, it would be important to understand whether chronic stress is the causal pathway and to explore how best to reduce the excess risk of being a caregiver. At a minimum, this study should spur medical providers to redouble their efforts to support spousal caregivers of dementia patients. Specifically, diagnosing and treating depression, encouraging breaks to engage in physical and social activity, and providing respite care are helpful short-term measures that may help ameliorate the longer-term risks of being a spousal caregiver. (Reviewer-Jeff Wallace, MD, MPH). © 2010, Oakstone Medical Publishing

Keywords: Spousal Dementia, Caregiver Cognition

Print Tag: Refer to original journal article

Advance Care Planning Can Enhance End-of-Life Care

The Impact of Advance Care Planning on End of Life Care in Elderly Patients: Randomised Controlled Trial.

Detering KM, Hancock AD, et al:

BMJ 2010; March 23: epub ahead of print

Advance care planning increases the likelihood that a patient's wishes will be followed and improves both patient and family satisfaction with care.

Background: Prior studies suggest that end-of-life care is often inadequate, and that focusing on the completion of advance care directives alone does not improve end-of-life care. Objective: To evaluate the effect of advance care planning on end-of-life care in older adults. Design: Randomized controlled trial. Participants: 309 medical inpatients ≥80 years of age at a university hospital in Melbourne, Australia. Patients were excluded if they were not competent, did not speak English, or had no family. Methods/Interventions: Study subjects were randomized to receive usual care with or without formal advance care planning. The intervention consisted of advance care planning facilitated by a trained nurse or allied health worker whose focus was to clarify patient goals, values, and beliefs, to assist patients in considering and documenting future medical treatment preferences, and to encourage patients to appoint a surrogate. The primary outcome measure was whether patients' end-of-life preferences were known and followed. Secondary measures included patient and family satisfaction and stress, anxiety, and depression in the families of patients who died. Results: The median patient age was 85 years; roughly 20% had do-not-resuscitate forms at the time of admission, and 10% had previously appointed surrogates. Of the 154 subjects randomized to the intervention group, 125 (81%) received advance care planning. Of these, 108 (86%) expressed care preferences and/or appointed a surrogate. Advance care plan discussions lasted a median of 60 minutes, and families were present at 72% of the discussions. Complete documentation of care preferences correlated strongly with family member presence. Fifty-six patients died over the 6-month follow-up period. The intervention group was more likely to have had end-of-life wishes known and followed (25/29 deaths vs 8/27 among controls; P <0.001) and reported greater satisfaction with care. Compared to controls, family members of intervention subjects reported significantly less stress (17% vs 55%), anxiety (0% vs 11%), and depression (0% vs 19%), and higher satisfaction with the quality of the patient's death (83% vs 48%). Conclusions: The advance care planning intervention improved end-of-life care for both patients and their families. Reviewer's Comments: Studies of advance care planning in the United States have often failed to demonstrate significant improvement in end-of-life care, perhaps owing to patient wishes not being fully explored, wishes not always being followed, and/or failure to have a well-informed surrogate decision-maker. This Australian study is a welcome exception that indicates that trained non-physician facilitators can improve end-of-life care outcomes for patients and their families by enabling patients' wishes to be determined, documented, and respected. This reviewer is hopeful that this model of advance care planning will prove to be reproducible and exportable to the United States. (Reviewer-Jeff Wallace, MD, MPH). © 2010, Oakstone Medical Publishing

Keywords: Advance Care Planning, End-of-Life Care, Elderly

Print Tag: Refer to original journal article

Are Antipsychotics Associated With Increased Risk of CAP in Elderly Patients?

Association of Community-Acquired Pneumonia With Antipsychotic Drug Use in Elderly Patients: A Nested Case-Control Study.

Trifirò G, Gambassi G, et al:

Ann Intern Med 2010; 152 (April 6): 418-425, W139-W140

Elderly patients who take antipsychotics are more likely to be diagnosed with community-acquired pneumonia.

Background: Antipsychotic medications have been associated with an increased risk of death when used to treat behavioral problems in patients with dementia. The mechanism of this association is not clear; one hypothesis is that antipsychotics increase the risk of fatal pneumonia. Objective: To evaluate whether antipsychotic use in patients aged ≥65 years is associated with an increased risk of fatal or nonfatal pneumonia. Methods: This population-based nested case-control study used a Dutch general practice database. The cohort included patients ≥65 years of age (without lung cancer) receiving their first prescription for an antipsychotic between January 1996 and December 2006. Cases had a new diagnosis of community-acquired pneumonia (CAP) that was validated by manual review of the electronic medical records. Controls were matched by year of birth, sex, and index date. Results: 88% of the cohort used typical antipsychotics, with a mean duration of use of approximately 4 to 5 months. Only about 30% of the patients were taking antipsychotics to control behavioral symptoms of dementia; other indications for antipsychotic use included anxiety disorders and delirium. Case patients were more likely to be housebound, have chronic obstructive pulmonary disease, and be taking corticosteroids. Both case and control patients had high use of benzodiazepines (approximately 30% of each group). Current use of antipsychotics was associated with an increased risk of pneumonia as compared to past use of antipsychotics (OR, 1.76; 95% CI, 1.22 to 2.53 for typical antipsychotics and OR, 2.61; 95% CI, 1.48 to 4.61 for atypical agents). Use of atypical (but not typical) antipsychotics was associated with fatal pneumonia (OR, 5.97; 95% CI, 1.49 to 23.98). The highest risk of pneumonia was observed during the first week of treatment. The risk was dose dependent, with patients receiving more than the median dose of either typical or atypical agents having a higher risk for pneumonia than those receiving less than the median dose. Conclusions: The authors conclude that the use of atypical or typical antipsychotics in the elderly is associated with an increased risk of CAP. Reviewer's Comments: This study adds to a body of evidence suggesting that antipsychotic medications may be harmful to elderly patients. We are not likely to have the luxury of large, randomized, controlled trials to give us a more definitive answer. This is a study to be aware of, but it will not change my practice. Unfortunately, we do not have other good pharmacologic options for delirium or dementia with behavioral disturbance. I will continue to prescribe antipsychotics sparingly and at low doses in patients with dementia with severe behavioral disturbance refractory to conservative management after a risk-benefit discussion with the family, or for a short period of time for management of severe, refractory delirium in hospitalized patients. (Reviewer-Susan E. Merel, MD). © 2010, Oakstone Medical Publishing

Keywords: Antipsychotics, Dementia, Elderly, Community-Acquired Pneumonia

Print Tag: Refer to original journal article

Older Adults at Risk for Cognitive Decline After Hospitalization

Association Between Acute Care and Critical Illness Hospitalization and Cognitive Function in Older Adults.

Ehlenbach WJ, Hough CL, et al:

JAMA 2010; 303 (February 24): 763-770

Older adults who are hospitalized may be at increased risk for cognitive decline and the development of dementia.

Background: There is a growing body of literature suggesting that survivors of critical illness are at risk for cognitive decline that may persist for years after hospital discharge. Because older patients may have baseline cognitive impairment, it is important to follow an objective measure of cognitive ability when studying the effects of an acute illness on cognition. Objective: To determine whether older adults hospitalized in an acute care or critical care setting are at increased risk for cognitive decline or incident dementia. Design/Participants: Retrospective analysis of data from the Adult Changes in Thought (ACT) study, a prospective cohort study of individuals aged ≥65 years belonging to a large health maintenance organization in Seattle. Methods: The ACT study enrolled community-dwelling individuals without dementia and evaluated them every 2 years between 1994 and 2007. The Cognitive Abilities Screening Instrument (CASI), a 25-item screening test for dementia, was administered as the initial screening test to exclude dementia and subsequently at each visit. If an individual screened positive for dementia, a full standardized clinical examination was conducted, and the diagnosis of dementia was reached by consensus. Data from the ACT study were linked with claims data from hospitalizations. Hospitalization for critical illness was defined by ICD-9 code. Individuals who were hospitalized for primary brain injury were excluded. Results: Data for 2929 individuals were analyzed. The mean (SD) follow-up was 6.1 (3.2) years. A total of 1287 individuals were hospitalized for noncritical illness, and 41 were hospitalized for critical illness. Adjustments were made for variables including baseline CASI score, age, and education. Individuals hospitalized in the acute care setting had an adjusted median change in CASI score of −1.01 points (95% CI, −1.33 to −0.70; P <0.001) at the study visit following the hospitalization as compared to the preceding interval. For individuals hospitalized for critical illness, the adjusted median change in CASI score was −2.14 points (95% CI, −4.24 to −0.03; P =0.047). The adjusted hazard ratio for incident dementia was 1.4 following a noncritical illness hospitalization (95% CI, 1.1 to 1.7; P =0.001); this association was not significant for critical illness hospitalization. Conclusions: Older adults who were hospitalized were more likely to experience cognitive decline. Those hospitalized for acute, but not critical, illness were more likely to be diagnosed with dementia after the hospitalization. Reviewer's Comments: This is a very interesting association. It is not clear whether the hospitalizations contributed to the cognitive decline; individuals with cognitive impairment may be at greater risk for hospitalization. The study was not powered to detect an association between critical illness and incident dementia. Clinicians should routinely assess cognition in their elderly patients and should be aware that patients who have been hospitalized may be at increased risk for cognitive decline. (Reviewer-Susan E. Merel, MD). © 2010, Oakstone Medical Publishing

Keywords: Dementia, Critical Illness, Cognitive Impairment

Print Tag: Refer to original journal article

High-Dose PPI Tx No Better Than Non–High-Dose PPI Tx

High-Dose vs Non–High-Dose Proton Pump Inhibitors After Endoscopic Treatment in Patients With Bleeding Peptic Ulcer: A Systematic Review and Meta-Analysis of Randomized Controlled Trials.

Wang C-H, Ma MH-M, et al:

Arch Intern Med 2010; 170 (May 10): 751-758

High-dose PPI therapy is no better than non–high-dose PPI therapy at preventing recurrent bleeding, avoiding surgery, or reducing mortality in patients with confirmed bleeding peptic ulcers.

Background: It is standard practice in my institution to use high-dose proton pump inhibitors (PPIs) in patients with active upper gastrointestinal bleeds. Evidence shows superior outcomes in avoidance of surgical intervention, rebleeding, and mortality for this practice compared to placebo. However, similar outcomes have been reported comparing lower-dose PPI therapy to histamine receptor blockers. Objective: To compare high-dose PPI therapy and non–high-dose PPI therapy in patients with a confirmed bleeding peptic ulcer after endoscopic therapy. Outcomes of interest were rebleeding, surgical intervention, and mortality. Design: Meta-analysis of randomized controlled trials (RCTs). Methods: A meta-analysis was performed of RCTs that directly compared high-dose and non–high-dose PPI approaches in patients with a confirmed bleeding peptic ulcer after endoscopic intervention. High-dose PPI therapy was defined as an 80-mg IV bolus followed by an 8-mg/h continuous infusion for 72 hours, or >192 mg/day; non–high-dose therapy was defined as any dose less than "high dose." Results: 7 high-quality RCTs were identified, representing 1157 patients. Compared to non–high-dose PPIs, high-dose PPI therapy did not change post-procedural bleeding rates (OR, 1.30; 95% CI, 0.88 to 1.91), the need for post-endoscopic surgical intervention (OR, 1.49; 95% CI, 0.66 to 3.37), or mortality (OR, 0.89; 95% CI, 0.37 to 2.13). These results did not vary with severity of the bleeding event, route of PPI therapy, or dose of PPI. Conclusions: The authors concluded that high-dose PPI therapy did not further reduce rates of rebleeding, surgical intervention, or mortality as compared to non–high-dose PPI therapy after endoscopic treatment for bleeding peptic ulcer. Reviewer's Comments: This study strengthens the conclusions of a prior meta-analysis and calls into question our usual practice of high-dose PPI therapy in hospitalized patients with confirmed peptic ulcer disease. Because of methodological differences among the 7 studies, my confidence in the strength of the meta-analysis conclusions is slightly lower. However, because several studies did not use intention-to-treat analysis, we would expect the effect size of high-dose PPIs to be overestimated, and yet no superiority was found. The inclusion of 2 studies with Asian populations (who have been shown to have a higher response to PPIs than non-Asians) and of studies that may have selected less ill subjects may dilute the potential effect of high-dose PPI therapy. Despite this, I find myself compelled to question the added value of high-dose PPIs given the higher costs conferred. Ideally, a large head-to-head RCT with an intention-to-treat protocol could strengthen this evidence further. Institutions would be more apt to change policy if cost savings without harm could be shown with a lower-dose PPI intervention. Further studies should also focus on the ideal dose range for PPI therapy in this population. (Reviewer-Genevieve L. Pagalilauan, MD). © 2010, Oakstone Medical Publishing

Keywords: Peptic Ulcers, Endoscopic Tx, Bleeding, Proton Pump Inhibitors

Print Tag: Refer to original journal article

Certain Probiotics May Improve IBS Symptoms

The Efficacy of Probiotics in the Treatment of Irritable Bowel Syndrome: A Systematic Review.

Moayyedi P, Ford AC, et al:

Gut 2010; 59 (March): 325-332

Probiotics appear to improve pain and global symptoms in patients with irritable bowel syndrome, but the magnitude of benefit and the most effective probiotic species are uncertain.

Background: Available treatment options for irritable bowel syndrome (IBS) are limited and often less effective than desired. Bulking agents (eg, psyllium) and antispasmodics provide some symptom relief. Lubiprostone (Amitiza®), a chloride channel subtype-2 agonist, provides some benefit in constipation-predominant IBS, but it is costly. Other treatment options are needed. Objective: To systematically review available data to determine if probiotics improve IBS symptoms. Design: Systematic review of the available literature from 1966 to 2008. Methods: The authors searched MEDLINE, EMBASE, and the Cochrane Controlled Trials Register, as well as abstracts from Digestive Diseases Week and United European Gastroenterology Week. To be included, studies needed to be parallel-group, randomized controlled trials (RCTs) with ≥1 week of comparison between a probiotic and placebo (or no treatment) in adults with IBS. Study outcomes needed to include changes in abdominal pain or global IBS symptoms. Dichotomous outcomes were synthesized using relative risk (RR) for symptoms not improving, and continuous data were evaluated using standardized mean difference (SMD) with random effects models. Results: Overall, 19 RCTs involving 1650 patients were identified. The studies were generally of good quality, with 11 RCTs scoring 4 out of 5 on the Jadad scale. Ten of the 19 RCTs, representing 918 patients, had outcomes with a dichotomous variable. These studies demonstrated significant improvement in IBS symptoms with probiotics compared to placebo (RR of IBS not improving with probiotics, 0.71; 95% CI, 0.57 to 0.88; number needed to treat = 4). Unfortunately, there was significant heterogeneity in the studies and possible funnel plot asymmetry suggesting a bias toward publication of small positive studies. Fifteen of the 19 RCTs, including 1351 patients, reported improvement in IBS scores as a continuous variable, with probiotics leading to benefit in symptoms compared to placebo with an SMD of –0.34 (95% CI, –0.61 to –0.07). Again, there was statistically significant heterogeneity among the studies, but this was explained by a single outlying trial. When this trial was excluded, the benefit of probiotics persisted (SMD, –0.18; 95% CI, –0.29 to –0.06). There was no significant funnel plot asymmetry in the continuous variable studies. No difference was found between the various types of probiotics used, including Lactobacillus, Bifidobacterium, Streptococcus, and combinations of these; all showed improvement in IBS symptoms. Conclusions: While probiotics appear to reduce IBS symptoms, the magnitude of benefit and the most effective species and strain remain uncertain. Reviewer's Comments: While the data available to evaluate the efficacy of probiotics in treating IBS are imperfect, the probable benefit is consistent with the effects of probiotics in other gastrointestinal conditions. Other studies have shown that probiotics reduce the risk of antibiotic-associated gastrointestinal symptoms and traveler's diarrhea. Probiotics also shorten the duration of symptoms in infectious diarrhea. (Reviewer-Melissa Hagman, MD). © 2010, Oakstone Medical Publishing

Keywords: Irritable Bowel Syndrome, Probiotics

Print Tag: Refer to original journal article

Rifaximin Reduces Recurrence of Acute Hepatic Encephalopathy

Rifaximin Treatment in Hepatic Encephalopathy.

Bass NM, Mullen KD, et al:

N Engl J Med 2010; 362 (March 25): 1071-1081

Rifaximin added to lactulose can reduce the recurrence of acute encephalopathy in certain patients with cirrhosis.

Background: Hepatic encephalopathy is a significant complication of chronic liver disease, with a major impact on patients and the health-care system. Treatment of acute episodes with oral antibiotics to reduce the production of ammonia and lactulose to reduce the absorption of ammonia is common practice. Long-term prevention with lactulose is also commonly prescribed to reduce the frequency of acute episodes; however, side effects often limit regular adherence. Safety issues have traditionally limited the use of long-term oral antibiotics for prevention. Rifaximin is an oral antibiotic that is minimally absorbed from the gastrointestinal tract and has been used successfully in the treatment of acute hepatic encephalopathy. Objective: To determine if rifaximin taken daily can safely reduce the frequency and severity of acute hepatic encephalopathy in patients with cirrhosis when compared to placebo. Design: Multicenter, randomized, double-blind, placebo-controlled study. Methods: Adult patients with cirrhosis and encephalopathy in remission, at least 2 episodes of acute encephalopathy in the prior 6 months, and Model for End-Stage Liver Disease (MELD) scores <26 were considered for enrollment. There were numerous exclusion criteria. Eligible patients were randomized to rifaximin 550 mg twice daily or placebo for 6 months or until a breakthrough episode of encephalopathy. Patients could use lactulose during the study. The primary end point was an episode of acute hepatic encephalopathy, defined as a Conn score of ≥2 and asterixis. The secondary end point was hospitalization involving hepatic encephalopathy. Patients were evaluated in the clinic on days 7 and 14 and then every 2 weeks. They were contacted by telephone during each week that they were not directly observed. Results: 299 patients from 70 sites were included. The majority of participants were white males, and the average age was 56 years. Over 90% in each group were taking lactulose at enrollment. Breakthrough episodes of acute encephalopathy occurred in 22.1% of patients on rifaximin and 45.9% of patients taking placebo. The hazard ratio for an acute episode in those taking rifaximin compared to placebo was 0.42 (95% CI, 0.28 to 0.64; P <0.001). The risk of hospitalization for encephalopathy in those taking rifaximin was also reduced by 50% compared to placebo. There were no significant differences in adverse events between the 2 groups. Conclusions: Rifaximin greatly reduced the recurrence of acute encephalopathy and hospitalization involving encephalopathy in patients with cirrhosis. The number needed to treat to prevent 1 exacerbation during a 6-month period is 4 patients. Reviewer's Comments: Hepatic encephalopathy in patients with chronic liver disease causes significant burden for the patient and the health-care system. This well-done study suggests that adding rifaximin to lactulose for the maintenance of remission results in decreased exacerbations and hospitalizations. There were many exclusion criteria used in this trial; thus, these results on safety and efficacy apply only to a subset of our patients with chronic liver disease. (Reviewer-Deborah L. Greenberg, MD). © 2010, Oakstone Medical Publishing
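
The number needed to treat quoted in the conclusions follows directly from the two breakthrough rates reported above. A minimal sketch of the arithmetic (the function name is ours; the rates are the review's):

```python
def number_needed_to_treat(control_rate: float, treatment_rate: float) -> float:
    """NNT = 1 / absolute risk reduction."""
    return 1 / (control_rate - treatment_rate)

# Breakthrough encephalopathy over 6 months, as reported above:
# 45.9% with placebo vs 22.1% with rifaximin.
print(round(number_needed_to_treat(0.459, 0.221), 1))  # ARR = 23.8 points -> NNT ≈ 4.2
```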

Keywords: Cirrhosis, Hepatic Encephalopathy, Rifaximin

Print Tag: Refer to original journal article

Consider Using Vitamin E Tx for NASH in Nondiabetic Adults

Pioglitazone, Vitamin E, or Placebo for Nonalcoholic Steatohepatitis.

Sanyal AJ, Chalasani N, et al:

N Engl J Med 2010; 362 (May 6): 1675-1685

In nondiabetic persons with NASH, vitamin E 800 IU/day improves liver histology and serum aminotransferase levels. Pioglitazone may also lead to some improvement, but it causes undesirable weight gain.

Background: Nonalcoholic steatohepatitis (NASH) is common and progresses to cirrhosis in ≥15% of individuals. There are few treatment options other than lifestyle modification. Objective: To evaluate the efficacy of 2 separate interventions (vitamin E or pioglitazone) for the treatment of NASH. Design/Participants: Multicenter, prospective, randomized, placebo-controlled, double-blind trial of vitamin E or pioglitazone for treatment of NASH in 247 nondiabetic adults with biopsy-proven NASH. Exclusion criteria included cirrhosis, hepatitis C or other liver disease, heart failure, significant alcohol consumption, and use of medications known to cause steatohepatitis. Methods: Participants were randomized to pioglitazone 30 mg daily plus a vitamin E-like placebo, versus vitamin E 800 IU daily plus a pioglitazone-like placebo, versus both placebos, for 96 weeks. The primary outcome was improvement in liver histology as measured by a composite of steatosis, lobular inflammation, hepatocellular ballooning, and fibrosis. Initial biopsies were performed within 6 months of study randomization, and follow-up biopsies were completed at week 96. Secondary outcomes included individual histology features, changes in serum aminotransferase levels, anthropometric measures, insulin resistance, lipid profiles, and health-related quality of life. Results: Vitamin E produced improvement in the primary outcome of overall NASH liver histology compared to placebo (43% vs 19%; P =0.001; number needed to treat [NNT], 4.2). The improvement in overall histology with pioglitazone compared to placebo did not reach statistical significance (34% vs 19%; P =0.04, with P <0.025 considered significant due to the 2 planned primary comparisons in the study; NNT, 6.9). Both vitamin E and pioglitazone significantly reduced serum aminotransferase levels (P <0.001 for both), steatosis (vitamin E, P =0.005; pioglitazone, P <0.001), and lobular inflammation (vitamin E, P =0.02; pioglitazone, P =0.004). Neither agent improved hepatic fibrosis. Persons receiving pioglitazone gained a significant amount of weight (mean, 4.7 kg) compared to no significant weight gain in the other groups. Pioglitazone was associated with significant improvement in insulin resistance and high-density lipoprotein levels. Quality-of-life scores and medication side effects were not different among the groups, though the study was not powered to test any safety-related hypotheses. Conclusions: Vitamin E is superior to placebo for the treatment of NASH in adults without diabetes. Although pioglitazone did not meet statistical significance for the prespecified primary outcome measure, it was associated with significant improvement in other histologic features of NASH and may have efficacy as well. Reviewer's Comments: I had dismissed vitamin E as a general therapeutic agent years ago based on a meta-analysis by Miller and colleagues (Ann Intern Med 2005;142:37-46) suggesting that there was an increase in all-cause mortality in persons who took ≥400 IU/day of vitamin E. Now, although the long-term effects of vitamin E in NASH are unknown, I will strongly consider using vitamin E in nondiabetic NASH patients while continuing not to recommend vitamin E in other patients. (Reviewer-Melissa Hagman, MD). © 2010, Oakstone Medical Publishing
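
The handling of the pioglitazone P value above reflects a standard multiple-comparison adjustment: with 2 planned primary comparisons against placebo, the overall 0.05 alpha is split across comparisons (a Bonferroni-style split; attributing this exact rationale to the trial is our inference). A minimal sketch:

```python
# Two planned primary comparisons against placebo share the overall alpha of 0.05,
# giving a per-comparison significance threshold of 0.025.
alpha_per_comparison = 0.05 / 2

for agent, p_value in [("vitamin E", 0.001), ("pioglitazone", 0.04)]:
    verdict = "significant" if p_value < alpha_per_comparison else "not significant"
    print(f"{agent}: P = {p_value} -> {verdict} at the 0.025 threshold")
```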

Keywords: Nonalcoholic Steatohepatitis, Vitamin E, Pioglitazone

Print Tag: Refer to original journal article

PPI Use During C. difficile Tx Appears to Increase Recurrent Infection

Proton Pump Inhibitors and Risk for Recurrent Clostridium difficile Infection.

Linsky A, Gupta K, et al:

Arch Intern Med 2010; 170 (May 10): 772-778

The use of PPIs during treatment of C. difficile colitis appears to increase the risk of recurrent CDI.

Background: Proton pump inhibitors (PPIs) have been implicated in increasing the risk of hip fracture, pneumonia, and Clostridium difficile infection (CDI). Moreover, multiple studies have shown that PPIs are often prescribed without clear indications. Objective: To determine the association between PPI use and the risk of recurrent CDI. Design/Participants: Retrospective cohort study of 1166 inpatients and outpatients identified in the administrative databases of the New England Veterans Healthcare System from October 2003 through September 2008 with an initial diagnosis of CDI. Methods: Initial CDI was defined as a positive stool toxin test and a prescription for either metronidazole or oral vancomycin. Persons who received any PPI within 14 days after their CDI diagnosis were classified as PPI-exposed. PPI use was identified by review of inpatient pharmacy dispensing records and filled outpatient prescriptions. The primary outcome was a positive C. difficile toxin test 15 to 90 days after the initial CDI diagnosis. Outcomes were adjusted for comorbid illness, medication use, initial antibiotic treatment (metronidazole vs vancomycin), use of non–C. difficile-targeted antibiotics, hospitalization, and duration of hospitalization after CDI diagnosis. Results: 97% of participants were men, and the average participant age was 74 years. A total of 527 (45.2%) participants had PPI exposure in the 14 days after CDI diagnosis; 639 (54.8%) persons were not exposed. Almost all (96.7%) PPI-exposed individuals received omeprazole 20 mg orally daily or the equivalent. The PPI-exposed group had more ischemic heart disease, chronic obstructive pulmonary disease, esophageal disease, peptic ulcer disease, and rheumatologic disease based on ICD-9-CM codes than did the non–PPI-exposed group. Individuals receiving PPIs were also more likely to be inpatients, to have hospital stays >14 days, to take systemic corticosteroids, and to be exposed to antibiotics not targeted at CDI. After adjusting for these differences, recurrent CDI was more common in PPI-exposed participants than in non–PPI-exposed participants (25.2% vs 18.5%; HR, 1.42; 95% CI, 1.11 to 1.82). The HR remained significant (1.52) even after excluding persons who had positive toxin tests 15 to 90 days after treatment but no pharmacy data confirming additional antibiotic treatment for CDI (and thus perhaps no recurrent clinical disease). Conclusions: PPI use during treatment of an initial CDI is associated with a 42% increase in the risk of recurrent CDI. Reviewer's Comments: While not definitive, this study adds to the proposed detrimental effects of PPIs by suggesting that PPIs make treatment of CDI more difficult. At the very least, this study should urge providers to discontinue PPIs in patients without clear indications for their use. Providers could also consider suspending PPI use during CDI treatment even in those patients with indications for PPIs, based on consideration of the risks/benefits. (Reviewer-Melissa Hagman, MD). © 2010, Oakstone Medical Publishing

Keywords: Clostridium difficile, Infection Recurrence, Proton Pump Inhibitors

Print Tag: Refer to original journal article

How Safe Is the Herpes Zoster Vaccine?

Safety of Herpes Zoster Vaccine in the Shingles Prevention Study: A Randomized Trial.

Simberkoff MS, Arbeit RD, et al:

Ann Intern Med 2010; 152 (May 4): 545-554

The herpes zoster vaccine causes minor inoculation-site effects (redness) but no more serious adverse events than placebo.

Background: Vaccination against herpes zoster is recommended for preventing shingles and postherpetic neuralgia in immunocompetent adults, but many physicians are unsure about its safety profile. Objective: To describe side effects and adverse events of the herpes zoster vaccine in immunocompetent older adults. Design: Randomized, placebo-controlled trial from 1998 through 2001 in 22 U.S. academic medical centers (many Veterans Affairs). Participants: 38,546 immunocompetent adults ≥60 years old were enrolled. The adverse events substudy included 6,616 subjects. Methods: Subjects were given a single dose of herpes zoster vaccine or blinded placebo. For the first 42 days after inoculation, vaccination-related adverse events (minor and serious) were recorded. In addition, a separate (voluntary, nonrandomized) substudy of adverse events was conducted. Results: Serious events were reported in 255 (1.4%) vaccine recipients and 254 (1.4%) placebo recipients. In the substudy, inoculation-site reactions were reported by 1604 (48%) of vaccine recipients and 539 (16%) of placebo recipients. More local side effects were noted among younger (60 to 69 years old) than among older (>70 years old) vaccine recipients. After inoculation, a herpes zoster rash occurred in 24 placebo recipients and only 7 vaccine recipients. Long-term follow-up (mean, 3.39 years) revealed no difference in rates of hospitalization or death between vaccine and placebo recipients. Conclusions: The herpes zoster vaccine appears safe and well tolerated in immunocompetent older adults. Reviewer's Comments: Herpes zoster vaccination rates among target patients in the U.S. (immunocompetent, ≥60 years old) are still very low. Though many physicians cite the high cost of the vaccine (often >$200), some may be wary of using the herpes zoster vaccine due to safety concerns. This follow-up report (from the initial Shingles Prevention Study) revealed no difference in serious adverse events between vaccination and placebo arms, though there were more local effects, such as redness and tenderness, in the vaccine arm. Older patients (>70 years old) seem less susceptible to these local effects than patients aged 60 to 69 years. It is worth noting that the Shingles Prevention Study did NOT include subjects who were nonambulatory, immunocompromised, or cognitively impaired. This limits the generalizability of the study and is a reminder that a decision to vaccinate should be made after a careful review of the risks and benefits for any one individual. (Reviewer-Molly Blackley Jackson, MD). © 2010, Oakstone Medical Publishing

Keywords: Herpes Zoster, Shingles, Vaccination

Print Tag: Refer to original journal article

Atypical Antipsychotic Medications Double Risk of OSA

Atypical Antipsychotic Medications Are Independently Associated With Severe Obstructive Sleep Apnea.

Rishi MA, Shetty M, et al:

Clin Neuropharmacol 2010; 33 (May/June): 109-113

Patients on atypical antipsychotic medications should be periodically assessed for signs of metabolic syndrome, including symptoms or signs of possible moderate to severe obstructive sleep apnea.

Background: Atypical antipsychotic (AA) medications are widely prescribed for both on- and off-label uses, including for their sedating properties. AAs can cause weight gain and increase the risk of diabetes. Objective: To study the effect of AA medications on breathing during sleep. Design/Participants: Retrospective analysis of overnight diagnostic sleep studies performed on adults for any reason at one community hospital over a nearly 2-year period (n=842). Methods: Patients took their usual bedtime medications per sleep center protocol. Eight percent of patients (n=68) took an AA; overall, the age, gender, and body mass index (BMI) of this group did not differ from the other 92%. One-third of those taking AA medications also took other sedating medications. Overall, the 30% of patients who took any sedating medication were older and more likely to be female than the 70% who did not. Results: Increasing age, higher BMI, larger neck circumference, and male gender were each associated with a greater risk of moderate to severe obstructive sleep apnea (OSA), as has long been established. Controlling for these factors, use of AA medications doubled the odds of moderate to severe OSA, whereas use of a non-AA sedating medication did not increase them. The increased risk from AA medication use was similar in degree to that of male gender and greater than that conferred by obesity or by large neck circumference. Central apneas did not differ between groups. Conclusions: Use of an AA medication may increase the risk of OSA independent of any effect on weight. The researchers warn that this study was fairly small and that people symptomatic from AA-related OSA might be more likely to have a sleep study than the AA-using population in general. Reviewer's Comments: This finding is interesting, although these researchers rightly point out a potential source of referral bias. However, a link is biologically plausible: AA medications are known to increase the risk of metabolic syndrome, including weight gain, diabetes mellitus, and adverse changes in lipids, and obstructive sleep apnea is strongly associated with metabolic syndrome. At this time, it seems prudent to minimize the use of AA medications when equally effective and potentially safer alternatives are available, and to periodically monitor patients on AA medications for symptoms and signs of obstructive sleep apnea and for adverse changes in weight, lipids, blood pressure, and glucose control. (Reviewer-Eliza L. Sutton, MD). © 2010, Oakstone Medical Publishing

Keywords: Obstructive Sleep Apnea, Atypical Antipsychotic Medications

Print Tag: Refer to original journal article


Dopamine Agonists for RLS May Induce Impulse Control Disorders in Patients

Impulse Control Disorders With the Use of Dopaminergic Agents in Restless Legs Syndrome: A Case-Control Study.

Cornelius JR, Tippmann-Peikert M, et al:

Sleep 2010; 33 (January 1): 81-87

Approximately one-third of patients taking DAs for RLS report compulsive behaviors, including compulsive eating, shopping, sex, and gambling.

Background: Impulse control disorders (ICDs) occur in approximately 6% of people with Parkinson disease (PD); that rate doubles with dopamine agonist (DA) use. Pathologic gambling can occur with DA use for restless legs syndrome (RLS), but the prevalence and range of ICD behaviors are not known. Objective: To describe ICDs in people with RLS taking DA medications (RLS-DA). Design/Participants: Consecutive enrollment of patients presenting to a tertiary sleep center: 100 people with RLS who had been treated with DAs (RLS-DA); 275 people with obstructive sleep apnea (OSA) but no RLS or DA use (OSA controls); and 52 people with RLS who had never taken a DA (RLS controls). Prior diagnosis of parkinsonism, obsessive-compulsive disorder, and mania were disqualifying conditions. Methods: Patients were administered a questionnaire that screened for compulsive behaviors around shopping, eating, gambling, sexuality, and "punding" (intrusive, repetitive actions). Those screening positive for any ICD were contacted by phone, informed of the findings, offered follow-up care, and studied further in a semi-structured interview. Results: An ICD was present in 32% of the RLS-DA group. Compulsive eating occurred in 12% of both the OSA control group and the RLS-DA group and was therefore excluded from the between-group comparison. Twenty percent of the RLS-DA group had at least 1 other ICD; compulsive shopping and gambling each occurred significantly more commonly among RLS-DA patients. All RLS-DA patients with an ICD had taken, or were taking, pramipexole: two thirds currently and one third previously (all of the latter currently took ropinirole). DA doses were typical for RLS; the mean dose of pramipexole in the RLS-DA group was 1.25 mg/day for those with any ICD versus 0.67 mg/day for those with no ICD. No dose association was found for ropinirole or levodopa. Symptoms appeared on average 9.5 months after initiation of the triggering medication (range, 2 weeks to 3 years). The researchers included case reports from 4 subjects detailing serious financial, marital, and/or legal consequences of ICDs. Of the 8 RLS-DA patients with an ICD who stopped their medication as part of the study, the ICD resolved completely within 7 weeks in 7 and improved significantly in the eighth. Conclusions: RLS patients commonly develop ICDs when treated with modest doses of DA agents; physicians should screen for this. Reviewer's Comments: This information is important for anyone who prescribes dopamine agonists. The prevalence of ICDs was high, the range of behaviors was wide, the consequences could be severe, and the onset occurred as late as 3 years after starting treatment. The study questionnaire, an appendix to the paper, may be useful for screening patients on dopamine agonists in clinical practice. (Reviewer-Eliza L. Sutton, MD). © 2010, Oakstone Medical Publishing

Keywords: Restless Legs Syndrome, Dopamine Agonists, Impulse Control Disorders

Print Tag: Refer to original journal article


Another Potential Role for HPV Testing -- Evaluating AGCs

Relationship of Atypical Glandular Cell Cytology, Age, and Human Papillomavirus Detection to Cervical and Endometrial Cancer Risks.

Castle PE, Fetterman B, et al:

Obstet Gynecol 2010; 115 (February): 243-248

In a woman with AGCs on Pap smear, a positive test result for high-risk HPV significantly increases the likelihood that the AGCs are from cervical (rather than endometrial) dysplasia or cancer.

Background: Testing for oncogenic strains of human papillomavirus (HPV) is now established as an adjunct in cervical cancer screening and in the evaluation of atypical squamous cells identified on Pap smears. Atypical glandular cells (AGCs) are a less common cytologic finding that may arise from the cervix or the endometrium and are currently evaluated by colposcopy and often by endometrial biopsy. The role, if any, for HPV testing in the evaluation of AGCs is not established. Objective: To determine the risk, by HPV result, of cervical versus endometrial neoplasia in women with AGCs. Design/Participants: Cross-sectional study of 1442 women with AGCs in a large health-maintenance organization that had introduced HPV testing into screening algorithms in 2003. Methods: Outcomes were diagnoses of dysplasia or cancer after evaluation for AGCs. Results: Nearly 17% of women had dysplasia or malignancy. Ninety-five percent of cervical dysplasia/cancer cases (cervical intraepithelial neoplasia-2 or worse) had tested positive for high-risk HPV, and 85% of endometrial atypia/cancer cases had tested negative for high-risk HPV. After AGCs plus positive high-risk HPV, cervical dysplasia/cancer was diagnosed in 39% and endometrial atypia/cancer in 0.8%, a ratio of almost 50:1. After AGCs plus negative high-risk HPV, cervical dysplasia/cancer was diagnosed in 2.5% and endometrial atypia/cancer in 6.9%, a ratio of 1:2.75. Among women aged >50 years, 8.5% of the 497 women with AGCs had cancer: 420 were HPV negative (of whom 10.5% had endometrial cancer and 0.4% had cervical cancer), and 77 were HPV positive (of whom 10.4% had cervical cancer and none had endometrial cancer). Conclusions: The authors conclude that testing for high-risk HPV may help discriminate between cervical and endometrial neoplasia risk when AGCs are found, especially in women >50 years of age. Reviewer's Comments: The authors' conclusion that HPV testing better distinguishes between cervical and endometrial neoplasia in women >50 years old than in younger women rests on 3 cases of endometrial cancer diagnosed in younger women who had tested positive for HPV. Overall, the ratio of cervical to endometrial neoplasia in this study was approximately 50:1 with HPV positivity and approximately 1:3 with HPV negativity, showing that a positive HPV result does convey useful information about the likelihood of a cervical source. HPV testing may soon more accurately direct initial diagnostic testing in cases of AGCs, but currently, AGCs should still prompt further evaluation with colposcopy (including endocervical biopsy) and with endometrial biopsy (except in women aged <35 years who have no risk factors for endometrial neoplasia). (Reviewer-Eliza L. Sutton, MD). © 2010, Oakstone Medical Publishing
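
For readers who want to check the arithmetic behind the ratios quoted above, the following minimal Python sketch (ours, not the authors') reproduces them from the reported percentages:

# Ratios of cervical to endometrial neoplasia diagnoses, by HPV status,
# using the percentages reported in the abstract above.

# HPV positive: cervical 39% vs endometrial 0.8%
print(round(39.0 / 0.8, 1))   # 48.8, i.e., "almost 50:1" in favor of a cervical source
# HPV negative: cervical 2.5% vs endometrial 6.9% (endometrial dominates)
print(round(6.9 / 2.5, 2))    # 2.76, i.e., roughly 1:2.75 cervical to endometrial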

Keywords: Papanicolaou Screening, HPV, Endometrial Ca, Cervical Ca

Print Tag: Refer to original journal article


A Balanced View of Menopausal Hormone Therapy for 2010

Estrogen and Progestogen Use in Postmenopausal Women: 2010 Position Statement of the North American Menopause Society.

North American Menopause Society:

Menopause 2010; 17 (March): 242-255

Hormone therapy is a safe and effective option for many (but not all) women experiencing symptoms of the menopause transition.

Background/Objective: Prescription of hormone therapy (HT) declined precipitously after large trials showed that HT (initiated on average 10 years after menopause, at an average age of >60 years) conveyed significant risks. More recently, risks and benefits of HT have been studied in women closer to menopause. The North American Menopause Society updated its 2008 recommendations on HT in postmenopausal women to include the most recent data on various aspects of HT use, including (where available) information on the risks and benefits of HT based on time since menopause, age, and duration of use. Participants/Methods: An advisory panel of researchers and clinicians in pertinent fields evaluated recent studies, summarized current understanding of benefits and risks (and the limits thereof), and made recommendations on the appropriate clinical use of HT around menopause. Results: The statement comprehensively discusses the risks and benefits of HT, including conditions for which HT is not indicated, considerations around formulations and routes of administration, and limitations in current understanding. Initiation of HT for symptoms within 10 years of menopause does not seem to increase the risk of coronary heart disease events, and earlier initiation may reduce the risk. Data from the Women's Health Initiative, when examined by age and years since menopause, show a reduction in overall mortality when HT is initiated in women aged <60 years, but not in older women. HT seems to promote lung cancer, a topic for discussion with any smoker considering HT. Conclusions: The risk-benefit balance of HT is favorable for many women when HT is begun close to the time of menopause, but the risks are higher in older women and in women who are farther from menopause, with no clear indication of an ideal duration of use. HT use for symptoms around the time of menopause, or for osteoporosis in some postmenopausal women, is supported by current data. Medical evaluation should be performed before initiation of HT, and treatment decisions should be individualized, considering specific risks and benefits. The dose should be the lowest effective dose that achieves the treatment goal. Reviewer's Comments: This paper offers a useful updated review, particularly for the clinical situations in which HT is most likely to be initiated: women undergoing the menopause transition, or within a few years after menopause, who have symptoms or medical issues for which HT may offer benefit but who are not at high risk for any particular complication. The statement ends with 2 pages of recent references for those interested in delving deeper into the recent literature. (Reviewer-Eliza L. Sutton, MD). © 2010, Oakstone Medical Publishing

Keywords: Menopause, Hormone Therapy

Print Tag: Refer to original journal article


Primary HPV DNA Screening Plus Cytology Triage Detects Severe Dysplasia

Rate of Cervical Cancer, Severe Intraepithelial Neoplasia, and Adenocarcinoma In Situ in Primary HPV DNA Screening With Cytology Triage: Randomised Study Within Organised Screening Programme.

Antilla A, Kotaniemi-Talonen L, et al:

BMJ 2010; April 27: epub ahead of print

Primary HPV DNA screening combined with cytology triage is more sensitive at detecting severe dysplasia than Pap smear alone.

Background: Human papillomavirus (HPV) DNA testing is commonly used as a reflex test for borderline cytopathology (atypical squamous cells of undetermined significance). However, recent studies suggest that primary HPV testing, alone or in combination with traditional cytology, increases sensitivity for both low- and high-grade cytopathology in cervical cancer screening compared with conventional Pap smear alone. Objective: To compare primary HPV DNA screening plus cytology triage with conventional cytology for the detection of cervical intraepithelial neoplasia grade III (CIN III) and worse lesions. Design: Randomized controlled trial within the organized Finnish national screening program, using national databases. Participants: Women 30 to 60 years of age. Methods: 58,000 women were individually randomized to HPV testing plus cytology triage versus conventional cytology screening. In Finland, women are invited every 5 years for cervical cancer screening, and this study included 1 screening cycle (2003 to 2007). National databases for population registry, screening, and cancer were used. In the intervention arm, HPV tests and Pap smears were collected. In HPV-negative subjects, no further testing was done; for HPV-positive subjects, an unblinded cytological evaluation was conducted by conventional means. Management in both arms was determined by cytopathology. Notable differences were that, in the intervention arm, HPV-positive subjects with negative cytology had intensified screening recommended, and those with repeated borderline cytology or 3 consecutive positive HPV results were referred to colposcopy. Results: Results were reported as relative risks (RRs) between arms rather than as sensitivity, specificity, or negative/positive predictive values. More CIN III and worse (CIN III+) lesions were identified in the HPV testing arm (RR, 2.17). Similarly, more CIN III+ lesions were identified in the HPV arm among subjects triaged to intensified screening (RR, 2.67). A nonsignificant trend toward fewer CIN III+ lesions in negatively screened subjects was found in the HPV arm (RR, 0.28), suggesting a better negative predictive value. Interestingly, 11 cases of CIN III were detected because subjects were HPV positive despite negative cytopathology. Conclusions: In an organized screening program, primary HPV DNA screening plus cytology triage was more sensitive at detecting CIN III+ lesions than conventional cytology. Reviewer's Comments: This Finnish study supports a growing body of evidence for primary HPV testing. However, I find it interesting that the authors drew conclusions about sensitivity when sensitivity itself was never reported in the study. Based on the RRs, the intervention appears superior to Pap smear alone. Before implementing such models in the United States, where the magnitude of our screening effort is amplified by the size of our population and our more aggressive screening guidelines, we need to consider the concerns about low positive predictive value for CIN III+ lesions in women <35 years of age and the potential cost per quality-adjusted life-year (QALY) of primary HPV testing. (Reviewer-Genevieve L. Pagalilauan, MD). © 2010, Oakstone Medical Publishing

Keywords: Cervical Ca, Screening, HPV DNA Screening, Cytology

Print Tag: Refer to original journal article


Once Yearly Vitamin D Dose Increases Risk for Falls, Fractures

Annual High-Dose Oral Vitamin D and Falls and Fractures in Older Women: A Randomized Controlled Trial.

Sanders KM, Stuart AL, et al:

JAMA 2010; 303 (May 12): 1815-1822

Once-yearly dosing of vitamin D may improve medication adherence, but in this trial it increased, rather than reduced, the incidence of falls and fractures in community-dwelling elderly women.

Background: Vitamin D supplementation has been shown in some studies to reduce falls and fractures in elderly patients, especially when taken regularly. As with any medication, adherence to daily administration can be difficult for many patients. Objective: To determine whether a once-yearly large dose of vitamin D given to older women in the community would reduce falls and fractures. Design: Single-center, double-blind, randomized, placebo-controlled trial. Methods: Women ≥70 years of age living in the community in southern Victoria, Australia, were recruited to participate in the Vital D study through mailed invitations. Exclusion criteria included current vitamin D use, low risk for hip fracture, current treatment for osteoporosis, renal insufficiency, and elevated baseline calcium. Participants were randomly assigned to receive 500,000 IU of cholecalciferol in a single annual oral dose (ten 50,000-IU tablets taken at once) or placebo in the fall or winter each year for 3 to 5 years. They were followed for 1 year after their last dose. The primary outcomes were falls and fractures; end-point data were collected monthly throughout the study. Results: 2258 women (average age, 76 years) were randomized. Calcium intake at baseline was similar in the 2 groups. The vitamin D group had a slightly higher rate of falls (83.4 per 100 person-years) than the placebo group (72.7 per 100 person-years), with an RR of 1.15 (95% CI, 1.02 to 1.30; P =0.03). The vitamin D group also experienced more fractures: the RR for incidence of any fracture was 1.26 (95% CI, 1.00 to 1.59; P =0.047), and the RR for nonvertebral fracture was 1.28 (95% CI, 1.00 to 1.65; P =0.06). The excess risk for falls and fractures was seen primarily in the 3 months after dosing each year. In a subgroup of patients, baseline vitamin D levels were measured; they were normal, typical for community-dwelling elderly, and did not differ between the 2 groups. Conclusions: A high annual dose of oral vitamin D given to community-dwelling older women at higher risk for fracture actually increased their risk for falls by 15% and for fracture by 26%. The increased risk clustered in the 3 months after each annual dose. Reviewer's Comments: The appropriate dosing of vitamin D in an attempt to reduce falls and fractures in the elderly remains uncertain. But clearly, once-yearly high-dose oral therapy is not the solution. (Reviewer-Deborah L. Greenberg, MD). © 2010, Oakstone Medical Publishing
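
For orientation, the unadjusted rate ratio for falls can be recomputed from the reported fall rates. The Python sketch below is ours, not the trial's analysis (which also produced confidence intervals); it simply shows the arithmetic:

# Unadjusted rate ratio from the reported fall rates per 100 person-years.
def rate_ratio(rate_intervention, rate_placebo):
    return rate_intervention / rate_placebo

print(round(rate_ratio(83.4, 72.7), 2))  # 1.15, matching the reported RR for falls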

Keywords: Vitamin D, Falls, Fractures, Older Women

Print Tag: Refer to original journal article


Age-Adjusted Cut-Offs for D-Dimer Improve NPV for Ruling Out PE in Older Patients

Potential of an Age Adjusted D-Dimer Cut-Off Value to Improve the Exclusion of Pulmonary Embolism in Older Patients: A Retrospective Analysis of Three Large Cohorts.

Douma RA, le Gal G, et al:

BMJ 2010; March 30: epub ahead of print

Age-adjusted cut-offs for D-dimer improve the negative predictive value for ruling out pulmonary embolism in patients aged >50 years.

Background: D-dimer testing is widely used to help exclude the diagnosis of pulmonary embolism (PE) in low-probability patients. However, the conventional cut-off of 500 μg/L (0.5 μg/mL) has been shown to be less useful and less cost-effective in older patients. Objective: To assess a new age-adjusted cut-off for D-dimer in patients aged >50 years. Design/Methods: 2 European prospective multicenter cohort studies (1700 patients) of consecutive outpatients with suspected PE were combined. Receiver operating characteristic curves were created for the study population, and optimal D-dimer cut-offs were determined by age in patients aged >50 years. Linear regression was performed, and a regression coefficient was determined. Two other large prospective cohorts, with 3300 and 1700 patients, were used for validation. Outcomes of interest were the proportion of patients with a negative D-dimer, the proportion of patients in whom PE could be excluded, and false-negative rates. Results: PE was found in 24% of the derivation group. The optimal cut-off points varied from 512 μg/L in patients <50 years to 934 μg/L in those aged >80 years, corresponding to a regression coefficient of 11.2 μg/L per year of age. This was translated into an age-adjusted D-dimer cut-off of age × 10 μg/L for patients aged >50 years. In the derivation cohort, PE was excluded in 42% of subjects, compared with 36% using the conventional cut-off. In the 2 validation sets, PE could be excluded in 5% and 6% more subjects with the age-adjusted cut-off than with the conventional cut-off. This effect was largest in subjects aged >70 years, in whom 13% to 16% more patients could be ruled out for PE across all 3 data sets. The failure (false-negative) rate overall was 0.2% to 0.6%. Conclusions: The age-adjusted D-dimer cut-off, combined with validated clinical probability assessment, significantly increased the proportion of patients aged >50 years in whom PE could be excluded safely. Reviewer's Comments: This well-done study has validated a useful and easy-to-remember refinement of D-dimer testing for the exclusion of PE. Even in a busy emergency department, most providers should remember the simple formula for the age-adjusted D-dimer cut-off (example: in an 84-year-old patient, the cut-off = 84 × 10 μg/L = 840 μg/L, or 0.84 μg/mL). The use of CT angiography in our emergency departments seems ubiquitous, yet older patients with multiple comorbidities are more likely to develop contrast-induced nephropathy as a consequence. Avoiding unnecessary CT scans in this age group may mean less harm and less expense for our patients. Caution should be used before extrapolating these cut-offs to ambulatory settings, as they were validated specifically in emergency department patients. (Reviewer-Genevieve L. Pagalilauan, MD). © 2010, Oakstone Medical Publishing
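
Because the rule is simple enough to compute at the bedside, a minimal Python sketch of it follows. The sketch is ours, for illustration only; the function names and the nonhigh-probability flag are our own, and, as the study emphasizes, the rule applies only alongside a validated clinical probability assessment:

# Age-adjusted D-dimer cut-off: age x 10 ug/L for patients aged >50 years,
# otherwise the conventional 500 ug/L.
def d_dimer_cutoff_ug_per_l(age_years):
    return age_years * 10.0 if age_years > 50 else 500.0

def pe_excluded(age_years, d_dimer_ug_per_l, nonhigh_clinical_probability):
    # PE is excluded only when clinical probability is nonhigh AND the
    # measured D-dimer falls below the age-adjusted cut-off.
    return nonhigh_clinical_probability and d_dimer_ug_per_l < d_dimer_cutoff_ug_per_l(age_years)

print(d_dimer_cutoff_ug_per_l(84))   # 840.0 ug/L, the reviewer's example
print(pe_excluded(84, 700.0, True))  # True: 700 ug/L is below the 840 ug/L cut-off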

Keywords: Pulmonary Embolism, Screening, D-Dimer, Older Patients

Print Tag: Refer to original journal article


Nut Consumption Decreases Cholesterol

Nut Consumption and Blood Lipid Levels: A Pooled Analysis of 25 Intervention Trials.

Sabaté J, Oda K, Ros E:

Arch Intern Med 2010; 170 (May 10): 821-827

Nut consumption significantly lowers total and LDL cholesterol, especially in lean patients with high baseline cholesterol.

Background: Nut intake has been associated with decreased coronary heart disease risk, and multiple studies have examined the effect of nut intake on lipid levels. Objective: To determine the effect of nut consumption on blood lipid levels. Design: Pooled analysis of crossover, parallel, and consecutive design studies. The analysis was partially industry sponsored. Methods: A MEDLINE search identified articles from 2002 to 2004 studying the effect of nut consumption (walnut, almond, other) on cholesterol. Inclusion criteria were a control group or a stable baseline lipid panel, no lipid-lowering medications, and a duration >3 weeks. Standard methods were used to pool individual subject data, with attention to gender, age, body mass index (BMI), study design, type of funding, type of nut, quantity of nuts consumed, and end-of-study lipid levels. Results: 25 studies totaling 307 men and 276 women were pooled. Nut consumption reduced total cholesterol (TC), LDL, and the ratio of LDL to HDL but did not significantly change HDL or triglyceride (TG) levels. There was a dose response to the quantity of nuts consumed. When 20% of dietary energy came from nuts (2.5 oz in a 2000-kcal diet), TC decreased by 9.9 mg/dL (a 4.5% change) and LDL by 9.5 mg/dL (a 6.5% change). Diets with 12% (1.5 oz) and 10% (1.2 oz) of energy from nuts showed smaller changes: TC decreased by 7.1 mg/dL (3.2%) and 6.1 mg/dL (2.8%), and LDL decreased by 7.2 mg/dL (4.9%) and 6.2 mg/dL (4.2%), respectively. The greatest response was seen in individuals with a low BMI (<25), high cholesterol (LDL >160), and consumption of a Western diet (total fat >30%, saturated fat >10%). The smallest effect was found in obese subjects (BMI >30), those with low baseline cholesterol (LDL <130), and those consuming a Mediterranean diet (monounsaturated fat >20%, saturated fat <7%). The effect did not vary by gender, type of nut, or study design. Industry-sponsored studies reported a larger decrease in triglyceride levels. Conclusions: Increasing dietary nut consumption improves TC and LDL in a dose-related fashion; this is most notable in people with high LDL and low BMI. Reviewer's Comments: This study supports a more recent trial of walnuts that showed similar effects on lipid levels. It also contributes to our growing understanding that obese patients respond less to dietary attempts at lipid lowering. From a practical perspective, it helps me target my counseling during preventive visits to those most likely to benefit from increased intake of nuts. I would still like to see a definitive study correlating nut consumption with reductions in hard end points such as heart attack, stroke, and death. Nonetheless, there is little harm in increased nut consumption, barring allergy concerns, and the benefits are on the order of a dietary fiber intervention. (Reviewer-Genevieve L. Pagalilauan, MD). © 2010, Oakstone Medical Publishing
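
The serving sizes and diet percentages quoted above imply an energy density of about 160 kcal per ounce of nuts (2.5 oz paired with 20% of a 2000-kcal diet). The short Python sketch below is ours, with that energy density as the stated assumption; it shows the conversion:

# Percent of daily energy from nuts, assuming ~160 kcal/oz (a value implied
# by the paper's pairing of 2.5 oz with 20% of a 2000-kcal diet).
KCAL_PER_OZ_NUTS = 160.0

def pct_energy_from_nuts(oz_per_day, total_kcal=2000.0):
    return 100.0 * oz_per_day * KCAL_PER_OZ_NUTS / total_kcal

for oz in (2.5, 1.5, 1.2):
    print(oz, round(pct_energy_from_nuts(oz)))  # ~20%, ~12%, ~10%, as in the abstract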

Keywords: Lipid Lowering, Nuts, Dietary Interventions

Print Tag: Refer to original journal article


An Hour of Exercise a Day May Keep the Pounds Away

Physical Activity and Weight Gain Prevention.

Lee I-M, Djoussé L, et al:

JAMA 2010; 303 (March 24/31): 1173-1179

Women of normal body weight may need to exercise an hour a day to prevent weight gain.

Background: It is no secret that a large proportion of the U.S. population is overweight or obese. Despite increasing public awareness and education about diet and exercise, many Americans continue to gain weight year after year. In most patients, weight gain results from a combination of excess caloric intake and a sedentary lifestyle. Guidelines currently recommend 150 minutes of moderate-intensity exercise per week for overall health. Is this level of activity sufficient to prevent weight gain? Objective: To determine the amount of exercise women need to prevent weight gain. Design: Prospective cohort study. Methods: The Women's Health Study followed almost 40,000 women for 13 years. Subjects completed annual questionnaires that included activity level and weight. Participants from the original study who developed heart disease or cancer were excluded from this analysis. Activity level was reported as average time per week spent in various pursuits, and average weekly energy expenditure in metabolic equivalent (MET) hours per week was calculated for each subject from baseline exercise reports. Participants were divided into 3 groups by MET hours per week: group 1, <7.5 (roughly 150 minutes per week of moderate intensity); group 2, 7.5 to <21; and group 3, ≥21 (about 60 minutes per day of moderate exercise; see the conversion sketch below). Results: 34,079 women were included, with an average age of 54.2 years at baseline. Weight and body mass index (BMI) were inversely related to activity level at baseline. The average weight gain over the course of the study was 2.6 kg. In multivariate analyses, women who had a BMI <25 at baseline and exercised >420 minutes per week were least likely to gain weight over time. In the subgroup of normal-weight women who did not gain weight over the study period, the average activity level was 21.5 MET hours per week. In women with a BMI >25, there was no relationship between exercise and weight gain. Conclusions: A middle-aged woman of normal weight needs about an hour of moderate exercise a day to prevent weight gain over time; in women who were already overweight, no level of exercise studied was associated with prevention of weight gain. Reviewer's Comments: Although this is an observational study based on self-reporting, a large number of women were studied, and the results reflect what we see in our daily practice of medicine. These results are sobering when you consider that most patients have difficulty with the currently recommended regimen of 30 minutes of exercise 5 times a week. Based on this study, women should be exercising almost 3 times that amount to prevent weight gain and its health consequences over time. (Reviewer-Deborah L. Greenberg, MD). © 2010, Oakstone Medical Publishing
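
The MET-hour group boundaries above follow directly from assuming moderate-intensity activity of about 3 METs. The minimal Python sketch below is ours, with that 3-MET intensity as the stated assumption; it makes the conversion explicit:

# Convert minutes per week of activity into MET-hours per week.
MODERATE_METS = 3.0  # assumed intensity of "moderate" activity

def met_hours_per_week(minutes_per_week, mets=MODERATE_METS):
    return (minutes_per_week / 60.0) * mets

print(met_hours_per_week(150))     # 7.5 MET-hours/week, the group 1 boundary
print(met_hours_per_week(60 * 7))  # 21.0 MET-hours/week, the group 3 boundary (an hour a day)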

Keywords: Exercise, Weight Gain

Print Tag: Refer to original journal article


Nicotine Patch -- Is Longer Duration of Use Better?

Effectiveness of Extended-Duration Transdermal Nicotine Therapy: A Randomized Trial.

Schnoll RA, Patterson F, et al:

Ann Intern Med 2010; 152 (February 2): 144-151

Nicotine patch use for 24 weeks versus the standard 8 weeks results in better smoking cessation rates at 24 weeks but not at 52 weeks; benefits persist only as long as treatment is maintained.

Background: The nicotine patch is widely used to aid smoking cessation, but the optimal duration of its use is not clear. Objective: To evaluate whether extended-duration nicotine patch therapy improves abstinence from tobacco compared with the standard 8-week course. Design: Parallel, blinded, randomized, placebo-controlled trial conducted from September 2004 through February 2008. Participants: 568 adult tobacco smokers. Methods: Potential participants were recruited by advertisements for a free smoking cessation program. Participants were eligible if they were aged 18 to 65 years and had smoked ≥10 cigarettes daily for at least 1 year. Patients were randomized to standard therapy (nicotine patch for 8 weeks, then placebo for 16 weeks) or extended therapy (nicotine patch for 24 weeks). The primary outcome was biochemically confirmed tobacco abstinence at weeks 24 and 52; secondary outcomes were continuous and prolonged abstinence, lapse and recovery events, cost per additional quitter, side effects, and adherence. Results: At week 24, extended therapy produced significantly higher rates of abstinence (31.6% vs 20.3%; OR, 1.81), prolonged abstinence (41.5% vs 26.9%; OR, 1.97), and continuous abstinence (19.2% vs 12.6%; OR, 1.64) than standard therapy. Extended therapy also reduced the risk of relapse to smoking (HR, 0.77) and improved the chance of recovery from smoking lapses (HR, 1.47; P <0.001), and time to relapse was longer (HR, 0.50; P <0.001). However, at week 52, similar proportions in each group were abstinent (approximately 14%). Conclusions: A longer duration of nicotine patch use (24 weeks) improved abstinence from smoking, reduced the risk of smoking lapses, and improved the chance of recovery to abstinence after a lapse compared with standard 8-week therapy. Reviewer's Comments: When using a nicotine patch, the U.S. Public Health Service Clinical Practice Guideline recommends 8 weeks of therapy, but this article shows a significant (short-term) benefit of a longer duration of patch use. Generalizability from this study is limited, as it included patients with few medical comorbidities, and subjects sought participation in the study, so they were probably more invested in smoking cessation than the average patient. Perhaps most importantly, the cessation benefits persisted only for as long as the nicotine patch was maintained and were lost after the patch was discontinued. This seriously calls into question the long-term clinical effectiveness of extended versus standard therapy. (Reviewer-Molly Blackley Jackson, MD). © 2010, Oakstone Medical Publishing
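
The unadjusted odds ratios can be recomputed from the abstinence proportions. The Python sketch below is ours; the published ORs may be model-adjusted and so can differ slightly from these raw values:

# Unadjusted odds ratio from two proportions.
def odds(p):
    return p / (1.0 - p)

def odds_ratio(p_extended, p_standard):
    return odds(p_extended) / odds(p_standard)

print(round(odds_ratio(0.316, 0.203), 2))  # 1.81, matching the reported week-24 abstinence OR
print(round(odds_ratio(0.415, 0.269), 2))  # 1.93, close to the reported 1.97 for prolonged abstinence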

Keywords: Smoking Cessation, Nicotine Tx, Extended-Duration, Transdermal

Print Tag: Refer to original journal article


Consider Topical Capsaicin Jelly for Migraines

Capsaicin Jelly Against Migraine Pain.

Cianchetti C:

Int J Clin Pract 2010; 64 (March): 457-459

Capsaicin jelly may be a useful treatment of migraine and scalp arterial tenderness.

Background: Migraines are common and can be debilitating. Recent research suggests that perivascular afferent fibers in the scalp (eg, around the superficial temporal artery) are involved in a substantial percentage of patients with migraine. Capsaicin jelly is effective for some neuropathic pain but has not been well studied in migraine. Objective: To assess whether topical periarterial capsaicin jelly could decrease pain in patients with a history of migraine. Design: Single-blind, placebo-controlled, crossover study. Methods: Consecutive patients presenting to an ambulatory care center with a history of migraine without aura and with 1 or more scalp arteries painful to digital pressure were invited to participate. Subjects were randomized to topical capsaicin 0.1% jelly or vaseline, applied with light massage to the painful scalp arteries. If the patient reported <50% pain reduction, the first jelly was washed off and the alternative jelly applied; if the patient had >50% pain reduction, the alternative jelly was not applied. Patients who reported improvement with capsaicin jelly (but not vaseline) were invited to continue in the trial during an acute headache (either by presenting quickly to the research clinic or, if the subject lived far away, by using jars of blinded jellies at home). Subjects were asked to record which jelly, if either, was more efficacious. Results: 23 patients were enrolled (20 females, 3 males; mean age, 24.7 years). In the absence of headache, 17 of 23 subjects reported a decrease of at least 50% in local pain on digital pressure after capsaicin, compared with 2 of 23 after vaseline (P <0.001). Of the 17 patients eligible for testing during headache, 8 returned to the clinic with an acute headache; vaseline was administered first (>50% improvement in 0 patients), followed by capsaicin jelly (>50% improvement in 5 of 8 patients; P <0.05). Of the 9 subjects using jellies at home, 6 reported >50% improvement with capsaicin versus 1 with vaseline. Capsaicin did not produce noticeable pain reduction in severe migraine. Conclusions: In patients with a history of migraine and active scalp arterial tenderness, topical capsaicin jelly may relieve scalp arterial pain and mild to moderate migraine headache pain. Reviewer's Comments: This is a very small study, and the results need to be supported by a larger randomized controlled trial. Nonetheless, I like the exploration of alternative treatment options for migraine. Recent migraine research has shown that scalp perivascular structures (not just intracranial vasculature) may play a role in migraine development. This is a nice initial study supporting the possibility that local treatment at the scalp may relieve pain in some patients. Furthermore, capsaicin jelly is inexpensive and has few, if any, real adverse effects, making it worth a try in patients with mild to moderate migraine pain. (Reviewer-Molly Blackley Jackson, MD). © 2010, Oakstone Medical Publishing
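
As a rough check of the headline comparison (17 of 23 responders with capsaicin vs 2 of 23 with vaseline), the Python sketch below (ours, not the paper's analysis) treats the two conditions as independent groups and applies Fisher's exact test. Because the design was actually a crossover, a paired test such as McNemar's would be more appropriate, so this is only an approximation:

# Fisher's exact test on the 2x2 responder table, ignoring pairing.
from scipy.stats import fisher_exact

table = [[17, 23 - 17],   # capsaicin: responders, nonresponders
         [2, 23 - 2]]     # vaseline:  responders, nonresponders
odds_ratio, p_value = fisher_exact(table)
print(p_value)  # on the order of 1e-5, consistent with the reported P <0.001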

Keywords: Migraines, Capsaicin Jelly

Print Tag: Refer to original journal article