Statistical Thinking through Media Examples

Excerpts

Chapter 1: Statistical Thinking: Why Is It Important?
  • Unmasking the Controversial MMR Vaccine – Autism Study
  • Hydroxychloroquine and COVID-19: A Lack of Statistical Thinking
Chapter 3: Assessing the Quality of Polls and Surveys
  •  Analyzing the 2020 US Election Polling: Missing the Mark in Key Swing States
Chapter 4: Measuring Uncertainty with Probability
  • Election Fraud Claims: Falling Prey to Confirmation Bias
Chapter 10: Integrity in Research
  • Vioxx and Heart Attacks: Hiding the Evidence in Plain Sight
  • Oxycontin and the Opioid Crisis: How the Sackler Family Got Away with It
CHAPTER 1: STATISTICAL THINKING: WHY IS IT IMPORTANT?

 

1.2 The MMR Vaccine-Autism Link

STUDY 1.1: The Controversial Andrew Wakefield Study

1.3 Samples and Populations

 

1.4 Selecting a Representative Sample

STUDY 1.2: How the Case Against the MMR Vaccine Was Fixed

MEDIA 1.4: Exposed: Andrew Wakefield and the MMR-Autism Fraud

MEDIA 1.5: Cuomo Says 21% of Those Tested in NYC Had Virus Antibodies

1.7 The Story of Hydroxychloroquine and COVID-19 

MEDIA 1.14: Fox News Stars Trumpeted a Malaria Drug, Until They Didn’t 

STUDY 1.4: Observational Study of Hydroxychloroquine in Hospitalized Patients with COVID-19 

MEDIA 1.15: Malaria Drug Taken by Trump Is Tied to Increased Risk of Heart
Problems and Death in New Study 

MEDIA 1.16: Two Huge COVID-19 Studies Are Retracted After Scientists Sound Alarms 

MEDIA 1.17: Malaria Drug Promoted by Trump Did Not Prevent COVID Infections, Study Finds 

STUDY 1.5: Effect of Hydroxychloroquine in Hospitalized Patients with COVID-19

CHAPTER 3: ASSESSING THE QUALITY OF POLLS AND SURVEYS

 

3.3 Polling for the 2020 US Election

MEDIA 3.6: “A ‘Black Eye’”: Why Political Polling Missed the Mark. Again

MEDIA 3.7: FiveThirtyEight Polling

STUDY 3.8: Florida Election Poll—Quinnipiac University

MEDIA 3.9: Key Things to Know About Election Polling in the United States

MEDIA 3.10: The American Trends Panel Survey Methodology

CHAPTER 4: MEASURING UNCERTAINTY WITH PROBABILITY

 

4.2 Chance Events and Confirmation Bias

MEDIA 4.3: Fox News Is Debunking Election Fraud Claims Made by Its Own Anchors in Response to a Legal Threat

MEDIA 4.4: Fox News Is Sued by Election Technology Company for Over $2.7 Billion

MEDIA 4.5: Confirmation Bias and Media Literacy

CHAPTER 10: INTEGRITY IN RESEARCH

 

10.5 Vioxx and Heart Attacks

MEDIA 10.7: Merck Agrees to Settle Vioxx Suits for $4.85 Billion

MEDIA 10.8: Scientists Again Defend Study on Vioxx

STUDY 10.6: Comparison of Upper Gastrointestinal Toxicity of Rofecoxib and Naproxen in Patients with Rheumatoid Arthritis

10.6 Oxycontin and the Opioid Crisis

MEDIA 10.9: Sacklers Directed Efforts to Mislead Public About Oxycontin, New Documents Indicate

MEDIA 10.10: Understanding the Epidemic

STUDY 10.7: The Promotion and Marketing of Oxycontin: Commercial Triumph, Public Health Tragedy

MEDIA 10.11: Timeline of Selected FDA Activities and Significant Events Addressing Opioid Misuse and Abuse

MEDIA 10.12: The History of OxyContin, Told Through Unsealed Purdue Documents

MEDIA 10.13: McKinsey Proposed Paying Pharmacy Companies Rebates for OxyContin Overdoses

MEDIA 10.14: Big pharma executives mocked ‘pillbillies’ in emails, West Virginia opioid trial hears

MEDIA 10.15: The Sacklers’ Last Poison Pill

MEDIA 10.16: Martin Luther King’s Last Speech: “I’ve Been to the Mountaintop”

Statistical Thinking through Media Examples (Third Edition) by Anthony Donoghue – ©2022, 346 pages

Choose the format that suits you best:

  • Paperback List Price: $106.95
  • Cognella Direct Ebook: $80.95 – You save $26 (24%)
  • Paperback and Ebook Bundle: $104.95

Free 3–7 day delivery · 30-day returns

Also available on Amazon – Paperback: $114.21, Hardcover: $139.00

Chapter 1: Statistical Thinking: Why Is It Important?

 

1.2 THE MMR VACCINE-AUTISM LINK

In 1998, Andrew Wakefield, a well-respected doctor at the time, presented the results of his research in the Lancet, one of the world’s leading peer-reviewed medical journals, suggesting a link between the measles, mumps, and rubella (MMR) vaccine and autism. The story did not get much media attention until Wakefield held a press conference. The story went on to be a media sensation. However, a simple review by the media of the actual journal article where Wakefield presented his findings would have ended the debate at that time. The title of the journal article presented in the Lancet was “Ileal-Lymphoid-Nodular Hyperplasia, Non-Specific Colitis, and Pervasive Developmental Disorder in Children.”

First, Wakefield strongly suggests an association between the MMR vaccine and autism but does not make a causal conclusion. As we will learn, an association does not necessarily mean a causal relationship exists. In this case, it does not mean the MMR vaccine causes autism. In the journal article, he states:

Rubella virus is associated with autism and the combined measles, mumps, and rubella vaccine (rather than monovalent measles vaccine) has also been implicated. Fudenberg noted that for 15 of 20 autistic children, the first symptoms developed within a week of vaccination.

Second, the research was based on only 12 children vaccinated with MMR:

12 children (mean age 6 years [range 3–10], 11 boys) were referred to a paediatric gastroenterology unit with a history of normal development followed by loss of acquired skills, including language, together with diarrhea and abdominal pain. Children underwent gastroenterological, neurological, and developmental assessment and review of developmental records. 

Wakefield found that nine of the children went on to develop autism soon after receiving the vaccination. Although his conclusions were speculative and based on only 12 children, this did not stop the media from widely reporting that Wakefield had found a causal link between the vaccine and autism.

Parents depend on the media to act as gatekeepers when it comes to these sorts of controversial claims made by researchers. They lead busy lives, so they need to trust that the media is questioning the quality of research before presenting the findings to the public. When it comes to the health and well-being of their children, a proper critique and communication of the findings is more helpful than a sensational story.

If the news media had taken the time to read in the journal article that Wakefield’s conclusions were speculative and based on only 12 children, they would (or should) have concluded that the research was not worth reporting on. Strong claims require strong evidence. A claim that the MMR vaccine is associated with autism based on just 12 children is not strong evidence.

1.3 SAMPLES AND POPULATIONS

Using statistics and statistical thinking, we analyze and interpret data to gain an understanding of the characteristics of populations. We select a sample from a well-defined population. From the sample, we calculate a sample statistic, which is an estimate of a population characteristic known as the population parameter. For example, the population could be defined as all adults in the US with high cholesterol. Researchers may want to test a drug for lowering cholesterol. They select a sample of adults with high cholesterol, give them the drug, and calculate the sample average cholesterol level (the sample statistic). The sample average cholesterol level is considered an estimate of the population average cholesterol level (the population parameter). In other words, if every adult in the population were to take the drug, the sample statistic estimates what the average cholesterol level would be for the entire population.
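The sample-statistic-as-estimate idea can be sketched in a few lines of Python. All of the numbers below (population size, cholesterol levels) are invented for this sketch, not taken from any study:

```python
import random

# Hypothetical illustration: a population of 100,000 adults with high
# cholesterol (the numbers are invented, not from a study).
random.seed(1)
population = [random.gauss(240, 25) for _ in range(100_000)]

# The population parameter: the true average cholesterol level (mg/dL).
population_mean = sum(population) / len(population)

# Select a simple random sample and compute the sample statistic.
sample = random.sample(population, 500)
sample_mean = sum(sample) / len(sample)

print(f"population mean: {population_mean:.1f}")
print(f"sample mean:     {sample_mean:.1f}")  # an estimate of the parameter
```

In practice we never see the population mean; the point of the simulation is that the sample mean lands close to it, which is exactly what makes the sample statistic a useful estimate.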

In the Wakefield study, the population of interest was all children in the UK who had received the MMR vaccine. Wakefield wanted to estimate the percentage of these children who went on to develop autism after receiving the MMR vaccine. In order to estimate this percentage, a sample of children who received the MMR vaccine was selected. The percentage of children in this sample who developed autism was the sample statistic. It was an estimate of the true percentage of children in the population of MMR-vaccinated children who went on to develop autism (the population parameter).

Researchers are interested in looking for relationships between characteristics in the population of interest. A variable is simply a characteristic about an individual. In this study, the characteristics were MMR vaccine (yes or no) and autism (yes or no). Whether or not a child received the MMR vaccine is called the explanatory variable. It is used to try and explain the outcome or response variable, which is whether or not the child went on to develop autism. Wakefield was interested in the relationship between autism and exposure to the MMR vaccine in the population of children in the UK. Are children who receive the vaccine more likely to get autism?

Wakefield found that 75%, or nine, of the 12 MMR-vaccinated children he sampled in this study went on to develop autism. If this percentage were anywhere close to the true percentage in the population, then there would have been a noticeable increase in the incidence of autism after the vaccine was introduced. Wakefield discusses this fact near the end of the journal article:

If there is a causal link between measles, mumps, and rubella vaccine and this syndrome, a rising incidence might be anticipated after the introduction of this vaccine in the UK in 1988. Published evidence is inadequate to show whether there is a change in incidence or a link with measles, mumps, and rubella vaccine.

The fact there was no evidence of an increase in the overall percentage of children with autism since the vaccine was introduced is another reason why the news media should not have reported on this research in the way that it did. If the media had applied basic critical and statistical thinking skills to reading the journal article, they would have quickly surmised that the scientific evidence presented in the paper suggesting a link between the vaccine and autism was weak and insufficient. Instead, the media simply took what Wakefield stated in his press conference and ran with what they saw as a sensational story. The media, more than any other group, need the critical and statistical thinking skills to question the quality of the data upon which the science is based. They are the gatekeepers of truth for the general public, holding an immense power and responsibility in determining how we view the world around us. However, if we learn these necessary critical and statistical thinking skills, we can take that power into our own hands to some degree and hold the media (and researchers) to account.

A small sample size will (more often than not) result in a sample statistic that is far from the population parameter. However, as the sample size increases, we expect our sample statistic to converge toward the population parameter of interest. This should make intuitive sense. The more (quality) data upon which a sample statistic is based, the closer we expect it to be to the truth in the population.
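This convergence can be demonstrated with a short simulation on invented data: repeatedly draw samples of size 12 and of size 1,200 from the same hypothetical population, and compare how far the sample mean typically lands from the truth:

```python
import random

random.seed(2)
# Hypothetical population of 100,000 measurements (e.g., heights in cm).
population = [random.gauss(170, 10) for _ in range(100_000)]
true_mean = sum(population) / len(population)

def sample_mean(n):
    """Mean of one simple random sample of size n."""
    s = random.sample(population, n)
    return sum(s) / n

def typical_error(n, reps=200):
    """Average distance of the sample mean from the truth over many samples."""
    return sum(abs(sample_mean(n) - true_mean) for _ in range(reps)) / reps

small_n_error = typical_error(12)
large_n_error = typical_error(1_200)
print(f"typical error, n = 12:   {small_n_error:.2f}")
print(f"typical error, n = 1200: {large_n_error:.2f}")
```

The error for n = 1,200 comes out roughly a tenth of the error for n = 12, reflecting the square-root relationship between sample size and precision.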

Andrew Wakefield was interested in determining the percentage of children in the population who received the MMR vaccine who went on to develop autism. A sample size of 12 children, even if it were a properly selected representative sample of children from the population, is unlikely to give a reliable estimate of that percentage. As we learn in our next section, Wakefield’s sample was far from representative.

1.4 SELECTING A REPRESENTATIVE SAMPLE

When selecting a sample from some population, ideally we want to select a representative sample, a sample that provides us with an unbiased estimate of a population characteristic. A (simple) random sample is expected to be a representative sample. A (simple) random sample is one in which every individual in the population has an equal chance of being selected. Obtaining a proper random sample of individuals is often easier said than done.

For example, let’s say you want to determine the average height of the population of male students at your college. At midday, you decide to stand in the middle of your campus, where students are heading to and from their classes, asking male students their height as they pass by. You believe that all male students walk past (where you are standing) at some point during any given day. Therefore, you feel that your sample should be random and thus a representative sample of all male students.

However, what if on that particular day and time the male basketball team just got back from a game and were passing by? Including these men in your sample would result in an overrepresentation of tall men in your sample. In other words, there would be a higher proportion of tall men in your sample than in the population. The sample would result in an estimated average height well above the population average. The resulting sample average height would be a biased estimate of the average height of male students in the population.

A random sample of men at your college should result in (or is expected to be) a representative sample of individual men’s heights. This representative sample of individual heights should ensure we obtain a sample average height closer to the population average height than if we were to include the basketball team in the calculation. The resulting sample average height is an unbiased estimate of the population average height of male students. With a properly selected random sample, the larger the sample size, the closer we expect the sample average height to be to the population average height.
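The contrast between the convenience sample and the random sample can be simulated directly. The population below, including the unusually tall 15-player team, is entirely invented for illustration:

```python
import random

random.seed(3)
# Hypothetical population of 5,000 male students, including a 15-player
# basketball team that is unusually tall (all numbers invented).
other_students = [random.gauss(178, 7) for _ in range(4_985)]
team = [random.gauss(198, 5) for _ in range(15)]
population = other_students + team
pop_mean = sum(population) / len(population)

# Convenience sample: 50 passers-by that happen to include the whole team.
convenience = random.sample(other_students, 35) + team
convenience_mean = sum(convenience) / len(convenience)

# Simple random sample of the same size from the full population.
srs = random.sample(population, 50)
srs_mean = sum(srs) / len(srs)

print(f"population mean:        {pop_mean:.1f}")
print(f"convenience estimate:   {convenience_mean:.1f}")  # biased upward
print(f"random-sample estimate: {srs_mean:.1f}")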

This is the power of a random sample of data as a means for pursuing the truth in populations, and it is one of the most important concepts at the heart of statistical thinking. In addition, the size of the population does not matter. It can be one hundred thousand, a million, a billion, or even a trillion. How close we expect our sample statistic (say a sample average) to be to what is known as the population parameter (say a population average) is driven by the size of a properly selected random sample.

In the MMR-autism study, the sample size was small and far from representative. In January 2011, the investigative journalist Brian Deer published a paper in the British Medical Journal titled “How the Case Against the MMR Vaccine Was Fixed,” presenting the results of his investigation into Wakefield.

Brian Deer did an exhaustive job of investigating the truth in this case. A summary of his findings can be found on his website. He found that two years before Wakefield published his findings (and before he selected the 12 children for his study), Wakefield was hired by a lawyer named Richard Barr to attack the MMR vaccine. Deer also discusses how Wakefield self-selected his sample of 12 children.

The type of sample that Wakefield selected is known as a convenience sample. In his case, the sample was conveniently chosen in a way to ensure he showed evidence of an association between autism and the MMR vaccine. As we will learn, random samples are difficult to obtain, and researchers will often have to rely on convenience samples. How useful a convenience sample is in estimating population characteristics depends on how far from representative of the population the sample is. There might be factors about the selected sample that differ from the population, affecting our estimate of the population characteristic in which we are interested. Depending on how the convenience sample was selected, it may be very difficult to know the effect these factors have on the sample estimate of the population characteristic of interest. We might not even know what these factors are.

For example, on April 23, 2020, at the height of the first wave of the COVID-19 pandemic in New York City (NYC), the New York Times published an article titled “Cuomo Says 21% of Those Tested in NYC Had Virus Antibodies.” As the news article points out, the headline statistic translates to 1.7 million New Yorkers having already contracted the virus. However, the official case count for NYC at the time was around 200,000 cases. So, which of these two statistics was closer to the true number of cases?

In a sample of 1,300 NYC residents, 21% were found to have coronavirus antibodies. However, the sample of residents was selected from supermarkets, making it a convenience sample. At the time, going to supermarkets felt like a high-risk activity. Lower-income New Yorkers were hit hardest by the pandemic and were more likely to have to do their own shopping. For that reason alone, it is very likely there would have been a higher proportion of NYC residents with coronavirus antibodies shopping in supermarkets than in the general population. Also, as the news article points out, the accuracy of the antibody tests used at the time was questionable, which could also have inflated the positivity rate among those tested.
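The headline extrapolation is simple arithmetic: apply the sample positivity rate to the whole city. The NYC population figure of roughly 8.4 million used below is an assumption of this sketch, not a number from the article:

```python
# Back-of-the-envelope check of the headline extrapolation. The NYC
# population figure (~8.4 million) is an assumption, not from the article.
nyc_population = 8_400_000
positivity_rate = 0.21

implied_cases = positivity_rate * nyc_population
print(f"implied infections: {implied_cases:,.0f}")  # about 1.76 million

# Compare with the official case count at the time.
official_count = 200_000
print(f"ratio to official count: {implied_cases / official_count:.1f}x")
```

The arithmetic reproduces the reported ~1.7 million figure, but, as the discussion above makes clear, the quality of that estimate depends entirely on whether the 21% from the convenience sample represents the city as a whole.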

Statistics calculated from convenience samples can be misleading due to the fact that the sample is not representative of the population. Determining which characteristics of the convenience sample adversely affect the accuracy of the statistics calculated from the data can be challenging. The official case count of approximately 200,000 cases at the time was an underestimate of the true number of cases for several reasons: lack of availability of testing, underreporting, and a high percentage of asymptomatic cases going undetected. The estimate of 1.7 million cases was an overestimate of the true case count for reasons already discussed. For what it is worth, the truth—the true number of cases at that time in NYC—was somewhere in between these two numbers.

The lesson to be learned from this example is how close a statistic is to the truth in a population depends on the quality of the data upon which the statistic is based. Only a random sample is expected to be representative of the population, resulting in a sample statistic that is an unbiased estimate of the population parameter of interest. 


1.7 THE STORY OF HYDROXYCHLOROQUINE AND COVID-19

On March 28, 2020, the Food and Drug Administration (FDA) granted emergency use authorization (EUA) for a drug called hydroxychloroquine to treat severely ill patients with COVID-19. The drug had been around for 40 to 50 years and was given mainly to people with malaria or some forms of lupus. The most severe known adverse event was an increased risk of death with prolonged use for patients with irregular heart rhythms. There was very little evidence at the time that it was effective in treating COVID-19 besides small and questionable studies out of China and France. The White House administration at the time put a lot of political pressure on the FDA to authorize anything that might help in the treatment of COVID-19. In response to a request by the Department of Health and Human Services to grant an EUA for the drug, the FDA stated the following:

Based upon limited in-vitro and anecdotal clinical data in case series, chloroquine phosphate and hydroxychloroquine sulfate are currently recommended for treatment of hospitalized COVID-19 patients in several countries, and a number of national guidelines report incorporating recommendations regarding use of chloroquine phosphate or hydroxychloroquine sulfate in the setting of COVID-19.

The FDA authorized a drug for use based on nothing more than anecdotal clinical data. This is not how the FDA normally works, and it testifies to the enormous political pressure the agency was under at the time to authorize the drug.

On April 22, 2020, the New York Times reported in an article titled “Fox News Stars Trumpeted a Malaria Drug, Until They Didn’t” about the monthlong effort by Fox News reporter Laura Ingraham to promote the use of hydroxychloroquine in the treatment of COVID-19. The New York Times reporter criticized Laura Ingraham’s assertion that the drug was effective by pointing to a study that showed evidence of an increased risk of death from taking hydroxychloroquine.

However, the study was a non-peer-reviewed, poorly conducted observational study of Veterans Affairs patients, in which the sickest COVID-19 patients were put on hydroxychloroquine. Therefore, it is not surprising that a higher percentage of those patients died while on the drug. Yes, Laura Ingraham exhibited a willful ignorance in her understanding of the quality of the scientific evidence. However, the New York Times reporter should have been aware of the known risk of death from the drug (for a particular type of patient) and that the study to which he was pointing (to back up his claims) was a poor-quality observational study. As the saying goes, people in glass houses shouldn’t throw stones.

On May 7, 2020, the New England Journal of Medicine published a well-conducted, peer-reviewed, observational study titled “Observational Study of Hydroxychloroquine in Hospitalized Patients with COVID-19.” The study found no statistically significant association between hydroxychloroquine and intubation or death. What made this a well-conducted study is that the researchers designed the study using a matched control sample. What this means is that they matched patients who were receiving hydroxychloroquine to patients who were not by numerous potential confounding factors that might have affected their response. These factors included age, race and ethnic group, body mass index, underlying kidney disease, chronic lung disease, hypertension, diabetes, inflammatory markers of the severity of illness, and baseline vital signs. Both groups were very well matched in terms of many of the confounding factors that could affect the outcome—intubation or death. In the discussion section of the journal article, the researchers pointed out that causal conclusions could not be made due to the possibility of unmeasured confounding factors. However, because the study was so well-designed and conducted, it was the strongest evidence up to this point that hydroxychloroquine was not an effective treatment for severely ill COVID-19 patients and did not increase the risk of death for those patients.
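The matching idea can be illustrated with a toy sketch. The actual study matched patients on many factors at once; here, as a deliberately simplified stand-in with invented data, each treated patient is paired with the untreated patient closest in age, without replacement:

```python
# Toy illustration of matched controls (invented data; the real study
# matched on many factors simultaneously, not just age).
treated = [{"id": 1, "age": 64}, {"id": 2, "age": 71}, {"id": 3, "age": 55}]
controls = [{"id": 10, "age": 54}, {"id": 11, "age": 65},
            {"id": 12, "age": 72}, {"id": 13, "age": 80}]

matches = {}
available = list(controls)
for patient in treated:
    # Nearest-neighbor match on age, without replacement.
    best = min(available, key=lambda c: abs(c["age"] - patient["age"]))
    matches[patient["id"]] = best["id"]
    available.remove(best)

print(matches)  # each treated patient paired with a similar-aged control
```

After matching, the two groups have similar age profiles, so any difference in outcomes is less likely to be explained by age; the same logic extends to matching on many confounders at once.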

On May 22, 2020, the New York Times reported in an article titled “Malaria Drug Taken by Trump Is Tied to Increased Risk of Heart Problems and Death in New Study” about a large observational study by Harvard University that claimed to have found evidence that hydroxychloroquine was not effective in treating COVID-19 and also increased the risk of heart attack and death. The research was based on 15,000 patients who received hydroxychloroquine and 81,000 patients who did not. Results of the research based on these data were published in both the Lancet and the New England Journal of Medicine, two of the world’s most prestigious journals.

However, there was one major problem with this study. On June 4, 2020, the New York Times reported in an article titled “Two Huge COVID-19 Studies Are Retracted After Scientists Sound Alarms” that the database upon which the research was based could not be verified. As the article points out, critics of the research pointed to anomalies in the data that should have been detected during the peer review process. The fact that statistical analysis based on unverified data made its way into two of the top journals in the world demands some reflection on the part of those involved in the entire process. The Harvard researcher simply submitted a brief apology to the journals, and the Lancet put out a statement saying it would review its peer review process. It is important to point out that the Lancet is the same journal that published the Wakefield study more than 20 years earlier. Yes, 2020 was an emotional year, and there was a lot of pressure on researchers (and journals) to publish research related to the COVID-19 pandemic. However, in good times or bad, the need to publish should not be to the detriment of the scientific process. Scientific (and statistical) thinking is about trying to objectively reason with the data and evidence. It is important that everyone involved (researcher, journal, media reporter, government agency) not let subjective (or emotional) reasoning cloud their judgement when deciding what is true. Cool heads should prevail at all times, and politics should have no part to play in science or in public health.

On June 4, 2020, the New York Times reported, in an article titled “Malaria Drug Promoted by Trump Did Not Prevent COVID Infections, Study Finds,” the results of a randomized experiment comparing hydroxychloroquine with a placebo. The research received a lot of media attention. As the reporter points out, it was the first carefully controlled (randomized) trial of hydroxychloroquine, “considered the most reliable way to measure the safety and effectiveness of a drug.” However, the study looked at the use of hydroxychloroquine for preventing COVID-19 and not its effectiveness as a treatment for seriously ill COVID-19 patients. The controversy regarding the use of hydroxychloroquine was about its efficacy (and safety) as a treatment. It was not about its ability to prevent someone from getting infected.

On June 15, 2020, the FDA revoked the EUA for hydroxychloroquine. Emerging evidence from randomized experiments clearly showed that hydroxychloroquine was not an effective treatment for severely ill COVID-19 patients. Once the EUA was revoked for the drug, the media moved on. Several months later, on November 19, 2020, the New England Journal of Medicine published a journal article titled “Effect of Hydroxychloroquine in Hospitalized Patients with COVID-19.” The study was a randomized experiment comparing the safety and efficacy of hydroxychloroquine to placebo. There was no statistically significant difference found in outcomes (including risk of death) between patients receiving hydroxychloroquine and those receiving placebo. There were some issues with the randomization process that we will discuss in Chapter 2. However, overall, this was a well-designed randomized experiment through which causal conclusions could be made about the safety and efficacy of the drug.

The story of hydroxychloroquine and COVID-19 shows that good use of statistical methods led to the truth in the end. However, it also shows that government agencies, media reporters, journals, and researchers need to maintain their objectivity and integrity no matter how heated the political debates become. Media reporting on important issues such as public health should be based on facts. Scientific results should be based on quality data. In a world where decision-making is more and more driven by data and statistics, quality data along with good critical and statistical thinking skills have become a necessity.


CHAPTER 3: ASSESSING THE QUALITY OF POLLS AND SURVEYS

 

3.3 POLLING FOR THE 2020 US ELECTION

After the 2016 election polls predicted with high probability that Hillary Clinton would win the election, there was great interest in whether the polls would do a better job predicting the results of the 2020 US election. Again, the polls predicted with high probability that the Democratic nominee, Joe Biden, would win the presidency. In the end, that turned out to be the case, with Biden obtaining 7 million more votes than Donald Trump. However, US presidential elections are won or lost in the swing states. In many of those states, Biden’s win was not so clear-cut. The New York Times article titled “‘A Black Eye’: Why Political Polling Missed the Mark. Again.” presents the final polling averages for each state alongside the actual outcome. For example, the polling averages predicted that Biden would win Wisconsin by 10 percentage points. However, Biden ended up winning by less than 1 percentage point. This was still a win for the candidate, but it was not a win for polling. As we will begin to understand in Chapter 6, the average of all the sample averages (at least in theory) is the truth (the population average). In a polling context, this means that if the polls conducted were of good quality, the average of all the polls should have been very close to the outcome of the election.
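The claim that the average of many good-quality polls should land near the truth can be checked with a quick simulation. The electorate below, with 50.5% support for a hypothetical candidate A, is invented for illustration:

```python
import random

random.seed(4)
# Hypothetical electorate in which 50.5% of voters support candidate A.
true_support = 0.505

def simulated_poll(n=1_000):
    """Share supporting A in one random sample of n voters."""
    return sum(random.random() < true_support for _ in range(n)) / n

polls = [simulated_poll() for _ in range(50)]
polling_average = sum(polls) / len(polls)
print(f"polling average over 50 polls: {polling_average:.3f}")
```

Individual simulated polls bounce around by a few percentage points, but their average sits very close to 50.5%. The catch, explored below, is that this only works when each poll really is an unbiased random sample; averaging many biased polls just averages the bias.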

This was true for polls on the national level but not so true for swing states such as Florida and Wisconsin. We will go beyond the news headlines to the source of the poll to try to understand why the polling averages were so far from the actual outcome of the election in these key swing states. A key swing state in every election is Florida. Historically, the race in that state is often a tossup between the two major party candidates. On election night, one of the first states on which the media focused was Florida.

If Joe Biden were to win Florida, then he was almost certain to win the presidency. On the day of the election, the website FiveThirtyEight, which averages nationwide polls, showed that Biden was ahead by a margin of 2.5 percentage points in Florida. However, shortly after midnight on election night, the Associated Press called the race in Florida for Trump. In the end, Trump won Florida by 3.4 percentage points, a deviation of almost 6 percentage points from the polling average. In the 2008 US election, FiveThirtyEight’s Nate Silver predicted the outcome of the election in the individual states and nationally with great accuracy by calculating polling averages. However, election polling has become far more difficult to conduct since then because of lower response rates, as Silver himself concedes in the New York Times article titled “What’s the Matter with Polling?”

The FiveThirtyEight website states that they accounted for the differences in sample sizes and quality of the individual polls when calculating the polling averages. However, with so many election polls emerging every day leading up to the election, we can imagine how difficult it would be to critique every poll in depth to determine (and adjust for) its quality. FiveThirtyEight’s statistical models were built upon polls conducted by other pollsters. If those polls are of poor quality, then the statistical models for predicting the election outcome will be of poor quality as well. We will examine several of the polls that FiveThirtyEight used to calculate its polling average for Florida so that we can assess the quality of the polls for ourselves. As with any analysis, polling averages are only as good as the quality of the individual polls and resulting data upon which they are based.

FiveThirtyEight provides links to the source of the polls on which their averages are based and includes a letter grade for each of the polls. The highest-graded poll (a B+), conducted in Florida immediately before the election, was one conducted by Quinnipiac University. The link to the webpage for the poll, conducted from October 28 to November 1, provided further details about how it was conducted. The poll results were 47% for Biden and 42% for Trump, based on a sample of 1,657 self-identified likely voters, with a stated margin of error of 2.4%. In the end, Trump received 51.2% of the vote, and Biden received 47.8%. Although the poll result was close to the eventual outcome for Biden, the 51.2% for Trump was well outside the margin of error. The pollsters did not provide any information regarding the 11% of voters unaccounted for in the poll’s results. Based on the election results, it is likely that most of these voters ended up voting for Trump.
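The stated 2.4% can be reproduced with the standard formula for a 95% margin of error for a proportion under simple random sampling, using the conservative choice p = 0.5 (which gives the largest margin):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a proportion from a simple random sample.

    p = 0.5 is the conservative (largest-margin) choice; z = 1.96 is the
    critical value for 95% confidence.
    """
    return z * math.sqrt(p * (1 - p) / n)

moe = margin_of_error(1_657)
print(f"margin of error for n = 1,657: {moe:.1%}")  # about 2.4%
```

Note that the formula depends only on the sample size, which is why, as discussed in Section 1.4, the size of the population (Florida's millions of voters) does not appear in it.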

The margin of error of 2.4% is correct for this sample size only if it were a random sample with a 100% response rate. The pollsters used random-digit dialing to contact likely voters on landlines and cell phones. It is highly unlikely that, over a three-day period, the pollsters selected a random sample of 1,657 self-identified likely voters and got every one of those individuals to pick up the phone and respond. The pollsters did not provide the response rate, instead stating that the sample was weighted for known population characteristics, indicating a less-than-100% response rate. After adjusting the sample for known differences between the sample and the population, the margin of error increased to 3.2%. Even accounting for this adjustment, the result for Trump of 51.2% was still well outside the margin of error. The pollsters weighted the sample by county, gender, age, education, and race. Evidently, it was not enough to bring their predictions closer to the true outcome of the election in Florida.
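As a quick check, the quoted 2.4% matches the standard 95% margin-of-error formula for a simple random sample, using the conservative assumption p = 0.5 (a sketch; the pollster’s exact calculation is not described):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a simple random sample of size n,
    using the conservative p = 0.5 (maximum variance)."""
    return z * math.sqrt(p * (1 - p) / n)

# Quinnipiac's Florida sample of 1,657 likely voters
print(f"{margin_of_error(1657):.1%}")  # 2.4%
```

The 3.2% weighted figure cannot be reproduced this simply; weighting inflates the margin of error by a design effect that depends on the weights themselves, which the pollster did not publish.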

The deviation of their results from the actual outcome could also be due to systematic bias in data collection or to other factors (not adjusted for in their analysis) driving the outcome of the election. In an election during which emotions are running high, it can be very difficult or impossible to know what these factors are, never mind adjust for them. How an individual will vote may not be easily predicted by simply knowing their county, gender, age, education, or race. Their decision on whom to vote for may be more personal to them and can’t be easily predicted from knowing their demographic characteristics. Finally, it should be mentioned that the pollsters asked several questions of the respondents, broken down by political party, age, and gender, without reporting the margin of error for each of these subgroup analyses. When a pollster breaks down polling data by subgroups, the margin of error increases because the results are based on a smaller sample size. The pollsters should make this clear by providing the margin of error for each subgroup analysis.
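The subgroup point can be made concrete with the same standard formula: the margin of error shrinks only with the square root of the sample size, so breaking a poll into subgroups inflates each subgroup’s margin of error. The subgroup sizes below are hypothetical, chosen only to show the effect:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    # 95% margin of error under simple random sampling, conservative p = 0.5
    return z * math.sqrt(p * (1 - p) / n)

# Full sample versus hypothetical subgroup sizes carved from it
for n in (1657, 800, 400, 200):
    print(f"n = {n:4d}: ±{margin_of_error(n):.1%}")
```

A party or age subgroup of 400 respondents already carries roughly double the full-sample margin of error, which is why subgroup results deserve their own stated margins.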

……………..

The debate on what went wrong with polling in 2020 died down much quicker than it did in 2016. In the end, Biden received 306 electoral votes, and Trump received 232 electoral votes. Pollsters could claim that the national polls got it right, with Biden winning 51.4% of the vote versus Trump winning 46.9%.

However, in US elections, national polls do not matter. From the polls we examined, we can see that the quality of state polls is questionable. Pollsters should focus their attention on doing a better job at the state level, because these polls really do matter. As already stated, the polling averages had Biden 10 percentage points ahead in Wisconsin, but he ended up winning the state by less than 1 percentage point. Some percentage of the electorate in Wisconsin may have looked at the poll results and decided they did not need to vote. In the end, Biden won Wisconsin by 20,608 votes. All it would have taken is another 21,000 Biden voters sitting it out for the election in Wisconsin to have gone the other way.

As with other types of research, the quality of polls varies by pollster. Research is difficult, and polls (and surveys) have become more difficult to conduct in recent years. With the many challenges of obtaining a representative sample of respondents, it has become hard to complete a quality poll due to low response rates. At the same time, it has become much easier to obtain a nonrepresentative (convenience) sample of respondents through online polling. In both scenarios, the pollsters can weight their results for known differences between the sample and the population. However, these known differences (or factors) may or may not be key factors in determining how someone will vote. As already discussed, the deciding factor(s) that affect an individual’s decision in choosing a candidate may be very different from the factors for which pollsters can adjust.

In an article titled “Key Things to Know About Election Polling in the United States,” the Pew Research Center discusses what the title suggests. It describes the different ways in which news and polling organizations select their samples—from telephone calls to online—which affect data quality. The ability to conduct polls online quickly and inexpensively has led to many firms with few or no survey credentials conducting polls. The phrase nationally representative is often used, but the public should ask further questions. The article points the reader to a guide published by the American Association for the Advancement of Science that lists key questions that should be asked before trusting the results of a poll, including:

  • How were the questions asked?
  • Was weighting applied? If so, how?
  • How many people were surveyed, and what was the margin of error?

The article points out (as we saw in some of the polls we critiqued) that the margin of error presented is often an underestimate due to other types of error (besides sampling error) contained in the data: error due to nonresponse, coverage error (the entire target population did not have a chance of being selected), and mismeasurement error. The authors point out that the actual margin of error in a study could be twice as large as what is reported.

The authors also point out that opt-in online polls tend to overrepresent Democrats, which may be one reason why we saw a strong win for Biden in some of the polls we examined. They note that membership in the Transparency and Accountability Initiative, in which pollsters agree to provide key information about how their polls are conducted, is a good sign (but no guarantee) that a poll was well conducted. We saw no reference to this initiative in any of the polls we examined. Finally, they point out that polling errors can be correlated across states with similar demographic characteristics. This was the case in the 2016 election and certainly seemed to be so in 2020. In 2016, the overlooked factor was more noncollege-educated White voters voting for the Republican candidate in battleground states than in previous elections. Again, in the 2020 election, there were factors unaccounted for in the poll results in key states that predicted a sizable win for Biden when that outcome turned out not to be the case. It is debatable, and would be interesting to know, what those factors were.

As already stated, interest in identifying those factors and in what went wrong with the polls died down much more quickly in 2020 than in 2016. In the end, the polls predicted that Biden would win, and that was the outcome. Polling is big business for the media and for pollsters, so it is good for both that polling lives to see another day. However, a little more reflection on what went wrong in the 2020 US election polls would have been good for polling. If no reflection is done regarding poll quality in key swing states, a rude awakening may occur again in future elections, as it did in 2016. The results of polls can drive people’s decisions as to whether to vote, with the Wisconsin polls being a good example. Polls are good for business, but poor-quality, misleading polls are bad for our democracy.

To summarize, the power of a random sample lies in the fact that we expect the sample to be representative of the population in every possible way. Whatever factor (or factors) drive a group of individuals’ decision-making (when deciding whom to vote for in a general election) should be reflected, at least in theory, in a random sample. That is what makes a random sample an extremely powerful mechanism for getting at the truth in a population. Adjusting convenience samples through weighting for known characteristics (or factors) of the population may be helpful, but not necessarily. Again, in a US election, why an individual or a group of individuals votes for a presidential candidate may have very little to do with their county, gender, age, education, or race. The reason someone votes for a candidate can’t always be determined from that person’s demographic characteristics. Adjusting for these characteristics may increase the pollster’s probability of making a good guess, but it is no guarantee that they will be correct.

The more known factors a pollster can adjust for, the higher the probability their predictions will be accurate. In its analysis, the research firm Gallup adjusts for eight factors, and the Pew Research Center adjusts for 12. Pew Research also selects its sample of respondents using what it calls the American Trends Panel (ATP) survey methodology. Begun in 2014, the ATP is a concerted effort by Pew Research to obtain (and maintain) a representative sample of Americans who are willing to take part in various surveys. Pew Research completes extensive weighting to ensure its sample data are as representative of the population as possible. In the next section, we will discuss two surveys conducted by Pew Research related to political polarization.

The quality of polls (as with any type of research) depends on the quality of the data collected. Unfortunately for polling, collecting quality data is becoming increasingly difficult to do. For many pollsters, the methods of selecting samples (online, telephone, text message) and subsequent weighting end up producing a poll result far from the outcome of the election. In March 2021, FiveThirtyEight’s Nate Silver announced that FiveThirtyEight would no longer take the methods used by pollsters into account when grading a particular poll. Instead, it would focus on grading pollsters by their track record—or, in other words, how close the pollsters’ results were to the outcome of a particular election. This approach would certainly make the job of grading polls much easier when deciding which polls to include in statistical models for calculating averages. However, it is a questionable approach, and it remains to be seen how well it will work. The use of good statistical methods should result in good estimates of the truth in the population. However, if the data is of poor quality, then the results will be of poor quality. There is only so much that weighting, by adjusting for known confounding factors, can do when the data is of poor quality. The real issue with polling is not the methods used for analyzing the data but the quality of the data itself. Unless pollsters figure out how to collect better quality data, polling will continue its decline as a means of pursuing the truth in the population.

Available in Multiple Formats – Respected in the Industry for over 30 years.

Statistical Thinking through Media Examples (Third Edition) by Anthony Donoghue – ©2022, 346 pages

Choose the format that suits you best:

  • Paperback List Price: $106.95
  • Cognella Direct Ebook: $80.95 – You save $26 (24%)
  • Paperback and Ebook Bundle: $104.95

Free 3–7 day delivery · 30-day returns

Also available on Amazon – Paperback: $114.21 Hardcover: $139.00 

CHAPTER 4: MEASURING UNCERTAINTY WITH PROBABILITY

 

4.2 CHANCE EVENTS AND CONFIRMATION BIAS

Finally, we will look at an example in which a major media outlet took advantage of our tendency to fall prey to confirmation bias. In Chapter 3, we discussed the results of a Pew Research Center survey titled “U.S. Media Polarization and the 2020 Election: A Nation Divided.” One finding in this study was that 75% of conservatives trust Fox News as a source of political and election news, giving this news network sizable control over this segment of the electorate. Does Fox News deserve this level of trust?

People want to believe their chosen source of news is telling them the truth. No one likes being lied to. However, people tend to believe what they want to believe, a bias of which a media outlet or media personality can take advantage. A classic example is the reporting particular Fox News personalities conducted after the 2020 US election.

Trump refused to accept the fact that he lost the election and stated that the election was stolen due to voter fraud. Media personalities at Fox News such as Lou Dobbs were more than willing to sow the seeds of doubt in their viewers’ minds. They interviewed members of Trump’s team, such as Rudy Giuliani and Sidney Powell, who made numerous claims regarding how the election was stolen. Because the claim about the election was one that many Fox News viewers wanted to believe, it was not difficult to convince a sizable percentage of the electorate that the election was stolen.

The problem was that Giuliani and Powell could provide little to no evidence to back up their claims of election fraud. Of 61 court cases filed by Trump associates, all but one were dismissed due to lack of evidence. However, the Fox News hosts continued to sow seeds of doubt in their viewership. This is an example of poor reporting at best and nefarious reporting at worst. Any news reporter working in such a high-level position should have the critical thinking skills to know that strong claims require strong evidence. If they provided a platform for Giuliani and Powell to voice these claims without requiring any evidence, then they should not be working in such a high-level position. The public needs to know that they can trust their news sources and that news reporters have checked the claims to which they give voice. That is simply good journalism.

It took the threat of a lawsuit for Fox News to tell its media personalities to stop giving voice to these claims. In a Business Insider article titled “Fox News Is Debunking Election Fraud Claims Made by Its Own Anchors in Response to a Legal Threat,” the reporter discusses the legal threat that got Fox News to back down from the election fraud claims to which it had given voice. The threat came from the company Smartmatic, the maker of election software used in the 2020 US election. The company sent Fox News a 20-page letter demanding “a full and complete retraction of all false and defamatory statements and reports.” Several of the voter fraud claims to which Fox News gave voice were related to the Smartmatic software used for counting votes in the 2020 US election. In response, Fox News had the same news personalities who perpetuated the election fraud claims read a statement debunking those claims. The legal threat escalated when Smartmatic sued Fox News for more than $2.7 billion in damages, according to the New York Times article titled “Fox News Is Sued by Election Technology Company for Over $2.7 Billion.” Another company, Dominion, whose voting systems were also used to count votes, followed with a $1.6 billion lawsuit, stating that Fox News falsely claimed the company rigged the election results. In response to the lawsuits, Fox News stated it was simply covering the news. Opinion presented as fact without evidence is not news. It is nothing more than tabloid fodder for people who fall prey to confirmation bias.

This is a great example of a media company taking advantage of our inherent tendency toward confirmation bias. Fox News personalities helped water the seeds of doubt in their viewership until a sizable percentage of them truly believed the election was stolen. As stated previously, once the seeds of doubt have been sown, the resulting weeds in the mind of the believer are almost impossible to remove. Fox News may claim it was just covering the news. However, the lack of integrity of the individuals involved in promoting this narrative was glaringly apparent to anyone with basic critical thinking skills.

Unfortunately, there is not much we can do to change the ways of these types of media outlets and media personalities. If they were to disappear, other media outlets would move in to take their place as long as there is a profit to be made from taking advantage of people who fall prey to confirmation bias.

However, we can give power to people in the form of critical and statistical thinking skills. As the Center for Media Literacy points out in a discussion article titled “Confirmation Bias and Media Literacy,” it comes down to teaching consumers of information how to think critically. The discussion begins by asking the following questions:

What is the truth? How do we seek it out? What is censorship? What should be—or not be—censored?

What is bias? To what extent is bias present? What is character? How does character factor into our judgments? Whom do we trust? Should we trust that democracy provides us with the best path to success, freedom, fairness, and justice? Is “power to the people” worthy of our confidence?

In this book, we have begun, and will continue, to answer many of these questions through the development of our critical and statistical thinking skills. We will ask the question “What is the truth?” and learn how to pursue the truth using data, statistics, and statistical thinking. In this book’s final chapter, titled “Integrity in Research,” we will learn that maintaining one’s integrity is essential when pursuing the truth. Learning how to question the information we take in through the media (and through journal articles) will give us a greater sense of power over the information we consume every day. The article goes on to discuss the issue of confirmation bias using real-world examples to demonstrate that there can be many points of view on a particular event, and our point of view (depending on our built-in biases) may be very far from the truth. It is only natural, in a world in which we are inundated with information, to gravitate to news sources and stories that confirm what we want to believe is true. Thinking for ourselves by questioning the information we take in every day is hard work and time-consuming. However, as the Center for Media Literacy advocates, we need to build this time-consuming work into our educational system.

We need to provide our children with the critical and statistical thinking skills to determine which media outlets and media personalities to trust. We need to give them the tools to determine for themselves which media outlets are consistently good sources of information, and which are not. They need to be able to differentiate media personalities who do a good job at reporting the news from those who lack integrity and are simply trying to pull the wool over our eyes. If we can shine the bright light of critical and statistical thinking on these media personalities and the media outlets who support them, with time, they may scuttle back into the dark corners where they belong.


CHAPTER 10: INTEGRITY IN RESEARCH

 

 10.5 VIOXX AND HEART ATTACKS

On May 20, 1999, the Food and Drug Administration (FDA) approved a drug for acute pain, rofecoxib, marketed as Vioxx. On September 30, 2004, Vioxx was taken off the market because of an increased risk of heart attack for individuals taking the drug. During the years that Vioxx was on the market, annual sales revenue was around $2.5 billion. The results of the suits against Merck, the maker of Vioxx, are discussed in the New York Times article titled “Merck Agrees to Settle Vioxx Suits for $4.85 Billion.”

Three years after withdrawing its pain medication Vioxx from the market, Merck has agreed to pay $4.85 billion to settle 27,000 lawsuits by people who claim they or their family members suffered injury or died after taking the drug, according to two lawyers with direct knowledge of the matter.

As the article points out, before the lawsuit settlement in 2007, scientists were debating whether Merck knew about the dangers of the drug before its approval in 1999. The debate centered around the results of a clinical trial published in 2000 that compared Vioxx to the painkiller naproxen, marketed as Aleve. The debate over the results of the clinical trial is discussed in the New York Times article titled “Scientists Again Defend Study on Vioxx.”

With a crucial personal-injury trial over Vioxx set to begin in New Jersey next week, the debate heated up again yesterday about whether Merck understated the drug’s risks in a journal article in November 2000. In a letter published online by the New England Journal of Medicine, 11 scientists who were co-authors of the article said they stood by its original conclusions, despite heavy criticism from the editors of the journal.

….

The trial confirmed that Vioxx seemed to be safer on the stomach, but it also showed that more patients taking Vioxx than naproxen died and that many more suffered heart attacks. As published, the article reported that 17 patients taking Vioxx and 4 taking naproxen had heart attacks during the trial.

As the article points out, the editors at the New England Journal of Medicine (where the original journal article was published) published an “expression of concern” in February 2006 regarding Merck’s failure to clearly present the risk of heart attacks from taking Vioxx. They point out that the difference in heart attack risk between Vioxx and naproxen was too large to be due to chance alone. The question we should ask at this stage is why the journal did not express the same concern before it published the original article in November 2000.

We will take a look at the original journal article, titled “Comparison of Upper Gastrointestinal Toxicity of Rofecoxib and Naproxen in Patients with Rheumatoid Arthritis,” to see how complete a picture Merck presented on the risks of heart attack from taking Vioxx. 

As already mentioned, Vioxx was primarily marketed as a drug for acute pain. However, this study was conducted to compare the incidence of upper gastrointestinal events in patients on rofecoxib (Vioxx) and naproxen (Aleve). The study was a randomized experiment with a total of 8,076 patients enrolled: 4,047 patients were randomly assigned to receive Vioxx, and 4,029 patients to receive Aleve. This is considered a very large sample size, so the sample statistics would be expected to be very good estimates of the population parameters of interest. The abstract on the first page of the journal article presents a summary of the results of the study:

Rofecoxib and naproxen had similar efficacy against rheumatoid arthritis. During a median follow-up of 9.0 months, 2.1 confirmed gastrointestinal events per 100 patient-years occurred with rofecoxib, as compared with 4.5 per 100 patient-years with naproxen (relative risk, 0.5; 95 percent confidence interval, 0.3 to 0.6; P = 0.005).

As a reminder, risk is a term used in epidemiology (the study and analysis of health outcomes and diseases in populations) defined as the probability that a disease (or outcome) will occur. The risk of gastrointestinal events on Vioxx was 2.1 per 100 patient-years (PY), or 0.021. The risk on Aleve was 4.5 per 100 PYs, or 0.045. Relative risk is a value that compares one group’s risk of developing a disease (or outcome) relative to another group. In this case, the relative risk for gastrointestinal events was 0.5, calculated as follows:

Relative Risk = Risk on Vioxx/Risk on Aleve

= 0.021/0.045

≈ 0.5

The relative risk of 0.5 means that there was a 50% decreased risk of gastrointestinal events on Vioxx compared to Aleve. In other words, the risk of gastrointestinal events on Vioxx was half the risk it was on Aleve. The p-value of 0.005 means that the researchers found strong statistical evidence of a difference between Vioxx and Aleve (regarding the risk of gastrointestinal events) in the population. The 95% confidence interval was [0.3,0.6]. This means that the researchers were 95% confident that the true relative risk in the population could be anywhere from 0.3 to 0.6.
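These calculations are easy to reproduce. A minimal check, using the per-100-patient-year rates quoted in the abstract:

```python
# Relative risk of confirmed gastrointestinal events, recomputed from the
# per-100-patient-year rates quoted in the abstract.
risk_vioxx = 2.1 / 100  # events per patient-year on rofecoxib (Vioxx)
risk_aleve = 4.5 / 100  # events per patient-year on naproxen (Aleve)

relative_risk = risk_vioxx / risk_aleve  # drug of interest over comparator
print(round(relative_risk, 1))  # 0.5
```

Note that the unrounded ratio is about 0.47; the published 0.5 reflects rounding to one decimal place.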

The analysis of the rates of complicated confirmed events was presented in a similar and consistent way:

The respective rates of complicated confirmed events (perforation, obstruction, and severe upper gastrointestinal bleeding) were 0.6 per 100 patient-years and 1.4 per 100 patient-years (relative risk, 0.4; 95 percent confidence interval, 0.2 to 0.8; P = 0.005). 

In this case, the relative risk was 0.4 for complicated confirmed events, calculated as follows:

Relative Risk = Risk on Vioxx/Risk on Aleve

= 0.006/0.014

≈ 0.4

However, when comparing the incidence of myocardial infarction (heart attack), the way in which the relative risk was calculated was not consistent with how it was calculated for the primary results:

The incidence of myocardial infarction was lower among patients in the naproxen group than among those in the rofecoxib group (0.1 percent vs. 0.4 percent; relative risk, 0.2; 95 percent confidence interval, 0.1 to 0.7); the overall mortality rate and the rate of death from cardiovascular causes were similar in the two groups.

The risk of heart attack on Vioxx was 0.004, or 0.4 percent, representing the 17 patients who had heart attacks while on Vioxx. The risk of heart attack on Aleve was 0.001, or 0.1 percent, representing the 4 patients who had heart attacks while on Aleve. In this case, the relative risk for heart attacks was presented as 0.2, calculated as follows:

Relative Risk = Risk on Aleve/Risk on Vioxx

= 0.001/0.004

= 0.25

The relative risk was calculated by dividing 0.001 by 0.004, the risk on Aleve over the risk on Vioxx, the reverse of how it was calculated for the primary analysis of gastrointestinal events. Both ways of calculating relative risk are valid. However, the researchers should be consistent in how they calculate relative risk for each outcome or event of interest to avoid any confusion or misunderstanding of the results. It is accepted (and proper) practice to list the primary drug of interest first (on top) and then the comparator drug (on bottom) when calculating relative risk.

Why did the language change when it came to describing the incidence of heart attacks?

Why did the researchers not provide the rate of death from cardiovascular diseases instead of simply stating it was similar in both groups?

The reported relative risk of 0.2 means that there was an 80% decreased risk of heart attack on Aleve compared to Vioxx. In other words, the risk of heart attack on Aleve was one-fifth the risk of heart attack on Vioxx. As shown in our calculation, the actual relative risk was equal to 0.25 (to two decimal places), but the researchers rounded the value to 0.2! If the relative risk for heart attack had been calculated in the same way as for the primary analysis, it would have been equal to 4, a value that is much easier to understand and explain. It simply means (what the data show very clearly) that there was four times the risk of a heart attack on Vioxx (17 patients) compared to Aleve (4 patients). A relative risk of 4 would have been much more alarming to the reader of the paper than 0.2 or 0.25.
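The two framings can be shown side by side, using the rounded risks reported in the paper. Both divisions are valid relative risks; only the reference group changes:

```python
# Heart attack risks as reported (rounded to one significant figure)
risk_vioxx = 0.004  # 0.4 percent: 17 heart attacks among patients on rofecoxib
risk_aleve = 0.001  # 0.1 percent: 4 heart attacks among patients on naproxen

print(risk_aleve / risk_vioxx)  # 0.25 -- as the paper framed it (reported as 0.2)
print(risk_vioxx / risk_aleve)  # 4.0  -- framed like the primary analysis
```

The underlying data are identical; only the choice of numerator and denominator changes how alarming the number looks.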

The 95% confidence interval for the relative risk goes from 0.1 to 0.7. The confidence interval does not include 1, a value that would indicate that the risk of heart attack on both drugs is the same in the population. Therefore, the confidence interval provides statistical evidence that the risk of heart attack is significantly higher for patients on Vioxx compared to patients on Aleve. The corresponding p-value would have been less than 0.05. However, the p-value was not presented in the summary of the results or anywhere else in the journal article.
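The journal article does not state how the interval was computed, but a standard Katz log-scale interval on the raw event counts (ignoring patient-year exposure, so only an approximation) reproduces the published 0.1 to 0.7:

```python
import math

def relative_risk_ci(a, n1, b, n2, z=1.96):
    """Relative risk of group 1 vs. group 2 with a Katz log-scale 95% CI:
    exp(ln RR +/- z * sqrt(1/a - 1/n1 + 1/b - 1/n2))."""
    rr = (a / n1) / (b / n2)
    se = math.sqrt(1 / a - 1 / n1 + 1 / b - 1 / n2)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# 4 heart attacks among 4,029 naproxen patients vs. 17 among 4,047 on rofecoxib
rr, lo, hi = relative_risk_ci(4, 4029, 17, 4047)
print(f"RR = {rr:.2f}, 95% CI [{lo:.1f}, {hi:.1f}]")  # RR = 0.24, 95% CI [0.1, 0.7]
```

Since the interval excludes 1, a two-sided p-value below 0.05 follows directly, which makes its omission from the article all the more conspicuous.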

The researchers did give the following explanation for the differences in the rates of heart attacks observed:

The rate of myocardial infarction was significantly lower in the naproxen group than in the rofecoxib group (0.1 percent vs. 0.4 percent). This difference was primarily accounted for by the high rate of myocardial infarction among the 4 percent of the study population with the highest risk of a myocardial infarction, for whom low-dose aspirin is indicated. The difference in the rates of myocardial infarction between the rofecoxib and naproxen groups was not significant among the patients without indications for aspirin therapy as secondary prophylaxis.

The researchers state that the rate of heart attacks was “significantly lower” among patients on Aleve, but they do not state that it was statistically significantly lower. They explain that the higher rate of heart attacks among patients on Vioxx was due to the higher risk of heart attacks among “4 percent of the study population.” However, the study was a randomized experiment.

The random assignment of patients to treatments should have ensured that approximately the same percentage of patients with a high risk of heart attack was assigned to each treatment group. This is a very weak explanation for the differences in the percentage of heart attacks in each group, one that should have been questioned during peer review.

The questions that come to mind are as follows:

  • Why cause confusion by not consistently calculating relative risk?
  • Why would Merck present the relative risk as 0.2 (instead of 0.25), making Vioxx look worse when it comes to the risk of heart attack than it was?
  • Why did Merck present the p-values for the primary analysis and not for the analysis of heart attacks?
  • Why did peer review at the New England Journal of Medicine not question how the results were presented?
  • Why did the FDA not catch the problems with how the results were presented?
  • Did Merck know the drug was causing heart attacks before the drug was brought to market?

If you search for news on Merck and Vioxx, you will find a consensus in the media from the evidence presented after Vioxx was taken off the market that Merck knew the drug was causing heart attacks before it was brought to market. You will find that an estimated 60,000 people died from taking Vioxx, more than the number of Americans who died in the Vietnam War. When an individual kills another person in cold blood, we are shocked and outraged. The individual will be sent to prison as punishment for their crimes. The individuals in this case made a deliberate and calculated effort to hide from the FDA that Vioxx was causing heart attacks. That is a very cold thing to do given that your decisions are going to cause the deaths of innocent individuals whom you don’t even know. However, nobody went to prison in this case. In fact, many of the individuals involved went on to have prestigious careers.

The questions we must ask ourselves are these: Where was the collective moral outrage? How many people even know that this occurred? Why are we OK with individuals working for and on behalf of large corporations making these types of decisions without any real consequences for their actions?

When we take a moment to internalize the case of Vioxx, we start to understand the scope of what the individuals working for (or on behalf of) Merck did. None of the people who died from taking the drug deserved to die in this way. The fact that it was left to chance for Vioxx to find its victims means that any one of those people could have been your father, mother, sister, or brother. It could have been you. The fact that there were no consequences for Merck (and associates) beyond a hit to its bottom line means that this has happened again and will continue to happen. The hit to the bottom line is simply the cost of doing business.


10.6 OXYCONTIN AND THE OPIOID CRISIS

Another pain medication that has led to far more deaths than Vioxx is OxyContin, approved for use by the FDA in 1996. As was the case with Vioxx, Purdue Pharma, the pharmaceutical company that manufactured the drug, launched a massive marketing campaign aimed at physicians and the public without disclosing that the drug was highly addictive. OxyContin went on to become the most highly abused painkiller in the United States and a major contributor to the opioid epidemic. As the New York Times article titled “Sacklers Directed Efforts to Mislead Public About OxyContin, New Documents Indicate” points out, the owners of Purdue Pharma were directly involved in the efforts to mislead physicians and the public about the dangers of the drug. Richard Sackler, son of one of the company’s founders, advised doctors to prescribe the highest (and most profitable) doses and pushed the blame onto the patients who became addicted, stating, “They are the culprits and the problem. They are reckless criminals.”

The Sacklers’ aggressive marketing campaign continued for more than two decades, long after the devastation OxyContin was causing, mainly in poor rural white communities across the US, had become clear. According to the Centers for Disease Control and Prevention (CDC) website, “From 1999–2018, almost 450,000 people died from an overdose involving any opioid, including prescription and illicit opioids.”

In the American Journal of Public Health article titled “The Promotion and Marketing of OxyContin: Commercial Triumph, Public Health Tragedy,” the researcher, Art Van Zee, discusses the massive marketing campaign conducted by Purdue Pharma to promote the use of OxyContin:

When Purdue Pharma introduced OxyContin in 1996, it was aggressively marketed and highly promoted. Sales grew from $48 million in 1996 to almost $1.1 billion in 2000. The high availability of OxyContin correlated with increased abuse, diversion, and addiction, and by 2004 OxyContin had become a leading drug of abuse in the United States.

When discussing the origins of the FDA approval of OxyContin, the researcher states the following:

Randomized double-blind studies that compared OxyContin with controlled-release morphine for cancer-related pain also found comparable efficacy and safety. The FDA’s medical review officer, in evaluating the efficacy of OxyContin in Purdue’s 1995 new drug application, concluded that OxyContin had not been shown to have a significant advantage over conventional, immediate-release oxycodone taken 4 times daily other than a reduction in frequency of dosing.

Why did the FDA approve OxyContin when it was found to be no more effective than the pain medications already on the market and when it was found to be highly addictive? The FDA website provides an explanation of its reasoning at the time.

At the time of approval, FDA believed the controlled-release formulation of OxyContin would result in less abuse potential, since the drug would be absorbed slowly and there would not be an immediate “rush” or high that would promote abuse. In part, FDA based its judgment on the prior marketing history of a similar product, MS Contin, a controlled-release formulation of morphine approved by FDA and used in the medical community since 1987 without significant reports of abuse and misuse.

The FDA goes on to state that there was no evidence at the time to suggest that crushing the drug in order to snort or ingest it would become a widespread practice. Unfortunately, the agency was wrong in its assessment of what would occur.

The STAT News article titled “The History of OxyContin, Told Through Unsealed Purdue Documents” provides details on Purdue Pharma’s marketing strategy from 1993, before the drug was approved, through 2014. Although the drug was approved for cancer-related pain, the article discusses how the company planned, from very early on, to move into the more lucrative market of drugs for nonmalignant pain conditions.

In December 1994, Michael Friedman, the sales and marketing executive (who would later become chief executive officer), sent an email to Richard Sackler and two other members of the Sackler family, stating the following:

“Our current MS Contin business has created ‘a franchise’ with certain physicians who routinely write prescriptions for the drug,” Friedman wrote. These family physicians, general physicians, and internists “may be the bridge that we can use to expand the use of OxyContin beyond Cancer patients to chronic non-malignant pain”—a market that he noted accounted for 68.7 million prescriptions a year.

From very early on, the Sackler family planned to push a drug they knew was highly addictive onto as many people as they possibly could. This type of behavior is no different from that of a street-corner drug pusher, except that the Sacklers had a far more sophisticated operation for mass distribution and sales.

The November 2020 New York Times article titled “McKinsey Proposed Paying Pharmacy Companies Rebates for OxyContin Overdoses” discusses how, in 2017, McKinsey, one of the world’s most prominent consulting firms, laid out different options for Purdue Pharma to increase sales of OxyContin. One of the options was “to give Purdue’s distributors a rebate for every OxyContin overdose attributable to pills they sold.” As late as 2017, when hundreds of thousands of Americans had already died from opioid overdoses, the Sackler family was working with its consultants to squeeze every last dollar it could out of pushing the drug. How much more callous and calculating could the individuals involved in these sorts of decisions be?

In a May 2021 article in The Guardian titled “Big pharma executives mocked ‘pillbillies’ in emails, West Virginia opioid trial hears,” the author discusses how executives at AmerisourceBergen, a major distributor of OxyContin, mocked the people who became addicted to the drug. They called them “pillbillies” and, as the article reports, “described Kentucky as ‘OxyContinville’ because of the high use of the drug in the poor rural east of the state.” The disdain these corporate executives showed for the poor rural people who became addicted to the product they were pushing reveals the type of individual that often makes it to the top of the corporate ladder in America.

The Sackler family is one of the richest families in the US, estimated to have made $10.7 billion in profits from sales of OxyContin over 20 years. However, according to the New York Times article (from December 2020) titled “The Sacklers’ Last Poison Pill,” they may never spend any time in jail. The Sackler legal team managed to push back a meeting with the House Committee on Oversight and Reform (the main investigative committee of the United States House of Representatives) to January 2020. In essence, through legal maneuvering, and with members of the House of Representatives bowing to pressure, the Sackler family may never face real accountability for what they did:

By then, a bankruptcy plan to reorganize Purdue will probably have been proposed. If, as expected, the plan seeks to release the Sacklers from liability, it will become practically impossible to uncover the full truth about the Sacklers’ role in the opioid crisis.

Why were the Sackler legal team allowed to do this? Do the politicians in the US represent the people or large corporations? Are particular politicians lining their pockets by giving large corporations what they want at the expense of the people? Are we all just pawns in their game?

The truth is that the Sackler family has made so much money from pushing its drug that its members may never serve a day in prison. However, they are, and should always be, guilty in the court of public opinion. What they did was premeditated and planned. The level of death and devastation they brought upon so many American families is second only to that of the COVID-19 pandemic. They spread their virus (in the form of a pill) across the landscape of American life.

Moral outrage was mounting against the Sackler family before the COVID-19 pandemic hit. Hopefully, that outrage will rise again, and the public will demand that the punishment fit the horrendous crimes these individuals committed. The COVID-19 pandemic awakened many of us to the fact that life is short and precious. No one deserves to lose everything they are, and everything they could be, in this way. Their families and loved ones should not have to suffer such a premature and painful loss. Punishment for these sorts of crimes should amount to more than a hit to the almighty bottom line. The punishment should fit the crime. If that is not the truth, then I don’t know what is.

In the cases of both Vioxx and OxyContin, the individuals involved were willing to sacrifice their integrity to ensure that as many Americans as possible consumed a flawed and potentially deadly product. When the truth about these dangerous products eventually came to light, they knew they could hide behind their company names, agreeable politicians, and lawyers who were also willing to sacrifice their integrity for a sizable paycheck. The truth, and the lives lost and destroyed, did not matter to any of these individuals.

As we have learned throughout this book, the path to truth is a challenging one. There will always be opportunities to take the easier path to “success,” often defined in terms of monetary gain. However, at the end of the day, you will have to contend with where those paths have led you, no matter how much money you make.

One of the greatest American leaders, Dr. Martin Luther King, who followed his path to truth with passion and integrity, had the following words of advice for future leaders:

May I stress the need for courageous, intelligent, and dedicated leadership…. Leaders of sound integrity. Leaders not in love with publicity, but in love with justice. Leaders not in love with money, but in love with humanity. Leaders who can subject their particular egos to the greatness of the cause.

Dr. Martin Luther King used his natural intelligence and leadership skills to move forward with a cause he truly believed in. He had the courage and commitment to follow his path to truth until he reached the mountaintop (and was rewarded for his efforts). Watch the last three minutes of his final speech, where he tries to express the inner reward and insight he received from living a life dedicated to the “greatness of the cause.” I hope you will have the courage to follow in his footsteps. America, now more than ever, needs great young leaders willing to take on challenging causes, maintain their sense of integrity, and follow their path to truth. It is hard and courageous work, but it is worth it, and a very American thing to do. So work hard, be brave, and show the world what it truly means to be an American!

Latest News 

August 2023: New York Times: Supreme Court Pauses Settlement With Sacklers Pending Review. 

December 2023: What to Know About The Purdue Pharma Case Before the Supreme Court

