Oh, I enjoyed this book so much. I am on a roll with choosing non-fiction books that delight me. I strongly recommend this book for its exploration of various ways different areas of mathematics can help us understand the world around us. This book also delivers a whole bunch of (previously unknown to me) biases, all dealing with math, giving me even more joy.
In a very approachable way, with many examples drawn from the real world and from history (none of those "two trains are on tracks going in opposite directions at forty kilometers an hour" problems), the chapters cover exponential growth and decay (Chapter 1); sensitivity and specificity, particularly in medicine (Chapter 2); how math, statistics in particular, is used in legal matters (Chapters 3 and 4); different numbering systems (Chapter 5); algorithms in general and how they can apply to one's life (the 37% rule is a very common example of how algorithms can make our lives better; see also Algorithms to Live By); and the most relevant topic of 2020: mathematical epidemiology, the study of epidemics, pandemics, and the spread of disease.
I mean, discussions of false positives and false negatives, how one can intimidate jurors with numbers, how to interpret stats you read in the news (hint: context matters a LOT), an overview of virus transmission (vaccines don't cause autism, no matter what asshat anti-vaxers claim; research into autism's actual causes was set back years because Jenny McCarthy decided endangering thousands of children made for a better "mother feeling," leaving scientists to debunk her shit for the sake of public health before they could get on with finding the real answers, but here we are), and ideas that can help people live better lives.
One of the good things about this book, for some people anyway, is that it contains very few equations, making it approachable to anyone who doesn't like math, or thinks he can't do it.
Strongly recommended. Fun, informative read.
The application of the God Equation can be seen as an attempt to take difficult life and death decisions out of our subjective hands and place them under the control of an objective mathematical formula. This point of view plays on the seeming impartiality and objectivity of mathematics, but neglects to recognize that the subjective decisions are simply being diverted out of sight in the form of judgments on quality of life and cost-effectiveness thresholds at earlier stages of the decision-making process.
Chapter 2, English version, not in the American version of the book
Mathematics, at its most fundamental, is pattern. Every time you look at the world you are building your own model of the patterns you observe.
Refusing to believe reports of the core’s explosion, Akimov relayed incorrect information about the reactor’s state, delaying vital containment efforts. Upon eventually realizing the full extent of the destruction, he worked, unprotected, with his crew to pump water into the shattered reactor.
The greater our acquaintance with the routines of everyday life, the quicker we perceive time to pass, and generally, as we age, this familiarity increases. This theory suggests that, to make our time last longer, we should fill our lives with new and varied experiences, eschewing the time-sapping routine of the everyday. Neither of the above ideas explains the almost perfectly regular rate at which our perception of time seems to accelerate. That the length of a fixed period of time appears to reduce continually as we age suggests an “exponential scale” to time.
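A toy model of that "exponential scale" idea (my own sketch, not the book's): if a year feels as long as the fraction of your life it represents, its perceived length scales like 1/age, so equal ratios of ages feel like equal spans of time.

```python
# Sketch of the "proportional" theory of time perception: a year's
# perceived length is the fraction of your life it represents (assumption
# for illustration; the book states the idea only qualitatively).

def perceived_year_length(age):
    """Relative perceived length of the year beginning at the given age."""
    return 1.0 / age

# Under this model, the year starting at age 10 feels five times as long
# as the year starting at age 50:
ratio = perceived_year_length(10) / perceived_year_length(50)
```

This is exactly why a fixed period seems to shrink steadily as we age: each successive year is a smaller slice of the whole.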
Today, more people in the world die from being overweight than from being underweight.
The main problem with BMI is that it can’t distinguish between muscle and fat. This is important because excess body fat is a good predictor of cardiometabolic risk. BMI is not. If the definition of obesity were instead based on high-percentage body fat, between 15 and 35 percent of men with non-obese BMIs would be reclassified as obese. For example, “skinny-fat” individuals, with low muscle but high levels of body fat and consequently normal BMI, fall into the undetected “normal-weight obesity” category. A recent cross-population study of forty thousand individuals found that 30 percent of people with BMI in the normal range were cardiometabolically unhealthy. The obesity crisis may be much worse than our BMI-based figures suggest. However, BMI both under- and over-diagnoses obesity. The same study found that up to half of the individuals that BMI classified as overweight and over a quarter of BMI-obese individuals were metabolically healthy.
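The excerpt doesn't spell the formula out, but BMI is simply mass over height squared, which is why it can't see body composition. A quick sketch (illustrative numbers, not the book's):

```python
def bmi(weight_kg, height_m):
    """Body mass index: mass in kilograms divided by height in meters, squared."""
    return weight_kg / height_m ** 2

# A muscular 90 kg athlete and a "skinny-fat" 60 kg individual, both
# 1.80 m tall (made-up figures for illustration):
muscular = bmi(90, 1.80)  # ~27.8: "overweight" by BMI despite low body fat
skinny = bmi(60, 1.80)    # ~18.5: "normal" by BMI even if body fat is high
```

Both get the wrong label for exactly the reason the passage describes: the formula only sees kilograms and meters, not muscle versus fat.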
Alternatively, by blowing as much air as you can into an empty airtight bag and then sealing and submersing it in water, you can use Archimedes’s principle to estimate your lung capacity a few weeks into your new exercise program.
Using this idea, all Archimedes needed to do was to take a pan balance with the crown on one side and an equal mass of pure gold on the other. In air, the pans would balance. However, when the scales were placed underwater, a fake crown (which would be larger in volume than the same mass of denser gold) would experience a larger buoyant force as it displaced more water, and its pan would consequently rise. This principle from Archimedes is used to accurately calculate body fat percentage. A subject is first weighed in normal conditions, then reweighed while sitting completely submerged on an underwater chair attached to a set of scales. The differences in the dry and underwater weight measurements can be used to calculate the buoyant force acting on the individual while underwater, which can in turn be used to determine the person’s volume, given the known density of water. This volume, in conjunction with figures for the density of fat and lean components of the human body, can be used to estimate the body fat percentage and provide more accurate assessments of health risks.
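The underwater-weighing calculation described above can be sketched in a few lines. The buoyancy step is pure Archimedes; for the final conversion from body density to fat percentage I've used Siri's standard two-compartment equation, which is an assumption on my part, as the book doesn't name a specific formula (real protocols also correct for air left in the lungs, which I've omitted):

```python
WATER_DENSITY = 0.997  # kg per liter, water near room temperature

def body_fat_percent(dry_mass_kg, underwater_mass_kg):
    # Archimedes: the apparent mass lost underwater equals the mass of
    # water displaced, which gives the body's volume.
    volume_liters = (dry_mass_kg - underwater_mass_kg) / WATER_DENSITY
    body_density = dry_mass_kg / volume_liters  # kg/L, same as g/cm^3
    # Siri's two-compartment equation (standard, but my choice, not the
    # book's): fat % = 495/density - 450, with density in g/cm^3.
    return 495.0 / body_density - 450.0

# e.g. 80 kg on dry land but only 4 kg on the underwater scales:
fat = body_fat_percent(80.0, 4.0)  # roughly 22 percent
```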
A false alarm is typically an alarm triggered by something other than the expected stimulus. A staggering 98 percent of all burglar-alarm activations in the United States are thought to be false alarms. This prompts the question, Why have an alarm at all? As we get used to incorrect alerts, we become more reluctant to investigate their causes. Burglar alarms are by no means the only warnings with which we have become overfamiliar. When the smoke detector goes off, we are usually already opening a window and scraping the soot off our toast. When we hear a car alarm outside, very few of us will even get off the sofa and stick our heads outside to investigate. When alarms become an inconvenience rather than an aid, and when we no longer trust their output, we are said to be suffering alarm fatigue. This is a problem because situations in which alarms become so routine that we ignore them, or disable them completely, can be less sensible than not having the alarm in the first place.
This trade-off exists because we are typically testing for proxies rather than the phenomena themselves. The test that misdiagnosed Mark Stern as HIV positive does not detect the virus itself. Rather, it tests for the antibodies that the body's immune system raises in an attempt to fight off the virus. However, high HIV-associated antibody loads can be raised by something as innocuous as the flu vaccination. Similarly, most home pregnancy tests do not look for the presence of a viable embryo implanted in the woman's womb. Typically, these tests look for elevated levels of the hormone hCG, produced after implantation of the embryo. Such proxy indicators are often called surrogate markers. Tests can be wrong because markers similar to the surrogate can trigger a positive result.
For some tests, a more accurate version is not available. In these cases, we should remember that even a second run of the same test can dramatically improve the precision of its results. We should never be afraid to ask for a second opinion. Clearly, even doctors—the perceived experts—don’t always have the firmest grasp of the figures, despite the illusion of confidence they exude. Before you start to worry yourself unduly based on assertions of a single test, find out its sensitivity and specificity and work out the likelihood of an incorrect result. Question the illusion of certainty and take the power of interpretation back into your own hands.
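The "work out the likelihood of an incorrect result" advice is a Bayes' theorem calculation, and it also shows why a second run of the same test helps so much. A sketch with illustrative figures (my numbers, not the book's):

```python
def prob_condition_given_positive(prevalence, sensitivity, specificity):
    """Bayes' theorem: chance you actually have the condition after a positive test."""
    true_positives = prevalence * sensitivity
    false_positives = (1.0 - prevalence) * (1.0 - specificity)
    return true_positives / (true_positives + false_positives)

# A test that is 99.7% sensitive and 99.5% specific, for a condition
# affecting 1 person in 1,000 (illustrative figures):
first = prob_condition_given_positive(0.001, 0.997, 0.995)   # ~0.17
# Rerun the test after one positive, using the updated probability as
# the new "prevalence":
second = prob_condition_given_positive(first, 0.997, 0.995)  # ~0.98
```

Even with an excellent test, a single positive leaves you more likely healthy than not when the condition is rare; the second positive is what makes the result near certain.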
Two events are dependent if knowledge of one event influences the probability of the other. Otherwise they are independent. When presented with the probabilities of individual events, common practice is to multiply these probabilities together to find the probability of the combination of the events occurring.
A suspect is found whose license plate matches the five digits remembered by the witness. If the suspect is innocent, then only ninety-nine other cars are out there, out of the 10 million cars on the road, whose plates match the first five digits. Therefore, the probability that the witness observed such a license plate if the suspect is innocent is 99/10,000,000, less than one in one hundred thousand (1/100,000). This tiny probability of seeing the evidence if the suspect is innocent seems to overwhelmingly indicate the suspect’s guilt. However, to assume so is to commit the prosecutor’s fallacy. The probability of seeing the evidence if the suspect is innocent is not the same as the probability of the suspect being innocent, once that piece of evidence has been observed. Recall that ninety-nine of the hundred cars that match the witness’s description do not belong to the suspect. The suspect is just one of a hundred people who drive such a car. The probability of the suspect’s guilt given their license plate, therefore, is just one in a hundred—exceedingly unlikely. Other evidence tying the suspect to the area of the crime or eliminating the other cars from being in the area would increase the probability of the suspect’s guilt. However, based on the single piece of evidence, the overwhelmingly likely conclusion should be that the suspect is innocent. The prosecutor’s fallacy is only truly effective when the chance of the innocent explanation is extremely small, otherwise it is too easy to see through the fallacious argument.
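The two probabilities the passage contrasts are worth computing side by side, using the passage's own numbers:

```python
total_cars = 10_000_000
matching_cars = 100  # cars whose plates match the witnessed five digits

# The prosecutor's number: the chance of the evidence arising if the
# suspect is innocent (the witness saw one of the 99 other matching cars).
p_evidence_if_innocent = (matching_cars - 1) / total_cars  # 99 in 10 million

# The relevant number: absent other evidence, the suspect is just one of
# the 100 drivers of matching cars.
p_guilty_given_evidence = 1 / matching_cars                # 1 in 100
```

The fallacy is treating the first tiny number as if it were the probability of innocence, when the second number says the suspect is overwhelmingly likely to be innocent on this evidence alone.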
Statistical studies are more abundant than ever, with a concomitant increase in the numerical skills required to interpret their findings. In many cases there is no hidden agenda; the statistics are just difficult to interpret. However, for many reasons it might benefit one party or another to put a spin on a particular finding.
Small, unrepresentative, or biased samples, in conjunction with leading questions and selective reporting, can all make for unreliable statistics. More subtle still are the statistics used out of context so that we have no way to judge whether, for example, a 300 percent increase in cases of a disease represents an increase from one patient to four or from half a million patients to 2 million. Context is important. It’s not that these different interpretations of numbers are lies—each one is a small piece of the true story on which someone has shone a light from a preferred direction—it’s just that they are not the whole truth. We are left to try to piece together the true story behind the hyperbole.
Advertisers know that numbers are widely perceived as being indisputable facts. Adding a figure to an ad can be extremely persuasive and lend power to the promoter’s argument.
The apparent objectivity of statistics seems to say, “Don’t just trust what we’re saying, trust this piece of indisputable evidence.”
A more appropriate question for Liddle to ask might have been, “If a black US citizen comes across someone while out walking alone, who should they be more scared will kill them: another black person, or a law enforcement officer?” To find out the answer we need to compare the per capita rates of black-victim killings perpetrated by black people and by police officers. We find the per capita rates, as presented in table 11, by dividing the total number of black victims killed by a particular group (black people or police officers) by the size of the group. Black people were responsible for 2,380 killings of other black people in 2015, but with over 40.2 million black US citizens, the per capita rate is relatively small—around one in seventeen thousand. Police officers were “rightly or wrongly” responsible for killing 307 black people in 2015. With 635,781 police officers, this amounts to a per capita killing rate that is just below one killing per two thousand police officers—over eight times higher than the rate for black US citizens. It seems that a black person walking down the street should be more alarmed to see a police officer approaching than another black person.
Of course we have not accounted for the fact that encounters with the police are often confrontational, and US police are typically armed. It's perhaps not surprising that those authorized to wield lethal force do so more frequently than the population at large. By exactly the same mathematics, we can show that white people should also be more scared of law enforcement officers (per capita white killing rate of one per thousand officers) than other white people (per capita white killing rate of one per ninety thousand white people), despite more white people killing other white people than police officers killing white people. That police officers have twice as high a per capita rate of killing white people as black people is because the country has more white people.
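The per capita comparison is simple division, using the figures quoted in the passage:

```python
def per_capita(events, population):
    """Rate of events per member of the group responsible."""
    return events / population

# 2015 figures quoted in the passage:
by_black_citizens = per_capita(2_380, 40_200_000)  # ~1 in 17,000
by_police = per_capita(307, 635_781)               # ~1 in 2,000
ratio = by_police / by_black_citizens              # ~8: "over eight times higher"
```

The raw counts point one way; dividing by the size of each group points the other. That is the whole argument.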
A statistic at the heart of the Black Lives Matter movement: the 12.6 percent of the population who are black account for 26.8 percent of police killings, while the 73.6 percent who are white account for just 51.0 percent.
Other subtle signs can indicate a manipulated statistic. If presenters are confident of the veracity of their figures, then they won’t be afraid to give the context and the source for others to check. As with Gorka’s terrorism tweet, a contextual vacuum is a red flag when it comes to believability. Lack of details on survey results, including the sample size, the questions asked, and the source of the sample—as we saw in L’Oréal’s advertising campaign—is another warning sign. Mismatched framing, percentages, indexes, and relative figures without the absolutes, as in the NCI’s Breast Cancer Risk Tool, should set alarm bells ringing. The spurious inferences of a causative effect from uncontrolled studies or subsampled data—as often seen in the conclusions drawn from trials of alternative medicine—are yet more tricks to watch out for. If an initially extreme statistic suddenly rises or falls—as with gun crime in the United States—be on the lookout for regression to the mean. More generally, when a statistic is pushed your way, ask yourself the questions “What’s the comparison?” “What’s the motivation?” and “Is this the whole story?” Finding the answers to these three questions should take you a long way toward determining the veracity of the figures. Not being able to find the answers tells its own story.
When you arrive at the cinema and need to pay to park at the meter, the ticket machine probably won’t provide change. If you have enough coins in your pocket, you probably want to make up the exact price as quickly as possible. In one greedy algorithm, which many of us will reach for intuitively, we insert coins sequentially, each time adding the largest-value coin that is less than the remaining total.
For all the 1-2-5 currencies, as well as the US coinage system, the greedy algorithm described above does make up the total using the smallest number of coins.
Any currency for which each coin or note is at least twice as valuable as the next smallest denomination will satisfy the greedy property.
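The greedy change-making algorithm described above fits in a few lines:

```python
def greedy_change(amount, denominations):
    """Repeatedly take the largest coin that doesn't exceed what remains."""
    coins = []
    for coin in sorted(denominations, reverse=True):
        while amount >= coin:
            amount -= coin
            coins.append(coin)
    return coins

us_cents = [1, 5, 10, 25]
greedy_change(67, us_cents)  # [25, 25, 10, 5, 1, 1] -- six coins, and optimal
```

One caveat the book's doubling condition glosses over: doubling alone doesn't guarantee greedy is optimal. For the made-up denominations {1, 5, 11}, where each coin is at least double the next, greedy makes 15 as 11 + 1 + 1 + 1 + 1 (five coins) when 5 + 5 + 5 needs only three.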
Measures implemented to reduce employee absence, including reducing paid sick leave, are causing a marked rise in people coming to work regardless of how bad they might be feeling, unintentionally leading to more illness and lower overall efficiency. Presenteeism is particularly prevalent in health care and teaching. Ironically, nurses, doctors, and teachers feel so obligated to the large numbers of people they safeguard that they often put those very people at risk by coming in to work while under the weather.
Perhaps surprisingly, diseases with high case fatality rates tend to be less infectious. If a disease kills too many of its victims too quickly, then it reduces its chances of being passed on. Diseases that kill most of the people they infect and also spread efficiently are very rare and are usually confined to disaster movies. Although a high case fatality rate significantly raises the fear associated with an outbreak, diseases with high R0 but lower case fatality may end up killing more people by virtue of the larger numbers they infect.
One of the most effective options for reducing disease spread is vaccination. By taking people directly from susceptible to removed, bypassing the infective state, it effectively reduces the size of the susceptible population. Vaccination, however, is typically a precautionary measure applied in an attempt to reduce the probability of outbreaks. Once outbreaks are in full swing, it is often impractical to develop and test an effective vaccine in time.
Similarly, it’s impractical in the real world to quarantine a high proportion of the population for a long time. Running a mathematical model presents no such concerns. We can test models in which everyone is quarantined or no one, or anywhere in between, in an attempt to balance the economic impact of this enforced isolation with the effect it has on the progression of the disease. This is the real beauty of mathematical epidemiology—the ability to test out scenarios that are infeasible in the real world, sometimes with surprising and counterintuitive results. Math has, for example, shown that for diseases such as chicken pox (varicella) isolation and quarantine may be the wrong strategy. Trying to segregate children with and without the disease will undoubtedly lead to numerous missed schooldays and workdays to avoid what is widely considered to be a relatively mild disease. Perhaps more significant, though, mathematical models prove that quarantining healthy children can defer their catching the disease until they are older, when the complications from chicken pox can be far more serious. Such counterintuitive effects of a seemingly sensible strategy such as isolation might never have been fully understood if not for mathematical interventions.
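Here's how cheap that kind of experiment is. A minimal discrete-time S-I-R sketch (my own illustration with made-up parameters, not the book's model) lets you dial the pre-vaccinated fraction up and down and read off the epidemic's final size:

```python
# Toy SIR model: fractions of the population move from susceptible (s)
# to infective (i) to removed. Parameters are illustrative assumptions.

def epidemic_final_size(r0, vaccinated, days=1000):
    recovery = 0.1               # ten-day infectious period (assumed)
    beta = r0 * recovery         # transmission rate implied by R0
    s = 1.0 - vaccinated - 1e-4  # susceptible fraction
    i = 1e-4                     # small infective seed
    for _ in range(days):
        new_cases = beta * s * i
        s -= new_cases
        i += new_cases - recovery * i
    return 1.0 - vaccinated - s  # fraction ever infected

no_vaccine = epidemic_final_size(2.0, 0.0)    # large outbreak
with_vaccine = epidemic_final_size(2.0, 0.6)  # above 1 - 1/R0 = 50%: fizzles
```

Rerunning with any quarantine or vaccination level costs nothing, which is exactly the "test scenarios infeasible in the real world" point the passage makes.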
Childhood diseases show these typical periodic outbreak patterns because the effective reproduction number varies over time with the population of susceptible individuals. After a big outbreak has affected large swathes of the unprotected-child population, a disease such as scarlet fever doesn’t just disappear. It persists in the population, but with an effective reproduction number that hovers around 1. The disease only just sustains itself. As time goes by, the population ages, and new, unprotected children are born. As the unguarded fraction of the population grows, the effective reproduction number becomes higher and higher, making new outbreaks increasingly likely. When an outbreak finally takes off, the victims to whom the disease spreads are usually at the unprotected younger end of the demographic, because most of the older populace are already immune through experiencing the disease. Those people who didn’t get the disease as children are typically afforded some protection because they fraternize with fewer of the infected age group.
The most effective way to reduce the size of the susceptible population is through vaccination. The question of how many to vaccinate to achieve herd immunity relies on reducing the effective reproduction number to below 1.
In general we can only afford to leave 1/R0 of the population unvaccinated and must protect the remaining fraction, (1 − 1/R0) of the population, if we are to achieve the herd immunity threshold.
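The threshold formula above is a one-liner, and it shows why highly infectious diseases demand near-universal coverage:

```python
def herd_immunity_threshold(r0):
    """Fraction to vaccinate to push the effective reproduction
    number below 1: the passage's 1 - 1/R0."""
    return 1.0 - 1.0 / r0

herd_immunity_threshold(2.0)   # 0.5: vaccinate half the population
herd_immunity_threshold(15.0)  # ~0.93: a measles-like R0 needs ~93 percent
```

(R0 around 12-18 is the commonly cited range for measles, which is why its coverage targets sit in the mid-90s.)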