
Maths on the Back of an Envelope
Finally, the returning officer was able to confirm the result: Labour’s Emma Dent Coad had defeated Victoria Borwick of the Conservatives.
The margin, however, was tiny. Coad won by just 20 votes, with 16,333 to Borwick’s 16,313.
You might expect that if there is one number of which we can be certain, down to the very last digit, it is the number we get when we have counted something.
Yet the truth is that even something as basic as counting the number of votes is prone to error. The person doing the counting might inadvertently pick up two voting slips that are stuck together. Or when they are getting tired, they might make a slip and count 28, 29, 40, 41 … Or they might reject a voting slip that another counter would have accepted, because they reckon that marks have been made against more than one candidate.
As a rule of thumb, some election officials reckon that manual counts can only be relied on within a margin of about 1 in 5,000 (or 0.02%), so with a vote like the one in Kensington, the result of one count might vary by as many as 10 votes when you do a recount.6
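To see where that figure of 10 comes from, here is a minimal back-of-envelope check in Python (the total number of ballots is an assumed round figure for illustration, not the official Kensington count):

```python
# A rough sketch of the 1-in-5,000 rule of thumb. The ballot total is
# an assumption for illustration, not the official Kensington figure.
error_rate = 1 / 5_000           # manual counts reliable to about 0.02%
ballots_counted = 45_000         # assumed total ballots cast in the seat

margin = error_rate * ballots_counted
print(f"one count might differ from another by around {margin:.0f} votes")
# -> one count might differ from another by around 9 votes
```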
And while each recount will typically produce a slightly different result, there is no guarantee which of these counts is actually the correct figure – if there is a correct figure at all. (In the famously tight US Election of 2000, the result in Florida came down to a ruling on whether voting cards that hadn’t been fully punched through, and had a hanging ‘chad’, counted as legitimate votes or not.)
Recounting typically stops once it is clear that the error in the count isn’t big enough to affect the result, so the tighter the result, the more recounts there will be. Two UK General Election counts have gone to seven recounts, both of them in the 1960s, when the final majority was below 10.
All this shows that when it is announced that a candidate such as Coad has received 16,333 votes, it should really be expressed as something vaguer: ‘Almost certainly in the range 16,328 to 16,338’ (or in shorthand, 16,333 ± 5).
If we can’t even trust something as easy to nail down as the number of votes made on physical slips of paper, what hope is there for accurately counting other things that are more fluid?
In 2018, the Carolinas in the USA were hit by Hurricane Florence, a massive storm that deposited as much as 50 inches of rain in some places. Amid the chaos, a vast number of homes lost power for several days. On 18 September, CNN gave this update:
511,000—this was the number of customers without power Monday morning—according to the US Energy Information Administration. Of those, 486,000 were in North Carolina, 15,000 in South Carolina and 15,000 in Virginia. By late Monday, however, the number [of customers without power] in North Carolina had dropped to 342,884.
For most of that short report, numbers were being quoted in thousands. But suddenly, at the end, we were told that the number without power had dropped to 342,884. Even if that number were true, it could only have been true for a period of a few seconds when the figures were collated, because the number of customers without power was changing constantly.
And even the 486,000 figure that was quoted for North Carolina on the Monday morning was a little suspicious – here we had a number being quoted to three significant figures, while the two other states were being quoted as 15,000 – both of which looked suspiciously like they’d been rounded to the nearest 5,000. This is confirmed if you add up the numbers: 15,000 + 15,000 + 486,000 = 516,000, which is 5,000 higher than the total of 511,000 quoted at the start of the story.
So when quoting these figures, there is a choice. They should either be given as a range (‘somewhere between 300,000 and 350,000’) or they should be brutally rounded to just a single significant figure and the qualifying word ‘roughly’ (so, ‘roughly 500,000’). This makes it clear that these are not definitive numbers that could be reproduced if there was a recount.
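If you wanted a computer to do that brutal rounding for you, a short sketch might look like this (round_to_sig_figs is a hypothetical helper written for this example, not a standard library function):

```python
import math

def round_to_sig_figs(n, figs=1):
    """Round n to the given number of significant figures."""
    if n == 0:
        return 0
    magnitude = math.floor(math.log10(abs(n)))
    factor = 10 ** (magnitude - figs + 1)
    return round(n / factor) * factor

print(round_to_sig_figs(511_000))   # -> 500000: 'roughly 500,000'
print(round_to_sig_figs(342_884))   # -> 300000: 'roughly 300,000'
```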
And, indeed, there are times when even saying ‘roughly’ isn’t enough.
Every month, the Office for National Statistics publishes the latest UK unemployment figures. Of course this is always newsworthy – a move up or down in unemployment is a good indicator of how the economy is doing, and everyone can relate to it. In September 2018, the Office announced that UK unemployment had fallen by 55,000 from the previous month to 1,360,000.
The problem, however, is that the figures published aren’t very reliable – and the ONS know this. When they announced those unemployment figures in 2018, they also added the detail that they had 95% confidence that this figure was correct to within 69,000. In other words, unemployment had fallen by 55,000 plus or minus 69,000. This means unemployment might actually have gone down by as many as 124,000, or it might have gone up by as many as 14,000. And, of course, if the latter turned out to be the correct figure, it would have been a completely different news story.
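The arithmetic is worth spelling out, using just the two figures the ONS published:

```python
# The two figures the ONS published: an estimated fall of 55,000,
# correct to within 69,000 at 95% confidence.
estimated_fall = 55_000
margin_of_error = 69_000

largest_fall = estimated_fall + margin_of_error    # 124,000
smallest_fall = estimated_fall - margin_of_error   # -14,000, i.e. a rise

print(f"somewhere between a fall of {largest_fall:,} "
      f"and a rise of {-smallest_fall:,}")
# -> somewhere between a fall of 124,000 and a rise of 14,000
```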
When the margin of error is larger than the figure you are quoting, there’s barely any justification in quoting the statistic at all, let alone to more than one significant figure. The best they can say is: ‘Unemployment probably fell slightly last month, perhaps by about 50,000.’
It’s another example of how a rounded, less precise figure often gives a fairer impression of the true situation than a precise figure would.
SENSITIVITY
We’ve already seen that statistics should really carry an indication of the margin of error we ought to attach to them.
An understanding of the margins of error is even more important when it comes to making predictions and forecasts.
Many of the numbers quoted in the news are predictions: house prices next year, tomorrow’s rainfall, the Chancellor’s forecast of economic growth, the number of people who will be travelling by train … all of these are numbers that come from somebody feeding figures into a spreadsheet (or something more advanced) that represents the situation mathematically, in what is usually known as a mathematical model of the future.
In any model like this, there will be ‘inputs’ (such as prices, number of customers) and ‘outputs’ that are the things you want to predict (profits, for example).
But sometimes a small change in one input variable can have a surprisingly large effect on the number that comes out at the far end.
The link between the price of something and the profit it makes is a good example of this.
Imagine that last year you ran a face-painting stall for three hours at a fair. You paid £50 for the hire of the stand, but the cost of materials was almost zero. You charged £5 to paint a face, and since a face takes 15 minutes to paint, you painted 12 faces in your three hours, and made:
£60 income – £50 costs = £10 profit.
There was a long queue last year and you were unable to meet the demand, so this year you increase your charge from £5 to £6. That’s an increase of 20%. Your revenue this year is £6 × 12 = £72, and your profit climbs to:
£72 income – £50 costs = £22 profit.
So, a 20% increase in price means that your profit has more than doubled. In other words, your profit is extremely sensitive to the price. Small percentage increases in the price lead to much larger percentage increases in the profit.
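A few lines of Python make that sensitivity explicit, using the stall’s own figures:

```python
def profit(price_per_face, faces_painted=12, stand_hire=50):
    """Profit from the face-painting stall, in pounds."""
    return price_per_face * faces_painted - stand_hire

last_year = profit(5)     # £60 - £50 = £10
this_year = profit(6)     # £72 - £50 = £22

price_rise = (6 - 5) / 5                           # 0.20, i.e. 20%
profit_rise = (this_year - last_year) / last_year  # 1.20, i.e. 120%
print(f"{price_rise:.0%} price rise -> {profit_rise:.0%} profit rise")
# -> 20% price rise -> 120% profit rise
```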
It’s a simplistic example, but it shows that increasing one thing by 10% doesn’t mean that everything else increases by 10% as a result.7
EXPONENTIAL GROWTH
There are some situations when a small change in the value assigned to one of the ‘inputs’ has an effect that grows dramatically as time elapses.
Take chickenpox, for example. It’s an unpleasant disease but rarely a dangerous one so long as you get it when you are young. Most children catch chickenpox at some point unless they have been vaccinated against it, because it is highly infectious. A child infected with chickenpox might typically pass it on to 10 other children during the contagious phase, and those newly infected children might themselves infect 10 more children, meaning there are now 100 cases. If those hundred infected children pass it on to 10 children each, within weeks the original child has infected 1,000 others.
In their early stages, infections spread ‘exponentially’. There is some sophisticated maths that is used to model this, but to illustrate the point let’s pretend that in its early stages, chickenpox just spreads in discrete batches of 10 infections passed on at the end of each week. In other words:
N = 10^T,
where N is the number of people infected and T is the number of infection periods (weeks) so far.
After one week: N = 10^1 = 10.
After two weeks: N = 10^2 = 100.
After three weeks: N = 10^3 = 1,000,
and so on.
What if we increase the rate of infection by 20%, so that each child infects 12 others instead of 10, and N = 12^T? (Such an increase might happen if children are in bigger classes in school or have more playdates, for example.)
After one week, the number of children infected is 12 rather than 10, just a 20% increase. However, after three weeks, N = 12^3 = 1,728, which is heading towards double the 1,000 we had with a rate of 10. And this margin continues to grow as time goes on.
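A short loop over the same toy model shows the gap widening week by week:

```python
# Toy model from above: N = r**T, where r is the number of children
# each infected child passes the disease on to, and T is weeks elapsed.
for weeks in range(1, 6):
    slower, faster = 10 ** weeks, 12 ** weeks
    print(f"week {weeks}: rate 10 -> {slower:>7,}   "
          f"rate 12 -> {faster:>7,}   ratio {faster / slower:.2f}")
# By week 5 a rate just 20% higher gives 2.49 times as many infections.
```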
CLIMATE CHANGE AND COMPLEXITY
Sometimes the relationship between the numbers you feed into a model and the forecasts that come out is not so direct. There are many situations where the factors involved are interconnected and extremely complex.
Climate change is perhaps the most important of these. Across the world, there are scientists attempting to model the impact that rising temperatures will have on sea levels, climate, harvests and animal populations. There is an overwhelming consensus that (unless human behaviour changes) global temperatures will rise, but the mathematical models produce a wide range of possible outcomes depending on how you set the assumptions. Despite overall warming, winters in some countries might become colder. Harvests may increase or decrease. The overall impact could be relatively benign or catastrophic. We can guess, we can use our judgement, but we can’t be certain.
In 1952, the science-fiction author Ray Bradbury wrote a short story called ‘A Sound of Thunder’, in which a time-traveller transported back to the time of the dinosaurs accidentally kills a tiny butterfly, and this apparently innocuous incident has knock-on effects that turn out to have changed the modern world they return to. A couple of decades later, the mathematician Edward Lorenz is thought to have been referencing this story when he coined the phrase ‘the butterfly effect’ to describe the unpredictable and potentially massive impact that small changes in the starting situation can have on what follows.
These butterfly effects are everywhere, and they make confident long-term predictions of any kind of climate change (including political and economic climate) extremely difficult.
MAD COWS AND MAD FORECASTS
In 1995, Stephen Churchill, a 19-year-old from Wiltshire, became the first person to die from Variant Creutzfeldt–Jakob disease (or vCJD). This horrific illness, a rapidly progressing degeneration of the brain, was related to BSE, more commonly known as ‘Mad Cow Disease’, and caused by eating contaminated beef.
As more victims of vCJD emerged over the following months, health scientists began to make forecasts about how big this epidemic would become. At a minimum, they reckoned, there would be 100 victims. But at worst, they predicted, as many as 500,000 might die – a number of truly nightmare proportions.8
Nearly 25 years on, we are now able to see how the forecasters did. The good news is that their prediction was right – the number of victims was indeed between 100 and 500,000. But this is hardly surprising, given how far apart the goalposts were.
The actual number believed to have died from vCJD is about 250, towards the very bottom end of the forecasts, and about 2,000 times smaller than the upper bound of the prediction.
But why was the predicted range so massive? The reason is that, when the disease was first identified, scientists could make a reasonable guess as to how many people might have eaten contaminated burgers, but they had no idea what proportion of the public was vulnerable to the damaged proteins (known as prions). Nor did they know how long the incubation period was. The worst-case scenario was that the disease would ultimately affect everyone exposed to it – and that we hadn’t seen the full effect because it might be 10 years before the first symptoms appeared. The reality turned out to be that most people were resistant, even if they were carrying the damaged prion.
It’s an interesting case study in how statistical forecasts are only as good as their weakest input. You might know certain details precisely (such as the number of cows diagnosed with BSE), but if the rate of infection could be anywhere between 0.01% and 100%, your predictions will be no more accurate than that factor of 10,000.
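To see just how far apart that places the goalposts, here is a sketch with an invented exposure figure (the 500,000 exposed is purely illustrative):

```python
# Invented figure for illustration: suppose 500,000 people had eaten
# contaminated beef. With the infection rate anywhere between 0.01%
# and 100%, the forecast spans a factor of 10,000.
people_exposed = 500_000
lowest_rate, highest_rate = 0.0001, 1.0

print(f"between {people_exposed * lowest_rate:,.0f} "
      f"and {people_exposed * highest_rate:,.0f} victims")
# -> between 50 and 500,000 victims
```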
At least nobody (that I’m aware of) attempted to predict a number of victims to more than one significant figure. Even a prediction of ‘370,000’ would have implied a degree of accuracy that was wholly unjustified by the data.
DOES THIS NUMBER MAKE SENSE?
One of the most important skills that back-of-envelope maths can give you is the ability to answer the question: ‘Does this number make sense?’ In this case, the back of the envelope and the calculator can operate in harmony: the calculator does the donkey work in producing a numerical answer, and the back of the envelope is used to check that the number makes logical sense, and wasn’t the result of, say, a slip of the finger and pressing the wrong button.
We are inundated with numbers all the time; in particular, financial calculations, offers, and statistics that are being used to influence our opinions or decisions. The assumption is that we will take these figures at face value, and to a large extent we have to. A politician arguing the case for closing a hospital isn’t going to pause while a journalist works through the numbers, though I would be pleased if more journalists were prepared to do this.
Often it is only after the event that the spurious nature of a statistic emerges.
In 2010, the Conservative Party were in opposition, and wanted to highlight social inequalities that had been created by the policies of the Labour government then in power. In a report called ‘Labour’s Two Nations’, they claimed that in Britain’s most deprived areas ‘54% of girls are likely to fall pregnant before the age of 18’. Perhaps this figure was allowed to slip through because the Conservative policy makers wanted it to be true: if half of the girls on these housing estates really were getting pregnant before leaving school, it painted what they felt was a shocking picture of social breakdown in inner-city Britain.
The truth turned out to be far less dramatic. Somebody had stuck the decimal point in the wrong place. Elsewhere in the report, the correct statistic was quoted: 54.32 out of every 1,000 women aged 15 to 17 in the 10 most deprived areas had fallen pregnant. Fifty-four out of 1,000 is 5.4%, not 54%. Perhaps it was the spurious precision of the ‘54.32’ figure that had confused the report writers.
Other questionable numbers require a little more thought. The National Survey of Sexual Attitudes and Lifestyles has been published roughly every 10 years since 1990. It gives an overview of sexual behaviour across Britain.
One statistic that often draws attention when the report is published is the number of sexual partners that the average man and woman has had in their lifetime.
The figures in the first three reports were as follows:
Average (mean) number of opposite-sex partners in lifetime (ages 16–44)

            Men     Women
1990–91     8.6     3.7
1999–2001   12.6    6.5
2010–2012   11.7    7.7

The figures appear quite revealing, with a surge in the number of partners during the 1990s, while the early 2000s saw a slight decline for men and an increase for women.
But there is something odd about these numbers. Whenever sexual activity happens between two opposite-sex partners, the overall ‘tally’ for men and the overall ‘tally’ for women each increase by one. Some people will be far more promiscuous than others, but across the whole population, it is an incontrovertible fact of life that the total number of female partners reported by men must equal the total number of male partners reported by women. In other words, the two averages ought to be the same.
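You can check this bookkeeping fact with a toy simulation (the group size and number of partnerships are invented for illustration):

```python
import random

# Toy simulation: each new opposite-sex partnership adds one to some
# man's tally and one to some woman's tally, so with equal-sized
# groups the two means cannot differ. All numbers here are invented.
random.seed(1)
group_size = 1_000
men = [0] * group_size
women = [0] * group_size

for _ in range(5_000):                      # 5,000 random partnerships
    men[random.randrange(group_size)] += 1
    women[random.randrange(group_size)] += 1

print(sum(men) / group_size, sum(women) / group_size)   # -> 5.0 5.0
```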
There are ways you can attempt to explain the difference. For example, perhaps the survey is not truly representative – maybe there is a large group of men who have sex with a small group of women who are not covered by the survey.
However, there is a more likely explanation, which is that somebody is lying. The researchers are relying on individuals’ honesty – and memory – to get these statistics, with no way of checking if the numbers are right.
What appears to be happening is that either men are exaggerating, or women are understating, their experience. Possibly both. Or it might just be that the experience was more memorable for the men than for the women. But whatever the explanation, we have some authentic-looking numbers here that under scrutiny don’t add up.
THE CASE FOR BACK-OF-ENVELOPE THINKING
I hope this opening section has demonstrated why, in many situations, quoting a number to more than one or two significant figures is misleading, and can even lull us into a false sense of certainty. Why? Because a number quoted to that precision implies that it is accurate; in other words, that the ‘true’ answer will be very close to that. Calculators and spreadsheets have taken much of the pain out of calculation, but they have also created the illusion that any numerical problem has an answer that can be quoted to several decimal places.
There are, of course, situations where it is important to know a number to more than three significant figures. Here are a few of them:
In financial accounts and reports. If a company has made a profit of £2,407,884, there will be some people for whom that £884 at the end is important.
When trying to detect small changes. Astronomers looking to see if a remote object in the sky has shifted in orbit might find useful information in the tenth significant figure, or even more.
Similarly, at the high end of physics, there are quantities linked to the atom that are known to at least 10 significant figures.
For precision measurements such as those involved in GPS, which identifies the location of your car or your destination, and where the fifth significant figure might mean the difference between pulling up outside your friend’s house and driving into a pond.
But take a look at the numbers quoted in the news – they might be in a government announcement, a sports report or a business forecast – and you’ll find remarkably few numbers where there is any value in knowing them to four or more significant figures.
And if we’re mainly dealing with numbers with so few significant figures, the calculations we need to make to find those numbers are going to be simpler. So simple, indeed, that we ought to be able to do most of them on the back of an envelope or even, with practice, in our heads.