  We could take another approach, however. We could try to understand the effect of the molecules bouncing into each other without studying the minutiae of the interactions between them. If we look at all the particles together, we will be able to see them mix together until—after a certain period of time—the paint spreads evenly throughout the pool. Without knowing anything about the cause, which is too complex to grasp, we can still comment on the eventual effect.

  The same can be said for roulette. The trajectory of the ball depends on a number of factors, which we might not be able to grasp simply by glancing at a spinning roulette wheel. Much like for the individual water molecules, we cannot make predictions about a single spin if we do not understand the complex causes behind the ball’s trajectory. But, as Poincaré suggested, we don’t necessarily have to know what causes the ball to land where it does. Instead, we can simply watch a large number of spins and see what happens.

  That is exactly what Albert Hibbs and Roy Walford did in 1947. Hibbs was studying for a math degree at the time, and his friend Walford was a medical student. Taking time off from their studies at the University of Chicago, the pair went to Reno to see whether roulette tables were really as random as casinos thought.

  Most roulette tables have kept with the original French design of thirty-eight pockets, with numbers 1 to 36, alternately colored black and red, plus 0 and 00, colored green. The zeros tip the game in the casinos’ favor. If we placed a series of one-dollar bets on our favorite number, we could expect to win on average once in every thirty-eight attempts, in which case the casino would pay thirty-six dollars. Over the course of thirty-eight spins, we would therefore put down thirty-eight dollars but would only make thirty-six dollars on average. That translates into a loss of two dollars, or about five cents per spin, over the thirty-eight spins.
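
  To make that arithmetic concrete, here is a minimal sketch in Python, assuming the standard American layout described above: thirty-eight pockets and a payout of thirty-five to one plus the returned stake.

```python
# House edge of a $1 straight-up bet on a 38-pocket wheel.
pockets = 38
returned_if_win = 36          # $35 in winnings plus the original $1 stake
p_win = 1 / pockets

expected_return = p_win * returned_if_win      # about $0.947 back per $1 bet
expected_loss = 1 - expected_return            # about $0.053, roughly five cents

print(f"Expected return per $1 bet: ${expected_return:.4f}")
print(f"Expected loss per spin:     ${expected_loss:.4f}")
```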

  The house edge relies on there being an equal chance of the roulette wheel producing each number. But, like any machine, a roulette table can have imperfections or can gradually wear down with use. Hibbs and Walford were on the hunt for such tables, which might not have produced an even distribution of numbers. If one number came up more often than the others, it could work to their advantage. They watched spin after spin, hoping to spot something odd. Which raises the question: What do we actually mean by “odd”?

  WHILE POINCARÉ WAS IN France thinking about the origins of randomness, on the other side of the English Channel Karl Pearson was spending his summer holiday flipping coins. By the time the vacation was over, the mathematician had flipped a shilling twenty-five thousand times, diligently recording the results of each throw. Most of the work was done outside, which Pearson said “gave me, I have little doubt, a bad reputation in the neighbourhood where I was staying.” As well as experimenting with shillings, Pearson got a colleague to flip a penny more than eight thousand times and repeatedly pull raffle tickets from a bag.

  To understand randomness, Pearson believed it was important to collect as much data as possible. As he put it, we have “no absolute knowledge of natural phenomena,” just “knowledge of our sensations.” And Pearson didn’t stop at coin tosses and raffle draws. In search of more data, he turned his attention to the roulette tables of Monte Carlo.

  Like Poincaré, Pearson was something of a polymath. In addition to his interest in chance, he wrote plays and poetry and studied physics and philosophy. English by birth, Pearson had traveled widely. He was particularly keen on German culture: when University of Heidelberg admin staff accidentally recorded his name as Karl instead of Carl, he kept the new spelling.

  Unfortunately, his planned trip to Monte Carlo did not look promising. He knew it would be near impossible to obtain funding for a “research visit” to the casinos of the French Riviera. But perhaps he didn’t need to watch the tables. It turned out that the newspaper Le Monaco published a record of roulette outcomes every week. Pearson decided to focus on results from a four-week period during the summer of 1892. First he looked at the proportions of red and black outcomes. If a roulette wheel were spun an infinite number of times—and the zeros were ignored—he would have expected the overall ratio of red to black to approach 50/50.

  Out of the sixteen thousand or so spins published by Le Monaco, 50.15 percent came up red. To work out whether the difference was down to chance, Pearson calculated how far the observed proportion deviated from 50 percent. Then he compared this with the variation that would be expected if the wheels were random. He found that a 0.15 percent difference wasn’t particularly unusual, and it certainly didn’t give him a reason to doubt the randomness of the wheels.
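
  To get a feel for Pearson’s reasoning, here is a rough sketch in Python. The inputs are assumptions based on the figures above (roughly sixteen thousand red-or-black outcomes, 50.15 percent of them red); it compares the observed deviation with the spread a fair wheel would produce.

```python
import math

n = 16_000                         # assumed number of red/black outcomes
observed_red = round(0.5015 * n)   # 50.15 percent red

# Under a fair wheel (zeros ignored), the red count is binomial with
# mean n/2 and standard deviation sqrt(n)/2.
mean = n / 2
std = math.sqrt(n) / 2

z = (observed_red - mean) / std
print(f"observed {observed_red} reds, expected {mean:.0f}, z-score {z:.2f}")
# A z-score well under 2 is unremarkable, which is what Pearson concluded.
```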

  Red and black might have come up a similar number of times, but Pearson wanted to test other things, too. Next, he looked at how often the same color came up several times in a row. Gamblers can become obsessed with such runs of luck. Take the night of August 18, 1913, when a roulette ball in one of Monte Carlo’s casinos landed on black over a dozen times in a row. Gamblers crowded around the table to see what would happen next. Surely another black couldn’t appear? As the wheel spun, people piled their money onto red. The ball landed on black again. More money went on red. Another black appeared. And another. And another. In total, the ball bounced into a black pocket twenty-six times in a row. If the wheel had been random, each spin would have been completely unrelated to the others. A sequence of blacks wouldn’t have made a red more likely. Yet the gamblers that evening believed that it would. This psychological bias has since become known as the “Monte Carlo fallacy.”
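
  A quick calculation shows just how unlikely that evening was, and why it still said nothing about the next spin. The sketch below assumes a single-zero wheel, on which each independent spin lands on black with probability 18/37.

```python
p_black = 18 / 37          # assumed single-zero wheel

# Probability of 26 blacks in a row on a fair, independent wheel.
p_run = p_black ** 26
print(f"Chance of 26 blacks in a row: about 1 in {1 / p_run:,.0f}")

# The fallacy: no matter how long the run, the next spin is the same bet.
print(f"Chance that the next spin is black: {p_black:.3f}")
```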

  When Pearson compared the length of runs of different colors with the frequencies that he’d expect if the wheels were random, something looked wrong. Runs of two or three of the same color were scarcer than they should have been. And runs lasting just one spin—say, a black sandwiched between two reds—were far too common. Pearson calculated the probability of observing an outcome at least as extreme as this one, assuming that the roulette wheel was truly random. This probability, which he dubbed the p value, was tiny. So small, in fact, that Pearson said that even if he’d been watching the Monte Carlo tables since the start of Earth’s history, he would not have expected to see a result that extreme. He believed it was conclusive evidence that roulette was not a game of chance.
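
  The run-length comparison is easy to sketch. For a truly random sequence of reds and blacks (zeros ignored), the chance that a run lasts exactly k spins is (1/2)^k, so about half of all runs should be one spin long, a quarter two spins long, and so on. The snippet below checks this against simulated spins rather than the Le Monaco figures.

```python
import random
from collections import Counter

random.seed(1)
spins = [random.choice("RB") for _ in range(16_000)]   # simulated fair spins

# Break the sequence into runs of consecutive identical colors.
run_lengths = []
current = 1
for prev, cur in zip(spins, spins[1:]):
    if cur == prev:
        current += 1
    else:
        run_lengths.append(current)
        current = 1
run_lengths.append(current)

counts = Counter(run_lengths)
total = len(run_lengths)
for k in (1, 2, 3, 4):
    print(f"runs of length {k}: observed {counts[k] / total:.3f}, "
          f"expected {0.5 ** k:.3f}")
```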

  The discovery infuriated him. He’d hoped that roulette wheels would be a good source of random data and was angry that his giant casino-shaped laboratory was generating unreliable results. “The man of science may proudly predict the results of tossing halfpence,” he said, “but the Monte Carlo roulette confounds his theories and mocks at his laws.” With the roulette wheels clearly of little use to his research, Pearson suggested that the casinos be closed down and their assets donated to science. However, it later emerged that Pearson’s odd results weren’t really due to faulty wheels. Although Le Monaco paid reporters to watch the roulette tables and record the outcomes, the reporters had decided it was easier just to make up the numbers.

  Unlike the idle journalists, Hibbs and Walford actually watched the roulette wheels when they visited Reno. They discovered that one in four wheels had a bias of some sort. One wheel was especially skewed, so betting on it caused the pair’s initial one-hundred-dollar stake to grow rapidly. Reports of their final profits differ, but whatever they made, it was enough to buy a yacht and sail it around the Caribbean for a year.

  There are plenty of stories about gamblers who’ve succeeded using a similar approach. Many have told the tale of the Victorian engineer Joseph Jagger, who made a fortune exploiting a biased wheel in Monte Carlo, and of the Argentine syndicate that cleaned up in government-owned casinos in the early 1950s. We might think that, thanks to Pearson’s test, spotting a vulnerable wheel is fairly straightforward. But finding a biased roulette wheel isn’t the same as finding a profitable one.

  In 1948, a statistician named Allan Wilson recorded the spins of a roulette wheel for twenty-four hours a day over four weeks. When he used Pearson’s test to find out whether each number had the same chance of appearing, it was clear the wheel was biased. Yet it wasn’t clear how he should bet. When Wilson published his data, he issued a challenge to his gambling-inclined readers. “On what statistical basis,” he asked, “should you decide to play a given roulette number?”

  It took thirty-five years for a solution to emerge. Mathematician Stewart Ethier eventually realized that the trick wasn’t to test for a nonrandom wheel but to test for one that would be favorable when betting. Even if we were to look at a huge number of spins and find substantial evidence that one of the thirty-eight numbers came up more often than others, it might not be enough to make a profit. The number would have to appear on average at least once every thirty-six spins; otherwise, we would still expect to lose out to the casino.
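
  Ethier’s published test is more involved, but the break-even logic can be sketched with a simple one-sided check: given how often a number has come up, is there evidence that its true frequency beats one in thirty-six? The function and figures below are hypothetical, using a normal approximation to the binomial.

```python
import math

def looks_favorable(hits, spins, threshold=1 / 36, z_cutoff=1.645):
    """Rough one-sided test: does the number beat the break-even rate of 1/36?"""
    expected = spins * threshold
    std = math.sqrt(spins * threshold * (1 - threshold))
    z = (hits - expected) / std
    return z > z_cutoff

# Hypothetical example: a number that came up 130 times in 3,800 spins.
print(looks_favorable(130, 3_800))   # True for this made-up data
```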

  The most common number in Wilson’s roulette data was nineteen, but Ethier’s test found no evidence that betting on it would be profitable over time. Although it was clear the wheel wasn’t random, there didn’t seem to be any favorable numbers. Ethier was aware that his method had probably arrived too late for most gamblers: in the years since Hibbs and Walford had won big in Reno, biased wheels had gradually faded into extinction. But roulette did not remain unbeatable for long.

  WHEN WE ARE AT our deepest level of ignorance, with causes that are too complex to understand, the only thing we can do is look at a large number of events together and see whether any patterns emerge. As we’ve seen, this statistical approach can be successful if a roulette wheel is biased. Without knowing anything about the physics of a roulette spin, we can make predictions about what might come up.

  But what if there’s no bias or insufficient time to collect lots of data? The trio that won at the Ritz didn’t watch loads of spins, hoping to identify a biased table. They looked at the trajectory of the roulette ball as it traveled around the wheel. This meant escaping not just Poincaré’s third level of ignorance but his second one as well.

  This is no small feat. Even if we pick apart the physical processes that cause a roulette ball to follow the path it does, we cannot necessarily predict where it will land. Unlike paint molecules crashing into water, the causes are not too complex to grasp. Instead, the cause can be too small to spot: a tiny difference in the initial speed of the ball makes a big difference to where it finally settles. Poincaré argued that a difference in the starting state of a roulette ball—one so tiny it escapes our attention—can lead to an effect so large we cannot miss it, and then we say that the effect is down to chance.
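
  A toy model, nothing like the real physics of a wheel, is enough to show the effect. Suppose a ball decelerates at a constant rate; the total angle it covers before stopping then depends on the square of its launch speed, and the pocket is set by where that angle falls around the rim.

```python
import math

def final_pocket(v0, a=0.2, pockets=38):
    """Toy model: constant deceleration a, so the ball covers v0**2 / (2*a) radians."""
    total_angle = v0 ** 2 / (2 * a)
    position = total_angle % (2 * math.pi)          # where on the rim it stops
    return int(position / (2 * math.pi) * pockets)

# Two launch speeds differing by a tenth of a percent end in different pockets.
print(final_pocket(10.000))
print(final_pocket(10.010))
```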

  The problem, which is known as “sensitive dependence on initial conditions,” means that even if we collect detailed measurements about a process—whether a roulette spin or a tropical storm—a small oversight could have dramatic consequences. Seventy years before mathematician Edward Lorenz gave a talk asking “Does the flap of a butterfly’s wings in Brazil set off a tornado in Texas?” Poincaré had outlined the “butterfly effect.”

  Lorenz’s work, which grew into chaos theory, focused chiefly on prediction. He was motivated by a desire to make better forecasts about the weather and to find a way to see further into the future. Poincaré was interested in the opposite problem: How long does it take for a process to become random? In fact, does the path of a roulette ball ever become truly random?

  Poincaré was inspired by roulette, but he made his breakthrough by studying a much grander set of trajectories. During the nineteenth century, astronomers had sketched out the asteroids that lay scattered along the Zodiac. They’d found that these asteroids were pretty much uniformly distributed across the night sky. And Poincaré wanted to work out why this was the case.

  He knew that the asteroids must follow Kepler’s laws of motion and that it was impossible to know their initial speed. As Poincaré put it, “The Zodiac may be regarded as an immense roulette board on which the Creator has thrown a very great number of small balls.” To understand the pattern of the asteroids, Poincaré therefore decided to compare the total distance a hypothetical object travels with the number of times it rotates around a point.

  Imagine you unroll an incredibly long, and incredibly smooth, sheet of wallpaper. Laying the sheet flat, you take a marble and set it rolling along the paper. Then you set another going, followed by several more. Some marbles you set rolling quickly, others slowly. Because the wallpaper is smooth, the quick ones soon roll far into the distance, while the slow ones make their way along the sheet much more gradually.

  The marbles roll on and on, and after a while you take a snapshot of their current positions. To mark their locations, you make a little cut in the edge of the paper next to each one. Then you remove the marbles and roll the sheet back up. If you look at the edge of the roll, each cut will be equally likely to appear at any position around the circumference. This happens because the length of the sheet—and hence the distance the marbles can travel—is much longer than the diameter of the roll. A small change in the marbles’ overall distance has a big effect on where the cuts appear on the circumference. If you wait long enough, this sensitivity to initial conditions will mean that the locations of the cuts will appear random. Poincaré showed the same thing happens with asteroid orbits. Over time, they will end up evenly spread along the Zodiac.
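
  Poincaré’s argument can be mimicked in a few lines of code. The marbles below are launched at nearly identical speeds; each cut’s position is just the distance traveled modulo the roll’s circumference, and the longer the marbles roll, the more evenly those positions spread out. The specific speeds and times are, of course, made up.

```python
import random

random.seed(0)
circumference = 1.0
speeds = [random.uniform(1.00, 1.01) for _ in range(10_000)]   # nearly equal speeds

for t in (1, 10, 100, 10_000):
    cuts = [(v * t) % circumference for v in speeds]
    # Crude uniformity check: what share of cuts sit in the first tenth of the rim?
    share = sum(c < 0.1 for c in cuts) / len(cuts)
    print(f"time {t:>6}: share of cuts in first tenth of the rim = {share:.2f}")
# Early on the cuts are bunched together; given enough time, every position
# becomes equally likely.
```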

  To Poincaré, the Zodiac and the roulette table were merely two illustrations of the same idea. He suggested that after a large number of turns, a roulette ball’s finishing position would also be completely random. He pointed out that certain betting options would tumble into the realm of randomness sooner than others. Because roulette slots are alternately colored red and black, predicting which of the two will appear means calculating exactly where the ball will land, which becomes extremely difficult after even a turn or two. Other options, such as predicting which half of the wheel the ball lands in, are less sensitive to initial conditions, so it would take far more turns before the result becomes as good as random.

  Fortunately for gamblers, a roulette ball does not spin for an extremely long period of time (although there is an oft-repeated myth that mathematician Blaise Pascal invented roulette while trying to build a perpetual motion machine). As a result, gamblers can—in theory—avoid falling into Poincaré’s second level of ignorance by measuring the initial path of the roulette ball. They just need to work out what measurements to take.

  THE RITZ WASN’T THE first time a story of roulette-tracking technology emerged. Eight years after Hibbs and Walford had exploited that biased wheel in Reno, Edward Thorp sat in a common room at the University of California, Los Angeles, discussing get-rich-quick schemes with his fellow students. It was a glorious Sunday afternoon, and the group was debating how to beat roulette. When one of the others said that casino wheels were generally flawless, something clicked in Thorp’s mind. Thorp had just started a PhD in physics, and it occurred to him that beating a robust, well-maintained wheel wasn’t really a question of statistics. It was a physics problem. As Thorp put it, “The orbiting roulette ball suddenly seemed like a planet in its stately, precise and predictable path.”

  In 1955, Thorp got hold of a half-size roulette table and set to work analyzing the spins with a camera and stopwatch. He soon noticed that his particular wheel had so many flaws that it made prediction hopeless. But he persevered and studied the physics of the problem in any way he could. On one occasion, Thorp failed to come to the door when his in-laws arrived for dinner. They eventually found him inside rolling marbles along the kitchen floor in the midst of an experiment to find out how far each would travel.

  After completing his PhD, Thorp headed east to work at the Massachusetts Institute of Technology. There he met Claude Shannon, one of the university’s academic giants. Over the previous decade, Shannon had pioneered the field of “information theory,” which revolutionized how data are stored and communicated; the work would later help pave the way for space missions, mobile phones, and the Internet.

  Thorp told Shannon about the roulette predictions, and the professor suggested they continue the work at his house a few miles outside the city. When Thorp entered Shannon’s basement, it became clear quite how much Shannon liked gadgets. The room was an inventor’s playground. Shannon must have had $100,000 worth of motors, pulleys, switches, and gears down there. He even had a pair of huge polystyrene “shoes” that allowed him to take strolls on the water of a nearby lake, much to his neighbors’ alarm. Before long, Thorp and Shannon had added a $1,500 industry-standard roulette table to the gadget collection.

  MOST ROULETTE WHEELS ARE operated in a way that allows gamblers to collect information on the ball’s trajectory before they bet. After setting the center of the roulette wheel spinning counterclockwise, the croupier launches the ball in a clockwise direction, sending it circling around the wheel’s upper edge. Once the ball has looped around a few times, the croupier calls “no more bets” or—if casinos like their patter to have a hint of Gallic charm—“rien ne va plus.” Eventually, the ball hits one of the deflectors scattered around the edge of the wheel and drops into a pocket. Unfortunately for gamblers, the ball’s trajectory is what mathematicians call “nonlinear”: the input (its speed) is not directly proportional to the output (where it lands). In other words, Thorp and Shannon had ended up back in Poincaré’s third level of ignorance.

  Rather than trying to dig themselves out by deriving equations for the ball’s motion, they instead decided to rely on past observations. They ran experiments to see how long a ball traveling at a certain speed would remain on the track and used this information to make predictions. During a spin, they would time how long it took for the ball to travel once around the table and then compare that time with their previous results to estimate when it would hit a deflector.
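
  In spirit, the method boils down to a lookup: time one lap, then read off from earlier observations how long a ball circling at that rate tends to stay on the track. The calibration numbers below are entirely hypothetical, not Thorp and Shannon’s data; the sketch just interpolates between them.

```python
import bisect

# Hypothetical calibration: (seconds per lap, seconds until the ball drops).
# A faster ball (shorter lap time) has longer left before it falls off the track.
calibration = [
    (0.60, 7.5),
    (0.80, 5.0),
    (1.00, 3.0),
    (1.20, 1.5),
]

def seconds_until_drop(lap_time):
    """Linearly interpolate the remaining time from the calibration table."""
    lap_times = [t for t, _ in calibration]
    i = bisect.bisect_left(lap_times, lap_time)
    if i == 0:
        return calibration[0][1]
    if i == len(calibration):
        return calibration[-1][1]
    (t0, r0), (t1, r1) = calibration[i - 1], calibration[i]
    return r0 + (lap_time - t0) / (t1 - t0) * (r1 - r0)

# A measured lap of 0.9 seconds suggests about 4 seconds until the drop.
print(f"{seconds_until_drop(0.9):.1f} seconds until the ball leaves the track")
```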

  The calculations needed to be done at the roulette table, so at the end of 1960, Thorp and Shannon built the world’s first wearable computer and took it to Vegas. They tested it only once, because the wires were unreliable and needed frequent repairs. Even so, it seemed like the computer could be a successful tool. Because the system handed gamblers an advantage, Shannon thought casinos might abandon roulette once word of the research got out. Secrecy was therefore of the utmost importance. As Thorp recalled, “He mentioned that social network theorists studying the spread of rumors claimed that two people chosen at random in, say, the United States are usually linked by three or fewer acquaintances, or ‘three degrees of separation.’” The idea of “six degrees of separation” would eventually creep into popular culture, thanks to a highly publicized 1967 experiment by sociologist Stanley Milgram. In the study, participants were asked to help a letter reach a target recipient by sending it to whichever of their acquaintances they thought was most likely to know the target. On average, the letter passed through the hands of six people before eventually reaching its destination, and the six degrees phenomenon was born. Yet subsequent research has shown that Shannon’s suggestion of three degrees of separation was probably closer to the mark. In 2012, researchers analyzing Facebook connections—which are a fairly good proxy for real-life acquaintances—found that there are, on average, 3.74 degrees of separation between any two people. Evidently, Shannon’s fears were well founded.