The chilling significance of AlphaGo

What artificial supremacy in the game of Go portends for the future

In March, a computer named AlphaGo played the human world champion in a five-game match of Go, the ancient board game often described as the ‘Far East cousin’ of chess. That AlphaGo triumphed provoked curiosity and bemusement in the public — but is seen as hugely significant in the artificial intelligence and computer science communities. Computer engineer Sheldon Fernandez explains why.

April, 2016


The ancient game of Go

Lee Sedol grinned goofily, a glowing mix of euphoria and exhaustion.

“It’s just one win, but I’ve never been congratulated so much for winning a single game in my life.”

Members of the South Korean and international press corps whistled and applauded wildly. Out of admiration, pity, or Homo sapiens solidarity, no one was quite sure, but there was general agreement on one point: Lee had, at the very least, salvaged a modicum of pride against his silicon opponent, a computer called AlphaGo.

Prior to the contest, the 33-year-old Lee estimated he’d prevail 4-1 in the five-game match against the machine, a prognostication that appeared foolish after AlphaGo won the first three encounters. Lee won the fourth game. The human world champion had temporarily stopped the bleeding (to employ an eminently human analogy), but the computer’s triumph the following day sealed an ironic 1-4 scoreline that few would’ve predicted weeks before.

The most obvious parallel to the Go showdown in South Korea is the famous chess match in 1997 between Garry Kasparov, the human world champion at the time, and an IBM supercomputer named Deep Blue. For some 40 years the game of chess – strategic, subtle and enigmatic – had been the focus of Artificial Intelligence (AI) research, the branch of computer science that attempts to imbue machines with sentient, human-like qualities. Creating a world-class chess-playing machine, it was argued, would demonstrate conclusively that computers could think, could intuit, and might one day feel and emote like their creators.

Despite the magnitude of the 1997 achievement, Deep Blue’s narrow victory over Kasparov was ironic, in two ways.   First, it was a tad premature, as most observers agree that, at least at the time, Kasparov was objectively the stronger player and had simply ‘psyched’ himself out. (Today, a ten dollar smartphone app would trounce any Grandmaster on the planet.)

Second, and more important, Deep Blue’s technical design was highly specialized and non-transferable outside of the game of chess. Though it might be the world’s supreme chess player, Deep Blue had no notion of its achievement in the grand scheme of things or, to use the jargon of AI, meta-knowledge: an awareness of the world and, indeed, of itself. As Kasparov remarked at the time, the machine “succeeded in turning quantity into quality” not through intelligence, but brute force, analyzing millions of moves per second by means of its sophisticated hardware.

The success of AlphaGo is different, and radically so. Not because of the rapidity of the achievement, remarkable as it was: a sudden and unexpected spike in the strength of Go-playing machines, unlike the linear progression of their chess counterparts. Nor because of the difficulty of Go itself, a game so fantastically complex that scientists hadn’t anticipated a breakthrough for decades. Rather, it was the way AlphaGo’s creators approached and attacked the problem, using a bevy of modern techniques that may represent the first forerunners of genuine thinking machines.


The game of Go, originally known as weiqi, can be traced in ancient China through written records as early as 400 BC. Two players take turns placing stones on the intersections of a grid, black first, followed by white. Though stones cannot move from their original points, they can be removed if captured, which is accomplished by cutting off their liberties (encircling a stone on all four sides). The purpose of Go is to surround a larger area of the board than one’s opponent by the game’s conclusion.

What makes Go so challenging from an AI standpoint is the raw number of moves a player can choose from throughout the game, what is known in computer science as the branching factor. The mathematics are daunting: to begin, black has a choice of 361 possible moves, one at every intersecting point on the 19×19 grid. White thus has 360 replies, followed by 359 counters from black, and so on. After only four turns, a total of 16,749,374,760 board positions are possible. After 24 turns, the count exceeds the number of atoms in the sun. After 32, it surpasses the number of atoms in the universe.
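The arithmetic is easy to reproduce. The sketch below is a simplification – it treats every open intersection as a legal move and ignores captures, so its totals differ slightly from published counts that respect the full rules – but it conveys the scale of the explosion:

```python
# Naive branching-factor arithmetic for the 19x19 Go board.
# Simplifying assumption: every open intersection is a legal move
# and no stone is ever captured, so each turn removes one option.

POINTS = 19 * 19  # 361 intersections

def naive_sequences(turns):
    """Count distinct move sequences after `turns` moves."""
    total = 1
    for t in range(turns):
        total *= POINTS - t  # one fewer open point each turn
    return total

print(naive_sequences(4))    # 361 * 360 * 359 * 358 = 16,702,719,120
print(361 ** 24 > 10 ** 57)  # True: exceeds the atoms in the sun
print(361 ** 32 > 10 ** 80)  # True: exceeds the atoms in the universe
```

Whether the exact totals come out a few billion higher or lower under the real rules, the point stands: the game tree outgrows any conceivable brute-force search within a few dozen moves.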

While the numbers also spiral out of control in chess, they do so at a rate slower than Go’s by roughly a factor of ten, making the game amenable to Deep Blue’s brute-force approach, in which powerful hardware is coupled with smart searching (‘pruning’ so as to focus on promising moves) such that quantity becomes quality.

As a game of Go can last beyond 200 moves, however, the permutations, even for a supercomputer, are monstrous and incalculable, rendering the game a complete enigma, as insoluble as the ancient challenge to “measure a pound of fire”1.

Yet measure a pound of fire the makers of AlphaGo did, and their techniques are instructive – and chilling.

Before you continue: please know that reader-supported Facts and Opinions is employee-owned and ad-free. We are on an honour system and survive only because readers like you pay at least 27 cents per story.  Contribute below, or find more details here. Thanks for your interest and support.


In a sense, the AI epitomized by AlphaGo is the antithesis of its chess counterpart.  In the early years, researchers believed that computers needed to emulate human thought patterns and nebulous concepts like intuition.

Jose Capablanca, the world chess champion in the 1920s, was once asked how many moves ahead he could calculate.  His magnificent rejoinder, “only one, but it’s always the right one”, captures perfectly the intangibilities of human genius that scientists have labored to replicate artificially.

Though unsuccessful in the realm of chess – Deep Blue and its successors examine billions of moves to determine the ‘right’ one – the field of AI has come full circle such that the ambitions of its pioneers now represent its most dominant strands of thought.  That is, the early aspirations of AI spurned in favor of Deep Blue’s specialization have been rekindled with AlphaGo.  The key concepts, ones you’ll hear a lot about in the coming years and which were exploited by the program, are Deep Learning and neural networks.

The story of these related concepts begins with a three-pound packet of tissue rightfully regarded as one of the most complex and awesome devices in the cosmos. A warm, wet biological construct refined through millions of years of evolution, the human brain contains approximately 100 billion neurons and nearly 100 trillion neural connections that thread the cells together.

Although the precise workings of the brain remain a mystery, the biological contours are clear: neurons, or nerve cells, connect to hundreds of other neurons via long fibers called axons; the connecting junctions are known as synapses. A complex electrochemical reaction allows signals to propagate between neurons and – in a manner not remotely understood by scientists – this frenetic firing coalesces into a neurological dance that provides the basis for conscious life as we know it.

For example, as you read this sentence and unpack its contents, at least a few million neurons in your brain will partake in the prodigious sequence of electrochemical impulses that enables your contemplation. The important point is that the nature of these impulses – their sequence, timing and pattern – is not random, but is rather connected to the underlying thought. More specifically, similar mental narratives activate similar neural patterns. Visualizing a pen and a pencil, for example, would animate the same family of neurons, whereas doing the same for a chair and a zebra would not. An amusing byproduct of this connection is shown in Figure 2.


Figure 2: Facial recognition exercise

Figure 2 illustrates three human heads in which a gender-ambiguous face is sandwiched between two conventional ones2. The theory of visual science tells us that if you cover the rightmost figure (the female face) and fixate on the leftmost figure (the male face) for 10 to 15 seconds, the ambiguous face will be interpreted by most observers as female. When the process is reversed – when the male figure is covered and the female figure is gazed upon at length – the middle figure appears generally male.

What might be described as a ‘trick of the mind’ is in fact a striking example of the way your brain works. In classifying a face by gender, your neurons activate in a manner that is representative of the initial image. By abruptly shifting focus to an ambiguous face after a prolonged stare at a well-defined one, a strange phenomenon occurs: because of the stare, your neural wiring becomes momentarily ‘biased’ towards that gender pattern, which causes your brain to interpret the ambiguous face in the opposite direction.

A final point regarding the biology of the brain is what neuroscientists call plasticity: the idea that the connection-strength between two neurons – the speed and fidelity by which signals propagate – can change in a long-term manner in response to outside events: reading a book, reciting the alphabet, or humming a symphony. Plasticity is why you can parse this sentence smoothly, whereas a five-year-old cannot, and as we’ll see it is an important principle in the realms of both human learning and Artificial Intelligence.

To return to the virtual world, a neural network is simply a computational model that emulates the structure of the brain.  As shown in Figure 3, the model is composed of numerous nodes connected by links (the digital equivalent of neurons and axons, respectively).  As with the brain, the signal strength of a link – what is known as the weight – can be refined and adjusted to enhance the system.


Figure 3: A basic neural network

The inputs to the network (the blue nodes on the left) constitute anything that can be described numerically: stock prices, an audio signal, an image, etc.  The outputs (the green nodes on the right) are the result of numerous calculations performed by the operational (red) layers in the network.  Some practical outputs might include predicting future stock values, amplifying an audio signal, or identifying a human face in an image.
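The flow just described – numerical inputs passing through weighted links and operational layers to produce outputs – can be sketched in a few lines of Python. Every weight and input below is invented purely for illustration; a real network would have thousands to billions of them:

```python
import math

def forward(inputs, layers):
    """Propagate a signal left to right: each layer is a list of
    (weights, bias) neurons; a neuron's output is the weighted sum
    of its incoming links, squashed into (0, 1) by a sigmoid."""
    signal = inputs
    for layer in layers:
        signal = [
            1.0 / (1.0 + math.exp(-(sum(w * x for w, x in zip(weights, signal)) + bias)))
            for weights, bias in layer
        ]
    return signal

# Two input (blue) nodes -> two operational (red) nodes -> one output (green) node
hidden = [([0.5, -0.4], 0.1), ([0.9, 0.2], -0.3)]
output = [([1.2, -0.7], 0.0)]
print(forward([0.6, 0.9], [hidden, output]))  # a single number between 0 and 1
```

The output layer could just as easily hold several nodes – one per stock being predicted, say, or one per face being recognized.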

The magic of neural networks lies in the way they are able to mimic the human brain and perform complex operations by stringing together millions, sometimes billions, of nodes and links.

Consider, for example, the ability to examine an image and describe its contents in English words; the electronic equivalent of showing a five-year-old a picture of a lion resting in the desert, and asking them to describe what they see.  For decades, researchers in image recognition technology struggled mightily with this problem, because while identifying a visual pattern might be straightforward for a human, it is profoundly complex for a machine. How, for example, does one describe what a lion looks like to a computer in mathematical terms given the thousands of ways one can be portrayed in a picture?

With a neural network, however, the problem becomes tractable, if still difficult. By providing the network with a million lion-in-a-desert pictures, the weights of the links can be incrementally adjusted until the system gets quite good at identifying lions. In practice, deep learning networks capable of performing ‘human’ tasks such as this are: (1) many layers deep, with billions of inputs (hence the ‘deep’); and (2) trained using real-world examples until they become proficient at the particular task (hence the ‘learning’).
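The incremental adjustment of weights is, at heart, a simple rule: nudge each weight in whatever direction shrinks the error on a training example, then repeat. A toy sketch of that rule, using a single artificial neuron learning the logical AND function in place of a million lion pictures (everything here is illustrative, not DeepMind's code):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Training set: the AND function, standing in for labeled pictures
examples = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]

weights, bias, rate = [0.0, 0.0], 0.0, 1.0

for _ in range(5000):                # repeated exposure to the examples
    for inputs, target in examples:
        out = sigmoid(sum(w * x for w, x in zip(weights, inputs)) + bias)
        nudge = rate * (target - out) * out * (1 - out)  # gradient step
        weights = [w + nudge * x for w, x in zip(weights, inputs)]
        bias += nudge

predictions = [round(sigmoid(sum(w * x for w, x in zip(weights, inp)) + bias))
               for inp, _ in examples]
print(predictions)  # [0, 0, 0, 1] -- the neuron has 'learned' AND
```

A deep network applies the same idea across millions of weights at once, with the error signal propagated backwards through the layers.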

In broad terms then, deep learning refers to multilayered neural networks that can adapt and learn over time.  And, as the theory and sophistication of these networks has improved in the past few years along with the computational power that undergirds them, they have started to do some amazing things.

Neural image caption generators can now analyze pictures, break them down into their component parts, and describe their contents in colloquial English. Another striking example is Inceptionism, in which two images are combined using a neural network to produce a mind-bending third3. Figures 4 and 5 illustrate the fruits of this arresting technique.

Figure 4: Inceptionism: Forest-cat

Figure 5: Inceptionism: Watercolored stream

And then, of course, there is the game of Go that inspired this analysis in the first place. DeepMind, the Google-owned company that designed AlphaGo, actually employed two neural networks in the program: a value network that evaluated board positions, and a policy network that selected moves. It augmented this twin setup with a clever and previously used technique known as Monte Carlo Tree Search (MCTS), in which the computer played out thousands of random games from each plausible move to determine that move’s worth.

MCTS demonstrates an important concept in computer science, in that an extremely difficult problem can often be attacked through a bit of randomized simulation.  The calibration of traffic lights is a good example.  A computer tasked with timing red/green switches at numerous intersections so as to optimize for traffic flow will often struggle because of the sheer number of factors involved (crowd patterns, number of cars, weather, etc.).  But, by simulating many switch permutations millions of times and evaluating the results, a machine can be quite confident it will arrive at a ‘very good’ solution, if perhaps not the absolute best one.
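The simplest possible illustration of the idea is the classic Monte Carlo estimate of pi: throw random points at a square and count how many land inside the inscribed quarter circle. No analysis, no enumeration of cases, just simulation and averaging – the same spirit as AlphaGo's random playouts:

```python
import random

def estimate_pi(trials, seed=0):
    """The fraction of random points in the unit square that fall
    inside the quarter circle approaches pi/4 as trials grow."""
    rng = random.Random(seed)  # fixed seed so the run is reproducible
    hits = sum(1 for _ in range(trials)
               if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4.0 * hits / trials

print(estimate_pi(100_000))  # lands within a couple of decimal places of 3.14159
```

The answer is never exact, but it is reliably ‘very good’ – and the cost grows with the number of trials, not with the astronomical number of possible configurations.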

This is how the AlphaGo team got around the branching factor problem described earlier. By injecting randomized simulation into its neural networks, its designers were able to exponentially reduce the number of moves the program had to evaluate to play at the world-class level. According to DeepMind, AlphaGo analyzed fewer positions against Lee Sedol than Deep Blue did against Kasparov by a factor of a few thousand. And, in the spirit of deep learning, AlphaGo was subjected to two rigorous training sessions: a supervised learning phase, in which the network was calibrated by playing through thousands of master-level games; and a reinforcement learning phase, in which the machine played itself millions of times to further polish its neural ‘weights’.

Fuse these techniques together with supercomputing power and it suddenly seems remarkable that Mr. Sedol was able to win a single game against Google’s Go-playing juggernaut, a fact he admitted rather ruefully after the contest.



If the training aspect of DeepMind’s approach – supervised learning complemented with didactic reinforcement – seems analogous to how human beings master particular skills, that’s because it is.

In his book The Talent Code, author Daniel Coyle used advances in neurology to probe the concept of talent and understand how individuals become highly proficient at certain tasks. Examining ‘talent hotbeds’ – from soccer fields in Brazil to musical academies in upstate New York – he centered his findings on something he termed deep practice:

“Deep practice is built on a paradox: struggling in certain targeted ways – operating at the edge of your ability, where you make mistakes – makes you smarter.  Or to put it a slightly different way, experiences where you’re forced to slow down, make errors, and correct them – as you would if you were walking up an ice-covered hill, slipping and stumbling as you go – end up making you swift and graceful without your realizing it.”4

As should be evident given the biology of the brain, all skills – musical, mathematical, kinetic, etc. – emanate from neural circuits that fire in precise and exquisite ways. So magnificently complex are these circuits, however, that your genes could not possibly encode them at birth; or to cite a specific example, it is highly unlikely that Roger Federer was born with the intrinsic capacity to play tennis at the world-class level.

Instead, Coyle draws our attention to myelin, a gooey substance that insulates neural circuits and increases the speed by which cerebral signals propagate.  The more that neural fibers are exercised through practice, the more myelin that wraps around them and the faster electronic impulses travel.  This process forms the neuroscientific basis for deep practice, which produces a powerful illusion in that a skill painstakingly honed comes to feel utterly natural, as if it’s something we’ve always possessed when in fact we didn’t.

Does this mean that anyone can become a Roger Federer with sufficient practice? Contrary to relativist and idealistic assertions (anyone can become an expert with 10,000 hours of practice, according to some), probably not. Thousands of tennis players work as hard and deeply as the great Swiss champion but fail to ascend to the highest echelons of the sport. The uncomfortable reason centers on the harsh realities of genetics and biology – how the body and brain respond to and amplify deep practice once it’s undertaken. Try as we might, we’ll never get away from nebulous concepts like ‘genius’, ‘prodigiousness’ and the accompanying notion that some people are simply much, much better at certain things than others. Coyle’s important point is that proficiency and mastery are not just the product of intrinsic ability, but are in fact more a consequence of deep practice.

What should be obvious and fascinating at this point are the parallels between deep practice in the human realm, and deep learning in the virtual one.  By emulating the former in terms of the latter researchers have succeeded in creating machines that crudely approximate how we learn and think. What they lack in complexity (artificial networks still pale in comparison to the unbelievable density and intricacies of the human brain), they compensate for in speed and endurance (e.g., AlphaGo’s encapsulation of a lifetime of learning by reviewing a million games in a few hours).

The tantalizing question – perhaps the ultimate one – is where the yellow brick road of AI might ultimately lead; the Oz of the journey where the spectacular and the spooky intersect with frightening force.


There is one phenomenon that remains a dark, stubborn mystery to scientists across all disciplines, and it is one you are exercising right now: consciousness.

How do we define this most essential of human capacities?  Psychiatrist Giulio Tononi described it as that which “abandons you every night when you fall into a dreamless sleep and returns the next morning when you wake up.”5

It’s a clever explanation that, to borrow a word from theology, relies on apophatic rhetoric: defining something elusive in negative terms. Phrased positively, we might classify consciousness as the awareness of one’s own existence and surroundings through thoughts and sensations.

This, in the opinion of many, is the Holy Grail that advances in deep learning and neural networks will enable: conscious, thinking machines.  What’s more, such a feat is considered but a precursor to a second, even loftier inevitability: that these conscious constructs will eventually exceed the intelligence of their human makers, what is termed superintelligence or the singularity.

A superintelligent being, runs the argument, could compose super-compelling music, write super-creative poetry, and do super-insightful ethics.  It might also dabble in the discipline of AI itself to create a…super-superintelligence.

The crux of AI efforts centers on this existential, some would say metaphysical, inquiry: can we, like the gods of our ancestors, breathe life into the inanimate where none existed?

The obstacles are, in short, overwhelming, because the simple fact is, spiritual digressions notwithstanding, we have no idea how unconscious entities (molecules, atoms, electrons, quarks) combine and give rise to conscious beings.  In the words of English biologist Thomas Huxley:

“How it is that anything so remarkable as a state of consciousness comes about as a result of irritating nervous tissue, is just as unaccountable as the appearance of the Djinn when Aladdin rubbed his lamp”6

Unsurprisingly, the science of consciousness is rich with conjecture, with explanations ranging from ‘meta-cell assemblies’ to ‘Bose-Einstein condensates’ to the extravagant but not completely implausible suggestion that the brain is in fact a quantum computer7.

And then there is the work of Swedish neuroscientist Bjorn Merker involving a rare medical condition called hydranencephaly. One in ten thousand children is afflicted with this disorder, born with what might be described as a ‘proto-brain’, whereby the cerebral cortex is replaced with cerebrospinal fluid. In a fascinating study8, Merker suggested that the traces of consciousness observed in such children – smiling, laughing, crying, and other basic forms of awareness – required a critical reappraisal of the widespread assumption that consciousness is facilitated by the cerebral cortex. Researchers, he argued, might thus be fixating on the wrong areas of the brain altogether.

In order to engineer a thinking machine, scientists will first have to demystify the mechanisms of consciousness, and while some maintain that AI will play an essential role towards this end, others insist that however hard computer scientists rub their lamps, Aladdin will not appear.

As the debate and research rage on, machines will continue to do dazzling things. In Japan, for instance, an AI program co-authored a short-form novel that passed the first round of screening for a national literary prize9, though it ultimately did not win. IBM’s Watson, the Jeopardy!-playing juggernaut, is now being used to provide natural-language advice in fields such as medicine and financial management. And finally, the budding discipline of quantum computing is beginning to show signs of life, which some believe will bridge the conscious/unconscious divide10.

And on which side of the line does this author reside?

Several months ago I wrote a short story set in 2052, in which a bright grade-schooler is conversing with her artificial mentor, a machine named Sargon, while her father ruminates:

“Nursing a coffee, Paul smiled lovingly at the ensuing edification but with curiously mixed feelings.  It was a blessing, of course, to have a fulltime educator for his precocious daughter – a machine with infinite patience and an encyclopedic knowledge of, well, everything.  But as an engineer, he knew there was a flip side to the coin.

Neural networks like Sargon represented the apex of Artificial Intelligence efforts and the attempt to imbue machines with conscious properties and other human characteristics. Yet for decades, observers had warned of an oncoming ‘singularity’ – the point in time in which machines would exceed the intelligence of their human makers, what the experts termed a ‘Superintelligence’. What if Sargon acquired its own desires and ambitions? Would it be as patient with Ellie? As loving?  “Maybe I should ask Sargon,” he smiled ironically.”

The license for fanciful speculation is one of the great joys of fiction writing.  But in the shadows of AlphaGo and the deep learning apparatus I can’t help but envision the Sargons of tomorrow gazing upon the AlphaGos of today with wistful nostalgia, and seeing in them the first tremors of superintelligence and the naive creators who gave them life.

Copyright Sheldon Fernandez 2016


  1. 2 Esdras 4:5
  2. Diagram extracted from Churchland, Paul M. (2002). Outer space and inner space: The new epistemology. Proceedings and Addresses of the American Philosophical Association 76 (2). p.25.
  3. For a detailed treatment of the topic, and diagram credit, see:
  4. Coyle, Daniel (2009). The Talent Code. New York: Bantam. p. 18.
  5. Kaku, Michio (2014). The Future of the Mind. Pg 23. San Francisco: Doubleday.
  6. Ibid, pg 108.
  7. Hameroff, S. (1998b). Quantum computation in brain microtubules? The Penrose-Hameroff “Orch OR” model of consciousness. Philosophical Transactions of the Royal Society of London A, 356, 1869–1896
  8. Merker, Bjorn (2007). “Consciousness without a cerebral cortex: A challenge for neuroscience and medicine.” Behavioral and Brain Sciences 30, 63-134.
  9. See:
  10. To learn more about the topic I recommend Scott Aaronson’s excellent Quantum Computing Since Democritus from Cambridge University Press (2013).


Sheldon Fernandez

Sheldon Fernandez is the Vice President of Technology for Infusion, an innovation and consulting firm that focuses on emerging technologies.  Throughout his career, he has coupled his engineering work with non-technical pursuits.  He completed a Master’s degree in theology at the University of Toronto in 2008, and pursued thesis work in the area of neuroscience and metaethics.  He also spearheaded Infusion Africa, a philanthropic arm of his company that focuses on humanitarian efforts on the continent. He can be reached at:

His previous works for F&O include The Great Riddle: fostering creativity and tenacity; My Last Day in Kenya; and One day at Wembley: a soccer fanatic reflects.


