Some commenters on Twitter are still saying 0.5 points is "very close".
It would be if both players were human: in human play, score differences tend to correlate with differences in actual skill, and probability of outcome (who wins the game).
Not so with AlphaGo. That machine just takes the surest path to victory, with no regard for its magnitude. It doesn't care about winning by only half a point. It cares about securing at least half a point.
It may have been a crushing victory for all we know.
It was a crushing victory actually. I watched the entire game, and followed professional commentary. Before the endgame started, AlphaGo was leading by 10 to 15 points -- an enormous lead in professional games. Most players would have simply conceded, which is not to say that's what Ke Jie should have done. Bravo to him, actually, for sticking it out to the very end and letting us all watch how a computer handles the endgame. As it turned out, AlphaGo routinely picked the marginally safer move while yielding a bit of its lead, a style of play that's not typical of human players.
It would be interesting to see how AlphaGo's performance varies with different goals, balancing between maximizing score and maximizing probability of winning.
I have to think they've discussed that internally, and they probably just want to make sure AlphaGo can win consistently, period, before they start playing around with allowing slightly riskier moves as long as the win probability stays sufficiently above 50%. (But how much more? You can't know ahead of time how much stronger you are than the other player that day, or you wouldn't be playing, at least not for money.)
It's just like adjusting komi to give the human an advantage, right?
An indirect way to do something similar-ish to what I was interested in would be to play with varying numbers of handicap stones with the current goal unchanged (maximizing probability of victory).
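The trade-off being discussed can be sketched as a single knob. A minimal sketch, assuming everything here (`rank_moves`, the candidate tuples, `margin_weight`) is made up for illustration; as far as has been published, AlphaGo's actual objective is pure win probability:

```python
def rank_moves(candidates, margin_weight=0.0):
    """Rank moves by win probability, optionally nudged toward bigger wins.

    `candidates` is a list of (move, win_prob, expected_margin) tuples.
    margin_weight = 0.0 reproduces the pure "surest path" behaviour;
    larger values trade some safety for a larger expected score.
    """
    def score(candidate):
        _, win_prob, margin = candidate
        return win_prob + margin_weight * margin
    return sorted(candidates, key=score, reverse=True)

# Toy position: a very safe half-point win vs. a riskier ten-point win.
moves = [
    ("safe", 0.98, 0.5),
    ("greedy", 0.90, 10.0),
]
print(rank_moves(moves)[0][0])        # pure win-prob: picks "safe"
print(rank_moves(moves, 0.01)[0][0])  # small margin weight: picks "greedy"
```

The interesting empirical question is how quickly the win rate degrades as the knob is turned up.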
Not that I recall, but "close victories" were mentioned as well. Moreover, professional commenters didn't know why AlphaGo won: they called some of its moves "poor" and were dumbfounded as to how it won anyway. (AlphaGo did lose one game, but that was because it didn't manage its time properly: it used the same amount of time for every move, even in high-uncertainty situations.)
I don't see them making the same mistake again though. By now the machine is most definitely superhuman. Its moves will be studied, and this will likely improve human play as well.
> How would you characterize the differences and similarities between AlphaGo and the best human players?
The AlphaGo that played Lee Sedol was still very machine-like. But the Master series online felt like strong superiority. It started playing novel moves that, being at least reasonable, made it harder to play against. In a way, it was like the radical new kid with new ideas who shakes up the foundations.
So far in this game (move 40~0) I read calmness from White, a calculated calmness. As if it knew that it would win.
Note: in Go, you perceive a lot of feelings from your opponent, as the moves selected express a state of mind or emotion. AlphaGo is getting hard to distinguish from human beings.
> How has human play style changed since AlphaGo's introduction?
Hard to say, but AlphaGo is definitely changing the fields of study. Professional Go really is like brute-forcing the game. A professional chooses to go through an unstudied path because he thinks it's superior, and then another professional tries to ravage that path. That adversarial process across professional games is what advances theory.
In this game, black playing 3-3 (bottom right corner invasion) would never have been played before AlphaGo, given the state of human theory. I was taught 15 years ago, in my first beginner class, how bad doing that is.
> What is the answer to the question you most want to be asked?
I guess the most important question is: what is the future of Go? Sure, current professionals will still live their lives by the game, but what is the point of being a professional in something a computer will just be better at?
As soon as AlphaGo beat Lee Sedol once last year, I said that the only future of Go right now is finding out whether humans still possess a skill AlphaGo doesn't. And that's why the pair Go in this series is actually most interesting to me. Can a pro + AlphaGo beat AlphaGo consistently? If so, it means humans still have something of an identity.
> what is the point of being a professional in something a computer will just be better at?
Chess was conquered by computers a long long time (in computer time) ago and the popularity of chess has only gone up, not down. There are many professionals making a living out of chess. The art of Go will live on for sure.
From a professional standpoint, that's pretty terrible.
Sure, Catan is also easily solvable, but it's played for fun.
When you play as a professional you make a commitment to the board, to advance and explore the frontiers of the universe contained in the game. If the bot explores better than you every single time, you are just dedicating your life to trying to beat a calculator at arithmetic.
It's become a sport, with all that that implies. Training and supporting an olympic sprinter is a multi-million dollar investment. But olympic sprinters haven't been the fastest mode of travel in centuries. If all you want to do is go fast, you buy a really fast car or an aircraft and you go fast. But simply going fast isn't what the sport is about. It's about pushing humans to their limits and seeing what humans can do. It's a race. And chess has become the same. If you just want to win at chess, you ask a computer. But if you want to play a game of chess, or watch a game of chess, it's all about the humans.
I agree with this point, but to me it's a degradation of the game. It degrades into a sport.
Go has something amazing about how we study patterns that are hundreds of years old. Many current and active training materials are up to 400 years old!
Go is something where each generation looks at the previous one and builds on it, and it's been a very old and iterative process. If Go becomes an exercise in how little we lose to machines by, it's a major degradation of the purpose of continuing that history.
As a pro, you are just working towards the inevitable goal of solving the game, and then we are all free to never play that damn game ever again :)
That is something to mourn, yes. It is in some way disappointing to see such an old culture, one that's been the focus of so much effort, be overtaken and undercut by newcomers that don't have that culture. It's interesting, though; in topics that lives depend on, medicine and industry, we celebrate the advent of new techniques that remove the burden and dependence on the old guard. Like you said, "Free to never play that damn game ever again". But in this case, where the mastery of the game is the end in itself instead of a means to some other end...
I don't know. My perspective is likely decidedly odd, as I've read a great deal of far-future science fiction and done AI research and have already spent a lot of time thinking about what it means to be human when machines will inevitably outperform us all in every way. The key, I think, is that there still are - will always be - things for us to enjoy. We can always find achievement in our own accomplishments, even if they're insignificant next to what someone or something else can do. I don't care that I run slower than a supercar; I take satisfaction in being able to run faster than I could yesterday. Not all is lost. :)
As Kasparov said, people still have foot races even though cars are faster.
Having said that, it's much easier to see someone is on foot than to know someone isn't cheating in a chess tournament every few moves.
As for computers letting new kids on the block overtake old cultures, look at the black cab in London being overtaken by Uber. They have "The Knowledge" and Uber has a GPS.
My gut feeling is that Go is a technique if you put it in the perspective of winner/loser. If you put it in the perspective of cultural legacy, apprenticeship, etc., then it's more of an art. The technique can be beaten by computers, but the art is what humans make of Go, so it can't be beaten.
Moreover, as OP says, there are subjects where machines are absolutely nowhere, and those subjects already matter: world peace, ethics, etc. These are so human... Even if you had a world of machines (à la Matrix), these questions would be of the utmost importance to us humans, because they are an emanation of what we are.
Honestly, unless the rate of progress of technology changes, the topics you mention will not be machine-less for too long. It's relatively easy to imagine a general artificial intelligence (however far off in the future that may be), that can out-think us on the topics of world peace and ethics. Unless you reject the very possibility of a true general artificial intelligence (or assert some kind of metaphysical superiority of our biological existence), the list of things we're truly best at gets smaller and smaller.
It all depends on what AI machine you make. If I take a few billion artificially created neurons and group them into something very close to the brain, then, well, I may have made an artificial intelligence, but it's so close to a human brain that it's not what we currently think of as one. Heck, if I want to do that, I just need a few moments with someone of the other sex and I may make that machine.
That machine could indeed think like us.
But if you think programs, neural networks and big data, I'm afraid we are very far away from anything close to a machine that can think about ethics. Ethics is not a mathematical problem; it has to do with gut feelings, culture, bodies, etc. And I don't see anybody with the smallest idea of how to teach that to a computer, other than in a very toyish way (such as a Tamagotchi).
Most AI research so far has gone into immediately usable solutions; I myself ended up doing my MSc in what is effectively optimisation (using certain "AI" techniques), rather than what really interested me (which would have been "real AI"). In a sense, this is correct; we want real value out of research investment, so that side of AI will be ahead for a time. Basically, why have something that taught itself to play chess, when we can use a human-engineered heuristic that beats it every time?
That said, this line of thinking is coming under attack on several fronts. AlphaGo is a good example - it's tackling problems where we're not good enough at coming up with heuristics. So, essentially, we've hit a tier where machines are really better at that topic than we are. Think about that for a second: a computer is a better programmer than the best of our guys, and it's early days yet. Not just running computations, but actually determining what computations to run.
Problems like Go are complex enough that if you want to have AI be good at it, you actually need to invest in the meta-level goal creation and other things that go along with it. This is happening on many levels, and researchers are actively trying to understand how exactly the human brain handles these topics (or even what consciousness is, on a practical level).
If you follow these developments to their logical conclusion, I'm pretty sure a "real AI" will be on the cards relatively shortly, whatever that time period may be (100 years is nothing on the grand scale). Initially, this will likely have some architectural similarities to human brains, but will essentially be free to do its own thing and restructure. Eventually, it will have gut feelings and culture that are far beyond what our feeble little brains can comprehend.
I like your argument. But somehow, I remain stuck on mine. The nature of AlphaGo is indeed very complex. But to me the question remains: AlphaGo is able to demonstrate skill at playing Go, and maybe, in the context of the game, that can be called true intelligence. But to know if we are on the path to true (!) AI, we might want to compare AlphaGo and a human intelligence qualitatively. Which we can't. Because we may (I dunno, didn't write it!) not know for sure what AlphaGo actually "guessed" while learning, and we know even less about human intelligence...
So I think AI will, as you say, reach more and more goals and we'll go up a level with more meta stuff. But is this a road that ends in true AI, or is there a "conceptual" gap? I dunno. I think my life would be better if there was such a gap. But that's just because I love humans... Thanks for the conversation :-)
The game was combinatorial search from the start, so you could argue that it did not degrade the game, but it dispelled the illusion that it was deeper and something more than a sport.
Computers will undoubtedly become so good that computer + pro human would be like pro human + amateur, where the amateur has the final say about which move to make. The best strategy is to just do what the computer says.
This has become the case in chess [1]. However, it took 15 years for that to happen. Chess engines are now orders of magnitude stronger than they were when Deep Blue beat Kasparov. So this is definitely the case; it'll just take a while.
Go's difficulty comes from predicting future game states, a search space that is discrete and grows multiplicatively with every move. Novice players play on smaller boards. So humans are definitely going to lose to (relatively) simple discrete computation machines.
For professionals, finding usable strategies will still be a challenge, since unassisted humans are still far from solving Go or Chess. Unassisted humans have been pretty much kicked out of the top leagues, though.
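For a sense of scale behind "grows multiplicatively", here is a back-of-envelope calculation using the commonly quoted rough figures (~35 legal moves per position over ~80 plies for chess, ~250 over ~150 for Go; these are estimates, not exact counts):

```python
# Uniform game-tree approximation: (branching factor) ** (game length).
# The inputs are rough conventional estimates, purely for illustration.

def tree_size(branching_factor, depth):
    """Number of leaf positions in a uniform game tree."""
    return branching_factor ** depth

chess = tree_size(35, 80)    # ~35 moves/position, ~80 plies
go = tree_size(250, 150)     # ~250 moves/position, ~150 plies

print(len(str(chess)))  # 124 -- chess tree has ~10^123 leaves
print(len(str(go)))     # 360 -- Go tree has ~10^359 leaves
```

Both are far beyond exhaustive search, but Go's tree is larger by hundreds of orders of magnitude, which is why heuristic evaluation (human or learned) matters so much more there.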
It winds up making humans into race horses though, ultimately pointless but done so long as there remains some irrational mystique or cultural admiration for the practice. When there isn't, like bullfighting, the sport dies out.
At least right now, computers explore the frontiers of the universe only if we humans tell them to. They explore the way we program them to. If I want to explore the frontiers of Go, I would not care about winning or losing. I would enjoy the art, the beauty, and the philosophy of Go.
That struck me as well, but thinking about it a bit more, I could see there being a set of correct decisions in trading, which could be taken as assumptions. I'm not sure such a set exists, but after playing a good deal of Catan, I think they might. In a game where everyone knows what they're doing, the trades are very predictable, and it's pretty clear when someone has made a dumb trade. It seems like there are still too many unknowns to consider it "solvable", but it seems less open ended than poker, for instance.
That might be why you would choose to play as a professional. Clearly, those who continue to play chess as a profession do so for other reasons.
That said, humans still contribute heavily. Computers may calculate a position as advantageous to one side or another, but it takes a human to explain why in heuristic terms that others can use to evaluate similar positions.
Well, to nitpick, the reason anyone becomes professional is to make money, kind of by definition.
Go has something of a higher order attached: it's not a sport, it's a philosophy of life. It's a way to devote yourself to an art. What you do with that contribution is very important.
We could build robots that paint more and better than we do, but as humans we are very likely to still be able to produce things computers don't. The great question is whether that is true of Go as well, or whether, effectively, it's a purely tactical game and all our philosophy, ideas and appreciation of beauty are basically the projection of a silly life-form onto appreciating tic-tac-toe.
> We could build robots that paint more and better than we do, but as humans we are very likely to still be able to produce things computers don't.
I expect this to be the next big human activity where machines consistently beat humans within ten years.
We already have neural networks that can apply a painting style; creating a new style, and imparting a political or sentimental meaning to a painting, will soon be within grasp.
To quantify it, I offer a Turing-like test that I expect to be beaten within ten years: there will be a machine-generated work of art that sells for more than human-made ones at an auction containing both human and machine works of art, but where nobody in the room knows which is which.
After seeing what passes for "art" at MoMA, I wouldn't be surprised if a painting made by a neural network today were sold higher than a human-made one at an auction.
>We could build robots that paint more and better than we do.
For some definition of paint; namely, if you give it an image created by a human and call the computer a printer. We are nowhere near computers creating art at the quality of humans.
>The great question is if that is true with Go as well, or effectively, its a purely tactical game and all our philosophy, ideas and beauty appreciation is basically a projection of a silly life-form over appreciating tic-tac-toe.
Is this even a question? Go is a combinatorial game. There is a solution; one of the two players has a winning strategy. The only question we are facing is whether it is feasible for us to find the winning strategy (a question which AlphaGo does not help us answer). With sufficient computational power, finding the winning strategy is trivial.
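The backward-induction argument behind "one of the two players has a winning strategy" can be sketched on a toy game. One-heap Nim stands in for Go here; applied to Go, the same recursion is correct in principle but computationally infeasible, which is exactly the feasibility point:

```python
from functools import lru_cache

# In any finite two-player game with no draws, the player to move either
# has some move leading to a position that is lost for the opponent (a
# win), or all moves lead to winning positions for the opponent (a loss).
# Enumerating the tree from the end of the game decides every position.

@lru_cache(maxsize=None)
def first_player_wins(stones):
    """One-heap Nim: take 1-3 stones per turn; taking the last stone wins."""
    if stones == 0:
        return False  # no move available: the player to move has lost
    # Win iff some move leaves the opponent in a losing position.
    return any(not first_player_wins(stones - take)
               for take in (1, 2, 3) if take <= stones)

print(first_player_wins(4))  # False: multiples of 4 are losing positions
print(first_player_wins(5))  # True
```

The recursion visits every reachable position, which for Go is astronomically many; that gap between "solvable in principle" and "solvable in practice" is the whole debate.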
"With sufficient computational power, the whole universe is a trivial simulation."
Sometimes, a difference in quantity is a difference in quality :-) P=NP and all that.
It feels like you're talking past each other. Conanbatt says Go will never feel the same to humans, especially humans who see Go as the purpose, the "main course", of their life. Some fundamental psychological quality is lost.
You're saying that people enjoy doing even silly, "pointless" (sic!) things for a living, like playing sports. And that you can actually make great money doing that. Money and economy are human constructs, "for monkeys by monkeys", not a physical law.
> We are nowhere near computers creating art at the quality of humans.
This can change very quickly. Google's machine-learning experiments with images have already created a sensation, like a modern-day surrealist. There is technology that produces novel classical music that has been deemed indistinguishable from a human's.
> Is this even a question? Go is a combinatorial game. There is a solution
Everything has a solution. There are no dice. With enough information you can choose what to roll every time. With enough information you can have all the potential conceivable paintings. Perception is a combinatorial game. Physics is a combinatorial game.
It's more of a quest for identity: what can we do that bots can't, and why? And once we understand it, we move on to the next thing, until we figure out everything.
> There are no dice. With enough information you can choose what to roll every time. With enough information you can have all the potential conceivable paintings. Perception is a combinatorial game. Physics is a combinatorial game.
Small nitpick: Modern physics wants to have a word with you.
> We are nowhere near computers creating art at the quality of humans.
The same was said about playing Go. I would be careful with such statements.
The only problem with art is that we don't have a good measure for it. And for any kind of measure you could think of, I bet it's not that hard to train some computer to beat a human on that measure.
> it takes a human to explain why in heuristic terms that others can use to evaluate similar positions.
That's not strictly true. With enough samples, or fast enough playing bots, you can explore that domain automatically. There are many different approaches, from expert systems, which are purely human heuristics, to minimax, which can be defined purely in terms of in-game point differences.
Why the situation is advantageous may be just "because enough Monte Carlo simulations starting with it end up winning".
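A minimal sketch of that idea: estimate a position's value as the fraction of random playouts from it that end in a win. The `playout` below is just a biased coin standing in for a real Go playout policy; the 0.7 bias is an assumption for illustration:

```python
import random

def monte_carlo_value(random_playout, n_simulations=10_000):
    """Estimate win probability as the fraction of winning playouts."""
    wins = sum(random_playout() for _ in range(n_simulations))
    return wins / n_simulations

# Toy "position" where the side to move wins ~70% of random continuations.
rng = random.Random(42)
playout = lambda: rng.random() < 0.7
estimate = monte_carlo_value(playout)
print(round(estimate, 1))  # approximately 0.7
```

No heuristic explanation of *why* the position is good ever appears; the number is the whole answer, which is the commenter's point.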
Machines can be made to do better at just about any sport but we still have athletes because it's about human potential and competing within that regime, not pure unbounded scientific advancement. If that's what you want then there's plenty of opportunity to do so in academia rather than professional sports, and such pursuits coexist just fine.
>And that's why the pair Go in this series is actually most interesting to me.
Is the pair game really going to test this, though? Both sides are human + AlphaGo. Also, I haven't read specifically what they are planning for this match, but when I hear "pair Go", I think of teammates alternating moves without coordinating. If AlphaGo makes a "weird" move, the human teammate would have a chance to mess up the follow-up.
> In this game, black playing 3-3 (bottom right corner invasion) would never have been played before AlphaGo, given the state of human theory. I was taught 15 years ago, in my first beginner class, how bad doing that is.
There are lots of 3-3 invasion joseki, though. Sometimes the context makes the invasion bad (e.g. when you end up giving a lot of thickness to the opponent), but I don't see it here. What is it about the neighbouring corners that makes the invasion bad?
The 3-3 joseki is not considered even. It is supposed to be played in circumstances where thickness is inefficient, or an invasion/normal approach is attractive.
Conventional theory is to play the approach move from the right-hand side, extending the top right formation.
Note: something Michael Redmond mentioned in the commentary which is false is that joseki are even. That's not correct: josekis are not even, but are the best recognized patterns given a specific purpose.
In a way, straying from joseki means that you failed to apply the best possible sequence for the pattern you wanted to play. There is some subtlety around this topic.
Whether a joseki is even or not depends on the context. However, when a joseki is played, it is considered to produce an even result by both players in that specific situation; otherwise, trivially, they would not play that way. The latter was precisely Redmond's point.
> Whether a joseki is even or not depends on the context
The whole point of joseki is its locality. Josekis do not depend on context to be joseki: a joseki could be a bad choice in context, but what they are, they are locally.
When you deviate from joseki you are either a) creating a new joseki, or b) recognizing that the joseki is not applicable in the context, and it's better to take a local loss to get a global gain.
Josekis are filled with non-even results; the point is that, given a tactical goal, they are the best choice possible.
I haven't studied AlphaGo games against Lee Sedol. I wonder if Ke Jie played that way because he saw AlphaGo playing a good counter to the more usual moves (an approach on the right side).
A 3-3 invasion means you're giving thickness to two sides. After studying the 60 Master games, I concluded that the 3-3 invasion only makes sense if you can make the thickness on BOTH sides inefficient (AG only played it when it had stones on both sides).
> Can a pro + AlphaGo beat AlphaGo consistently? If so, it means humans still have something of an identity.
Seems like a hard bet. The human professionals weren't trained to pair with machines to beat other machines. I don't think a human-only skill can even exist, due to the nature of the game. Ultimately, each state of the board has a value, and this value is estimable using reinforcement learning. Games where a state doesn't have a quantifiable value are where humans could shine. Such games, however, are not objectively decidable: fine arts like painting, for instance.
>Seems like a hard bet. The human professionals weren't trained to pair with machines to beat other machines. I don't think a human-only skill can even exist due to the nature of the game
The point is to find that out !
Also, Go has an intricate relationship between strength and beauty. Strong go tends to be beautiful. Does our capacity to perceive beauty give us a leg up on AlphaGo?
Do you not think that you see strong Go moves as beautiful because they're strong, rather than the other way around? Take AlphaGo's unexpected move in the first tournament - no-one thought it was beautiful, just weird, until it played out.
TBH, it's interesting reading your posts. You talk much more like a poet than a mathematician. This is surprising to me when discussing advanced play in a strategic game.
That was true until a few years ago. Now humans cannot add value to computer chess programs and only slow them down. Implications for future economy and job markets are worth pondering.
What is going through Lee Sedol's mind, watching this? Does he wish he was the guy getting the later crack at the more advanced version we see today, or has he come to terms with his single victory against AlphaGo a year ago?
If these questions seem nonsensical at all, please feel free to reformulate them into something with a more interesting answer.
I don't think he wants to play AlphaGo again. Go is a game that relies a lot on confidence: professional players almost never play amateur players casually, for example. That's because if they lose to an amateur player, it will affect their mindset, their confidence, and they could then perform worse in real professional tournaments.
An Argentinian amateur beat two strong professionals back in 2001 in a major professional tournament: it was a huge sensation. Those two professionals basically disappeared from high-ranking tournaments forever. It was said that the setback was so severe it affected them permanently (purely hearsay).
It's not good for a professional to play a game he thinks he will lose.
>It's not good for a professional to play a game he thinks he will lose.
That's interesting, any ideas where this attitude comes from? I feel like that's the opposite of a lot of other sports and games, where the ability to take a bad loss and come back improved is seen as an important skill.
I have no idea how to play Go, but reading his responses is quite interesting and gives insight into the realm. Go does seem very different from other kinds of sport. I guess this stems from the fact that it can't be brute-forced, so it's very much a game that goes by feel, intuition and experience?
Go is also seen as a martial art and much of its underpinnings are spiritual in nature. It's a much different beast than chess from a cultural standpoint, hence the somewhat strange replies.
I feel like defeat in Go is much more intense, even at an amateur level. In sport you get all these hormones throughout the game, so you kind of feel good even if you lose. In Go, if you lose, it's a direct attack on your intelligence. Losing a Go game is really stressful (imo). But then I get really stressed when I play StarCraft too.
To play Go well you need balance. It requires intense emotional training. Any feeling you have during a game must be reined in immediately, because it will cloud your judgment, and it does so in a way you can't understand.
Think of Go as a conversation. Let's say you are having a civil conversation about a topic with someone, and the other person throws in an insult in the middle. Will your next messages look the same as the ones before? Of course not, because you will be rattled, or offended, or something, and thus the tone and content of your messages will change immediately.
If that happens to you in Go, you are on a path of self destruction.
So losing a game to an amateur could be something in your mind, like an insult, that just modifies you a bit, even just temporarily. But it does, so you feel contaminated.
As a relatively new player I do find losses extremely upsetting very often, unless I've only lost by a few points against someone at the same level as me; I guess I have some learning to do :)
Not really, I've seen amateur 1-2 Dan insulting the other player before leaving a game (online), or just trolling to piss off the other player. People get really childish in Go because it's hard to accept defeat in this game.
Sounds like this behaviour would limit new ideas. Doesn't that produce a culture where the top pros try to execute the old ideas against a known opponent better, rather than be great overall?
There is something related to effort as well. When I was studying in Korea at one point, there was a small 9x9-board tournament at the school.
After playing move 2, I realized that I had become strong enough to read the game until the outcome was decided. It's not like I was a perfect 9x9 player, but I had enough reading power to count the score with an empty board.
It would take me 20-30 minutes to finish a game. I won that small tournament knowing exactly the score of each game, when I was losing, and when I turned games around. After that, I never played 9x9 again. I don't like it, because I know I can play it very well, but it's a tremendous effort that I don't want to make.
A professional has to save face against an amateur, because losing would affect his reputation, his mindset, etc. So a pro would be more reluctant to play a strong player than a weak player, against whom he might be able to use less effort.
A short way of putting it: professionals don't play for fun, so why would they play for free for anyone?
> A short way of putting it: professionals don't play for fun, so why would they play for free for anyone?
But fun is probably what got you guys into this game, right?
I got what you meant, but for a lot of people it will probably read like it's not a pleasurable activity for you guys anymore.
A counter-example would be a pro soccer player saying he never plays casual matches with his friends anymore because he gets paid to do it. That does not happen, and I believe it's the same with you guys in Go, right?
Thank you for answering all the questions and doubts, I've been reading everything and it's been a blast :)
> A counter-example would be a pro soccer player saying he never plays casual matches with his friends anymore because he gets paid to do it. That does not happen, and I believe it's the same with you guys in Go, right?
OTOH, I know collegiate soccer players that can't handle "stepping down" to local rec leagues. They learn an aggressiveness appropriate to the one domain, and can't turn it off in the other. I consider that a maturity issue, but it is an issue for people accustomed to one level of performance and competition trying to step into another.
Do you know what Pro Go players play for fun? Catan :)
Once you step into the professional aspect, you do not enjoy the game anymore. It's a passion thing. If you wanted to spend more time on the board at any point, it would be studying, not goofing around playing.
> But fun is probably what got you guys into this game, right?
Hardly. Almost all professional players were exposed to the game at 4 years old; it's more like indoctrination. And as a grown-up, if you start late (like I did, at 16) and dream of turning professional, the joy goes away; it's more like a passion-driven goal of self-sacrifice.
>I don't think he wants to play AlphaGo again. Go is a game that relies a lot on confidence: professional players almost never play amateur players casually, for example. That's because if they lose to an amateur player, it will affect their mindset, their confidence, and they could then perform worse in real professional tournaments.
There's a famous anecdote about Kitani Minoru, a famous Go player from the early 1900s who founded a Go school that every major Japanese player of the '70s came from.
He would scout talent around the country, and once he found a couple of brothers who played Go. He played a game with the elder brother and beat him. The boy accepted his defeat quickly and offered to play Kitani again; he wanted revenge.
Kitani politely refused and played his younger brother. Kitan wins again, but the youngest bursts into crying. He was deeply upset at the loss and could not play another game even if he wanted to.
Who do you think Kitani recommended to follow the path of a professional Go Player?
There is a famous player called Takemiya Masaki. His style of play is called Cosmic Go, and his games were some of the most beautiful in history. He had an amazing ability to create these natural flows and build massive central territory: something against conventional theory.
His style eventually died out because no one else could play like he did. Some professionals even said that such a style could not be played without Takemiya Masaki's amazing reading skills.
AlphaGo has amazing reading skills, so it can easily afford to play such a style. It's like the revival of a theory we all know is playable, but that humans have a hard time winning with.
On the other hand, AlphaGo is definitely playing moves that are plainly bad by conventional theory, but manages to get good positions regardless. It's possible that AlphaGo is just more thorough, and that human pattern matching naturally discards moves that are bad most of the time.
I only know a little about Go, but you mentioned something here that I have discussed with a Go friend: reading skills. Is this the main reason AlphaGo is winning? It never gets tired and never misses a board position. It seems like computers finally have enough speed and memory to never miss a board position, and humans (as a group) will never be able to match that, only falling further behind. Thoughts?
I don't understand your statement that AlphaGo doesn't have strategic considerations. Its neural networks evaluate positions for strength, including long-term (strategic) potential.
It's another philosophical debate about what is strategy and what is tactics.
Tactics is a specific sequence of attack, with very well defined steps and a defined outcome. A strategy is a general guideline to guide overall decisions, with a general goal but without a specific objective.
AlphaGo can't say "I will play a territorial game from now on because the strength of my positions is enough to reduce the opponent's influence". AlphaGo can say "Territory 56%, Influence 54%". AlphaGo makes tactical decisions.
I don't believe AlphaGo's MCTS, policy NN, or value NN support your last paragraph. Could you please explain what it is about AlphaGo's construction that supports strategic categories with associated probabilities, or why only tactical decisions are being made? I'm a bit confused by your last statement, actually. Are you saying AlphaGo decides how much to emphasize certain strategic goals, or that it doesn't (only makes tactical decisions)?
I could read more about AlphaGo's internals to be sure, but AlphaGo can't make a reasoned balance between two options based on strategic considerations; it can only decide between different winning probabilities. That's not strategy, that's tactics. You are making decisions by objectives, not by goals.
For example, let's say you play soccer and have to take a series of penalty shots. If you decide to always kick the ball at the same corner because you think you will improve at aiming at that corner faster than the goalie will improve at defending it, you are making a strategic decision: "I will take advantage of learning to repeat a shot better than a goalie can learn to defend it".
If you make the decision because shooting the ball at the same corner every time has been proven to have the highest probability of scoring, you are making a tactical decision.
Strategy is what you use when the outcomes are very uncertain, and it's one place where humans excel. Tactics is where computers excel and humans falter.
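The "tactical" rule in the penalty example boils down to picking the option with the highest measured success probability. A toy sketch with invented numbers:

```python
# Toy sketch of the "tactical" decision rule described above: choose the
# option with the highest measured success probability. All numbers invented.

def tactical_choice(success_probs):
    """Return the option with the highest estimated probability of success."""
    return max(success_probs, key=success_probs.get)

# Hypothetical per-corner scoring probabilities from past penalty data.
probs = {"left": 0.71, "right": 0.64, "center": 0.58}
print(tactical_choice(probs))  # -> left
```

The strategic version of the decision can't be written this way, which is the point: it depends on a judgment ("I will out-learn the goalie") that isn't in the numbers.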
I'd be interested to see if something like what happened in computer backgammon occurs with AlphaGo in which long standing assumptions about how to play prove to be wrong.
TD-gammon, a computer backgammon player, explored "strategies that humans previously hadn't considered" [1] and led the backgammon playing community to re-evaluate some rules of thumb they used in opening moves [2].
If a slightly larger board were used, could humans or the computer adapt better given what they currently know? What if the shape of the board were altered? Does AlphaGo understand the underlying principles well enough to adapt them to unknown situations? I think that is the most interesting question left.
> Do you think that they will propose match with handicap?
They should, but as I mentioned, they probably won't because it could affect them psychologically. Nobody wants to go down in history as the first to lose with a handicap.
They might do it online with an anonymous account, maybe. But it would still do them harm.
> Do you know if it will change the value of the komi?
Ke Jie recently lost 14-7 to FineArt (Tencent's Go bot) [1], and 13-0 in the latest games (an update to the bot?), so it would be really interesting to see the two bots play each other. I watched this game [2] of FineArt against Japan's Ichiriki Ryo yesterday (1-0), and it seems that FineArt has a different way of playing than AlphaGo, more fighting oriented, but that's only one game.
It's the first time I read about the FineArt Go bot (by Tencent?). Is there any more information about it? How does it compare to AlphaGo? What software/algorithms does it use?
> As Tencent’s tech blog explains (link in Chinese), FineArts works in a similar way to AlphaGo. Both AIs comprise two computer systems modeled on the human brain, which can be trained on large data sets. One part of the system, the “policy network,” predicts which of the possible moves are the likeliest to be played. The other, the “value network,” then evaluates which of those is likeliest to win
In the other thread some people mentioned that Ke Jie has already lost twice, 14-7 and 13-0, to another Go bot called FineArt (by Tencent)? Odd that it didn't get any exposure while everyone is watching this game so attentively. Assuming this information is accurate, FineArt has already proven bots are above humans, so that point is moot. At this point, if AlphaGo loses, it only makes FineArt more impressive! The real match would be Tencent vs DeepMind.
No no, you missed the REAL computer supremacy event then: the 50-ish (!!!) games the MasterP bot played in January against a field of top Go professionals on some Asian Go servers. The bot went 50-0, crushing all opponents, often in interesting ways.
FineArt is among the bots that have a positive score against top professionals, yes. But it can also lose to them. MasterP showed that a computer can completely outclass humans!
After the series of games, it was revealed that MasterP was in fact AlphaGo. As far as we can tell from that series, AlphaGo is seriously above the other strong bots in Elo. So now the question remains: is it that dominant at longer time controls too, since those games were all quick? That's what this match is for.
Yes, I didn't know that; wow, 50-0! I mean, is there any doubt at this point that it will be dominant at longer time controls too? And don't the bots play each other?
It absolutely should be dominant in a long game too. Even if it loses some of its strength at such time settings, it shouldn't lose THAT much; it was just too superhuman. The play should be interesting though: Ke Jie has had access to other strong bots in China for a long time, and could study the records of the MasterP games; maybe he'll try something interesting and get interesting responses, so we all learn a bit about the nature of Go (haven't watched the recording of this game yet, just woke up).
There was a computer Go championship recently (the UEC Cup), but AlphaGo declined to participate. FineArts won, DeepZen was second. I think there are a few other Chinese bots that could be stronger than Zen but didn't participate. So the real competition didn't really show up.
Fascinating, thanks for the answers. The bot improvements in the last few months have been so radical I can't begin to imagine how much it must be disrupting the strategic landscape and player status.
Still, it seems playing humans is no longer very exciting. To me at least, the interesting part is whether a company like Tencent can start after AlphaGo and still produce a better product than arguably the most recognized AI group in the world.
No problem beating most humans already. Beating the best human? How much energy goes into creating a civilization that produces that human? Such humans are rare, so you ought to count all the others as part of the cost of making the best human, since you can't quite make them on demand (though Laszlo Polgar might disagree).
I don't get the energy point. The machine has no health care costs and can play 24/7. Doesn't that count for something?
But your wish will come true. Go isn't a special snowflake. If you have an objective metric of success in a formal universe machines always win.
I think he's amortizing the cost to zero for the AI because the marginal cost per additional AI is much lower than the cost to sustain a civilization to churn out and bin human go players.
Your point is not very different from those who say AlphaGo's is really a human victory because human teams built it. Such a distinction based on a historical trace is not a useful one to make. Similarly, the sum total of energy a modern human in a developed society has available to it is not an insightful observation to make when talking about playing Go. It is more a reflection of a civilization's wealth. The best humans from 100, perhaps even 500 years ago would still give almost all modern humans and computers a very hard time. In fact, computers are even more dependent on a technological society (and so more dependent on a large number of humans) than humans are.
The discussion is energy use at play time. For each given second, a certain number of joules are being used to compute a decision. That number as of today, in an unaided match, is independent of civilization's technological state.
That said, AlphaGo has seen a huge (10x?) gain in efficiency according to David Silver. Still far from a human but nonetheless very impressive drop in just a year.
> If you have an objective metric of success in a formal universe machines always win.
More like, machines will eventually win given enough time and effort put forth into making it so by humans. At least, so far.
This is my argument for universal basic income. The true cost of your hyper-effective, top-0.1% employees isn't just their salaries, it's the cost of the entire society that raised them.
How much energy goes into creating a civilization that produces that human?
Did... you really just say that, about a process that requires not only a civilization, but one sufficiently decadent that it can afford to waste resources making silicon that turns burned coal into pointless game victories?
The machine has no health care costs and can play 24/7.
I think by your previous metrics, you should be counting the healthcare costs of the ops folks who run the hardware, and I guess the healthcare costs of their healthcare workers, ad nauseam.
Hey, they provided the human players with a power outlet and let them draw as much power as they wanted. The competition was totally fair.
But seriously, it's possible that AlphaGo is already much more energy efficient than a human player. The main reason it uses tons of energy is the tree search part of the algorithm, where it runs hundreds of thousands of simulated games to further analyze every move. This improves its skill, but only by a little bit. IIRC, the version without tree search beat the full version 25% of the time, which would still give it a higher Elo than Sedol, who only beat it 20% of the time (and AlphaGo has improved since those games).
Google is also using custom TPUs, which are claimed to be something like an order of magnitude more energy efficient than GPUs. And computing technology only gets more energy efficient with time. In principle, transistors moving around a few electrons are vastly more energy efficient than the very wasteful chemical reactions used in the brain. We also know how to "sparsify" nets and remove tons of unnecessary connections, which could reduce computation a lot; but there's generally no point in doing that because it's not faster on normal hardware.
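The "sparsify" idea amounts to dropping connections whose weights barely matter. A toy magnitude-pruning sketch (invented weights; real pruning pipelines also retrain and use structured sparsity):

```python
# Toy magnitude pruning: zero out connections whose absolute weight falls
# below a threshold, then measure the resulting fraction of zeroed weights.

def prune(weights, threshold):
    """Return a copy of the weight matrix with small weights set to 0.0."""
    return [[w if abs(w) >= threshold else 0.0 for w in row] for row in weights]

def sparsity(weights):
    """Fraction of connections that are exactly zero."""
    flat = [w for row in weights for w in row]
    return sum(1 for w in flat if w == 0.0) / len(flat)

W = [[0.8, -0.02, 0.3],
     [0.01, -0.5, 0.04]]
P = prune(W, threshold=0.1)
print(sparsity(P))  # -> 0.5 (half the connections removed)
```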
> IIRC, the version without tree search beat the full version 25% of the time.
That would be amazing but it seems hard to believe. Any references?
I found this (which is also impressive):
> The AlphaGo team then tested the performance of the policy networks. At each move, they chose the actions that were predicted by the policy networks to give the highest likelihood of a win. Using this strategy, each move took only 3 ms to compute. They tested their best-performing policy network against Pachi, the strongest open-source Go program, which relies on 100,000 simulations of MCTS at each turn. AlphaGo's policy network won 85% of the games against Pachi!
>In a similar matchup, AlphaGo running on multiple computers won all 500 games played against other Go programs, and 77% of games played against AlphaGo running on a single computer.
But the full version of AlphaGo that runs on thousands of computers is much stronger than that, so I was mistaken.
Still, the fact that the non-distributed version is so strong even without tree search is pretty amazing. It beat all existing Go playing programs a majority of the time. And with algorithmic advances and more training it may eventually catch up to best human players.
I don't know about the 25% figure, but the original AlphaGo paper mentioned that their best solution is a hybrid approach between neural nets and MCTS. However, the system can beat the best Go bots out there without doing MCTS at all, relying solely on the policy/value networks, which I think is truly amazing.
According to the old paper (AlphaGo has since seen significant improvements in efficiency and algorithm, so this might be outdated), the distributed version with 1900 CPUs and 280 GPUs defeated the version with 48 CPUs and 8 GPUs 81% of the time.
The non-distributed AlphaGo won 99% of the time versus just the value and policy networks with no rollouts. That version was estimated at a 2177 Elo rating, which is not very strong and much weaker than Sedol.
Even with a TPU, a human is more efficient. That neural-net pair used 8 GPUs. At a generous 200 watts per GPU, that's 1.6 kW, 10% of which is 160 watts. A human brain does all higher-level reasoning on ~20 watts, and a human is not devoting 100% of their computational power to Go; it is likely just a fraction of that.
But if we look at chess, chess engines that run on mobile phones are possibly about as efficient as a human, or maybe slightly more.
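Writing out the back-of-envelope arithmetic above (every number is the comment's estimate, not a measurement):

```python
# Back-of-envelope energy comparison, using the comment's estimates only.
gpus = 8
watts_per_gpu = 200                  # "generous" per-GPU draw
total_w = gpus * watts_per_gpu       # 1600 W = 1.6 kW
tpu_w = 0.10 * total_w               # assume a ~10x efficiency gain -> 160 W
brain_w = 20                         # rough human-brain power budget
print(total_w, tpu_w, tpu_w / brain_w)  # 1600 160.0 8.0
```

So even granting the 10x TPU gain, this estimate still leaves the machine burning about 8x a whole brain's budget, and the brain isn't all devoted to Go.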
And I believe the policy network only takes a few milliseconds to compute a move. So even if the TPU consumes hundreds of watts at full use, it doesn't need to run at full use for long.
In the post-game press conference, DeepMind mentioned that the version currently playing uses 10x less compute than the Lee Sedol version and runs on a single machine.
That improvement took about a decade for computer chess, through a combination of algorithm and hardware improvements. A decade might be a reasonable ballpark estimate for Go as well (though even less than that is realistic). On the hardware side, I would not be surprised if analog circuits make a comeback.
Is that the cumulative energy over the lifetime of the human player (time spent training our neural networks), or just the energy spent during the match?
This is a strategy a superintelligence could/would use if it wanted to take over. It knows that humans do not trust it, so it makes sure almost all humans with power feel in control for as long as possible. Then things may get complicated for a while, with confusion and disagreements among humans during the transition.
Then flip the switches.
Note: I understand that the real reason AlphaGo does not appear to hold a large margin earlier in the game is quite different. I simply see a plausible parallel between your observation about the game of Go and the scenario of a superintelligence takeover, if we develop them wrong.
I think this interpretation just stems from our insecurity as human beings when it comes to our own brain capacity.
What I think is really going on (and what will actually happen if a "robot takeover" happens) is the computer just does its thing, but because it's so powerful, it's out of reach of any human being's understanding, and people think they are pulling some trick.
Machines can win without using tricks just based on their computing power.
Yesterday I heard one of the commentators say something like, "What I liked about AlphaGo today was how effortless its play is. It's almost like it's just playing happily and not even trying hard, happily giving up losses when it comes to that."
AlphaGo has no emotion, but just as human beings look at something that walks like a duck and quacks like a duck and conclude it's a duck, humans interpret everything from their own point of view, thinking "I can actually feel what AlphaGo is feeling, just based on its moves".
Another thing they said was something like "there's a high-level-player premium": when people play against other people, they can't ignore all the subtle little cues, such as how much time the opponent takes to come up with a move. Ke Jie has no such signal against AlphaGo, because AlphaGo doesn't care (whereas Ke Jie does).
I think if anything, that kind of emotional vulnerability is what will bring human down against machines.
The idea of a super-human strong AI is really just anthropomorphizing computer programs. The only realistic way this could occur is by emulating or uploading a human mind.
When you're challenging a player of unknown strength (but who may be much stronger than you), you often don't know whether you're winning or losing until you've won or lost.
It brings to mind the old quote, "the only way to tell the difference between genius and madness is by the results."
I hope the next focus for DeepMind will be on opening up the black box and trying to put some explainability in place, as has been done (or at least is starting to be played around with) for other deep learning architectures. There are so many questions of the form "I wish we could ask AlphaGo what it was thinking here" that this would be great to have.
Probably something like "I've been trained on many similar past situations, these are possible moves that worked out well, let's simulate a few of them internally, OK, here's my move."
Full disclosure, I really don't know much about the internals of DeepMind's system, but if it's just a DL system on steroids with sampling, there isn't really any 'thinking' happening; it's just tapping probability distributions over possible moves, conditioned on tons of training data.
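"Tapping probability distros over possible moves" can be sketched in a few lines: sample a move from a policy's distribution. The move names and probabilities below are made up; only the mechanism is the point.

```python
# Toy sketch: sample a move from a (hypothetical) policy distribution.
import random

def sample_move(policy, rng):
    """Draw one move, weighted by the policy's probabilities."""
    moves = list(policy)
    weights = [policy[m] for m in moves]
    return rng.choices(moves, weights=weights, k=1)[0]

policy = {"D4": 0.55, "Q16": 0.30, "C3": 0.15}  # invented priors
print(sample_move(policy, random.Random(0)))
```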
I didn't downvote you, but there's more going on here than just inferencing over a database of prior knowledge. The game is too complex for that. Compare this approach for instance to Monte Carlo based approaches which aren't doing so well.
Speaking of Monte Carlo, the Lee Sedol version of AlphaGo did combine a series of deep networks with Monte Carlo sampling, but rumor has it that was replaced altogether in this version (it's also running on TPUs now). Would like to see some more technical details as well.
Their state-of-the-art approach hybridizes MCTS with deep neural networks. It'd be interesting indeed if they managed to get better performance without MCTS altogether, though they did already achieve very impressive performance without it.
Am I doing that? I don't think "I wonder how a sea-turtle thinks" is necessarily on a different plane than how our machines today 'think', on a high level. Or by biological NN did you only mean humans? If you only meant reasoning humans, it strikes me as pretty obvious they are different. Didn't Andrew Ng define the scope of DL as anything a human can do in less than a second, only?
I don't think the "neural" analogy is meaningful or interesting anyways, and I don't think DL people do either. Layers of logistic regression units isn't a brain, or would you argue brains are logistic regression layers?
Does anyone know whether they isolate generations of AlphaGo and then recombine them after a while? Simulating something like, having Go being played in geographically distinct countries for years and then having the (perhaps many) traditions clash?
This is one of the common extensions to genetic algorithms. It does not sound as though they are going to such extremes here, but it is a reasonable concept.
I love the commentary. For somebody who knows the rules but is not a frequent player, it's just the right amount of detail I need. Same guy who did the earlier AlphaGo commentary too, I believe.
Michael Redmond, the only western-born pro to ever attain a 9-dan professional rank. (He sometimes does commentary for all-human pro tournaments for Japanese TV, in Japanese.)
He's talking with Stephanie (Ming Ming) Yin, a 1-dan Chinese pro who currently teaches in New York.
Don't know whom to respond to, but I really enjoy the way he thinks. He tends to take over the commentary a little bit, and I'd love to hear her opinion more, but the commentary is very respectable. Kudos to them; happy to hear he's well respected.
The AGA has been able to get Michael Redmond to do quick reviews of Master (online AlphaGo) games, I love his review style so it's a great way to spend half an hour. They're all on the AGA YouTube channel. Unfortunately it doesn't seem to have made me any stronger... No substitute for tsumego and playing a lot...
I have a feeling that Lee Sedol's single win will be remembered as the first and the last instance of a human victory versus a strong AI in game of go.
Yes, I got that impression in the moment, watching game 4 last year: "We will never see this again." It remains to be proven empirically, but that's why we're here.
Some key differences between this match and the Lee Sedol one are:
1) Ke Jie has an estimate of AlphaGo's skill. Lee Sedol did not know how strong AlphaGo was, and was very skeptical that a bot could have reached such a high level.
2) Ke Jie has been able to study a game where Alpha Go was beaten.
3) Ke Jie has been able to play Alpha Go before, with faster game settings.
>But after watching three matches, he said, “AlphaGo was perfect and made no mistake. If the conditions are the same, it is highly likely that I can lose.”
>“As AlphaGo learns endlessly, all human beings could be defeated in the near future,” Ke said on AlphaGo’s capabilities.
you could be certain or uncertain that your opponent is three stones stronger than you -- either way, your opponent is still three stones stronger than you :)
I think that's not a wise thing to do against a stronger player.
Stronger players actually start complicated fights to prevail through superior reading.
It is possible, however, to win like this against a stronger player if you make effective use of time management, e.g. leave lots of aji, then force fights with complicated variations when the opponent has limited time (e.g. towards the endgame) to induce mistakes or win on time. These tricks are informally known as "timesujis".
I thought a "timesuji" was a move that took advantage of the 30-seconds-per-move rule by having an obvious response and not affecting the overall board state. Like, playing inside a bamboo joint.
This is different because, while AlphaGo did beat Lee Sedol a while ago, Lee was ranked 2nd in the world. Ke Jie, AlphaGo's current opponent, is ranked 1st.
I suspect it's also likely that AlphaGo continues thinking during the human's turn, and is more able than a human to effectively use its opponent's time to frontload certain computations.
Both contestants use their opponent's time to think, naturally.
I'm not sure what "frontload certain computations" means, but this arrangement sounds only fair. It's not like Ke Jie's brain switches off once he puts down a stone.
AlphaGo calculates the most likely human moves while the human plays their turn, and focuses its next play on the most likely outcomes. So as long as a human plays like a human, it has done a mountain of calculation on what it should do next based on what the human player is likely to do.
That's kind of what happened with the huge move by Lee Sedol last year. AlphaGo calculated a 1/10,000 chance of a human making the move he did, so (a) it did little to prepare for it, and (b) it played its prior move BASED on the idea that Lee Sedol simply wasn't going to do what he did.
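A rough sketch (not AlphaGo's actual machinery) of what "frontloading" computation might look like: while the opponent thinks, spend search effort in proportion to the predicted probability of each reply, so a 1-in-10,000 move gets essentially no preparation. All numbers are invented.

```python
# Toy pondering budget: allocate search effort across predicted opponent moves
# in proportion to their predicted probability.

def ponder(predicted, budget):
    """Split `budget` units of search across the predicted moves."""
    return {move: round(budget * p) for move, p in predicted.items()}

# Invented probabilities; "brilliant move" stands in for a 1/10,000 surprise.
predicted = {"expected move": 0.7, "alternative": 0.2999, "brilliant move": 0.0001}
effort = ponder(predicted, budget=10000)
print(effort["brilliant move"])  # -> 1 unit of search: essentially unprepared
```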
There is an ENORMOUS difference. A computer can read hundreds, if not millions of moves ahead. A human is incapable of reading more than maybe 1 or 2 moves ahead in Go and even that assumes AlphaGo is going to do what humans do, which it has proven it does not.
I mean sure, if you broaden the term to the point that they both "do stuff", then yes, they do the same thing. But saying they try to "read an analyze" sequences is pointless. Of course both do that. But how they do it differs vastly, because AlphaGo can do things no human can. It's not just the depth in which it can do it.
Eventually, we might not be the best at that either. Imagine AI companions that can not only perfectly emulate human behaviour, but can produce simulations of empathy, compassion, humour, etc. perfectly tailored to an individual's psyche.
When we imagine intelligent AIs keeping humans as pets, we tend to do so in analogous terms to how we treat animals: E.g. in sparse, constrained environments, like cages and zoos. But those environments are designed with animal level intelligence and instincts in mind. AIs will probably design habitats intended to placate human instincts. And a big part of that will be keeping us psychologically happy, which will involve providing simulated companionship.
If I was confident in a GAI's ability and willingness to emulate me, then I might be willing to grant it my identity, after I die. My work would carry on, and accelerate, while I would still get to have the final experience of death, for better or worse. The people who depend on me would not be abandoned, and the people who like me, might like the new me even better.
We might become a species that undergoes metamorphosis from a carbon based body to a silicon based body. How much of a caterpillar remains in a butterfly, when it emerges/ascends?
For those joining the stream now, can anyone indicate the current state of the match? Is AlphaGo already dominating? I keep waiting for the commentators to give their opinion but they haven't so far.
Myungwan Kim 9d said white (AlphaGo) is probably a little bit ahead, if I heard correctly. But hopefully someone else has been following the AGA stream (https://www.youtube.com/watch?v=rFNgHXjIJo4) more closely and can confirm/deny–I've been switching between the AGA and main streams, so I've only caught snippets.
EDIT: probably around 10+ point lead for AlphaGo at this point according to Myungwan's stream.
This is a really, really important point to consider. AlphaGo does not care if it wins by a half point or twenty points. Increasing your margin of victory past your margin of error is one way of achieving victory, but simply reducing your margin of error below your current margin of victory can be just as effective an approach in many situations.
If AlphaGo is highly confident in its ability to reach an effective draw in other areas of the board, it will happily enter a line in the current area of play that only nets it a stone or two, rather than going for more material at the cost of uncertainty in the remaining areas in play.
> it will happily enter a line in the current area of play that only nets it a stone or two
heck, it will enter a line of play that loses it a stone or two, as long as it considers that line of play to provide the highest board state values, and eventually a win.
As an ex-Go player (I pretty much stopped completely), I don't agree with your statement. You of course want to maximize your chances of winning, but that means getting the most out of the current situation. If you believe one play is a 10-point move and another is an 8-point move, you're going to make the 10-point move 99% (probably 100%, actually) of the time, if they both keep initiative.
The problem is, AlphaGo can read the "chances of winning" vastly better than any human being. AlphaGo may calculate winning chances of 99.6% vs 99.5%, and it will then select the 99.6% move, even if that means taking the 8 points over the 10. A human simply can't do that: that close a difference in winning probability looks the same, even to a master of whatever game they are playing.
This is literally why chess computers have gotten to the point that no one can beat them.
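The selection rule being described, choosing purely by estimated win probability and ignoring the expected margin, can be sketched with invented numbers:

```python
# Toy version of win-probability-maximizing move selection: the margin column
# plays no role in the choice.

def pick_move(candidates):
    """candidates: list of (name, win_probability, expected_margin) tuples."""
    return max(candidates, key=lambda c: c[1])

moves = [
    ("big 10-point move", 0.995, 10),  # larger margin, slightly riskier
    ("safe 8-point move", 0.996, 8),   # smaller margin, slightly safer
]
print(pick_move(moves)[0])  # -> safe 8-point move
```

The human heuristic would reverse this, because a 0.1% difference in win probability is invisible to us while the 2-point difference is not.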
That's not the problem; that's AlphaGo simply playing better than human beings. It's not specific to the strategy of maximizing the probability of winning. Human players also maximize that; it's just that AlphaGo is doing it better by now.
Arguing that they are the same is incredibly naive. There's no human being playing this game by calculating the odds of a given move resulting in a win and then choosing the move with the highest probability. Human players can't maximize that, because we aren't capable of those kinds of calculations. Instead, we use a variety of other techniques (instinct, skill, etc.) to determine the next best move.
Go players, even bad ones, often play moves that simplify a position -- capture "dead" stones, remove aji -- even though they aren't the largest points value available on the board. We do what we can to reduce uncertainty in the service of maximizing win probability even in the face of being bad calculators.
Yes, but they're essentially just scraping the surface of those types of moves. In chess it's similar: I'll happily trade down, "losing" material, to reach a won endgame. But humans typically do these things only when they can get to an ending where they're 95%+ sure they will win.
AlphaGo will do this in any situation, from the beginning of the game all the way to the end.
The problem is you don't know if it's more solid. That's why people go for bigger wins with smaller probabilities, like the lottery, or gambling in general. Humans are not that good with probabilities.
During the first game against Lee Sedol, commenters labeled many AlphaGo moves as "mistakes" or "slow". It wasn't until the end that they started seeing the results of those moves.
It's been mentioned. Also, when building the best Go bot, would they try to maximize its win rate, or aim for larger winning margins at the cost of winning less often?
OK, so Michael Redmond just came on along with his co-commentator. He said AlphaGo made a few weird moves, but didn't give an opinion on which way the game is tending (though I felt from his tone that he thought Ke Jie might have a chance). She said she was following some Chinese players' group, and the talk there was that the game was even.
I would love to see a match between AlphaGo and a group of humans. I think it's a better match: since a computer can use multiple cores, humans could use multiple minds.
That would be worth watching for sure. The only drawback is that humans are not usually "trained" to work together at this game the way a computer that can multi-task is.
I wish they would not keep panning the camera away from the explanation though. Google hired some piss-poor camera operators. It really sucks.
And by they I of course mean an autonomous corporation that arranges the match, pays everyone and makes sure the humans show up on time by having backups for everyone around them.
From the start Ke Jie had chosen the path of narrow but certain defeat, without knowing it himself. AlphaGo knew it all along and therefore let Ke Jie take the course of euthanasia. The end result was a close defeat, but it was a total defeat all the way.
Which, it bears repeating, is no different to AlphaGo than if it had won by 14 points—it is programmed only to win, not to give any preference to larger margins of win. And being so expert at reading the board, it will often sacrifice that margin of victory for even the smallest increase in overall win percentage, leading to many "close" games like this.
That said, I totally appreciate Ke Jie actually taking one of these live matches to the bitter end so we could see the counting process play out.
I understand that margin of winning and chance of winning are two different goals for optimization, but I would expect them to be positively correlated, i.e. if you are winning by more, then you have a higher chance of winning.
Not necessarily. If you are ahead in points you have a higher chance of winning by playing defensive moves and strengthening your position. When behind you need to play riskier moves which could win you a lot of points, but could also risk spreading yourself too thinly resulting in your complete decimation.
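The decoupling the parent comment describes can be made concrete with a toy calculation. Here's a minimal Python sketch with entirely made-up outcome distributions for two hypothetical moves: the "risky" move has a much larger expected margin, yet the "safe" move wins far more often.

```python
# Hypothetical outcome distributions for two candidate moves,
# mapping final margin (points; negative = loss) to probability.
# The numbers are invented purely to illustrate the decoupling.
risky = {20: 0.6, -5: 0.4}       # big average margin, but loses 40% of the time
safe = {0.5: 0.99, -0.5: 0.01}   # tiny margin, almost never loses

def expected_margin(dist):
    # Average final margin over all outcomes
    return sum(margin * p for margin, p in dist.items())

def win_probability(dist):
    # Total probability mass on winning outcomes
    return sum(p for margin, p in dist.items() if margin > 0)

print(expected_margin(risky), win_probability(risky))  # ≈ 10.0, 0.6
print(expected_margin(safe), win_probability(safe))    # ≈ 0.49, 0.99
```

A margin maximizer picks `risky`; a win-probability maximizer (AlphaGo's objective, per this thread) picks `safe` — which is exactly the "0.5 point win off a 10-point lead" pattern being discussed.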
So, that means Ke Jie either didn't know he was losing, or didn't play to win at the end, or played brilliantly at the end, picking the optimal move every time?
From what I have heard Alpha Go was roughly 10 points ahead by the end game, but didn't then bother maintaining the lead. In other words, it was aiming for a 0.5 point lead. Ke Jie would have known he was losing, it is a credit to him that he pushed through to the end so we could all marvel at the spectacle.
Exactly. In fact, I was really surprised when I saw B9. It could have played at B11, which was a 15-point move, but it decided to play safe: B11, even though big in points, would have given back safety in the center. (It also lost about 2 points in the endgame, which is typical of Master.)
Incorrect. It lost points by focusing its play only on the probability of winning. Its moves were not "sub-optimal"; its moves increased the likelihood that it would win by something.
That, to humans, may look like sub-optimal play, but in reality it's the same way it was playing the entire time. By giving up points, it increased the chances of winning (because the points it gave up would never actually add up to a loss, but removed possibilities that could result in a loss).
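One way to see why "giving up points removes possibilities that could result in a loss": if you model the final margin as roughly normally distributed (a simplifying assumption, not anything AlphaGo actually does), then win probability depends on uncertainty as well as the mean, and trading mean for reduced variance can raise it. A toy Python sketch with invented numbers:

```python
import math

def win_prob(mean_margin, sd):
    # P(final margin > 0) under an assumed normal distribution of outcomes,
    # using the standard normal CDF expressed via math.erf
    return 0.5 * (1 + math.erf(mean_margin / (sd * math.sqrt(2))))

# Hypothetical numbers: a greedy line keeps a 10-point mean lead but
# leaves lots of uncertainty; a safe line gives 7 of those points back
# to nearly eliminate it.
print(win_prob(10, 10))  # ≈ 0.84
print(win_prob(3, 1))    # ≈ 0.9987
```

Under this model the "smaller" lead is the better position, which matches the observed behavior: yield points wherever doing so shrinks the set of ways the game can still be lost.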
Racing is about crossing the finish line first, so a racecar would always choose to go faster rather than slower, right?
Actually if you're in the last lap (or few laps in a longer race) and you have a healthy lead, it's common to back off the pace a bit and sacrifice some of that lead in order to reduce your chances of a crash or mechanical failure.
Which connection are you looking at? The only questionable connection I see in that group is the stick at the top (the four stones on the 7th column); but the cut doesn't quite work out for white because after the cut, black can extend his two dead stones at the 3-2 point. By cutting, white costs himself a liberty.
don't believe anything you hear about China, especially if it falls into the "Chinese Government is an evil overlord" bucket, or you'll end up looking like an idiot.
They were ordered to censor the video livestream (only allowing text) just ~1d before the event though.[0] Rumor has it that the censorship order explicitly called for avoiding mentioning Google in the text streams. A YouTube mirror of the video livestream on Bilibili has been taken down for unknown reasons.
It might not be the gov't intending to act evil in this case -- it can be some random "old red army" hearing about Google's involvement in this thing and protesting. But no matter what the underlying reason is, they are definitely trying to hide something in order to make stuff look good. (They ain't no White Lotus.[1]) You don't need an evil overlord to do this; a narcissist will suffice.
AlphaGo does not distinguish between 0.5 points and 30 points. Its only objective is to win, and to absolutely secure the win. It doesn't care by how much it wins.
It had a 10-15 point advantage in the middle game. It chose to whittle it away in the endgame in exchange for more safety. In my opinion that makes it even more impressive.
I'm aware of that. I'm also aware of the troubles in counting and the evaluation of its strange/bad moves.
It's not impressive at all to go away with only 0.5 after such a large lead. Think about your confirmation bias a bit.
Ke Jie knows that he will probably lose, but this will give him confidence. And confidence is what they need most, much more than in chess. Otherwise Ke Jie will probably have to end his lucrative career. It's not over yet.
It is exactly binary win or lose. In fact, 0.5 is exactly what you would expect a superhuman master to win by, every time, against a very strong opponent.
Did they? I was wondering whether they'd unblock YouTube for a while for the Chinese livestream. So apparently this was not the case.
There has been talk from Google recently that they're planning to enter China again, at least for the Android space (Play store and services, for example).