Thanks For Ruining Another Game Forever, Computers

In 2006, after visiting the Computer History Museum's exhibit on Chess, I opined:


This is a companion discussion topic for the original entry at http://blog.codinghorror.com/thanks-for-ruining-another-game-forever-computers/

Whenever I think about things like this I always think: if chess is hard, where we know all the information we need (the position of all 32 pieces on the 64 squares and all their possible moves), think how hard it is to do weather/climate prediction, traffic modelling, missile defense, etc. In those systems, they can’t possibly know all the data they need. But, as @codinghorror pointed out, you can still come up with a solution to the problem that’s good enough, even if you can’t guarantee a perfect solution.


Well let me say that as far as Civilization goes, the AI still massively sucks.

“Today the best chess player alive is a ‘centaur’: Intagrand, a team of humans and several different chess programs.”

“The top-ranked human chess player today, Magnus Carlsen, trained with AIs and has been deemed the most computer-like of all human chess players. He also has the highest human grand master rating of all time.”

I find that surprising; I would think a massive array of cloud GPUs could look so far ahead in chess moves today that human intervention would be unnecessary.

Perhaps it’s just not necessary since improved algorithms let current chess programs play at beyond grandmaster level without needing massive look ahead trees.

http://computerchess.org.uk/ccrl/4040/

The current leader is

https://stockfishchess.org/

Benchmark results

I downloaded the Stockfish benchmark and, using the x64 modern engine and following the benchmark instructions, I got 10,050 kN/s. That seems about right for an i7-6700K.

As a point of reference, the highest scores on the list are for dual 18-core CPUs, 36 cores in total, at 43,050 kN/s.
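Out of curiosity, the per-core throughput implied by those two results is worth a quick look. A back-of-the-envelope sketch in Python, assuming the i7-6700K result used its 4 physical cores (the 36-core figure is from the list above):

```python
# Per-core throughput comparison, in kN/s (kilonodes per second).
# Assumption: the i7-6700K benchmark ran on its 4 physical cores.
i7_total, i7_cores = 10_050, 4
dual_total, dual_cores = 43_050, 36

i7_per_core = i7_total / i7_cores        # 2512.5 kN/s per core
dual_per_core = dual_total / dual_cores  # ~1195.8 kN/s per core

# The quad-core desktop chip actually does more work per core;
# the big machine wins purely on core count.
print(f"i7-6700K: {i7_per_core:.1f} kN/s per core")
print(f"dual 18-core: {dual_per_core:.1f} kN/s per core")
```

So the desktop chip’s higher clocks give it roughly double the per-core throughput; the server machine’s advantage is parallelism, not faster cores.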

Even though these engines no longer need crazy amounts of brute force to perform at beyond grandmaster level, it is amazing how we still can’t really catch up to Deep Blue in terms of raw brute force.

AlphaGo vs Lee was hardly a fair fight. Assuming (conservatively) 100W per CPU and 500W per GPU, AlphaGo consumes about 200kW of power, compared to the human brain at about 10-20W. And yet the human brain still won once!

And it’s also fair to assume that Lee’s brain was also looking after a whole lot of other processes and sensory inputs at the same time. AlphaGo had just one job, and it still lost once!
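For what it’s worth, the ~200 kW figure is easy to reconstruct. A rough sketch under the wattage assumptions above, using the 1,202 CPUs and 176 GPUs reported for the distributed version of AlphaGo (those counts are from DeepMind’s published description, not from this thread):

```python
# Back-of-the-envelope power estimate for distributed AlphaGo.
# CPU/GPU counts are the published figures for the distributed version;
# 100 W per CPU and 500 W per GPU are the conservative assumptions above.
cpus, gpus = 1_202, 176
watts_per_cpu, watts_per_gpu = 100, 500

alphago_watts = cpus * watts_per_cpu + gpus * watts_per_gpu
print(f"AlphaGo: ~{alphago_watts / 1000:.0f} kW")  # ~208 kW

brain_watts = 20  # upper end of the 10-20 W estimate for the human brain
print(f"Ratio: ~{alphago_watts / brain_watts:,.0f}x the brain's power draw")
```

Even taking the generous 20 W figure for the brain, that is a power budget gap of about four orders of magnitude.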

So that’s a pretty big win for evolution over our puny attempts to emulate its achievements.

I’m not saying it’s not impressive, but it is just one small step on what is still a very long road. Skynet’s not coming along any time soon - there isn’t enough energy generated on the entire planet to power it. This is why I am always astonished when people as intelligent as Stephen Hawking start talking as if we face imminent danger from our own AI creations.


A couple of gripes:

The “go is ruined” meme seems silly to me. Is swimming ruined because dolphins are faster? Is running ruined because cars are faster? Is arithmetic ruined because computers are faster? The “go is ruined” line of thought seems to come out of a vague anxiety about Terminator-style “computers enslave humans” stories. But I think it’s much more productive to think “great, now humans can spend more time doing things we enjoy that computers don’t do well yet.”

I don’t see chess AIs and go AIs as being categorically different. They are both based on game tree searching with sophisticated heuristics to avoid spending too much effort on pointless game trajectories. The best go AIs use Monte Carlo tree search (MCTS) and machine learning (to build better board evaluation functions). However, they are far more similar than either is to (to pick a random example) robot navigation AI.
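To make the MCTS idea concrete (selection by UCT, one-node expansion, random playouts, backpropagation), here is a minimal sketch for the toy game of Nim. Everything here, including the `NimState` class, is my own illustrative construction, not code from any Go engine:

```python
import math
import random

class NimState:
    """Toy game: players alternate taking 1-3 stones; taking the last stone wins."""
    def __init__(self, stones, player=1):
        self.stones, self.player = stones, player
    def moves(self):
        return list(range(1, min(3, self.stones) + 1))
    def play(self, m):
        return NimState(self.stones - m, -self.player)
    def winner(self):
        # The player who just moved took the last stone and wins.
        return -self.player if self.stones == 0 else None

class Node:
    def __init__(self, state, parent=None):
        self.state, self.parent = state, parent
        self.children, self.untried = {}, state.moves()
        self.visits, self.wins = 0, 0.0

def uct_select(node, c=1.4):
    # Pick the child maximizing the UCT score: exploitation + exploration.
    return max(node.children.values(),
               key=lambda ch: ch.wins / ch.visits
                   + c * math.sqrt(math.log(node.visits) / ch.visits))

def mcts(root_state, iterations=2000):
    root = Node(root_state)
    for _ in range(iterations):
        node = root
        # 1. Selection: descend while fully expanded and non-terminal.
        while not node.untried and node.children:
            node = uct_select(node)
        # 2. Expansion: add one untried move as a new child.
        if node.untried:
            m = node.untried.pop()
            node.children[m] = Node(node.state.play(m), parent=node)
            node = node.children[m]
        # 3. Simulation: random playout to the end of the game.
        state = node.state
        while state.winner() is None:
            state = state.play(random.choice(state.moves()))
        # 4. Backpropagation: a node is good for the player who moved
        #    into it, i.e. the opposite of node.state.player (to move).
        winner = state.winner()
        while node is not None:
            node.visits += 1
            if winner == -node.state.player:
                node.wins += 1
            node = node.parent
    # Answer with the most-visited move from the root.
    return max(root.children.items(), key=lambda kv: kv[1].visits)[0]

best = mcts(NimState(5))
print("best move:", best)  # the game-theoretic best move is 1 (leave a multiple of 4)
```

Real Go engines replace the random playout and the plain win-rate estimate with learned evaluation, but the four-phase loop is the same.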

“CPU performance has largely stalled in the last decade”

No, it hasn’t… There’s no point in improving CPU performance if CPUs are already good enough for pretty much any general-purpose task for the average consumer. We still see improvements every two years, and the TDP of a given processor is just as important for its performance as its clock speed.

GPUs are amazing - but hard to program for.
I work for a company called SQream Technologies, and we have a GPU-based SQL database that is usually around 100x faster than other databases for big data analytics, even when the others have many, many more CPU cores.

GPUs aren’t perfect though, and there are a lot of things they can’t do properly. Also, the amount of memory they usually have available is relatively low. The highest-end cards only have around 12GB, which isn’t much for bigger tasks that require a lot of data.


I agree that the theme is very wrong.

In fact, chess has never been healthier. Top players are making use of computer assisted learning, and the ability to study opening, middle-game and endgames with computer insight as to the best path to take. Top players today as a result are arguably much better than top players from three decades ago. Go will turn out the same.

The chess world is looking forward to the World Championship taking place this fall for the first time in a long time. (The last time was in the World Trade Centre.) And with the Challengers’ tournament going on right now and an American currently leading, this might be the event that renews interest in chess in North America to its highest level since Bobby Fischer vs Boris Spassky.

“CPU performance has largely stalled in the last decade. While more and more cores are placed on each die, which is great when the problems are parallelizable – as they definitely are in this case – the actual performance improvement of any individual core over the last 5 to 10 years is rather modest. But GPUs are still doubling in performance every few years.”

This is somewhat misleading. It suggests that CPU cores have stagnated while GPU “cores” have not. In fact, clock rates – i.e. “actual improvement of any individual core” – have stagnated in both cases. Graphics just happen to be deeply parallelizable, so we can keep adding “cores” (shaders, etc.) to GPUs to get more and more performance.

I’m sorry for an off-topic (and fairly strange) question, but why did you select that particular image of a video card to go with your post?

You took your time before pointing out that we did it. It’s not a computer building a computer - humans are involved in every way in creating the program that solved the games. We can go on to use what we have learned as another lever to achieve greater things (or be more destructive in wars). It is people that have done this.

Despite what people in the 90s thought, computers have vastly improved the chess experience. We now have an objective arbiter that can tell us when a move is really better. It makes things much, much more interesting than it used to be and makes the game accessible to amateurs such as myself.

It’s like everyone having their own personal chess commentator telling you what is going on. If you don’t play chess, I would say it would be similar to having a seasoned quarterback (John Elway for example) explain what is going on during a football match.

AlphaGo will do the same thing for Go. It opens up the game and makes it much, much more fun. Prior to the computers, top players could make arbitrary pronouncements about good and better positions and we could do no better than believe them. Now we know, and we have found that sometimes the best players were, well, less than honest. But most of the time they were right.

The computers still have weaknesses. They still don’t understand one whit of strategy. It’s just that 90% of the moves in chess (and go) can be understood in concrete terms. It is funny to watch the engines get entirely lost and confused when a human player makes a very strong strategic move and the computer thinks it’s just a mistake, only to realize a few moves later that the human is better. If they have a good animation it can be fun to watch the engine go from “white is much better” to “black is winning” to “the position is equal” over and over as it gets lost.

That 10% of moves is not enough for a human to beat a computer, but it does highlight something that the algorithm doesn’t “get” yet. So it’s an interesting space.

Hawking has an agenda and he is smart enough to know what to say in order to get people focused on achieving his goals.

His goal is to get humanity focused on protecting our species from potential extinction. He is more worried about things like astronomical events, but if he can give us a short-term existential threat that the general populace can get worried about, then funds and manpower can be devoted to solving the (nonexistent) problem of AI while at the same time preparing for real threats.

In addition, it is a first step in thinking of ourselves as “species vs the universe” so we can be united for the cause, whatever danger we may face.


Err, Chess is not European. Chess is Asian (Indian).


See Chapter 5, “An Enjoyable Game”: How HAL Plays Chess, from the book “HAL’s Legacy - 2001’s Computer as Dream and Reality” for a fascinating discussion of the state of chess computer programs from the perspective of the book’s publication date, 1997. The chapter author, Murray S. Campbell, was a member of the original IBM team that developed Deep Blue, the machine that beat Garry Kasparov. In 1997, the prospect of a computer beating a Go champion was far, far off on the horizon.

Unrelated to the discussion, but you have a double “why” here: “That’s also why why your password is (probably) too damn short.”

I just watched a quick video on how to play Go - https://www.youtube.com/watch?v=Jq5SObMdV3o

At the end it states “Did you know… Go is the only board game in which humans can still defeat computers with reliable consistency”. It was dated 2010.

At this point, I will suggest reading Roger Zelazny’s greatest piece, a short story entitled “For a Breath I Tarry”. Foremost a tale of breathless beauty, its philosophical musings about AI and humanity have never been outdone.