They also had a headline for AlphaZero that convinced everyone it had crushed Stockfish and that classical chess engines were a thing of the past, when in fact it was about 50 Elo stronger than the Stockfish version it was tested against, or roughly the amount Stockfish improves each year anyway.
I think AlphaZero is a lot more interesting than Stockfish though. Most notably, it led me to reevaluate positional play. IIRC, A0 searching only 2-3 ply is still above super-GM level, which is pretty mind-blowing. Based on this I have increased my strategy-to-tactics study ratio quite a bit. FWIW, Stockfish is always evolving and adapting, and has incorporated ideas from A0.
Hmm: NNUE was introduced in 2018, the AlphaZero preprint 2017, AlphaGo 2015-2016. I checked this because my memory claimed that it was AlphaGo's success that sparked the new level of interest in NN evaluation.
Wouldn't surprise me if AlphaZero had no influence on that timeline, but for AlphaGo it would.
The original NNUE paper cites AlphaZero[0]. The architectures are different because NNUE is optimized for CPUs: it uses integer quantization and a much smaller network. I don't think one could credibly claim NNUE would have come about if Google hadn't made so much noise about its neural-network efforts in Go, chess, and Shogi.
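To make the "optimized for CPUs" point concrete, here is a toy sketch (my own illustration, not Stockfish's actual code; the feature and layer sizes are made up) of the core NNUE trick: keep the first layer's output as an integer "accumulator" and, when a move toggles a handful of input features, update only those columns instead of recomputing the whole layer.

```python
import numpy as np

# Hypothetical toy dimensions for illustration; real NNUE nets differ.
N_FEATURES = 768   # e.g. 12 piece types x 64 squares
HIDDEN = 256       # toy first-layer width

rng = np.random.default_rng(0)
# Quantized integer weights, as NNUE uses integer arithmetic on CPUs.
W = rng.integers(-64, 64, size=(N_FEATURES, HIDDEN), dtype=np.int16)

def full_refresh(active_features):
    """Recompute the accumulator from scratch: O(active * HIDDEN)."""
    acc = np.zeros(HIDDEN, dtype=np.int16)
    for f in active_features:
        acc += W[f]
    return acc

def incremental_update(acc, removed, added):
    """Apply a move by toggling only changed features: O(changed * HIDDEN)."""
    acc = acc.copy()
    for f in removed:
        acc -= W[f]
    for f in added:
        acc += W[f]
    return acc

# A single move changes only a couple of features, so the incremental
# path does far less work than a full refresh of the layer.
before = {10, 200, 455}
after = (before - {200}) | {310}
acc0 = full_refresh(before)
acc1 = incremental_update(acc0, removed=[200], added=[310])
assert np.array_equal(acc1, full_refresh(after))
```

The "efficiently updatable" part of the name refers to exactly this: during search, successive positions differ by only a few features, so the expensive first layer is almost never recomputed from scratch.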
For whatever it's worth, the NNUE training dataset contains positions from Leela games and several generations of self-play. Stockfish wouldn't be where it is if not for Google's impact. AlphaFold will likely have a similar impact on our understanding of protein structure. I don't know why everyone is so offended by them puffing their chests out a little here; the paper is linked in the article.
The first thing I'd recommend is constantly evaluating positions from a strategic point of view ("Evaluate Like a Grandmaster" is a good book; alternatively, look at a lot of positions, evaluate them as if you were an engine, and then check with an actual engine).
Second (or first, if you lack even the basics to do said evaluation) is to understand strategic concepts. A good starting point would be "Simple Chess"; the next step would be pawn structures ("Power of Pawns" -> "Chess Structures" would be my recommendations, and the latter is probably the greatest chess book in recent times imo). There are also many Chessable courses; I'm quite fond of "Developing Chess Intuition" by GM Raven Sturt and the "Art of..." series by CM Can Kabadayi for lower-rated players. The sky is the limit, and there are good books all the way up, for example "Mastering Chess Strategy", usually recommended for 2000+ Elo.
Third, study great positional players like Carlsen, Karpov, Petrosian, etc.
I'd say the most important thing to realize is that, just like tactics puzzles, strategic puzzles exist too; they're just not as obvious.
It definitely deserved a lot of praise, but the test wasn't really against a fully fledged Stockfish running on comparable hardware; it was against one that, among other things, had no opening book.
The issue is not whether AlphaZero was impressive, but that we should be careful about the specific claims in press releases, as they are known to oversell. The whole thing would have been impressive enough, just for the way it played, if the games had been against the latest release of Stockfish on good hardware.
And then what happened is AlphaZero changed the professional game in various interesting ways, and all its ideas were absorbed into Stockfish. A little bombast is forgivable for technology that goes on to have a big impact, and I don’t doubt it’s the same story here.
That's not true at all. Stockfish still uses only human-designed heuristics for search and NNUE for evaluation, an architecture completely different from AlphaZero's and derived from Yu Nasu's work on Shogi engines.
It's a neural network trained on self-play games (many of them lifted from Leela Chess Zero). I get that it's a different shape of network, but people really seem touchy about crediting Google with the kick up the bum that led us here. AlphaZero had a massive effect on chess globally, whatever people think about its press releases. My main point is that people should update the heuristic that wastes energy arguing about bold claims when clearly something amazing has happened that everyone in the industry will react to and learn from.
I don't have any particular thoughts about DeepMind's board-game algorithms or how they were advertised, but even if I happened to think it was the most innovative and influential research in years, I'd still ask for honest communication about the work. It's part of being a healthy research community, although clearly the AI community falls well short on this, and nobody could say it's only DeepMind's fault.
Where do the evaluations come from? The idea that Stockfish isn't benefiting hugely from Google having created and advertised AlphaZero is preposterous; can we please just stop?
Okay, well, no sale, I guess. Stockfish's training dataset is mostly self-play games from an engine directly inspired by AlphaZero. It moved to neural-network evaluation after a fork based on a paper that cites AlphaZero. It plays chess more like AlphaZero than like Stockfish 11. Yes, it's extremely interesting that it continues to edge out Leela with a fast, rough approximation of the latter's evaluation but much faster search. But it (and human chess) wouldn't be where it is today without AlphaZero, and I was originally responding to someone dismissing it based on the perceived over-zealousness of its marketing, as people seem to want to do with TFA. I merely submit that both of these Google innovations are exciting and impactful, and we should forgive their presentation, which at least has been kind enough to link the original papers with all the information we need to help change the world.