Stockfish over 3000 Elo rating!

Well, that depends on your definition of "by far". The best centaurs still have a plus score against the best unaided engines, but there are still a LOT of draws, so the actual rating gap between centaurs and unaided engines is pretty small these days.

Also, it's not so easy just to play "anticomputer" chess against them these days.

Naka tried that in his chess.com match against SF (while assisted by an old version of Rybka, even!), and still lost.

Now, I'm not saying it's impossible to do, of course. Strong players with a lot of experience playing against engines can still get a draw here and there and (very rarely!) a win. The problem is that in the meantime you've lost 100 games :)

Now, you'll find some people in forums posting a lot of games where they beat or drew SF with anticomputer play, but there's nothing to prevent those people from playing SF over and over again and posting only the draws and wins (or taking back moves, etc.).

I've seen some of those people play against SF in live matches, and things don't work as well as in the narratives presented in forums :)
carlsen has openly stated he doesn't like playing engines because he loses over and over. i don't get what the argument is here. engines can calculate like 40 moves deep for every line every move. of course they're better.

if you threw stockfish in the fide pool, it would lose a few games, but it would have like 6000 rating because of how many wins it would get.

those 3200/4000 ratings are decided on, not earned, so no one really knows how strong these engines are
Well, they are earned, and have a very precise meaning. They're not just arbitrarily awarded to the engines.

The engines start with some rating, and then their ratings are adjusted based on results, just like with humans.

The only difference is that there are no human players in the rating pool, so we can't really compare them.
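
For what it's worth, the update rule itself is simple. Here is a minimal sketch of a plain Elo update in Python (engine rating lists use their own K-factors and adjustments, so treat the numbers as illustrative only):

```python
def expected_score(r_a, r_b):
    """Expected score of A against B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def updated_rating(r_a, r_b, score_a, k=20):
    """A's new rating after scoring score_a (1, 0.5 or 0) against B."""
    return r_a + k * (score_a - expected_score(r_a, r_b))

# Example: a 3100-rated engine plays a 3000-rated engine.
print(round(updated_rating(3100, 3000, 1.0), 1))  # ~3107.2: small gain for an expected win
print(round(updated_rating(3100, 3000, 0.0), 1))  # ~3087.2: bigger drop for an upset loss
```

The arithmetic is the same as for humans; the open question is only how the pool's anchor point relates to human ratings.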

Also, you were probably just using hyperbole (so feel free to virtually slap me if you were), but they don't calculate anything close to 40 moves deep for every line.

When Stockfish or any other top engine says they're on "depth" X, the vast majority of lines are explored to a much, much lower depth than that (and some are explored deeper).
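
A toy illustration of why "depth X" doesn't mean every line is X moves deep: real engines use things like late move reductions, where later-ordered moves are searched shallower (and a few lines are extended deeper). The sketch below is not Stockfish's algorithm, just a minimal negamax on a made-up game tree with a crude reduction rule, so you can see the spread of actual line lengths behind one nominal depth:

```python
def moves(node):
    """Toy move generator: each node id has 2 or 3 child ids."""
    return [node * 3 + i for i in range(2 + node % 2)]

def evaluate(node):
    """Toy static evaluation derived from the node id."""
    return (node * 2654435761) % 201 - 100

def search(node, depth, ply, leaf_plies):
    """Negamax with a crude 'reduction': later moves are searched one ply shallower."""
    if depth <= 0:
        leaf_plies.append(ply)
        return evaluate(node)
    best = -10**9
    for i, child in enumerate(moves(node)):
        reduction = 1 if i >= 1 and depth >= 2 else 0
        best = max(best, -search(child, depth - 1 - reduction, ply + 1, leaf_plies))
    return best

leaf_plies = []
search(node=1, depth=8, ply=0, leaf_plies=leaf_plies)
print("nominal depth: 8, actual leaf depths:", min(leaf_plies), "to", max(leaf_plies))
```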

I definitely agree that we have no idea how strong they are in human terms, though. Until we either let them compete in tournaments or hold more human-engine matches, we'll just not know for sure.
yeah it was hyperbole lol. it only deeply computes a small number of lines, but that is mostly because 95% of the possible moves are terrible, and it doesn't have to compute past a couple of moves for those garbage lines.

anyways, putting a bunch of sgms in a room, giving them 3000 ratings and having them play once a year doesn't really qualify as earning a rating imo.

just like if i started on lichess with a rating of 3000 and only played other ~3000 rated people of relatively the same strength. i'd have to lose 1000+ games to get to a realistic rating. almost impossible, especially if i was only playing in one or two tournaments a year. i'd be like 2800 - 3400 rated.
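
For what it's worth, a back-of-the-envelope simulation with plain Elo maths gives roughly the number of games you're describing (lichess actually uses Glicko-2, which moves a fresh rating much faster, so this overstates the sluggishness). The 2000 "true strength" and K=20 below are just assumptions for illustration:

```python
import random

def expected(r_a, r_b):
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

random.seed(1)
rating, true_strength, opponents, k = 3000.0, 2000.0, 3000.0, 20
games = 0
while rating > 2100 and games < 100_000:
    # Almost every game is a loss: a 2000-strength player scores ~0.3% vs 3000.
    score = 1.0 if random.random() < expected(true_strength, opponents) else 0.0
    rating += k * (score - expected(rating, opponents))
    games += 1
print(games, "games to deflate from 3000 to", round(rating))
```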

long story short, i don't think they earn those ratings at all
Very funny - I already deleted my profile (because I got tired of being repeatedly mocked when I lose bullet games due to lag of unknown origin - such players should be silenced; I do that on my own chess server, and no, I won't advertise it here).

I will come here one last time to finish the discussion and counter your points, although there is hardly anything to counter, since you just ignored most of what I said - including that I did not lose 100 games. To be precise, it was 2 losses, then a draw, then one game where I had a clear win but blundered; then I went on to lose 9 games, drew one, and finally got the win. The games I posted should still be available, and no, I did not take moves back. I can also record how I play against Stockfish level 8 if necessary; when I'm in my best shape and really trying, I don't think it will beat me more than once per two games, and to be honest it is not very strong here, mainly due to the small amount of time it uses.
Also, my level is around 2100 FIDE judging from games against known rated players (I don't have an official rating), and I don't know a chess program that I can't beat at least once, so where is your 3000 Elo??
Here's a win against Houdini 4 on a near-3 GHz machine. The time control was 30 seconds per game + 1 second increment; the game was played within the allotted time and without any takebacks:
http://www.chesspastebin.com/2014/12/26/valters-houdini-4-32-bit-by-valters/

The engine simply fails to understand that the doubled pawn is worth nothing in that setup.

Here's a draw against Houdini 3. The time control was 1 minute per game with no increment; I made all my moves within the allotted time:
http://www.chesspastebin.com/2014/12/26/valters-houdini-3-64-bit-by-valters/

I'm 100% sure I couldn't beat Magnus or Kasparov at any time control whatsoever (unless they were very, very drunk).

If I could find my game archive, it would show that I have beaten every chess program I seriously tried to beat. And again, you are missing the point - nowhere did I say that I'm a GM, or even a master, or even an exceptional anti-computer player - I'm sure I am not, but that's the point!

I quote "OneOfTheQ":
"The best centaurs still have a plus score against the best unaided engines ... the actual rating gap between centaurs and unaided engines is pretty small these days."

I think your statement is simply false. It should have been clear that I was not talking about some newcomer chess player improving his moves with a computer, but about a GM, master or other highly experienced player with good centaur chess experience - I think it is obvious that such a combination can cover most of the many holes that engines have.
I quote wikipedia: http://en.wikipedia.org/wiki/Advanced_Chess
* increasing the level of play to heights never before seen in chess
I challenge you to provide some citations or other kind of proof for your statement, since otherwise it simply appears to be false.

again quoting oneoftheq: "Naka tried that in his chess.com match against SF (while assisted by an old version of Rybka, even!), and still lost."
Nakamura is a professional chess player and a super GM. He has spent his whole career playing and practicing against human opponents, who play an entirely different game, and he probably has little to no insight into how chess programs derive their moves. So citing him as a prime example is simply short-sighted.

So, since there were practically no counter-arguments to my points, just vague accusations of cheating, I will simply restate them.

- Our knowledge that "engines have overcome humans in chess" is based on what could be called anecdotal evidence - there is absolutely no scientific proof, no trials involving many participants, etc. It is absurd to claim there is the slightest guarantee that the best "anti-human" player will also be the best "anti-computer" player, especially in an exhibition-like setting with only a few games to play.
Also, there are no open invitations with, say, a million-dollar prize for beating the best chess engine in a ten-game match. I'm sure there would be very unexpected results if such offers were made, but even if no one could win, that would not prove that no one is capable of it. There is perhaps a lack of motive and incentive for anyone to train himself to do that, while there is plenty of motivation to make better and better chess engines.

- Chess engines can only calculate, and they don't even do that by pure "brute force"; they use a kind of selective search where they choose what is worth investigating and what is not. These methods have no way to truly compete (in certain, and not so rare, positions) with the logical deduction and similar "true intelligence" methods used by humans. For example, if an engine is not specifically programmed to "know" that a certain doubled pawn is almost like no pawn at all, it might require a brute-force search several tens of moves deep to figure that out, and in many situations that will be impossible due to the gazillion positions it would have to investigate. And even when it is programmed to "know", it will confuse the position with other positions where the pawn is not so bad - and this is only an example; it can go much deeper than that.
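
To make that doubled-pawn example concrete: classical evaluation functions typically charge a flat penalty per doubled pawn, something like the toy term below. Real engines weight this by file, game phase and pawn structure (and nowadays use neural-network evaluation), so this is only a caricature of the problem being described:

```python
from collections import Counter

def doubled_pawn_penalty(pawn_files, penalty=15):
    """Toy evaluation term: a flat penalty in centipawns for each doubled pawn."""
    counts = Counter(pawn_files)
    doubled = sum(c - 1 for c in counts.values() if c > 1)
    return -penalty * doubled

# Files 0..7 stand for a..h. The same -15 is charged whether the doubled pawn
# is a useful extra defender or effectively a dead pawn; the term is blind to context.
print(doubled_pawn_penalty([2, 2, 3, 5, 6, 6, 7]))  # -30: one doubled pawn on c, one on f
```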

Here are a few arguments why humans won't have a chance against computers. I do agree that there's no empirical evidence for this, as there are almost no man-vs-machine matches nowadays with the machine playing at full strength.

Even in the recent "Rybkamura" vs Stockfish matches, Nakamura got the assistance of Rybka on a weaker machine in 2 games. In the other two, he was given pawn odds. In all these games, Stockfish wasn't using an opening book or endgame tablebases.

Here are a few observations from human vs human games:
1. An average game lasts 37 moves.
2. Even the best players make at least one "?" (mistake) move per game.
3. Humans generally get tired and tense up, and can't maintain the same quality of play throughout the game.

With an opening database, an engine plays the opening from memory, essentially perfectly, until something like 20 moves into the game. With an endgame tablebase, an engine will play the endgame perfectly and will try to steer toward a winning endgame early on. Even without a tablebase, an engine's endgame play is very strong, as it searches very deeply, like 60+ moves ahead.
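
A rough sketch of the pipeline being described: book first, tablebase when few enough pieces remain, search otherwise. All of the helpers are stand-ins (any real engine or GUI has its own interfaces), so this only shows the order of the knowledge sources:

```python
def probe_book(position):
    """Stand-in for an opening-book lookup; returns a move or None."""
    return None

def probe_tablebase(position):
    """Stand-in for a Syzygy-style tablebase probe; returns a provably best move or None."""
    return None

def piece_count(position):
    """Stand-in: number of pieces still on the board."""
    return 32

def alpha_beta_search(position, depth):
    """Stand-in for the engine's normal search."""
    return "move chosen by search"

def choose_move(position):
    # 1. Opening: replay memorized theory for as long as the book covers the line.
    move = probe_book(position)
    if move is not None:
        return move
    # 2. Endgame: with few enough pieces, the tablebase answer is exact.
    if piece_count(position) <= 7:
        move = probe_tablebase(position)
        if move is not None:
            return move
    # 3. Middlegame (the window discussed above): decided by search alone.
    return alpha_beta_search(position, depth=30)

print(choose_move("start position"))
```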

So, the only practical window where a human might get an advantage is in the middlegame when the engine just ran out of its book moves.

Strategy emerges when you search and calculate 40 or 50 moves ahead. You can see king walks and breakthroughs in the Naka vs Stockfish match (2nd game). It's true that the search is selective, but the branches pruned by alpha-beta search would not change the engine's evaluation even if you did consider them (a small sketch of this property follows the list below). There are a few kinds of positions where the engine might take a long time to find the best move:
1. A checkmate/stalemate line that starts with heavy material loss or 'bad'-looking moves.
2. Drawn endgames or closed positions where one side has more material but no way to win.
3. Very deep tactics (deeper than the search tree).
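
Here is the small sketch promised above: on a random game tree, alpha-beta returns exactly the same value at the root as a full minimax while visiting far fewer leaves. (This says nothing about the reductions and other heuristic pruning real engines add on top, which are lossy; it only demonstrates the alpha-beta part.)

```python
import random

random.seed(7)

def make_tree(depth, branching=3):
    """Random game tree: internal nodes are lists of children, leaves are integer scores."""
    if depth == 0:
        return random.randint(-100, 100)
    return [make_tree(depth - 1, branching) for _ in range(branching)]

def minimax(node, maximizing):
    if isinstance(node, int):
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

def alphabeta(node, maximizing, alpha, beta, visited):
    if isinstance(node, int):
        visited[0] += 1
        return node
    best = -10**9 if maximizing else 10**9
    for child in node:
        score = alphabeta(child, not maximizing, alpha, beta, visited)
        if maximizing:
            best, alpha = max(best, score), max(alpha, score)
        else:
            best, beta = min(best, score), min(beta, score)
        if alpha >= beta:  # cutoff: remaining siblings cannot change the root result
            break
    return best

tree = make_tree(depth=8)
visited = [0]
print("minimax value:   ", minimax(tree, True))
print("alpha-beta value:", alphabeta(tree, True, -10**9, 10**9, visited))
print("leaves visited by alpha-beta:", visited[0], "of", 3 ** 8)
```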

But then again, you can make the engine play anti-human openings which lead to very complicated positions. Engines are tuned to win against other engines, but they could be tuned to be anti-human as well.

Keeping all these factors in mind, I think Stockfish would win most of the time against any human and would achieve a rating above 3000 if it played in human tournaments at any time control.
"1. An average game lasts 37 moves."

This was to emphasize that computers plan ahead to the end of an average game, and well beyond.
