lichess.org

Tofiks 1.2.0

Chess · Chess engine · Chess bot · Software Development
A new release and not even a year has passed? I am truly spoiling my chess playing dog.

What's new?

Well, in all honesty, there should have been a hotfix release, v1.1.1, as I discovered some bugs right after the last release. Some of them were found thanks to you, my dear reader! The fixes were tiny but the effects huge: my move ordering was all wrong.

2k at over 12k

Last month my engine @likeawizard-bot broke the 2000 rating barrier in Bullet, Blitz and Rapid for the first time. I am confident that Classical could also be broken, but there are just way fewer games in that category. The bot has now played over 12k games on the platform. I am truly delighted at the progress, even when at times it is very slow and many ideas and a lot of work had to be abandoned because they did not produce a positive outcome.

Nevertheless, let's have a quick look at what's new in v1.2.0.

More work on the test suite

I have been adding more and more tests and internal health checks to make sure that new features are as robust and break as few things as possible. This has made the development much easier. Obvious bugs are caught immediately.
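Tofiks' actual test suite is not shown here, but one of the bugs mentioned above was in move ordering, and that is exactly the kind of thing a small internal check can catch. Below is a minimal, self-contained sketch in Go of such a sanity check for MVV-LVA capture ordering (Most Valuable Victim, Least Valuable Attacker); the names and piece values are my own illustration, not Tofiks' code:

```go
package main

import "fmt"

// Piece values in centipawns, indexed by piece letter.
// Illustrative values, not Tofiks' actual tables.
var pieceValue = map[rune]int{'P': 100, 'N': 300, 'B': 300, 'R': 500, 'Q': 975}

// mvvLva scores a capture: prefer the Most Valuable Victim,
// and break ties with the Least Valuable Attacker.
func mvvLva(attacker, victim rune) int {
	return pieceValue[victim]*10 - pieceValue[attacker]
}

func main() {
	// Health check: a pawn taking a queen must be ordered
	// before a queen taking a pawn.
	fmt.Println(mvvLva('P', 'Q') > mvvLva('Q', 'P')) // true
}
```

A handful of such invariant checks, run on every build, is cheap insurance: when an ordering heuristic silently regresses, the test fails immediately instead of the engine just playing slightly weaker for months.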

Bug fixes

Thanks to comments on the last blogs, where some users took the time to look at the code, and also thanks to the test suite, I managed to find a few bugs. They either hurt playing strength or just returned garbage to the UCI interface.

Delta Pruning

The engine spends a lot of time in the Quiescence Search - the part of the search algorithm that attempts to release any tactical tension from the position, so that positions can be statically evaluated without pieces hanging or kings being in check. Delta Pruning attempts to determine whether any capture for the side to move can actually improve the position enough to make it worthwhile. If a position comes up in the search where the eval is at -12, then even capturing a Queen would only bring it up to -3 - still a lost line. Since a loss is a 0-1 score regardless of whether you're down one Queen or two, the algorithm just tries to accept that the position is lost as early as possible and spend the saved time on potentially better lines. For now it only prunes positions where the eval is more than 975cp - the value of a Queen - below alpha.

if eval < alpha-975 { break }

However, I could also look at the most valuable piece that I can capture and adjust the threshold accordingly, which would generalize the idea beyond the Queen value:

if eval < alpha-(capture + margin) { break }

Here capture would be the centipawn value of the piece to be captured, and margin is a safety margin, usually around two pawns' worth. Without the margin the engine could be blind to powerful exchange sacrifices and other tactical considerations.
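The two conditions above can be wrapped into a single predicate. The following is a minimal sketch in Go of the generalized check as described; the function name and the exact constants are assumptions for illustration, not Tofiks' implementation:

```go
package main

import "fmt"

const (
	queenValue = 975 // centipawns, as in the text above
	margin     = 200 // safety margin of roughly two pawns
)

// futile reports whether a capture can be skipped in quiescence search:
// even winning the captured piece, plus a safety margin, cannot raise
// the standing evaluation back above alpha.
func futile(eval, alpha, captureValue int) bool {
	return eval < alpha-(captureValue+margin)
}

func main() {
	// Down ~12 pawns against alpha = 0: even a queen capture is hopeless.
	fmt.Println(futile(-1200, 0, queenValue)) // true
	// Down only two pawns: a queen capture could easily flip the score.
	fmt.Println(futile(-200, 0, queenValue)) // false
}
```

Note the asymmetry this buys: hopeless lines are cut immediately, while anything within a capture-plus-margin of alpha is still searched, which is what keeps sacrifices visible.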

However, this more advanced technique did not result in a positive change in playing strength compared to the simple, static 975cp comparison. Either it was not working as expected or the execution was bad, and the additional computation made it slower - so even if it did work, the value was not there.

Late Move Reductions (LMR)

If you have read any of my previous blogs, you know how I always go on about move ordering and pruning. If you have not, a quick summary: checking the most promising moves first allows the engine to eliminate other moves faster, as they can be proven to be inferior. We already have Late Move Pruning, where we simply ignore moves very late in the move order under certain conditions. Late Move Reductions do a similar thing: we search the first moves at full depth, and the further down the move order a move is, the more we reduce its search depth. If a move searched at reduced depth turns out to be interesting, however, we have to re-search it at full depth so that we do not miss anything. If we are right about the first moves being the best, we have done less work to verify it; if we are wrong, we now need extra work to re-check. As long as we are right more often than not, we can make decisions by looking at fewer moves and improve the playing strength. The conditions for what qualifies as late, and by how much the depth should be reduced, are not quite clear, and there are certainly more things that can be done to optimize this technique.
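To make the idea concrete, here is a minimal sketch in Go of a reduction schedule of the kind described: the first few moves and shallow nodes get full depth, later moves get reduced. The exact formula and thresholds are assumptions for illustration, not Tofiks' actual schedule:

```go
package main

import "fmt"

// reduction returns how many plies to shave off the search depth for
// the move at position moveNumber in the move ordering. A move searched
// at depth-1-reduction that still beats alpha would then be re-searched
// at full depth, as described in the text above.
func reduction(depth, moveNumber int) int {
	if depth < 3 || moveNumber < 4 {
		return 0 // shallow nodes and the first few moves: full depth
	}
	r := 1
	if moveNumber >= 10 {
		r = 2 // reduce very late moves more aggressively
	}
	if r >= depth {
		r = depth - 1 // never reduce below a depth-1 search
	}
	return r
}

func main() {
	fmt.Println(reduction(8, 1))  // 0: early moves get full depth
	fmt.Println(reduction(8, 5))  // 1: later moves get a mild reduction
	fmt.Println(reduction(8, 12)) // 2: very late moves are reduced more
}
```

The payoff depends entirely on move ordering being good: every reduced move that unexpectedly beats alpha costs a full-depth re-search, so a bad ordering turns LMR from a saving into an expense.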

Optimization & Cleanup

I have used some static code analysis tools to clean up and optimize the code. With very little effort I was able to observe around a 2-5% performance increase, which is a lot given that many of my deliberate attempts bring similar or even negative results. I still think there is more performance to be gained.

Results

Before we wrap up, I had all three release versions of Tofiks play a tournament at 5s+0.3s, where each engine played 200 games against each of the others.

Rank Name        Elo     +/-   Games    Wins  Losses   Draws   Points
1 tofiks-1.2     189      34     400     260      62      78    299.0
2 tofiks-1.1      53      30     400     183     122      95    230.5
3 tofiks-1.0    -268      40     400      48     307      45     70.5

Again, I am happy with the progress made over the last iteration, although 1.1 had a nasty, stupid bug that made it much weaker than it should have been.

What's next?

There is an inexhaustible sea of ideas and improvements at my fingertips. I have some ideas that would increase raw performance - optimizing the speed at which the internal state can generate moves, make them and take them back. I also want to return to Texel tuning to optimize the weights I use for evaluation. And many more...

Try it!

As usual, I encourage you to try playing my bot @likeawizard-bot, and you can find the source code and binaries on Tofiks' GitHub.