And so, after an absence of about 2 years, the computer chess virus hit me again. I thought I was cured after the ProDeo 1.88 debacle; nice new feature, I love that kind of stuff, but something went wrong with the playing strength or with the release of that version. Either way, to play it safe I restarted my work from version 1.86 and on this page you can follow the playing strength progress that, one way or another, should eventually lead to a new version, ProDeo 2.0 or REBEL 13, not sure yet about the latter.
So with this page you will get a glimpse into the mind of a chess programmer, and by reading this diary you will probably conclude that's a scary place to be.
Kidding aside, the first step is nearly finished: the tuning of the main ingredients of the evaluation function. Not very surprisingly there is not much improvement to mention, because it has always been well tuned since its early existence. However, now that I have access to new hardware (a nice 16-thread workstation) tuning can be done much more securely. The following improvements are worth mentioning:
Note that the [Right to Move] parameter is a TEMPO penalty for the color to move, a pure evaluation ingredient. However, in practice it is of great influence on the search: the higher the penalty you apply, the faster the search will run. Now this looks like great news but unfortunately it isn't; a too high penalty will also produce unreliable evaluation scores, and thus eventually you will pay for it with a regression in playing strength. In other words, the right value has to be chosen with great care. To establish that 100 is the right value, 60,000 (40/15) bullet games had to be played. And that value of 100 is only valid for this 40/15 time control. It's yet unclear how it will perform at longer time controls.
The [King Safety] parameter increase looks odd at first glance, but the change is due to new code which has influenced the nature of the evaluation. It took 7 x 12,000 = 84,000 bullet games to arrive at the value of 105.
Note that in general a 1% increase of the result stands for an ELO improvement of 7 points. Meaning, if these above 5 changes all work together in harmony (putting them together) it would give a 4.4% x 7 = 30 ELO increase. But then again, I know from 35 years of experience this is unlikely due to interaction and overlap. When I am ready to test them I will be happy with 20 ELO.
Meaning, the main improvement will have to come from search changes. Not exactly my main interest in computer chess, I never had a passion for it, but facts are facts: search (and speed) have been the dominant factors for progress in computer chess since the early days. I sometimes tend to call it a necessary evil. Okay, that came out too strong.
In the meantime I developed quite a number of new and promising search ideas.
However, too few games have been played to draw any conclusion. Furthermore, my (current) test philosophy regarding search changes demands that scaling should be part of the testing too, resulting in the following test procedure:
12,000 [40/15] games
6,000 [40/30] games
4,000 [40/60] games
Meaning that in total 22,000 games need to be played before a search change is promoted as approved. Moreover, there shouldn't be too many fluctuations between the 3 test runs.
September 19 - More on the [Right to Move] TEMPO penalty.
Contrary (I think) to most programs, the TEMPO penalty is not given in EVAL but in the move_do() part where the incremental update stuff (material, PST, double pawns) also resides. By doing so we are more flexible and can solve some chess knowledge in a cheap way. We apply a penalty by piece type. The initial TEMPO penalty table (for the middle game!) looks like this:
As one can see, the penalties for Queen and King are somewhat higher, and because the penalty is in move_do() it is applied every time the Queen or King moves. It's a small discouragement against shuffling pieces with moves like Kh1 | Kh2 and back to g1 again.
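A minimal sketch of the mechanism described above: a per-piece-type tempo penalty charged incrementally in move_do() instead of in EVAL. The table values are illustrative (only the 0.10 king penalty is quoted in the text below); the names are mine, not ProDeo's.

```c
/* Per-piece-type middle-game tempo penalty, in pawn units.
   All values illustrative except KING = 0.10 (from the text). */
enum { PAWN, KNIGHT, BISHOP, ROOK, QUEEN, KING };

static const double tempo_penalty_mg[6] = {
    0.05, 0.05, 0.05, 0.05,   /* pawn, knight, bishop, rook */
    0.08, 0.10                /* queen and king somewhat higher */
};

/* Charged on EVERY move of the piece inside move_do(), so shuffling
   like Kg1-h1-g1 keeps paying the penalty over and over. */
double apply_tempo_penalty(int piece_type, double score)
{
    return score - tempo_penalty_mg[piece_type];
}
```

Because the charge lives in the incremental update, no extra work is needed in the evaluation itself.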
Furthermore one can solve some other basic stuff in a cheap way in the pre-processor.
Loss of castling rights.
In the opening phase when the white king has not castled we increase the penalty for the white king from 0.10 to 0.25 and vice versa for black.
Avoid early Queen play.
In the opening based on the move number we increase the penalty of the WQ and BQ. We also turn off the normal pin bonus her majesty gets for pinning an opponent piece. That will teach her to behave and wait for her moment to come.
In the endgame the penalty table is set to 0.05 for all piece types, his majesty has a free role now.
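The three pre-processor adjustments above can be sketched as follows. The 0.10 -> 0.25 king change and the flat 0.05 endgame value come from the text; the early-queen increment, the move-number limit and all names are my own illustrative guesses.

```c
/* King tempo penalty: 0.25 while castling rights are lost in the
   opening/middle game, 0.10 base, 0.05 in the endgame (free role). */
double king_tempo_penalty(int is_endgame, int has_castled)
{
    if (is_endgame)
        return 0.05;             /* his majesty has a free role */
    if (!has_castled)
        return 0.25;             /* punish loss of castling rights */
    return 0.10;                 /* base middle-game value */
}

/* Queen tempo penalty: raised early in the game to discourage
   premature queen play. The +0.05 increment and the move-10 limit
   are illustrative, not ProDeo's real numbers. */
double queen_tempo_penalty(int is_endgame, int move_number)
{
    if (is_endgame)
        return 0.05;
    if (move_number <= 10)
        return 0.13;             /* base 0.08 + illustrative 0.05 */
    return 0.08;
}
```

Since the table is rebuilt in the pre-processor, none of this costs time during the search itself.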
While the current 22,000 games LMR test run is still in progress (looks good so far) I am trying something else. Instead of relying on move ordering I am trying to do LMR with the values of the history table only. And it's doing surprisingly well on my development PC; at 40/60 it scores 53% after 1200 games. So that version will be the next test run. Is LMR really that simple? For the pseudo code look at the LMR page.
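A sketch of "LMR from the history table only" as I read it: the reduction depends purely on a move's history score, not on its position in the move ordering. The thresholds and names are my assumptions; the actual pseudo code is on the author's LMR page.

```c
/* History-only late move reduction: reduce a quiet move based on its
   history-table score alone. Thresholds (100, 50) are illustrative. */
int lmr_reduction(int moves_searched, int depth, int hist_score)
{
    if (depth < 3 || moves_searched < 4)
        return 0;              /* never reduce at low depth or early moves */
    if (hist_score > 100)
        return 0;              /* good history: search at full depth */
    if (hist_score > 50)
        return 1;              /* mediocre history: reduce one ply */
    return 2;                  /* poor history: reduce harder */
}
```

The appeal is that the move ordering can change freely without affecting which moves get reduced.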
The LMR (edition 2) test run has finished and the result is somewhat disappointing, as the last 40/60 run had a very bad start (happens sometimes, the opposite too) and could only marginally recover within 4000 games. Too bad, I can't play more. The results for LMR (edition 2):
10.02 - 9.80 = +0.22
11.17 - 10.81 = +0.36
12.15 - 11.70 = +0.45
The good news is that the newest LMR version (edition 3) has done extremely well on my development PC running on 4 full cores (no hyperthreads, so faster), scoring 53.3% (2021 games) [depths] 12.58 - 12.22 = +0.36, and is now put on the 22,000 games rack. It's good to have a 4th scaling reference point even if it's only (sic) 2000 games, which still takes a full day to complete. This is maddening; I can't even imagine some do this (and have done this) for a living.
I made a list of the search changes I still want to test, if things go as planned (they never do) testing should be finished within 2 weeks. So here goes and results will be posted when available.
My suspicion against [Right to Move = 100] came true; see how badly [40/60] scales compared to [40/15]. A 50% increase in ply should bring at least 10-15 elo even with only 2000 games, and it gave a regression. So yesterday I stalled the parameter testing and will try lower values later, which BTW also gave good results at [40/15]: 25=50.8% | 50=51.4% | 75=51.4% respectively, so there is still good hope. But then (as predicted), there goes my 2-week planning.
What to do (test) next? My curiosity about the simplicity of LMR (edition 3) got the upper hand and I decided to try LMR (edition 4), doing 4 reductions. But then only running on pure cores and [40/60] immediately; at those low depths lower time controls hardly make sense.
And then I made a mistake which I only noticed later: I had forgotten to restore the Right to Move parameter back to zero and so the 2 test runs were running with the bad parameter value of 100 instead of 0. But surprisingly, so far results are still good (54%); it's searching more than a full ply deeper and I decided to give it a chance and let it run for the moment, although it doesn't feel good.
I stopped both LMR (edition 4) matches. Reducing 4 plies is literally one bridge too far, for now. 3 is already a big step forward. Instead I will now focus on finding the optimal value of the [Right to Move] parameter; there is something to gain there. See the changed test schedule above.
[Right to Move = 50] testing finished. Remarkably positive result. It shows me again that you can not be careful enough messing around with unclear and hard-to-define evaluation ingredients such as the value of a tempo. Obviously in the past I used the wrong values: from 100, back to 0, even 125, then in version 1.86 back to 0, to arrive at 100 again in 1.87. Now that I have reasonable hardware I no longer have to wild-guess, so it seems, for the moment.
Next, combining LMR (edition 3) with [Right to Move = 50] and see what happens. In progress now.
Results of test round-3 (see test schedule above) look fine except for 40/60 with the highest scaling, which is worrying. OTOH it's only 2000 games with an error bar of 13 elo points, so there is still hope. For that reason we now include the 2 (more or less proven) positional improvements (the king safety and the double isolated pawn change) and start round-4. And the 40/60 [2000 games] run should really give a good jump, else this effort for a new version is on the brink of failure.
There was a short power failure which caused one of my PCs to reboot. So unfortunately only 1918 of the 2000 games were played and I leave it that way; restarting a match with cutechess-cli is problematic in the way I use the program (without the concurrency option). But..... I am very happy with the result, 55.0% (35 elo), see test schedule above. The 3 other runs are looking good as well. I will label this version as BETA-1, even though one match in this round is still running.
It's another reminder that sometimes the result of 2000 games can be very misleading, see round-3. It's what I noticed 2 years ago when I faced (and underwent) another attack on my programmer genes to improve that old beast and museum piece of the 80's and 90's. It goes like this: you play 2000 40/60 games using 4 cores, which takes 30 hours to complete, and you get a (say) 51.5% score (+10 elo); you play the same match again and you can get 49.5%, thus a regression! It happened to me several times and is predicted by the error bar (margin) that comes with 2000 games. So, every now and then these things happen: unbalanced randomness finding the edges (+ or -) of the error bar (margin).
Anyway, a 35 elo improvement in just 5-6 weeks is not bad at all; in the 80's and 90's that sometimes took a full year. I want 50 elo for a version worthy of the name REBEL 13, so 15 elo to go. Next round is testing LMP, usually good for a 10% speed-up, already tested at [40/15] with 12,000 games scoring 50.8%. We will see how it scales.
LMP testing finished and it scales badly, all the way from 50.8% -> 50.6% -> 50.2% to even 49.5%. It would be risky to count it as an improvement just because the overall score is somewhat positive. I have made scaling a dominant point for this version because I noticed from statistics made of rating lists that ProDeo doesn't scale well, meaning that its performance drops the longer the time control. Whatever the reason for that (and I don't think any programmer can fully grasp the reasons for this phenomenon), I think it makes sense to try to improve by only accepting changes that scale well. It's an experiment. And a time-consuming one.
It's best for now to put LMP in the freezer and have a look at the code later; it should breed some 5-10 elo.
Not much is left on the menu to test that could possibly bring the desired 15 elo for a version release, so I must go back to the drawing board hunting for new candidate improvements. In the meantime I am now testing the recapture extension, limiting its maximum from 2 to 1; heck, I might even try to do without them.
Fewer recapture extensions isn't an improvement either, and doing no recapture extensions at all is a big regression, so I am stuck for the moment. I will take a moment of reflection: either find some new changes or release the thing as ProDeo 1.9 and enjoy life again.
Consulted my notes from the past with suggested (small) improvements (ideas) and picked a number of them to try. Most of them were hardly measurable with the hardware of the past and were stamped as unclear, thus not used. The list of changes below will only be tested at [40/15] with 12,000 bullet games. If there is a sign of improvement it will be included in the scaling testing later.
OLD - [Bad Bishop = 100] | NEW - [Bad Bishop = 75]
OLD - [Minimum Knight Mobility = 100] | NEW - [Minimum Knight Mobility = 50]
Increasing passed pawn scoring for the middlegame [Passed Pawns MIDG = 150]
Increasing passed pawn scoring for the middlegame [Passed Pawns MIDG = 175]
[Passed Pawns MIDG = 200]
[Passed Pawns MIDG = 163]
Futility pruning - don't prune pawn moves to the 6th/7th row. Played with 16 threads.
Futility pruning - don't prune pawn moves to the 6th/7th row. Played with 4 cores.
Knight outpost currently set to 125, try the 75 and 100 settings.
[Search Safety = 400] is an old parameter that nowadays is only used in the late endgame when nullmove is disabled. Its current value is 200 and I have good reasons to believe that part of the search should be less selective, hence we double its safety margin. [late endgame depths] 14.61 - 14.64 = -0.03
Related to the previous item, a complete rewrite of the search part that handles the late endgame. [late endgame depths] 14.41 - 14.76 = -0.35
In the endgame attack the opponent pawns from behind.
[Passed Pawn Tropism (1) = 100] evaluates a bonus for the king supporting its own passed pawn(s). Values to test that make sense are 75 and 125. Endgame stuff.
[Passed Pawn Tropism (2) = 200] evaluates the distance of the king to enemy passed pawn(s). Values to test that make sense are 250 and 300. Endgame stuff.
1. A 12,000 games run takes about 11 hours to complete.
2. A 50.3% result (indicating +2 elo) with only 12,000 games is pretty meaningless in terms of the error bar (margin), which is -5/+5. A 50.3% result more or less guarantees the change is not a regression, and if it is, it's most likely a very minor one. Looking at it from the bright side, it can statistically also be a 5 elo improvement.
3. In the hope a cocktail of the above changes can bring me the 15 elo I want.
This is a boring (the waiting) and fascinating process at the same time.
Bad Bishop evaluation is about a bishop looking at its own pawns in the 2 forward directions; a penalty is given depending on the square before that pawn via a simple piece table, multiplied by a factor 0, 1 or 2 via a square table. For a more detailed description see here.
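The piece-table-times-square-factor scheme can be sketched like this. The tables here are illustrative stand-ins, not ProDeo's real values; squares are 0..63 with a1 = 0, seen from White.

```c
/* Illustrative base penalty table: flat 10 everywhere. */
static int base_penalty(int sq) { (void)sq; return 10; }

/* Illustrative 0/1/2 factor table: 2 on central squares
   (files c-f, ranks 3-6), 1 elsewhere. */
static int square_factor(int sq)
{
    int file = sq & 7, rank = sq >> 3;
    return (file >= 2 && file <= 5 && rank >= 2 && rank <= 5) ? 2 : 1;
}

/* Penalty for a white bishop whose forward diagonal is blocked by
   an own pawn on pawn_sq: the square BEFORE the pawn (one rank up)
   indexes the base table, scaled by the 0/1/2 square factor. */
int bad_bishop_penalty(int pawn_sq)
{
    int front = pawn_sq + 8;
    if (front >= 64)
        return 0;
    return base_penalty(front) * square_factor(front);
}
```

The cheap part is that both lookups are plain table indexes, so the whole term costs a couple of memory reads per blocking pawn.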
The Passed Pawn scoring in REBEL has since day one been doubled when the board position is an endgame position. It's based on the observation that with a board full of pieces, a passed pawn and the danger it may represent can be more easily neutralized than with a board of few pieces, the endgame. Now that I have reasonable hardware to finally properly test this 35 year old hypothesis, I am pretty surprised to see that the assumption most likely was never true; the [3b] 50.6% score with a LOS confidence score of 98.4% is a convincing number. Hence we extend the testing and set the parameter to 200, meaning passed pawn scoring is then fully equal between the middle game and the endgame. I can not believe this is true, it goes against all my chess instincts, but since numbers don't lie I will have to accept it; this is computer chess after all, not normal chess. Keeping my fingers crossed.
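How I read the [Passed Pawns MIDG] percentage, sketched below: the endgame passed-pawn score is taken at xxx/200 in the middle game, so the historic default of 100 halves it (the old "doubled in the endgame" rule) and 200 makes both phases score equally. This is my interpretation of the parameter, not confirmed code.

```c
/* Passed-pawn score as a function of game phase and the MIDG
   percentage: midg_percent = 100 reproduces the historic halving
   in the middle game, 200 makes both phases equal. */
int passed_pawn_score(int endgame_score, int midg_percent, int is_endgame)
{
    if (is_endgame)
        return endgame_score;
    return endgame_score * midg_percent / 200;
}
```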
[3b] Match [Passed Pawns MIDG = 200] finished (uncrossing my fingers) and no real surprise, it's even far worse (49.4%) than the  default setting. So the optimal value seems to be in the  -  area. I will test that later using the "binary search" approach, knowing that  gives 50.6%, starting with . For now I want to test the fundamental [6a] search change first.
[4b] Futility pruning - don't prune pawn moves to the 6th/7th row. I anticipated a bit on my development PC, playing 8000 [40/15] games, score 50.2%. Something like this was expected.
 50.1% is a meaningless result and thus a waste of time. I'd better have another look at the new [6a] search code. Put that in my notes.
[6a] While just a 50.2% score is disappointing (I expected more), I should not complain too much looking at the huge drop in depth (-0.35) the rewrite caused, which is 1/3 of a full ply. Nevertheless it should be tested at longer time controls, which I will do later. Also the new code leaves room for improvement.
 [Passed Pawn Tropism (2) = 250] is a regression (49.5%), no need to test the  value.
We take a break from the candidate peanut improvements, as I don't expect much from the remaining    and , and return to round-6 above: only extend 7th row pawn pushes conditionally and no longer unconditionally. Round-6 is now finished and with the match scores of 50.3 | 50.6 | 51.0 | 50.1 we will include this change in the upcoming BETA-2 testing.
In the meantime we continue our work on finding the optimal value for the [Passed Pawns MIDG = xxx] parameter.  gave 50.3%,  gave 50.6% and so we try  as the first one.
Adding up the results of the October 1 list (1.6%) doesn't justify a new beta round yet; it's too little to my taste. Experience has taught me that lumping together 4-5-6 individual small improvements doesn't bring the sum of the individual scores, as these changes are going to interact with each other and even more with the rest of the engine. I would be a happy man with a 1% (7 elo) gain.
On the other hand I will accept  Futility pruning - don't prune pawn moves to the 6th/7th row without further testing as an improvement. The lack of it is a system flaw from the beginning; pawns that move to the 6th row are always dangerous, let alone when they move to the 7th row. Pruning them in the last 6 plies (the current futility depth) before the horizon is a recipe for trouble.
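The accepted change can be sketched as follows: inside the futility window (the last 6 plies before the horizon, per the text) quiet moves that cannot raise alpha are pruned, except pawn moves to the 6th or 7th rank. The margin values and names are mine, not ProDeo's.

```c
/* Returns 1 if the move may be futility-pruned, 0 if it must be
   searched. Margins per remaining depth are illustrative. */
int futility_prune(int depth, int eval, int alpha,
                   int is_pawn_move, int to_rank /* 1..8, mover's view */)
{
    static const int margin[7] = { 0, 100, 150, 200, 250, 300, 350 };

    if (depth > 6)                      /* outside the futility window */
        return 0;
    if (is_pawn_move && to_rank >= 6)   /* dangerous pawn push: keep it */
        return 0;
    return eval + margin[depth] <= alpha;
}
```

The pawn exception costs almost nothing per node but removes exactly the class of horizon errors described above.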
[6a] is interesting enough to test further, so I put it on the scale-rack right now to satisfy my curiosity, despite my growing impatience with this kind of testing. And so we are going to lean back (again) and watch for another 1½-2 days before the results are in. I ordered and downloaded an interesting book yesterday to keep me busy in the meantime.
Bad results for the [6a] version. I am taking another moment of reflection.
Time for a new beta round, BETA-2. Hopefully the final version.
We lump every positive change together, test it only at decent time controls, let it run for a couple of days, and keep you up to date about its progress 3-4 times a day. Currently...
12.97 - 12.17 = +0.80
12.85 - 12.14 = +0.74
12.83 - 12.13 = +0.70
14.88 - 14.02 = +0.86
15.04 - 14.17 = +0.87
15.04 - 14.16 = +0.88
1. Second run is at the unusual 40/4 CEGT | CCRL level because the emphasis of this release is on scaling.
2. The depths column measures the progress made in search depth for the middle game. Not very interesting for the user I suppose, but for me as a statistics addict it is.
 The results so far for the first run [40/60] are far below expectations, likely (although too early to tell at the moment) due to the interaction of the 2 parameters that gave a not so convincing score (50.3%) (see  and  above) with the rest of the changes. As it looks now (and this might be premature) a third beta run will be needed. C'est la vie.
 From this point on not much will change in the average search depth for the middlegame, and it's good to see it increasing the longer the time control. Not that I am happy with the results so far; I am definitely not.
We remove the 2 changes from BETA-2 that were not so clear (they didn't have a LOS of 95+% but sat somewhere in the 70% range), call it BETA-3 and repeat the above test procedure: 4000 [40/60] games and 500 at CCRL | CEGT level.
12.95 - 12.13 = +0.82
12.83 - 12.12 = +0.71
12.82 - 12.12 = +0.70
12.82 - 12.12 = +0.70
15.07 - 14.11 = +0.96
15.09 - 14.16 = +0.91
15.06 - 14.14 = +0.92
15.05 - 14.14 = +0.91
Okay, I am reasonably satisfied; reasonably, because a 35-45 elo gain in just 2 months was a dream scenario in the old days. Nevertheless, incest testing isn't everything and you never know for sure how well the changes will work out against other engines. What can be said with certainty: it's better.
And so ends this blog; I will start making preparations for the release.
And speaking of preparations, be prepared for a surprise.
Follow us on Facebook for the latest developments.