lichess.org

Stockfish trashing the Caro-Kann

I am officially cancelling Stockfish. It has insulted my Caro-Kann in favor of, of all openings, the French Defense *gags*. Look at this evaluation (ignore the shit game, thanks)

[image: computer evaluation of openings]

I let Stockfish play against itself from the King's Indian opening. At first it gives White almost +1.0, then as the game goes along the score drops to 0.0, and both sides sac their dark-squared bishops for knights... which is strategically wrong.

Engines struggle in closed positions, and what position is more closed than one where no pawn has been traded yet? It's as closed as it gets. Stockfish uses heuristics, selective search, whatever. I even suggested a move it wasn't even considering, and after some pretty short thinking it said it's the best move - better than anything Stockfish had found on its own. You know what the move was? h3 (preventing a knight from jumping in). Stockfish skipped it... As Kasparov says - it doesn't think, it just makes fewer mistakes.
This variation is known to be bad for Black anyway - White has easy play with g4-g5, and Black has to defend very accurately for it to remain even +1 for White.
Just find another computer to assist you, beat your Stockfish enemy with the Caro-Kann, and you're a winner.
I've looked at the Stockfish source code, and there is a misconception that it learns by itself or somehow has no knowledge of the game. That is not true: there are tables of weighting values/scalars for each piece as they correspond to each square. These tables get combined with some fancy math when you have a full position. Where this info comes from I don't know, but it seems to be the secret sauce of the eval. My point is that the eval could be way off, since it looks like a human hand-tweaked a few of the "knobs and dials," so to speak, in the algorithm.
This looks familiar. As does the concept of redundancy.

Sicilian and Modern have more bite than this for Black.
@shellc0der said in #5:
> I've looked at the stockfish source code, and there is a misconception that it learns by itself or somehow has no knowledge of the game.
I honestly didn't believe you, but after looking at the source code myself, it looks like you're right. It was my understanding that Stockfish used to use those weighting values and scalars but discarded them after the development of NNUE, transitioning to training through self-play. But even with my knowledge of C++ (or lack thereof), I can tell that you are probably correct.

This actually does make a bit of sense, though. I have heard people say things like "Stockfish plays too ______" or "Stockfish doesn't _____ enough." I originally assumed these were the words of idiotic patzers like myself, but it now seems they may have been on to something.

Considering this, doesn't it make Leela Chess Zero a far more valuable engine to use, as it actually trains through self-play? Lc0 would be far less affected by the biases that its programmers have, in comparison to Stockfish. That's also probably why it seems to play far more like a human would.
I think AlphaZero and similar chess engines have the privilege of using tons of GPUs and tons of hard drive space, whereas Stockfish is designed for a personal computer. There are no perfect solutions, only trade-offs. So I will accept that Stockfish is an approximation and AlphaZero is the real thing. Nothing beats true machine learning/genetic algorithms that learn from scratch, but Stockfish seems to have a built-in head start for performing on lower-tier hardware.
