
Commit cb9c259

Sopel97 authored and vondele committed
Update architecture to "SFNNv4". Update network to nn-6877cd24400e.nnue.
Architecture:

The diagram of the "SFNNv4" architecture:
https://user-images.githubusercontent.com/8037982/153455685-cbe3a038-e158-4481-844d-9d5fccf5c33a.png

The most important architectural changes are the following:

* The 1024x2 [activated] neurons are pairwise, elementwise multiplied (not quite pairwise due to implementation details; see the diagram). The multiplication introduces a non-linearity that exhibits similar benefits to the previously tested sigmoid activation (quantmoid4) while being slightly faster. A scalar sketch of this operation is given below.
* The following layer therefore has 2x fewer inputs, which is compensated for by having 2x more outputs. It is possible that reducing the number of outputs might still be beneficial (it was as low as 8 before). The layer is now 1024->16.
* The 16 outputs are split into 15 and 1. The 1-wide output is added directly to the network output (after some necessary scaling due to quantization differences). The 15-wide part is activated and follows the usual path through a set of linear layers. The additional 1-wide output is at least neutral, has shown a slightly positive trend in training compared to networks without it (all 16 outputs through the usual path), and may allow an additional stage of lazy evaluation to be introduced in the future.

Additionally, the inference code was rewritten and no longer uses a recursive implementation. This was necessitated by the splitting of the 16-wide intermediate result in two, which could not be done with the old implementation without ugly hacks. This is hopefully an overall improvement.
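For intuition, here is a minimal scalar sketch of the pairwise multiplication just described. This is illustrative only, not the Stockfish implementation: the function name, the pairing of element i with element i + 512, and the final shift are assumptions, and the real code operates on quantized SIMD vectors inside the feature transformer.

#include <algorithm>
#include <cstdint>

// Illustrative sketch: the 1024 accumulator values of one perspective are
// clipped to [0, 127] and multiplied in pairs, halving the width to 512.
// The product of two clipped values is at most 127 * 127, so shifting right
// by 7 maps it back into the [0, 127] range of the uint8 activation domain.
void pairwise_clipped_multiply(const std::int16_t* acc,  // 1024 accumulator values
                               std::uint8_t* out)        // 512 activated outputs
{
    for (int i = 0; i < 512; ++i)
    {
        const int a = std::clamp<int>(acc[i],       0, 127);
        const int b = std::clamp<int>(acc[i + 512], 0, 127);
        out[i] = static_cast<std::uint8_t>((a * b) >> 7);
    }
}

Multiplying two clipped values yields a response that flattens once either factor saturates, which is presumably where the sigmoid-like (quantmoid4-like) benefit comes from at negligible extra cost.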
First session:

The first session trained a network from scratch (random initialization). The exact trainer used was slightly different (older) from the one used in the second session, but this should not have a measurable effect. The purpose of this session is to establish a strong network base for the second session; small deviations in strength do not harm learnability in the second session.

The training was done using the following command:

python3 train.py \
    /home/sopel/nnue/nnue-pytorch-training/data/nodes5000pv2_UHO.binpack \
    /home/sopel/nnue/nnue-pytorch-training/data/nodes5000pv2_UHO.binpack \
    --gpus "$3," \
    --threads 4 \
    --num-workers 4 \
    --batch-size 16384 \
    --progress_bar_refresh_rate 20 \
    --random-fen-skipping 3 \
    --features=HalfKAv2_hm^ \
    --lambda=1.0 \
    --gamma=0.992 \
    --lr=8.75e-4 \
    --max_epochs=400 \
    --default_root_dir ../nnue-pytorch-training/experiment_$1/run_$2

Every 20th net was saved and its playing strength measured against a baseline at 25k nodes per move with pure NNUE evaluation (modified binary). The exact setup is not important as long as it is consistent; the purpose is to sift good candidates from bad ones.

The dataset can be found at https://drive.google.com/file/d/1UQdZN_LWQ265spwTBwDKo0t1WjSJKvWY/view

Second session:

The second training session started from the best network (as determined by strength testing) from the first session. It is important that training is resumed from a .pt model and NOT a .ckpt model; the conversion can be performed directly using serialize.py.

The LR schedule was modified to use gamma=0.995 instead of gamma=0.992 and LR=4.375e-4 instead of LR=8.75e-4 to flatten the LR curve and allow for longer training (with a per-epoch decay factor gamma, the LR at epoch n is then roughly 4.375e-4 * 0.995^n). Training then ran for 800 epochs instead of 400 (though it is possibly mostly noise after around epoch 600).

The training was done using the following command:

python3 train.py \
    /data/sopel/nnue/nnue-pytorch-training/data/T60T70wIsRightFarseerT60T74T75T76.binpack \
    /data/sopel/nnue/nnue-pytorch-training/data/T60T70wIsRightFarseerT60T74T75T76.binpack \
    --gpus "$3," \
    --threads 4 \
    --num-workers 4 \
    --batch-size 16384 \
    --progress_bar_refresh_rate 20 \
    --random-fen-skipping 3 \
    --features=HalfKAv2_hm^ \
    --lambda=1.0 \
    --gamma=0.995 \
    --lr=4.375e-4 \
    --max_epochs=800 \
    --resume-from-model /data/sopel/nnue/nnue-pytorch-training/data/exp295/nn-epoch399.pt \
    --default_root_dir ../nnue-pytorch-training/experiment_$1/run_$run_id

In particular, note that we now use lambda=1.0 instead of lambda=0.8 (previous nets), because tests show that the WDL-skipping introduced by vondele performs better with lambda=1.0. Nets were saved every 20th epoch. In total, 16 runs were made with these settings, and the best nets were chosen according to playing strength at 25k nodes per move with pure NNUE evaluation; these are the 4 nets that have been put on fishtest.

The dataset can be found in its entirety at ftp://ftp.chessdb.cn/pub/sopel/data_sf/T60T70wIsRightFarseerT60T74T75T76.binpack (the download might be painfully slow because it is hosted in China), or it can be assembled in the following way:

1. Get the script https://github.com/official-stockfish/Stockfish/blob/5640ad48ae5881223b868362c1cbeb042947f7b4/script/interleave_binpacks.py
2. Download T60T70wIsRightFarseer.binpack: https://drive.google.com/file/d/1_sQoWBl31WAxNXma2v45004CIVltytP8/view
3. Download farseerT74.binpack: http://trainingdata.farseer.org/T74-May13-End.7z
4. Download farseerT75.binpack: http://trainingdata.farseer.org/T75-June3rd-End.7z
5. Download farseerT76.binpack: http://trainingdata.farseer.org/T76-Nov10th-End.7z
6. Run:

python3 interleave_binpacks.py \
    T60T70wIsRightFarseer.binpack \
    farseerT74.binpack \
    farseerT75.binpack \
    farseerT76.binpack \
    T60T70wIsRightFarseerT60T74T75T76.binpack

Tests:

STC: https://tests.stockfishchess.org/tests/view/6203fb85d71106ed12a407b7
LLR: 2.94 (-2.94,2.94) <0.00,2.50>
Total: 16952 W: 4775 L: 4521 D: 7656
Ptnml(0-2): 133, 1818, 4318, 2076, 131

LTC: https://tests.stockfishchess.org/tests/view/62041e68d71106ed12a40e85
LLR: 2.94 (-2.94,2.94) <0.50,3.00>
Total: 14944 W: 4138 L: 3907 D: 6899
Ptnml(0-2): 21, 1499, 4202, 1728, 22

closes #3927

Bench: 4919707
1 parent b0b3155 commit cb9c259

File tree

7 files changed: +237 −302 lines


src/evaluate.h

Lines changed: 1 addition & 1 deletion
@@ -39,7 +39,7 @@ namespace Eval {
 // The default net name MUST follow the format nn-[SHA256 first 12 digits].nnue
 // for the build process (profile-build and fishtest) to work. Do not change the
 // name of the macro, as it is used in the Makefile.
-#define EvalFileDefaultName   "nn-ac07bd334b62.nnue"
+#define EvalFileDefaultName   "nn-6877cd24400e.nnue"
 
 namespace NNUE {

src/nnue/evaluate_nnue.cpp

Lines changed: 3 additions & 14 deletions
@@ -148,22 +148,18 @@ namespace Stockfish::Eval::NNUE {
 #if defined(ALIGNAS_ON_STACK_VARIABLES_BROKEN)
     TransformedFeatureType transformedFeaturesUnaligned[
       FeatureTransformer::BufferSize + alignment / sizeof(TransformedFeatureType)];
-    char bufferUnaligned[Network::BufferSize + alignment];
 
     auto* transformedFeatures = align_ptr_up<alignment>(&transformedFeaturesUnaligned[0]);
-    auto* buffer = align_ptr_up<alignment>(&bufferUnaligned[0]);
 #else
     alignas(alignment)
       TransformedFeatureType transformedFeatures[FeatureTransformer::BufferSize];
-    alignas(alignment) char buffer[Network::BufferSize];
 #endif
 
     ASSERT_ALIGNED(transformedFeatures, alignment);
-    ASSERT_ALIGNED(buffer, alignment);
 
     const std::size_t bucket = (pos.count<ALL_PIECES>() - 1) / 4;
     const auto psqt = featureTransformer->transform(pos, transformedFeatures, bucket);
-    const auto positional = network[bucket]->propagate(transformedFeatures, buffer)[0];
+    const auto positional = network[bucket]->propagate(transformedFeatures);
 
     // Give more value to positional evaluation when adjusted flag is set
     if (adjusted)
@@ -190,27 +186,20 @@ namespace Stockfish::Eval::NNUE {
 #if defined(ALIGNAS_ON_STACK_VARIABLES_BROKEN)
     TransformedFeatureType transformedFeaturesUnaligned[
       FeatureTransformer::BufferSize + alignment / sizeof(TransformedFeatureType)];
-    char bufferUnaligned[Network::BufferSize + alignment];
 
     auto* transformedFeatures = align_ptr_up<alignment>(&transformedFeaturesUnaligned[0]);
-    auto* buffer = align_ptr_up<alignment>(&bufferUnaligned[0]);
 #else
     alignas(alignment)
       TransformedFeatureType transformedFeatures[FeatureTransformer::BufferSize];
-    alignas(alignment) char buffer[Network::BufferSize];
 #endif
 
     ASSERT_ALIGNED(transformedFeatures, alignment);
-    ASSERT_ALIGNED(buffer, alignment);
 
     NnueEvalTrace t{};
     t.correctBucket = (pos.count<ALL_PIECES>() - 1) / 4;
     for (std::size_t bucket = 0; bucket < LayerStacks; ++bucket) {
-      const auto psqt = featureTransformer->transform(pos, transformedFeatures, bucket);
-      const auto output = network[bucket]->propagate(transformedFeatures, buffer);
-
-      int materialist = psqt;
-      int positional = output[0];
+      const auto materialist = featureTransformer->transform(pos, transformedFeatures, bucket);
+      const auto positional = network[bucket]->propagate(transformedFeatures);
 
       t.psqt[bucket] = static_cast<Value>( materialist / OutputScale );
       t.positional[bucket] = static_cast<Value>( positional / OutputScale );
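The removal of the shared char buffer and the new one-argument propagate call imply that the layer stack now runs iteratively inside the network class, with each layer writing into its own aligned buffer. The following is a hypothetical sketch of such a non-recursive pipeline; the layer names, dimensions, and the bypass-output rescaling are assumptions based on the commit message (the real definitions live in src/nnue/nnue_architecture.h, which is not part of this excerpt).

// Hypothetical sketch, not the verbatim Stockfish code.
struct Network {
    static constexpr int FC_0_OUTPUTS = 15;  // the 16th output bypasses the stack
    static constexpr int FC_1_OUTPUTS = 32;

    Layers::AffineTransform<TransformedFeatureDimensions, FC_0_OUTPUTS + 1> fc_0;
    Layers::ClippedReLU<FC_0_OUTPUTS + 1> ac_0;
    Layers::AffineTransform<FC_0_OUTPUTS, FC_1_OUTPUTS> fc_1;
    Layers::ClippedReLU<FC_1_OUTPUTS> ac_1;
    Layers::AffineTransform<FC_1_OUTPUTS, 1> fc_2;

    std::int32_t propagate(const TransformedFeatureType* transformedFeatures) const
    {
        // Each layer writes into its own fixed, aligned OutputBuffer instead of
        // carving slices out of one shared char buffer as before.
        alignas(CacheLineSize) decltype(fc_0)::OutputBuffer fc_0_out;
        alignas(CacheLineSize) decltype(ac_0)::OutputBuffer ac_0_out;
        alignas(CacheLineSize) decltype(fc_1)::OutputBuffer fc_1_out;
        alignas(CacheLineSize) decltype(ac_1)::OutputBuffer ac_1_out;
        alignas(CacheLineSize) decltype(fc_2)::OutputBuffer fc_2_out;

        fc_0.propagate(transformedFeatures, fc_0_out);
        ac_0.propagate(fc_0_out, ac_0_out);
        fc_1.propagate(ac_0_out, fc_1_out);  // reads only the first 15 (padded) inputs
        ac_1.propagate(fc_1_out, ac_1_out);
        fc_2.propagate(ac_1_out, fc_2_out);

        // The 1-wide bypass output is rescaled across quantization domains and
        // added to the last layer's output (the exact factor is an assumption).
        const std::int32_t fwdOut =
            fc_0_out[FC_0_OUTPUTS] * (600 * OutputScale) / (127 * (1 << WeightScaleBits));

        return fc_2_out[0] + fwdOut;
    }
};

The trade-off is a small fixed stack cost per call in exchange for no template recursion and the freedom to route the 1-wide bypass output around the rest of the stack.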

src/nnue/layers/affine_transform.h

Lines changed: 32 additions & 59 deletions
@@ -63,19 +63,17 @@ namespace Stockfish::Eval::NNUE::Layers {
   {
 # if defined(USE_SSE2)
       // At least a multiple of 16, with SSE2.
-      static_assert(PaddedInputDimensions % 16 == 0);
-      constexpr IndexType NumChunks = PaddedInputDimensions / 16;
+      constexpr IndexType NumChunks = ceil_to_multiple<IndexType>(InputDimensions, 16) / 16;
       const __m128i Zeros = _mm_setzero_si128();
       const auto inputVector = reinterpret_cast<const __m128i*>(input);
 
 # elif defined(USE_MMX)
-      static_assert(InputDimensions % 8 == 0);
-      constexpr IndexType NumChunks = InputDimensions / 8;
+      constexpr IndexType NumChunks = ceil_to_multiple<IndexType>(InputDimensions, 8) / 8;
       const __m64 Zeros = _mm_setzero_si64();
       const auto inputVector = reinterpret_cast<const __m64*>(input);
 
 # elif defined(USE_NEON)
-      constexpr IndexType NumChunks = (InputDimensions + 15) / 16;
+      constexpr IndexType NumChunks = ceil_to_multiple<IndexType>(InputDimensions, 16) / 16;
       const auto inputVector = reinterpret_cast<const int8x8_t*>(input);
 # endif
@@ -150,24 +148,27 @@ namespace Stockfish::Eval::NNUE::Layers {
   }
 #endif
 
-  template <typename PreviousLayer, IndexType OutDims, typename Enabled = void>
+  template <IndexType InDims, IndexType OutDims, typename Enabled = void>
   class AffineTransform;
 
   // A specialization for large inputs.
-  template <typename PreviousLayer, IndexType OutDims>
-  class AffineTransform<PreviousLayer, OutDims, std::enable_if_t<(PreviousLayer::OutputDimensions >= 2*64-1)>> {
+  template <IndexType InDims, IndexType OutDims>
+  class AffineTransform<InDims, OutDims, std::enable_if_t<(ceil_to_multiple<IndexType>(InDims, MaxSimdWidth) >= 2*64)>> {
    public:
     // Input/output type
-    using InputType = typename PreviousLayer::OutputType;
+    using InputType = std::uint8_t;
     using OutputType = std::int32_t;
-    static_assert(std::is_same<InputType, std::uint8_t>::value, "");
 
     // Number of input/output dimensions
-    static constexpr IndexType InputDimensions = PreviousLayer::OutputDimensions;
+    static constexpr IndexType InputDimensions = InDims;
     static constexpr IndexType OutputDimensions = OutDims;
 
     static constexpr IndexType PaddedInputDimensions =
       ceil_to_multiple<IndexType>(InputDimensions, MaxSimdWidth);
+    static constexpr IndexType PaddedOutputDimensions =
+      ceil_to_multiple<IndexType>(OutputDimensions, MaxSimdWidth);
+
+    using OutputBuffer = OutputType[PaddedOutputDimensions];
 
     static_assert(PaddedInputDimensions >= 128, "Something went wrong. This specialization should not have been chosen.");
@@ -202,20 +203,12 @@ namespace Stockfish::Eval::NNUE::Layers {
 
     static_assert(OutputDimensions % NumOutputRegs == 0);
 
-    // Size of forward propagation buffer used in this layer
-    static constexpr std::size_t SelfBufferSize =
-      ceil_to_multiple(OutputDimensions * sizeof(OutputType), CacheLineSize);
-
-    // Size of the forward propagation buffer used from the input layer to this layer
-    static constexpr std::size_t BufferSize =
-      PreviousLayer::BufferSize + SelfBufferSize;
-
     // Hash value embedded in the evaluation file
-    static constexpr std::uint32_t get_hash_value() {
+    static constexpr std::uint32_t get_hash_value(std::uint32_t prevHash) {
       std::uint32_t hashValue = 0xCC03DAE4u;
       hashValue += OutputDimensions;
-      hashValue ^= PreviousLayer::get_hash_value() >> 1;
-      hashValue ^= PreviousLayer::get_hash_value() << 31;
+      hashValue ^= prevHash >> 1;
+      hashValue ^= prevHash << 31;
       return hashValue;
     }
@@ -242,7 +235,6 @@ namespace Stockfish::Eval::NNUE::Layers {
 
     // Read network parameters
     bool read_parameters(std::istream& stream) {
-      if (!previousLayer.read_parameters(stream)) return false;
       for (std::size_t i = 0; i < OutputDimensions; ++i)
         biases[i] = read_little_endian<BiasType>(stream);
@@ -254,7 +246,6 @@ namespace Stockfish::Eval::NNUE::Layers {
 
     // Write network parameters
     bool write_parameters(std::ostream& stream) const {
-      if (!previousLayer.write_parameters(stream)) return false;
       for (std::size_t i = 0; i < OutputDimensions; ++i)
         write_little_endian<BiasType>(stream, biases[i]);
@@ -266,10 +257,7 @@ namespace Stockfish::Eval::NNUE::Layers {
 
     // Forward propagation
     const OutputType* propagate(
-      const TransformedFeatureType* transformedFeatures, char* buffer) const {
-      const auto input = previousLayer.propagate(
-        transformedFeatures, buffer + SelfBufferSize);
-      OutputType* output = reinterpret_cast<OutputType*>(buffer);
+      const InputType* input, OutputType* output) const {
 
 #if defined (USE_AVX512)
       using acc_vec_t = __m512i;
@@ -312,7 +300,6 @@ namespace Stockfish::Eval::NNUE::Layers {
 #if defined (USE_SSSE3) || defined (USE_NEON)
       const in_vec_t* invec = reinterpret_cast<const in_vec_t*>(input);
 
-
       // Perform accumulation to registers for each big block
       for (IndexType bigBlock = 0; bigBlock < NumBigBlocks; ++bigBlock)
       {
@@ -377,26 +364,28 @@ namespace Stockfish::Eval::NNUE::Layers {
     using BiasType = OutputType;
     using WeightType = std::int8_t;
 
-    PreviousLayer previousLayer;
-
     alignas(CacheLineSize) BiasType biases[OutputDimensions];
     alignas(CacheLineSize) WeightType weights[OutputDimensions * PaddedInputDimensions];
   };
 
-  template <typename PreviousLayer, IndexType OutDims>
-  class AffineTransform<PreviousLayer, OutDims, std::enable_if_t<(PreviousLayer::OutputDimensions < 2*64-1)>> {
+  template <IndexType InDims, IndexType OutDims>
+  class AffineTransform<InDims, OutDims, std::enable_if_t<(ceil_to_multiple<IndexType>(InDims, MaxSimdWidth) < 2*64)>> {
    public:
     // Input/output type
-    using InputType = typename PreviousLayer::OutputType;
+    using InputType = std::uint8_t;
     using OutputType = std::int32_t;
-    static_assert(std::is_same<InputType, std::uint8_t>::value, "");
 
     // Number of input/output dimensions
-    static constexpr IndexType InputDimensions =
-      PreviousLayer::OutputDimensions;
+    static constexpr IndexType InputDimensions = InDims;
     static constexpr IndexType OutputDimensions = OutDims;
+
     static constexpr IndexType PaddedInputDimensions =
-        ceil_to_multiple<IndexType>(InputDimensions, MaxSimdWidth);
+      ceil_to_multiple<IndexType>(InputDimensions, MaxSimdWidth);
+    static constexpr IndexType PaddedOutputDimensions =
+      ceil_to_multiple<IndexType>(OutputDimensions, MaxSimdWidth);
+
+    using OutputBuffer = OutputType[PaddedOutputDimensions];
 
     static_assert(PaddedInputDimensions < 128, "Something went wrong. This specialization should not have been chosen.");
@@ -405,20 +394,12 @@ namespace Stockfish::Eval::NNUE::Layers {
     static constexpr const IndexType InputSimdWidth = SimdWidth;
 #endif
 
-    // Size of forward propagation buffer used in this layer
-    static constexpr std::size_t SelfBufferSize =
-      ceil_to_multiple(OutputDimensions * sizeof(OutputType), CacheLineSize);
-
-    // Size of the forward propagation buffer used from the input layer to this layer
-    static constexpr std::size_t BufferSize =
-      PreviousLayer::BufferSize + SelfBufferSize;
-
     // Hash value embedded in the evaluation file
-    static constexpr std::uint32_t get_hash_value() {
+    static constexpr std::uint32_t get_hash_value(std::uint32_t prevHash) {
       std::uint32_t hashValue = 0xCC03DAE4u;
       hashValue += OutputDimensions;
-      hashValue ^= PreviousLayer::get_hash_value() >> 1;
-      hashValue ^= PreviousLayer::get_hash_value() << 31;
+      hashValue ^= prevHash >> 1;
+      hashValue ^= prevHash << 31;
       return hashValue;
     }
@@ -441,7 +422,6 @@ namespace Stockfish::Eval::NNUE::Layers {
 
     // Read network parameters
     bool read_parameters(std::istream& stream) {
-      if (!previousLayer.read_parameters(stream)) return false;
       for (std::size_t i = 0; i < OutputDimensions; ++i)
         biases[i] = read_little_endian<BiasType>(stream);
       for (std::size_t i = 0; i < OutputDimensions * PaddedInputDimensions; ++i)
@@ -452,7 +432,6 @@ namespace Stockfish::Eval::NNUE::Layers {
 
     // Write network parameters
     bool write_parameters(std::ostream& stream) const {
-      if (!previousLayer.write_parameters(stream)) return false;
       for (std::size_t i = 0; i < OutputDimensions; ++i)
         write_little_endian<BiasType>(stream, biases[i]);
@@ -463,10 +442,7 @@ namespace Stockfish::Eval::NNUE::Layers {
     }
     // Forward propagation
     const OutputType* propagate(
-      const TransformedFeatureType* transformedFeatures, char* buffer) const {
-      const auto input = previousLayer.propagate(
-        transformedFeatures, buffer + SelfBufferSize);
-      const auto output = reinterpret_cast<OutputType*>(buffer);
+      const InputType* input, OutputType* output) const {
 
 #if defined (USE_AVX2)
       using vec_t = __m256i;
@@ -491,12 +467,11 @@ namespace Stockfish::Eval::NNUE::Layers {
 #if defined (USE_SSSE3)
       const auto inputVector = reinterpret_cast<const vec_t*>(input);
 
-      static_assert(InputDimensions % 8 == 0);
       static_assert(OutputDimensions % OutputSimdWidth == 0 || OutputDimensions == 1);
 
       if constexpr (OutputDimensions % OutputSimdWidth == 0)
       {
-        constexpr IndexType NumChunks = InputDimensions / 4;
+        constexpr IndexType NumChunks = ceil_to_multiple<IndexType>(InputDimensions, 8) / 4;
        constexpr IndexType NumRegs = OutputDimensions / OutputSimdWidth;
 
         const auto input32 = reinterpret_cast<const std::int32_t*>(input);
@@ -555,8 +530,6 @@ namespace Stockfish::Eval::NNUE::Layers {
     using BiasType = OutputType;
     using WeightType = std::int8_t;
 
-    PreviousLayer previousLayer;
-
     alignas(CacheLineSize) BiasType biases[OutputDimensions];
     alignas(CacheLineSize) WeightType weights[OutputDimensions * PaddedInputDimensions];
   };
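Since get_hash_value no longer recurses into a PreviousLayer, the caller must now thread the running hash through the stack itself. A sketch of how the chained hash would be computed follows; the concrete layer list and dimensions are assumed for illustration, and the entry-point hash is taken as a parameter rather than asserting the FeatureTransformer's interface.

// Sketch: the file hash is folded layer by layer by the owning network class
// instead of each layer recursing into its predecessor. The layer list below
// is an assumed example, not the verbatim SFNNv4 stack.
constexpr std::uint32_t fold_network_hash(std::uint32_t featureHash)
{
    std::uint32_t h = featureHash;
    h = Layers::AffineTransform<1024, 16>::get_hash_value(h);
    h = Layers::ClippedReLU<16>::get_hash_value(h);
    h = Layers::AffineTransform<15, 32>::get_hash_value(h);
    h = Layers::ClippedReLU<32>::get_hash_value(h);
    h = Layers::AffineTransform<32, 1>::get_hash_value(h);
    return h;
}

Besides matching the new non-recursive inference, this also avoids evaluating PreviousLayer::get_hash_value() twice per layer, as the old code did for the two XOR terms.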

src/nnue/layers/clipped_relu.h

Lines changed: 11 additions & 24 deletions
@@ -26,51 +26,41 @@
 namespace Stockfish::Eval::NNUE::Layers {
 
   // Clipped ReLU
-  template <typename PreviousLayer>
+  template <IndexType InDims>
   class ClippedReLU {
    public:
     // Input/output type
-    using InputType = typename PreviousLayer::OutputType;
+    using InputType = std::int32_t;
     using OutputType = std::uint8_t;
-    static_assert(std::is_same<InputType, std::int32_t>::value, "");
 
     // Number of input/output dimensions
-    static constexpr IndexType InputDimensions = PreviousLayer::OutputDimensions;
+    static constexpr IndexType InputDimensions = InDims;
     static constexpr IndexType OutputDimensions = InputDimensions;
     static constexpr IndexType PaddedOutputDimensions =
       ceil_to_multiple<IndexType>(OutputDimensions, 32);
 
-    // Size of forward propagation buffer used in this layer
-    static constexpr std::size_t SelfBufferSize =
-      ceil_to_multiple(OutputDimensions * sizeof(OutputType), CacheLineSize);
-
-    // Size of the forward propagation buffer used from the input layer to this layer
-    static constexpr std::size_t BufferSize =
-      PreviousLayer::BufferSize + SelfBufferSize;
+    using OutputBuffer = OutputType[PaddedOutputDimensions];
 
     // Hash value embedded in the evaluation file
-    static constexpr std::uint32_t get_hash_value() {
+    static constexpr std::uint32_t get_hash_value(std::uint32_t prevHash) {
       std::uint32_t hashValue = 0x538D24C7u;
-      hashValue += PreviousLayer::get_hash_value();
+      hashValue += prevHash;
       return hashValue;
     }
 
     // Read network parameters
-    bool read_parameters(std::istream& stream) {
-      return previousLayer.read_parameters(stream);
+    bool read_parameters(std::istream&) {
+      return true;
     }
 
     // Write network parameters
-    bool write_parameters(std::ostream& stream) const {
-      return previousLayer.write_parameters(stream);
+    bool write_parameters(std::ostream&) const {
+      return true;
     }
 
     // Forward propagation
     const OutputType* propagate(
-      const TransformedFeatureType* transformedFeatures, char* buffer) const {
-      const auto input = previousLayer.propagate(
-        transformedFeatures, buffer + SelfBufferSize);
-      const auto output = reinterpret_cast<OutputType*>(buffer);
+      const InputType* input, OutputType* output) const {
 
 #if defined(USE_AVX2)
       if constexpr (InputDimensions % SimdWidth == 0) {
@@ -191,9 +181,6 @@ namespace Stockfish::Eval::NNUE::Layers {
 
       return output;
     }
-
-   private:
-    PreviousLayer previousLayer;
   };
 
 } // namespace Stockfish::Eval::NNUE::Layers
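All of the SIMD paths in this file compute the same scalar operation; for reference, a minimal generic fallback of the clipped ReLU looks like the following. WeightScaleBits = 6 is the usual Stockfish quantization shift, stated here as an assumption.

#include <algorithm>
#include <cstddef>
#include <cstdint>

// Scalar reference for the clipped ReLU above: divide the int32 pre-activation
// by the weight scale (an arithmetic shift) and clamp it into [0, 127], the
// range of the uint8 activation domain.
void clipped_relu_scalar(const std::int32_t* input, std::uint8_t* output,
                         std::size_t dimensions)
{
    constexpr int WeightScaleBits = 6;  // assumed quantization shift
    for (std::size_t i = 0; i < dimensions; ++i)
        output[i] = static_cast<std::uint8_t>(
            std::clamp(input[i] >> WeightScaleBits, 0, 127));
}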
