Why is Leela's performance dropping again?


Vassilis

unread,
Dec 20, 2018, 11:09:43 AM12/20/18
to LCZero
Hi all

For the latest networks (32112 - 32117...), the self-play Elo is constantly dropping.
Is it some new change in parameter values, or is it something else?

Regards
Vas

Francesco Tommaso

unread,
Dec 20, 2018, 11:28:23 AM12/20/18
to LCZero
Hi, Vassilis.

I think it's just random. However, yesterday the LR was dropped for the second time. I think there is some expectation of a real improvement in real Elo (not self Elo) over the next days/weeks.

If you look at the graphs on the main page, it is obvious that self Elo is not a good indicator of real Elo, so I wouldn't worry about this drop.

Regards,

Francesco 

Ingo Weidner

unread,
Dec 20, 2018, 11:35:17 AM12/20/18
to LCZero
Hi,

starting with ID 32105 there was the second LR drop (= learning rate drop). While the self-play Elo might decrease, the "real" Elo, which is more important, could increase after that.

After the first LR drop there was a big jump in the real and/or estimated Elo. You can have a look at the graphs here, with the first LR drop around ID 30854:

As we were already at quite a high level of real Elo recently, that jump might not be as big now as with the first one. It could also happen that the real Elo does not increase further; in that case the 3xxxx run might be considered failed and they would have
to start a new 4xxxx network.

Vassilis

unread,
Dec 20, 2018, 12:00:58 PM12/20/18
to LCZero
I see...
I wasn't aware of those graphs... Very informative indeed.
So we'll just wait a few days and see! Hope for the best.

Thank you so much, Francesco and Ingo.
Vas

Francesco Tommaso

unread,
Dec 20, 2018, 12:17:35 PM12/20/18
to LCZero
I just saw the test 35 graphs, and according to MTGOStark's Elo estimation, it looks like it has already reached test 30.

Very interesting. 

Vassilis

unread,
Dec 20, 2018, 1:00:12 PM12/20/18
to LCZero
It is a smaller size net, Francesco. I guess that's why it learns faster!
The question is: does it also learn better... in the end?
I'm very optimistic that some of these tests will eventually surpass Stockfish 10 (and any other AB engine) soon.

Just wait and see...
Vas

ovi...@gmail.com

unread,
Dec 20, 2018, 1:47:55 PM12/20/18
to LCZero
MTGOStark uses a short TC with no increment on not very powerful hardware. Test35 is a smaller net that gets more nodes per second, and under these ultrabullet conditions test35 is favoured. With longer TC, test35 is about -50 Elo relative to SF5 (a fantastic Elo, nevertheless) and test30 is close to SF8.
Both of them have had a drop in LR recently, so hopefully they will increase in strength.

hibernal

unread,
Dec 20, 2018, 3:02:56 PM12/20/18
to LCZero
Test 35 is also a squeeze-and-excitation net, I believe, which is a significant change on its own. Likely the strongest net will be the generation after, which will include all of the lessons learned in 30 and 35 (including lessons learned from switching to dedicated endgame nets).

best,
dan

OmenhoteppIV

unread,
Dec 20, 2018, 5:00:21 PM12/20/18
to LCZero
I noticed on test10 and test20 that when the self Elo drops, the true Elo increases. 😂

Ingo Weidner

unread,
Dec 21, 2018, 7:11:23 AM12/21/18
to LCZero
With my current tests against Stockfish 10 at 10 min + 3s time control, among the networks that came after ID 32089 (which is the best 3xxxx net here until now), so far only ID 32125 seems to perform at a comparable level.
FWIW, so far I have tested up to ID 32131.

I guess we will have to wait until there is a big jump in the self Elo in the positive direction. Until I see such a jump in the graph I will focus on further testing IDs 32089 and 32125.

ID 32089 so far, at 16 games with 10 min + 3s time control, has a score of 40.6% against Stockfish 10 (13 draws and 3 losses).
At the same time control, ID 11248 currently has a score of exactly 50% against SF 10 (with 3 games won). So far under those conditions no 3xxxx network has won a game on my system.
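The percentages quoted in this thread follow standard chess match scoring (1 point for a win, 0.5 for a draw, 0 for a loss). A minimal helper to reproduce them; the function name is just illustrative:

```python
def match_score(wins: int, draws: int, losses: int) -> float:
    """Match score in percent under standard chess scoring
    (win = 1, draw = 0.5, loss = 0)."""
    games = wins + draws + losses
    return 100.0 * (wins + 0.5 * draws) / games

# Ingo's 16-game result for ID 32089: 0 wins, 13 draws, 3 losses
print(round(match_score(0, 13, 3), 1))  # 40.6
```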

boud...@hotmail.com

unread,
Dec 21, 2018, 7:44:51 AM12/21/18
to LCZero
Thank you for sharing.

Your results seem in line with my own tests... with 32093 testing at less than 30 Elo below SF10.
I haven't tested 32089... did you test 32093?

Also what GPU do you use for LC0 and what CPU do you use for SF10?

rgds

Ingo Weidner

unread,
Dec 21, 2018, 8:24:58 AM12/21/18
to LCZero

Hi,

Yes, I have tested ID 32093, but IIRC it performed slightly worse here than ID 32089.

My hardware is:
- HP Omen notebook
- Windows 10 Home 64-bit
- RAM: 16 GB (with 1 GB hash used)
- GPU: mobile GeForce GTX 1050 Ti with 768 CUDA cores ==> around 2-3 knps
- CPU: mobile Core i7-7700HQ (4 x 2.8 GHz with up to 8 threads) ==> with SF 9 at depth 26, around 5176 knps with 4 cores (more knps at higher depths)

Currently either 3 or 4 CPU threads are used, depending on what else I am doing with the notebook while Arena runs in the background.
In any case the performance seems to be more in favor of the CPU at this setup, except if I switch to only one CPU thread/core.

Francesco Tommaso

unread,
Dec 21, 2018, 8:36:06 AM12/21/18
to LCZero
It seems like the drop in LR is not producing large gains, at least compared with test 10 and the ones before. Maybe test 30 is close to its potential ceiling.

I really don't believe that a 20x256 network can get much further than 11258.

Edward Panek

unread,
Dec 21, 2018, 8:51:21 AM12/21/18
to LCZero
Do you think a larger network size is required?

Hace

unread,
Dec 21, 2018, 9:29:52 AM12/21/18
to LCZero
Maybe we should better understand what is fed to the net and how it learns from the training games?

As far as I know, each position in a game is used for learning. Suppose White wins against Black in some training game; even when a position might lead to a win in a few other training games, if it occurred on the losing side in this game it is learned as a loss.
If there is too much noise or randomness in the moves, it might be very hard to generalize.
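The labeling scheme Hace describes, where every position in a finished game inherits the final result, can be sketched roughly as below. This is an AlphaZero-style "z target" illustration, not Lc0's actual code; the function name and data layout are assumptions:

```python
def value_targets(positions, result):
    """Label every position of a finished self-play game with the final
    outcome, seen from the side to move (AlphaZero-style z target).

    result: +1 if White won, -1 if Black won, 0 for a draw.
    positions: list of (ply, features); White is to move on even plies.
    """
    targets = []
    for ply, features in positions:
        # flip the sign for Black-to-move positions
        z = result if ply % 2 == 0 else -result
        targets.append((features, z))
    return targets
```

Note how a single noisy loss stamps every one of the loser's positions with -1, even positions that are winning in other games; only averaging over many games washes that noise out.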


ovi...@gmail.com

unread,
Dec 21, 2018, 9:33:22 AM12/21/18
to LCZero
Do you believe that, on the first attempt, with no parameter optimisation, test10 has already gotten the best out of a 20b net and no further gain can be obtained? In that case we were very lucky/unlucky.

Francesco Tommaso

unread,
Dec 21, 2018, 9:44:29 AM12/21/18
to LCZero
Well, as far as I know, we actually used Google DeepMind's parameters. I think those guys did a lot of optimising, considering that they can test results in hours, not in months like us.

I am not saying that we can't get better results than theirs, but I find it very improbable that we will with a few tests here and there.

I think a much better use of resources would be to simply treat network size as a "parameter to be optimized", instead of all the others we have tried already. Even more so because we didn't test it enough: every network size increase came with better results, and we stopped testing before we fully understood how much it can be increased before hitting a plateau on current hardware.

I simply do not understand the current strategy, and no one has given a good argument against my criticism. If somebody could explain to me what I am missing I would be very happy. I truly would.

I see no sign of breaking 11258's Elo barrier any time soon with the current strategy. Test 20 was a total failure and test 30 seems to have reached close to its potential.

One more thing: why can't we do parameter tuning on smaller networks?

Regards,

Francesco

Owen W

unread,
Dec 21, 2018, 10:03:29 AM12/21/18
to LCZero
In a previous post they seemed to do a lot of optimizing based on that paper:

I wonder how hard this would be to incorporate. As a community we are up against the fact that they had far superior hardware, with fairly immediate results; we don't have that luxury, it seems.

Owen

Owen W

unread,
Dec 21, 2018, 10:08:04 AM12/21/18
to LCZero
Also note, in that paper, section 3.3, Task 3: Tuning on TPUs, and its use of Bayesian Optimization.

Jon Mike

unread,
Dec 21, 2018, 10:18:20 AM12/21/18
to LCZero
@Francesco,

Why can't we do parameter tuning on smaller networks?

This has been my song for many months.  I think others, like Margus Riimaa have mentioned this too. Testing and tuning parameters on smaller networks is the most cost efficient route we have available.  Unfortunately, I haven't been able to sway the devs.  The trend of testing and tuning on larger and larger networks is unnecessarily costly, slow and IMHO not very wise.  Does anyone know why this has not been done?

Ingo Weidner

unread,
Dec 21, 2018, 10:22:01 AM12/21/18
to LCZero
Speaking of new networks after the LR drop: today ID 32125 scored 44% against Stockfish 10 at 10 min + 3s time control (in 8 games, with colors switched between them).
Also today, under the same conditions, ID 32089 (which was the best 3xxxx net so far...) scored 44% too, which is slightly better than the previous tests with it, which were around 40%.
ID 11248 still keeps a score of 50%, as it did a few times before, and is still the only net that has scored any wins against SF 10 under those conditions.

hardware used:
RAM: 16 GB (1024 MB hash), CPU: mobile i7-7700HQ 4x2.8GHz (3 threads/cores used for engines), GPU: mobile Nvidia Gforce GTX 1050 TI 4GB (768 CUDA Cores)

Owen W

unread,
Dec 21, 2018, 10:24:07 AM12/21/18
to LCZero
So why is 11248 not being optimized?

Vassilis

unread,
Dec 21, 2018, 10:41:52 AM12/21/18
to LCZero
Yes indeed... Very good idea!
After all it has the same architecture (20x256) with the rest.

Francesco Tommaso

unread,
Dec 21, 2018, 11:35:20 AM12/21/18
to LCZero
A network, in this case 11248, is just a file with several weights. You can't optimize a network without changing its nature. The magic happens in the way these weights are found.

There is a theoretically optimal configuration of the weights, which would lead to the best performance given the size of the network. Weight configurations are reached through training. Training happens with the network playing against itself, but with high variability of move choices (high temperature), which guarantees exploration, novelty and the finding of new ideas, which generate new networks that incorporate those ideas. And so on...

The parameter tuning done lately is mainly related to two things:

1 - How the exploration (moves considered) is done, given a certain number of nodes searched: should it be shallower but more diverse, or should it focus on less move diversity but go deeper into those variations?

2 - How the things that were learned are applied to the new weights: for example, how important are the principles found in those specific cases (novelties)? Should they be given more or less importance than those the network already knows or already thought were optimal?

My explanation is far from precise, but I hope it gives some clues about the internal workings of the training and can help with my argument.
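The "high temperature" knob in point 1 is, in AlphaZero-style training, applied to the MCTS visit counts: a move is sampled with probability proportional to N^(1/tau). A rough self-contained sketch of that idea, not Lc0's actual implementation:

```python
import random

def select_move(visit_counts, temperature=1.0):
    """Pick a move index from MCTS visit counts.

    temperature -> 0 : greedy argmax (used for strong match play)
    temperature = 1 : sample proportionally to visits (used early in
                      self-play training to keep games diverse)
    """
    if temperature < 1e-3:
        return max(range(len(visit_counts)), key=lambda i: visit_counts[i])
    # weight each move by N^(1/tau), then sample from that distribution
    weights = [c ** (1.0 / temperature) for c in visit_counts]
    r = random.random() * sum(weights)
    for i, w in enumerate(weights):
        r -= w
        if r <= 0:
            return i
    return len(visit_counts) - 1
```

With temperature near zero the most-visited move is always played; with temperature 1 even rarely visited moves occasionally get chosen, which is exactly the exploration/novelty trade-off Francesco describes.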

DeepMind already optimised these parameters. Of course, their focus is broader than ours, since they are not worried specifically about chess, but about board games in general, like Go and Shogi. Given this, it is fair to conclude that we can find better parameters for chess than the originals.

The problem is that all this parameter tuning is being tested on big networks, and we only know the result months later. If we want to surpass DeepMind's tuning we should do it in a more efficient way, as pointed out by Jon Mike: with smaller networks.

One parameter which we did test, and it was the first one, was network size. The project started with a very small network and slowly increased its size. It was clear that while smaller networks were reaching a plateau, bigger networks surpassed them with the same parameters other than network size.

When we eventually reached the 20x256 size, the same as AlphaZero, we simply stopped. The best network we have so far, even with bugs, is 11258. Which parameters were used? DeepMind's! The result: a network that IS AlphaZero. It plays almost identically, and the benchmarks also show that its Elo is very close to AlphaZero's. Lots of benchmarks under the same conditions as the paper lead to the same result. Dietrich also showed some of these.

How do we surpass Alpha then? According to the strategy currently in execution, by outsmarting the guys at DeepMind, finding better parameters on the first or second try, since we can't afford to spend years testing parameters.

My humble suggestion: let's increase network size. Take a look at the difference between the 15x192 and the 20x256. With the same parameters you end up with a much stronger network. Why not increase the size a little bit and see?

Until now no one has given a good argument against it. I would be very satisfied if someone came here and said something like: "Hey, Francesco, you are an idiot. It does not work because... (and gives a good solid argument)." I would be very relieved to know that we are on the best path, even if I am an ignorant fool.

The time-consumption argument, that bigger networks take more time to train, is flawed. We wasted more than 100 million games on tests that so far look like complete failures (I hope I am wrong!).

With this number of games we could have fully trained a 28x362 network (a 41% size increase in each dimension, which leads to roughly a 1.41^2 time increase, or about twice the current training time, considering the 48 million games of 11258).
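Francesco's back-of-envelope figure can be checked directly; note that the answer depends on the assumed cost model. His per-dimension estimate gives roughly 2x, but if training cost instead scales with blocks times filters squared (a common rough model for a residual tower), the ratio comes out closer to 2.8x:

```python
blocks_old, filters_old = 20, 256   # 11258-style 20x256 network
blocks_new, filters_new = 28, 362   # Francesco's proposed 28x362 size

bs = blocks_new / blocks_old        # 1.40, blocks scale factor
fs = filters_new / filters_old      # ~1.41, filters scale factor

# Francesco's per-dimension estimate: ~1.41^2, about 2x training time
simple_ratio = 1.41 ** 2
print(round(simple_ratio, 2))       # 1.99

# If cost scales with blocks * filters^2, the ratio is larger
tower_ratio = bs * fs * fs
print(round(tower_ratio, 2))        # 2.8
```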

Regards, 

Francesco

Owen W

unread,
Dec 21, 2018, 12:06:50 PM12/21/18
to LCZero
Isn't this the point being made about automating this versus pure trial and error, which seems to be what is going on, hence the Bayesian Optimization? Or have they incorporated this?

Vassilis

unread,
Dec 21, 2018, 12:10:45 PM12/21/18
to LCZero
Hi Francesco ...

Very well said!

I also believe that a bigger net would ultimately be stronger in terms of positional evaluation. I do not object to the training time for those nets, either. But on the other hand, bigger nets are also heavy and expensive in terms of calculation time. That means that in actual "practical" play (after they've been trained), bigger nets examine far fewer nodes/sec and thus reach shallower depths. There must be a balance between the quality of the evaluation and the search depth. Can we be sure beforehand that those nets will actually perform better in practical play?

I like 20x256 nets, with AlphaZero's initial training parameters. The community does not possess DeepMind's extreme hardware to do the training. So, in order to surpass AlphaZero, the nets must be trained "smarter". Something like the strategy being followed now. Somehow I feel that the best weights are not in 11248. Maybe we are close to them, maybe not. The question is how do we get there? Experimenting is the answer...

Regards...
Vas

Owen W

unread,
Dec 21, 2018, 12:35:16 PM12/21/18
to LCZero
Vas,

Experimenting is good if it is directed. I have no problem volunteering computing time and even buying more powerful cards, but for me to justify doing that going forward, I would want to see a comprehensive plan, integrating automated hyper-parameter setting rather than guessing, etc. Why are Tensor Cores not being used, for example? Based on A0's papers we know roughly how many games it took to get to a certain point; the current data gathering seems a bit scattered. They got to 11248's strength a few months or so ago and have not come close to it since. Several people are contributing good ideas, but again it seems as though there is no consensus direction.

Cheers,
Owen

Jon Mike

unread,
Dec 21, 2018, 2:27:57 PM12/21/18
to LCZero
@Francesco,
With respect, this satirical reply is not aimed at you, but at the idea, which disagrees with my own scientific conscience.

I mentioned something about a 100-block network a while back, but now I am thinking a 10,000-block one would be even better. Sure, it might take longer, but it should be much stronger. Once the network is extremely large, then we can experiment and try to understand just what we are doing. It is always best to expand before consolidating, since expansion (stronger end networks) is the goal.

However, if we are concerned about understanding... it would be logical to test and tune parameters using very small networks across a gradient of different sizes. Perhaps that understanding would reveal a clear path to stronger networks.

In summary, the network size is strongly correlated with the end strength, but at the same time the network size is inversely proportional to the cost and, more importantly, to the rate of our scientific observation and understanding. I guess it is a matter of what matters most. But is that subjective to the individual? I think not. In science, knowledge/understanding is what matters most!

Edward Panek

unread,
Dec 21, 2018, 2:57:52 PM12/21/18
to LCZero
If I understand this correctly, some network has the optimum size to handle chess's complexity while also being small enough to allow search depth, correct? Aside from speed, what is the downside of too large a network size?

Edward Panek

unread,
Dec 21, 2018, 3:04:39 PM12/21/18
to LCZero
Or what test can be used to verify/validate we have the right size network?

Vassilis

unread,
Dec 21, 2018, 3:25:24 PM12/21/18
to LCZero
Hi Edward!

Generally speaking, the larger the network the better. Another downside of a very large network, besides its speed, is its training time...
Thus its "optimal size" has to take all these factors into account. The problem is that this optimal size is not known at first, so one has to try progressively larger and larger nets until one fits your needs!

Regards...
Vas

123

unread,
Dec 21, 2018, 3:33:45 PM12/21/18
to LCZero
Jon Mike:
1. Yes, we need bigger blocks; the network should be much larger!
2. Lc0 should be trained with Chess960, not only with the one main starting position!
          - The main starting position is good, but:
          - Chess960 gives 960 starting positions instead of only one (960 vs 1). It also improves positional and strategic abilities more, due to the much higher complexity. The tactical skills would also be some hundreds of Elo higher, because Chess960 positions lead to much more tactical fireworks, perhaps more often by a factor of 300. The playing style would also be more mixed, and the chess knowledge would be on a much higher level.
3. Lc0 should not be trained only from the beginning of the game, but also from the end of the game! Instead of picking one endgame position and not the other, which feels somehow unfair and out of harmony, Lc0 should train on all possible 4-men positions as if they were starting positions, to a point where we can say it plays 4-men positions perfectly, then switch to 5-men positions, after that to 6-men positions, and after that to 7-men positions.
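The "960 vs 1" figure in point 2 is easy to verify by brute force: a Chess960 back rank is any arrangement of the eight pieces with the bishops on opposite-colored squares and the king between the rooks. A self-contained sketch:

```python
from itertools import permutations

def chess960_back_ranks():
    """Enumerate all legal Chess960 back-rank arrangements."""
    seen = set()
    for perm in set(permutations("RNBQKBNR")):
        bishops = [i for i, p in enumerate(perm) if p == "B"]
        rooks = [i for i, p in enumerate(perm) if p == "R"]
        king = perm.index("K")
        # bishops on opposite-colored squares (indices of opposite parity),
        # king strictly between the two rooks
        if (bishops[0] + bishops[1]) % 2 == 1 and rooks[0] < king < rooks[1]:
            seen.add("".join(perm))
    return sorted(seen)

ranks = chess960_back_ranks()
print(len(ranks))  # 960
```

The classical setup RNBQKBNR is one of the 960, which is why the standard starting position is a special case of Chess960.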

Deep Blender

unread,
Dec 21, 2018, 3:36:24 PM12/21/18
to LCZero
What kind of understanding do you expect could be gathered by training smaller networks?

Vassilis

unread,
Dec 21, 2018, 3:43:41 PM12/21/18
to LCZero
This is exactly the point @Jon!

Knowledge is the key here!
And by saying this, I don't necessarily mean to understand the intricacies of DCNN's and how they learn, or discover clever technical ideas around them, but also new knowledge in the game of chess!
And for this last one Leela (and alpha zero) are unlike everything we have seen so far.
I play chess myself, and even though I'm not a strong player, I'm deeply impressed by the way Leela plays. Perhaps this is how chess should be played.
Even if Leela never manages to be the top engine, she will be first among equals in terms of imagination, originality, and exploration of new paths in the game!
Chess will benefit greatly from these projects.

Regards...
Vas

Ingo Weidner

unread,
Dec 21, 2018, 3:59:07 PM12/21/18
to LCZero
Between ID 32132 and 32138 the self-play Elo jumped by around +74 points and, at around 9638 points, is now close to a new record.
If it goes on this way, it looks promising...

Jon Mike

unread,
Dec 21, 2018, 9:36:30 PM12/21/18
to LCZero
@all,
I am afraid the majority of fellows misinterpreted my reply above. I believe the optimal size is a matter of present resources over time. I believe small networks, such as 10 blocks or smaller, are more than sufficient for all our needs as scientists and chess players.

I am convinced nets even smaller than 10 blocks can teach us much more about neural networks, parameters, and especially chess. This is counter-intuitive to many, as we assume a complex problem must bear a complex solution.

@Deep Blender,
What kind of understanding do you expect could be gathered by training smaller networks?

Similar understanding as what would be gathered with larger networks, but at a much faster rate with far less cost in resources!

I believe:
  • A gradient of relatively small networks provides the optimal sandbox for scientific knowledge of NNs and chess understanding
  • The Markov property found in chess significantly clues us in on some properties of the perfect solution, namely that it is possible the solution we are looking for is:
    • embedded and depth-less (present without conventional turn-by-turn analysis).
    • astoundingly simple, although one would assume otherwise given the complexity of chess.
  • It is more probable that a very small neural network could find the above proposed pure and simple depth-less solution.
What do you think?

Deep Blender

unread,
Dec 21, 2018, 10:43:55 PM12/21/18
to LCZero
Would it be accurate to say that you believe all the knowledge which can be gathered with large neural networks can also be gathered with smaller ones (at least in the context of Leela)?

Jon Mike

unread,
Dec 21, 2018, 11:13:09 PM12/21/18
to LCZero
@Deep Blender,
Yes, that summary is 99% accurate, but maybe I should expound.

I believe, more specifically, that the solution (rather than all knowledge) can be found with both large and small neural networks. However, I propose that the relatively smaller networks will be more efficient, as well as having a greater likelihood of finding the proposed simple solution, compared to the larger networks. Of course, the solution also encompasses all knowledge, which can later be derived from a place of understanding.

I think the network size should be scaled to contain the space for all the fundamental variables, but no more.  And perhaps that required variable space is much smaller than what is generally believed.

Dietrich Kappe

unread,
Dec 22, 2018, 12:14:41 AM12/22/18
to LCZero
Between t30 catching up to t10 and Leela's performance dropping like a rock, I'm getting whiplash.

Owen W

unread,
Dec 22, 2018, 12:19:22 AM12/22/18
to LCZero
lol

Markus Kohler

unread,
Dec 22, 2018, 2:04:43 AM12/22/18
to LCZero
Training backwards also looks like a good idea to me.

Isn't it statistically rare for Leela to see endgames in training?

Ingo Weidner

unread,
Dec 22, 2018, 4:59:56 AM12/22/18
to LCZero
The new ID 32139 seems to perform really well:

My own tests currently show it at the same level as ID 11248 and better than ID 32125, which was my best 3xxxx net so far; after more tests it might even surpass ID 11248...

Vassilis

unread,
Dec 22, 2018, 6:02:16 AM12/22/18
to LCZero
That's great news.
Maybe we already have a better network, since training has reached up to 32148 by now.
@Ingo, what time controls are we talking about? And what hardware?

Thanks
Vas

Deep Blender

unread,
Dec 22, 2018, 6:19:39 AM12/22/18
to LCZero
In machine learning, the state of the art on many challenging tasks is nowadays being improved on a regular basis. A huge number of those improvements are reportedly achieved thanks to larger networks. That's not only the case for (un-)supervised learning, but also for reinforcement learning. Why do you think chess is different?
I have the impression that you are looking for a compact and sharp representation of chess knowledge. The only currently known way to improve the "sharpness" of a neural network is by making it (significantly) larger. There is a lot of research going on to improve that, and I expect we are going to see major improvements within a few years. Unfortunately, we are not yet at this point.

Vassilis

unread,
Dec 22, 2018, 6:43:43 AM12/22/18
to LCZero
@Deep Blender, are you talking to me? :)

... No, I'm not looking for anything like that. Compact chess knowledge and sharpness have already been implemented in elite (and non-elite) chess engines. Neural networks are our opportunity to discover something more, or something else. I also agree that large networks are more capable of this task than smaller ones. What I was saying in my previous post is that there must be a trade-off between the size of the net and the time (and other resources) needed to solve a problem. Nothing absurd about that... I guess.

Vas

Deep Blender

unread,
Dec 22, 2018, 6:47:06 AM12/22/18
to LCZero
Sorry for not making that clear enough. The reply was for Jon Mike.

Ingo Weidner

unread,
Dec 22, 2018, 7:04:29 AM12/22/18
to LCZero
Hi,

first tests were at 1 min + 1s time control against Stockfish 8, which according to my tests gives almost the same results as using Stockfish 10 at 10 min + 3s.

At 1 min + 1s against SF 8, after 8 games both ID 11248 and ID 32139 had a score of 50%, but after 12 games ID 11248 dropped to 37.5% (2 wins, 5 draws, 5 losses) while ID 32139 still had 50% (2 wins, 8 draws, 2 losses).

Now I have switched to letting both play against Stockfish 10 at 15 min + 3s time control. The first game just ended, and ID 32139 scored a draw there.

System/hardware used: 

Windows 10 64-bit, RAM: 16 GB DDR3 (1024 MB hash used for engines), CPU: mobile i7-7700HQ 4x2.8GHz (3 cores used for engines), GPU: mobile Nvidia GeForce GTX 1050 Ti 4GB (768 CUDA cores, average nps around 2k to 3k)


Best wishes.
Ingo
 

Ingo Weidner

unread,
Dec 22, 2018, 7:15:38 AM12/22/18
to LCZero
ID 32140 was just added to the estimated Elo graphs and so far, like ID 32120, reaches 3386 points there, which is the highest score for the 3xxxx nets:


In my own tests ID 32139 scored better than ID 32140, but for the graph they use steps of 20 nets.

Ingo Weidner

unread,
Dec 22, 2018, 9:19:10 AM12/22/18
to LCZero
While I am still testing, it really looks like ID 32139 could be a new "milestone" among Lc0 networks, and the nets that follow might be even stronger. For example, I have not checked ID 32150 yet.

After ID 32139 had already performed much better than ID 11248 against Stockfish 8 at 1 min + 1s time control (50% for ID 32139, 37.5% for ID 11248, at 12 games each), currently, after just 2 games
against Stockfish 10 at 15 min + 3s time control, ID 32139 already leads 1.5 - 0.0 on points compared to ID 11248 (which corresponds to a score of 75% for ID 32139!).

This includes the first win against SF 10 under those conditions for a 3xxxx network that I have seen on my system/hardware. Also, two lost games in a row for ID 11248 was quite rare in the past...
As ID 32139 is playing really well at the same time, the reason can't really be the hardware. I also did not change any settings for ID 11248 compared to yesterday, when it performed much better.