I started testing carefully to understand the limits, then pushed it to the max.
I noticed that if I leave the temperature measurement set to CPU-AVERAGE, the temperature errors decrease A LOT.
I attach the two files: the first measures the temperature of ALL cores (it was stopped after 13 minutes with 8 errors).
The second, with the AVERAGE measurement, gave 3 errors after 20 minutes.
At this point I have a doubt: which one should I trust more? The first test?
But if I look at the per-core temperature detail (present in both tests), I find that the maximum temperature in the first test involved 4 cores, while in the second it involved 9 (so why only 3 errors?).
I don't know if I managed to explain myself...
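To illustrate why the two readings can disagree, here is a minimal sketch (the temperatures and the threshold are made-up illustrative values, not taken from the attached logs): the average of all cores can stay below a limit even while several individual cores exceed it, so per-core monitoring will flag more events than an average-based one.

```python
# Hypothetical per-core temperatures in °C during a stress test.
# These values are assumptions for illustration only.
core_temps = [78, 81, 95, 96, 79, 80, 94, 82]
limit = 90  # example threshold at which an error would be flagged

avg = sum(core_temps) / len(core_temps)
hot_cores = [t for t in core_temps if t > limit]

print(f"average temperature: {avg:.1f} C")   # below the limit
print(f"cores over the limit: {len(hot_cores)}")  # yet 3 cores exceed it
```

So an AVERAGE reading can hide short, localized spikes on individual cores, which is consistent with the per-core test reporting more errors.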