Hi all,
I'm new to this forum, so hopefully I am posting in the right spot.
I am running the advanced disk test to validate disks for the new systems we want to introduce at work.
I am testing on a system with 2x Intel(R) Xeon(R) Silver 4210 CPUs @ 2.20GHz
and the latest generation of Intel SSDs (INTEL SSDSC2KG480G).
The thing is, when I run the test with my specific test case:
2 threads doing sequential writes (block size 2 MB / file size 2 GB / 50% random data)
1 thread doing sequential reads (block size 2 MB / file size 2 GB / 50% random data)
the test runs for 3600 seconds with a 0.25 s sample interval (so 14,400 samples per hour)
I filled the disk with a 440 GB dump file (created through fsutil) to mimic a worst-case, nearly-full scenario.
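For reference, the fill file was created with something like the command below; the path is just an example, and the size assumes 1024-based GB (440 GB = 472,446,402,560 bytes). fsutil file createnew allocates a zero-filled file of the given size in bytes:

    fsutil file createnew D:\testdata\dump.bin 472446402560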
I get some unexplained drops in the graphs.
When I look at the data dump (the CSV export),
I can see samples where no data was transferred, either written or read.
The tool then treats these as time steps with zero transfer, which skews the min/max/average calculations.
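If it helps anyone reproduce the check, a small script like this can flag the zero-transfer samples in the export. The file name and column names here are placeholders, so adjust them to whatever the CSV header actually says:

    # count samples in the exported CSV where neither reads nor writes
    # transferred any data (file name and column names are placeholders)
    import csv

    zero_samples = []
    with open("disk_test_export.csv", newline="") as f:
        for i, row in enumerate(csv.DictReader(f)):
            read_bytes = float(row.get("ReadBytes") or 0)
            write_bytes = float(row.get("WriteBytes") or 0)
            if read_bytes == 0 and write_bytes == 0:
                zero_samples.append(i)

    print(f"{len(zero_samples)} samples show zero transfer")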
(please see attached files / images)
I have checked disks from other manufacturers and it happens there too, but it seems random, so it might be the PC or the SSD causing this, I don't know. Either way it seems very odd behaviour for such expensive SSDs.
Am I doing something wrong? Has anybody else seen something similar?
The next test is a different model of Intel disk, and I will re-check with an empty disk to see if it behaves differently.
I just want to know why this happens and whether I can rule it out as a tool issue or must classify it as a disk issue.
many thanks
Paul