I'm a newbie who needs some direction/facts rather than "I think it works that way". I couldn't find anything on your website or in the documentation, so I thought I'd try here.
I understand that as you reduce the file size, you increase the number of cycles per given time because you are writing and verifying smaller "blocks" of information. That raised the following questions:
Do you write and verify a 1% FS to a single contiguous block?
Then is the second cycle random on the disk or next to the first block (sequential)?
Alternatively, if you use a 0.01% FS, are the 100 cycles (totaling 1%) written to random blocks on the disk, or are they in sequence on the disk?
Although the same percentage of disk and processor capability is verified, from a Quality perspective it's better to sample randomly for a potential problem than to assume the first few blocks are representative of the rest of the disk. I just need to figure out how to set up the best process for testing.
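To make sure I'm describing the distinction clearly, here is a rough Python sketch of the two placement strategies I mean: sequential versus randomly sampled write/verify blocks. This is only an illustration of what I'm asking about, not how your tool actually works; the block size, target size, and function names are all made up for the example.

```python
# Illustrative sketch only (not the tool's actual behavior): contrasts
# sequential vs. random placement of write/verify cycles on a target.
import os
import random
import tempfile

BLOCK_SIZE = 64 * 1024                 # size of one test "block" (assumed)
TARGET_SIZE = 1024 * BLOCK_SIZE        # stand-in for the disk under test
NUM_BLOCKS = TARGET_SIZE // BLOCK_SIZE

def sequential_offsets(cycles):
    """Cycle N lands directly after cycle N-1 (contiguous from the start)."""
    return [i * BLOCK_SIZE for i in range(cycles)]

def random_offsets(cycles):
    """Each cycle lands in a randomly sampled block anywhere on the target."""
    return [b * BLOCK_SIZE for b in random.sample(range(NUM_BLOCKS), cycles)]

def write_and_verify(path, offsets):
    """Write a known pattern at each offset, read it back, and compare."""
    pattern = os.urandom(BLOCK_SIZE)
    with open(path, "r+b") as f:
        for off in offsets:
            f.seek(off)
            f.write(pattern)
            f.seek(off)
            assert f.read(BLOCK_SIZE) == pattern, f"mismatch at offset {off}"

if __name__ == "__main__":
    # Create a sparse file as a stand-in for the disk under test.
    with tempfile.NamedTemporaryFile(delete=False) as tmp:
        tmp.truncate(TARGET_SIZE)
        target = tmp.name
    write_and_verify(target, sequential_offsets(100))  # clustered at the start
    write_and_verify(target, random_offsets(100))      # spread over the target
    os.remove(target)
```

The random version is what I'd expect to give better coverage from a sampling standpoint, which is why I'm asking which of the two your software actually does.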
Isoalchemist
"Turning Quality into Gold"