
Disk Checking File Size and Verification


  • isoalchemist
Thanks for the quick reply; that's the information I needed!



  • Ian (PassMark)
BurnInTest writes test files to the disk. The size of each test file is specified by Preferences->Disk, File size (% of disk space). E.g. 1% on an 80GB disk creates test files of 800MB. One cycle of the disk test is defined to be the writing, reading and verifying of a test file. Hence, reducing the File size percentage makes each cycle smaller (as it tests less of the disk).

The block size that is written within the test file is separately specified using Preferences->Disk, Block size. Increasing the block size will often increase the throughput (and reduce the test time). This is particularly evident with FAT drives, as they have a reasonable overhead updating the FAT table.

The mapping of the BurnInTest test file to a location on the physical disk is determined by Windows. Logically the file is written and read sequentially, and on a newly formatted disk this is typically what happens on the physical disk as well.

Based on this, you could simply specify a percentage of the disk you want to test (e.g. 20%) and treat that as a good enough indication. To test the full disk, however, you should test for x cycles, where x = 100 / (File size %). E.g. a 5% File size needs 100 / 5 = 20 cycles to test the whole disk. You may also consider that after the complete disk is tested once, BurnInTest changes to the next data pattern if Cyclic is specified. Hence, to test the whole disk with all test patterns you would test for y cycles, where y = (number of BurnInTest test patterns) * x = 9 * 100 / (File size %), or in the above example 9 * 20 = 180 cycles. The level of testing depends on your test goal.
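The cycle-count arithmetic above can be sketched as a couple of small helpers (an illustration of the formulas in the post, not BurnInTest code; the function names are hypothetical):

```python
def cycles_for_full_disk(file_size_percent: float) -> float:
    """Cycles needed so the sequentially written test files
    cover the whole disk once: x = 100 / (File size %)."""
    return 100 / file_size_percent

def cycles_for_all_patterns(file_size_percent: float, num_patterns: int = 9) -> float:
    """Cycles needed to cover the whole disk once with every data
    pattern when Cyclic is specified: y = num_patterns * x.
    The default of 9 patterns is taken from the post above."""
    return num_patterns * cycles_for_full_disk(file_size_percent)

# E.g. a 5% File size:
print(cycles_for_full_disk(5))      # 20.0
print(cycles_for_all_patterns(5))   # 180.0
```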

    Hope that helps.



  • isoalchemist
    started a topic Disk Checking File Size and Verification

    Disk Checking File Size and Verification

I'm a newbie who needs some direction/facts rather than "I think it works that way". I couldn't find anything on your website or in the documentation, so I thought I'd try here.

I understand that as you reduce the file size you increase the number of cycles in a given time, because you are writing and verifying smaller "blocks" of information. The following questions were raised:

    Do you write and verify a 1% FS to a single contiguous block?

    Then is the second cycle random on the disk or next to the first block (sequential)?

    Alternatively, if you use a 0.01% FS are the 100 cycles (1%) written to random blocks on the disk or are they in sequence on the disk?

Although the same percentage of disk and processor capability is verified either way, from a Quality perspective it's better to sample randomly for a potential problem than to assume the first few blocks represent the rest of the disk. I just need to figure out how to set up the best process for testing.

    "Turning Quality into Gold"