
interpreting performance test results in memtest86 free version



    I am wondering if you can help me interpret the graphs of performance test results. I am using the free version (thank you to everyone for making this possible), and I am a typical out-of-the-box consumer, not a developer. Perhaps this question will be obvious to a typical member of this forum, but I am posting because an answer might help other weekend warriors like me whose failing RAM sends them sweating to these pages.

    I ran read and write speed benchmarks on each of my two chips and want to know whether my results are average, good, or bad. If you could post something for me to use as a comparison (a graph or a description), it would help me decide whether speed could be causing problems (using my computer for the typical things an out-of-the-box person uses a computer for). I am asking because the results I am getting seem hinky, and my computer is almost ten years old now, so what was "good enough" then might not be good enough now, even if my chips were working as designed.

    I would just post my graph, but I could not find a way to do so. A description will work for me, though, as I am really asking about a standard for comparison rather than an interpretation of my particular results. The performance tests give me speed plotted against step distance in bytes, from about 0 to 8 KB in about 14 steps, so the result shows how speed changes relative to step distance. I am getting a decay curve, which makes sense, because you would expect a computer to take longer to read/write data when the points being read/written are farther apart on the chip. Okay... but I am getting a very, VERY steep exponential decay curve with its tangent/inflection close to the left side of the graph. The worst is write speed on my aftermarket chip: it freefalls to about 200 MT/s at a step of either 16 or 32 bytes, and then stays pretty level after that. I would imagine that a computer typically functions at lower step sizes, as this would be most efficient, but given the cliff that I see on my graph, I am wondering if speed could pose a problem if a program ever sends my computer off the edge of that cliff.
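    For anyone curious about where this kind of cliff comes from: a steep drop somewhere in the 16-64 byte range is commonly attributed to the CPU cache line size (often 64 bytes), since once the step distance exceeds one cache line, every access has to pull in a fresh line from memory instead of reusing one already fetched. Below is a minimal, hypothetical sketch of the measurement idea only (this is not MemTest86's actual code, and Python's interpreter overhead largely masks real cache effects, so the absolute numbers are not meaningful; MemTest86 does this in native code):

    ```python
    # Sketch of a stride benchmark: time reads over a buffer at increasing
    # step distances and report a rate per stride. Illustrates the shape of
    # the measurement, not real hardware throughput (Python overhead dominates).
    import time

    def strided_read_rate(buf: bytearray, stride: int) -> float:
        """Return read rate (accesses per second) for the given stride in bytes."""
        n = len(buf)
        start = time.perf_counter()
        total = 0
        for i in range(0, n, stride):
            total += buf[i]          # one read per stride step
        elapsed = time.perf_counter() - start
        accesses = n // stride if n % stride == 0 else n // stride + 1
        return accesses / elapsed if elapsed > 0 else float("inf")

    buf = bytearray(1 << 20)         # 1 MB buffer
    for stride in (1, 2, 4, 8, 16, 32, 64, 128, 256):
        rate = strided_read_rate(buf, stride)
        print(f"stride {stride:4d} B: {rate:,.0f} reads/s")
    ```

    In a native-code version of this loop, the per-access cost typically rises sharply once the stride passes the cache line size, which is one plausible (and benign) explanation for the cliff rather than a sign of failing RAM.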

    Here is my reasoning as to why it might be problematic. If my assumptions here are correct and I were a chip designer, I would design the chip to decay in a more linear fashion. Perhaps the little RAM trolls lurking in the shadows (where a "troll" is my attempt to describe this without knowing any jargon) drastically increase the tolls they extract at steps of 16, 32, or 64 bytes. A steep decay like this would be expected if a manufacturer had to trade off far too much in time, money, or whatever to increase speed at larger steps, so they don't bother, because people generally don't need a lot of speed there. In other words, a steep decline is that way by design, because a budget chip gives budget performance.

    Anyway, if you could post something about what an expected decay curve should look like... or even a budget chip plotted against an analogous top-of-the-line chip, if that is easy for you to do... it would really help me figure out whether the assumptions I made in the previous paragraphs are correct. Thanks.


  • #2
    MemTest86 is mainly for testing RAM. It does include a small benchmark as well, but if you want to compare your machine to other machines, PerformanceTest in Windows is the better application.

    Here is a screenshot from the basic memory tests:

    [Image: PerformanceTest-Memory-Test.png]

    And here is an example screen shot from one of the Advanced Memory Tests.

    [Image: PerformanceTest-Advanced-Memory-Test.png]