New Benchmark Requests

  • New Benchmark Requests

    Dear PassMark Team,

    As an enthusiast of lossless data compression and benchmarking, I would like to suggest some additional benchmarks (CPU, GPU):

    * WAVPACK or FLAC or TTA
    * LZMA (7-Zip)
    * BZIP2
    * MP3 creation (LAME)

    This would be more up to date than ZLIB-based compression benchmarks. All of the codecs above are available free of charge and are widely used on home machines.
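A benchmark along these lines can be sketched with Python's standard library, which ships zlib, bz2 and lzma bindings (LAME and FLAC would need external libraries, so they are left out; the corpus below is an arbitrary placeholder, not a reference file):

```python
import bz2
import lzma
import time
import zlib

# Hypothetical test corpus; a real benchmark would use a fixed reference file.
DATA = (b"Lossless compression benchmarks stress integer and memory "
        b"performance rather than floating-point throughput. ") * 2000

def measure(compress, data):
    # Time one full compression pass over the data.
    t0 = time.perf_counter()
    compress(data)
    return time.perf_counter() - t0

def throughput_kbs(compress, data=DATA, repeats=3):
    # Best-of-N timing, reported as KB of input consumed per second.
    best = min(measure(compress, data) for _ in range(repeats))
    return len(data) / 1024 / best

for name, fn in [("zlib", zlib.compress),
                 ("bzip2", bz2.compress),
                 ("lzma", lzma.compress)]:
    print(f"{name:>5}: {throughput_kbs(fn):8.0f} KB/s")
```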

    When it comes to science, today's distributed computing projects are both available free of charge and genuinely important. For the sake of science and medical discoveries, projects like climateprediction.net, Rosetta@home and Folding@home were founded,
    the last one being used to simulate protein folding in order to help understand diseases like Alzheimer's, Mad Cow (BSE), CJD, ALS, Huntington's, Parkinson's disease, and many cancers and cancer-related syndromes.
    http://folding.stanford.edu/English/Main
    Those simulations are very good workloads that can be used for benchmarking purposes and I would like to suggest adding a science benchmark.
    Another benchmark idea is financial option pricing using the Black-Scholes model, which is both a real-world application and freely available in source form. The Black-Scholes kernel scales very
    well on multi-core CPUs.
    http://www.2cpu.com/review.php?id=110&page=4
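As a concrete illustration, pricing a European call with the Black-Scholes formula needs only the error function from the standard library (parameter names below are the usual textbook ones, not taken from any particular benchmark):

```python
import math

def norm_cdf(x):
    # Standard normal CDF expressed via the error function.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def black_scholes_call(S, K, r, sigma, T):
    # S: spot price, K: strike, r: risk-free rate,
    # sigma: volatility, T: time to expiry in years.
    d1 = (math.log(S / K) + (r + 0.5 * sigma * sigma) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

print(black_scholes_call(100, 100, 0.05, 0.2, 1.0))  # ~10.45
```

Each option is independent of every other, which is why a pricing kernel like this parallelizes so cleanly across cores.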
    What do you think?

    Folding@home, for example, can also be run on GPUs (nVidia 8, 9 and 200 series as well as ATI 2xxx, 3xxx and 4xxx series), which allows much faster protein computation because it can make use
    of all shader processors (up to 240 on a GeForce GTX 280 and 800 on a HD 4870 X2). This would be a really cool benchmark.
    Last edited by Stephan Busch; Sep-24-2008, 03:50 PM.

  • #2
    Thanks for the suggestions.

    There is in fact already a (lossless) compression test in PerformanceTest. It is one of the CPU tests. The existing compression test uses an adaptive arithmetic coding algorithm based on source code from Ian H. Witten, Radford M. Neal, and John G. Cleary's article "Arithmetic Coding for Data Compression". The system uses a model which maintains the probability of each symbol being the next one encoded. It achieves a compression rate of 363% for English text, which is slightly better than the Huffman method. The test reports its results in KBytes/sec compressed.
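The behaviour of such an adaptive model can be sketched without implementing the bit-level coder: an arithmetic coder's output approaches the model's ideal code length, so an order-0 adaptive model (a deliberate simplification; the Witten/Neal/Cleary coder also handles the coding itself) is enough to estimate the compression rate:

```python
import math
from collections import defaultdict

def ideal_arith_code_bits(data):
    # Adaptive order-0 model: every byte value starts with count 1 (Laplace
    # smoothing) and counts are updated after each symbol, so a decoder can
    # mirror the model exactly. An arithmetic coder's output length is close
    # to the sum of -log2(p) over the symbols it encodes.
    counts = defaultdict(lambda: 1)
    total = 256  # 256 possible byte values, each with initial count 1
    bits = 0.0
    for b in data:
        bits += -math.log2(counts[b] / total)  # ideal code length for b
        counts[b] += 1
        total += 1
    return bits

text = (b"Arithmetic coding assigns each message an interval of the unit "
        b"line whose width equals the message probability under the model. ") * 20
bits = ideal_arith_code_bits(text)
ratio = (len(text) * 8) / bits
print(f"{len(text)} bytes -> {bits / 8:.0f} bytes ideal, ratio {ratio:.2f}x")
```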

    You can find a nice list of articles about arithmetic coding here.

    While we could of course add dozens of additional compression-based CPU tests, our aim is not to compare the relative performance of compression algorithms, but rather to measure the CPU's performance. And we probably only need one compression-based test for that.

    We are aware of the distributed computing projects. There would be a whole bunch of issues with including their code in our software. Including, for example, the fact that Folding@home hasn't released its source code for reasons "relating to client reliability and other issues".



    • #3
      GPU (Gromacs core) OpenMM library

      Hi again,

      Cleary and Witten invented the PPM algorithm (prediction by partial matching) in 1984; together with Neal they published the CACM arithmetic coder in 1987. I know that there is already a compression benchmark within the
      CPU benchmark, but I thought that LZMA might be better because it scales very well on multiple cores, which other algorithms might not.
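One way to see that scaling is to compress independent chunks in parallel. In CPython the lzma bindings release the GIL while compressing, so even plain threads can use multiple cores (the chunk size, preset and workload below are arbitrary choices for this sketch):

```python
import lzma
import time
from concurrent.futures import ThreadPoolExecutor

# Hypothetical workload: independent 64 KB chunks, each compressed as its
# own LZMA stream so the chunks can be processed fully in parallel.
data = b"multi-core compression scaling test " * 4096
chunks = [data[i:i + 65536] for i in range(0, len(data), 65536)]

def compress_chunk(chunk):
    return lzma.compress(chunk, preset=6)

for workers in (1, 2, 4):
    t0 = time.perf_counter()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(compress_chunk, chunks))
    dt = time.perf_counter() - t0
    print(f"{workers} worker(s): {len(data) / dt / 1024:.0f} KB/s")
```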

      I agree that more than one compression benchmark would be redundant.

      The Folding@home team at Stanford do offer an open-sourced variant
      of their GPU (Gromacs core) code in the OpenMM library
      (https://simtk.org/home/openmm), which could be added for a future-proof
      GPU benchmark.

      A lossy compression benchmark for the CPU would not be redundant, as lossy
      codecs (such as MP3 encoders) use completely different algorithms with
      different goals, and they are very common. The MDCT engine is based on the
      Fast Fourier Transform, which might be interesting.
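For reference, the MDCT can be written straight from its definition; the O(N^2) form below is only a sketch, whereas real encoders evaluate the same transform via an FFT, which is the connection mentioned above:

```python
import math

def mdct(x):
    # Modified discrete cosine transform: 2N input samples -> N coefficients.
    # X[k] = sum_n x[n] * cos(pi/N * (n + 1/2 + N/2) * (k + 1/2))
    N = len(x) // 2
    return [sum(xn * math.cos(math.pi / N * (n + 0.5 + N / 2) * (k + 0.5))
                for n, xn in enumerate(x))
            for k in range(N)]

coeffs = mdct([math.sin(2 * math.pi * n / 16) for n in range(32)])
print(len(coeffs))  # 16 coefficients from a 32-sample window
```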



      • #4
        We have almost completely re-written the 2D tests and will be adding new shader code (which runs on the GPU) to the 3D tests. The shader code will be doing things like smoke and water effects, which is really what recent GPUs were designed for.

        That isn't to say OpenMM would not be a good benchmark, but I don't think we will use it for this coming release because 1) we are too far down the track to the next release to add it in now; 2) it is a lot of code, most of which would not be used and would just increase the size of the executable; and 3) we want to stick to visual tests for the video card tests (stuff people can see running).

