We get asked this question from time to time, so we thought we would post the answer here in the forum.
Question: With PerformanceTest, you have a Windows, Android and Apple iOS version of the software. Can the benchmark results be compared?
Answer:
Our original plan was to make the results directly comparable across platforms, but it is harder than it sounds.
There are differences in programming languages, compilers, how much RAM is available, how multi-threading is done, what functions are available in each operating system and what third-party libraries are available. For example, we saw huge differences in encryption performance depending on whether hardware-accelerated AES was available and exposed via the O/S API.
For example, in PerformanceTest for Android V1, all the code is written in Java, which tends to be slower than the C++ and assembler code used on Windows. There are also compiler differences, and some differences are forced upon us by the test code itself. For example, on Windows we have access to much more RAM than on iOS and Android. On Windows we also have access to faster SIMD instructions by hand-coding them in assembler.
In addition, the platforms have different test sets, as they are updated at different intervals. The iOS app hasn't been updated in a meaningful way in a long time (5-6 years?), and changes in hardware and OS versions have introduced extreme numbers. The mobile versions of PerformanceTest were originally based on V7 of the desktop software; some tests have changed since then, and some are not portable to the mobile versions.
We are currently in the early stages of developing PerformanceTest V10 for Windows, and we also plan to revamp the mobile PerformanceTest apps so that, at the very least, there is a common CPU test that can be used to compare CPU performance between platforms.