BurnInTest V9 Beta Release


  • #16
    Originally posted by David (PassMark):
    Petermann,

    Test tolerances:
    Some tests already have limits (the network test and sound test). The network protocols are designed to fail and then retry, and a small amount of sound distortion doesn't matter too much. For most of the tests, however, we expect 100% reliability; e.g. even one CPU, disk, or RAM error should always be a fail. In the USB example you posted above, 9 send errors in 5 minutes is a fail. You need to distinguish between bit error rates and an outright failure to send anything: if the test sends 1 billion bits but then fails to send any more data after a device driver crash (maybe generating just 1 error), then this must be a fail.
    Thanks for responding David.

    Just a little clarification on my USB failure screenshot, that was a simulated test case so I could make a visual representation/example of what I would normally see. For the example, I literally pulled out one of the USB loopbacks right before the test ended. In reality, there would be 5 to 10 USB errors out of trillions of cycles over a multi-hour test.

    We have the hardest time with USB loopbacks; they tend to fail on us a lot. This happens when we're testing brand-new motherboards from big-name companies like Intel, so we usually don't suspect the motherboard's USB ports as the culprit. And usually, when we swap the loopback for another one, it passes on the same port. Since the USB loopbacks have a modest rate of errors, it would be nice to set a passing tolerance level manually. There are other I/O tests where a tolerance would be nice as well, just maybe not critical ones like CPU/RAM/disk. I totally get that some items need to expect 100% reliability, like CPUs. A sketch of the kind of tolerance check I have in mind follows below.
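    The following is a minimal sketch of such a per-test tolerance (illustrative only, not BurnInTest code; the function, parameter names, and thresholds are all made up): a user-set bit error rate ceiling combined with a stall check, so a handful of bit errors can pass while an outright failure to keep sending data still fails.

    ```python
    # Illustrative sketch of a configurable pass/fail tolerance for a loopback
    # test. All names and thresholds here are hypothetical, not BurnInTest's.

    def evaluate_usb_result(bits_sent, errors, seconds_since_last_send,
                            max_error_rate=1e-9, stall_limit_s=10):
        # An outright failure to send anything (e.g. after a driver crash)
        # is always a fail, no matter how low the error count is.
        if seconds_since_last_send > stall_limit_s:
            return "FAIL: device stopped sending data"
        # Otherwise apply the user-set bit error rate tolerance.
        if bits_sent and errors / bits_sent > max_error_rate:
            return f"FAIL: error rate {errors / bits_sent:.2e} over tolerance"
        return "PASS"

    # e.g. 5 errors out of trillions of bits, still transferring: passes
    print(evaluate_usb_result(bits_sent=3e12, errors=5, seconds_since_last_send=1))
    ```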
    Last edited by Petermann; Feb-19-2018, 10:50 PM.



    • #17
      Feature request for the help files:
      Explaining the Multi-Process Torture Test Settings

      This is something that's still a bit difficult to understand, and reading through the RAM Test help files didn't help. I would like to request that the explanations be simplified a bit and the UI better explained. For example, I still don't get what the two numbers mean: "10" % of RAM, and "7" of what? It's too cryptic for me, and I could not find answers in the help files. Maybe there are others out there who have had the same experience.



      • #18
        Feature request:
        Simplify or clear up confusion and extra steps when testing network ports

        There are two physical Ethernet connections on my motherboard, and two others that have no physical port presence on the motherboard.

        If I go into the BurnInTest preferences, select the "Network" tab, check "Standard Network Test", and don't know which of the numbered field boxes (1-10) are available to test, then "Use same (#1) address for all tests" looks like a straightforward solution. So I assumed that by entering my destination IP "192.168.0.1" in field box "1." and selecting "Use same (#1) address for all tests", the program would be smart enough to select only the testable ports that have proper IPv4 addresses and a physical presence on the motherboard.

        But when I go to start the test, it automatically selects the ports that are not connected, marks them as available to test, and then reports errors on them.

        The only way to get it to work is to take some extra steps and click on "Advanced Test Options".

        Then I have to see which network ports have valid IPv4 addresses and count down the list. In the example list below, entries 1 and 2 are self-assigned 169.254.x.x link-local addresses, which means those ports have no real connection:

        1. 169.254.34.233
        2. 169.254.34.132
        3. 192.168.0.151
        4. 192.168.0.150


        I would only be able to select and test 3 and 4 (see attached photos for a visual example).

        Then go back, select "Standard Network Test" again, leave fields "1." and "2." blank, and enter the destination IP "192.168.0.1" in field boxes "3." and "4.".

        This adds a few extra steps, which seems to make "Use same (#1) address for all tests" somewhat useless (unless, of course, I'm using it improperly), since I still have to do all these extra steps to figure out which of the field boxes 1-10 have a valid IPv4 address to test. Something like the sketch below is the filtering I'd expect the program to do for me.
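        As an illustration of that selection logic (an editorial sketch, not anything from BurnInTest; it assumes the third-party psutil package): enumerate the adapters and keep only the ones with a routable IPv4 address, skipping the self-assigned 169.254.x.x ones.

        ```python
        # Sketch: list only network adapters that have a usable IPv4 address.
        # Assumes the third-party psutil package. 169.254.0.0/16 addresses are
        # self-assigned (APIPA) and indicate a port with no real connection.
        import ipaddress
        import socket

        import psutil

        def testable_adapters():
            usable = []
            for name, addrs in psutil.net_if_addrs().items():
                for addr in addrs:
                    if addr.family != socket.AF_INET:
                        continue  # skip IPv6 and link-layer entries
                    ip = ipaddress.ip_address(addr.address)
                    if not (ip.is_link_local or ip.is_loopback):
                        usable.append((name, addr.address))
            return usable

        # e.g. [('Ethernet 3', '192.168.0.151'), ('Ethernet 4', '192.168.0.150')]
        print(testable_adapters())
        ```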

        When I run the test after these extra steps, everything works well.

        The problem above and the extra steps also happen when I select the other options, "All available physical network ports" and "All available physical 802.11 ports".


        (See the next post for the rest of the photos)
        Last edited by Petermann; Feb-19-2018, 11:25 PM.



        • #19
          Feature request:
          Simplify or clear up confusion and extra steps when testing network ports (continued)



          • #20
            Bug?
            Testing USB 3.0 loopbacks in USB 2.0 ports now produces the error message "USB 3.0 plug is not connected at 5GBps SuperSpeed" in BIT v9

            I've never had a problem using USB 3.0 loopbacks in USB 2.0 ports; they were backwards compatible in BurnInTest v8.1, which seemed to figure out whether the USB 3.0 loopback was running from a USB 3.0 or USB 2.0 port and tested accordingly. When I ran BurnInTest v9 the same way, with USB 3.0 loopbacks in all the ports (both USB 3.0 and 2.0), it gave me the error "USB 3.0 plug is not connected at 5GBps SuperSpeed". It's much more convenient to be able to use USB 3.0 loopbacks in slower USB 2.0 ports, so it would be nice if this could continue to be supported in BurnInTest v9.



            • #21
              Feature Request:
              Option to place the "3D Graphics" and "Video Playback" windows on specific monitors in a multi-monitor setup, including placing the full-screen option on a specific monitor


              Add an option to place the 3D Graphics and Video Playback windows on a specific monitor when using a multi-monitor setup. This would be useful because both of those windows tend to completely block the main BurnInTest window and pile on top of one another, and I'm often using a multi-monitor setup with lots of extra screen real estate. See the other screenshot for a Windows 10 multi-monitor example. A sketch of how a window can be moved to a chosen monitor follows the list below.


              Example options for "Window Placement":
              Placement on Monitor 1
              Placement on Monitor 2
              Placement on Monitor 3
              Placement on Monitor 4
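
              For what it's worth, here is a minimal sketch (Python with ctypes on Windows; the window title used at the end is hypothetical) of the placement logic such an option implies: enumerate the monitors, then move a window onto the chosen one.

              ```python
              # Sketch: move a window onto a chosen monitor (0-based index).
              # Win32 via ctypes; the window title below is a made-up example.
              import ctypes
              from ctypes import wintypes

              user32 = ctypes.windll.user32
              user32.FindWindowW.restype = wintypes.HWND  # avoid 64-bit handle truncation
              user32.MoveWindow.argtypes = [wintypes.HWND, ctypes.c_int, ctypes.c_int,
                                            ctypes.c_int, ctypes.c_int, wintypes.BOOL]

              monitors = []  # filled with each monitor's (left, top, right, bottom)

              _EnumProc = ctypes.WINFUNCTYPE(wintypes.BOOL, ctypes.c_void_p,
                                             ctypes.c_void_p,
                                             ctypes.POINTER(wintypes.RECT),
                                             wintypes.LPARAM)

              def _collect(hmon, hdc, rect, lparam):
                  r = rect.contents
                  monitors.append((r.left, r.top, r.right, r.bottom))
                  return True

              user32.EnumDisplayMonitors(None, None, _EnumProc(_collect), 0)

              def place_on_monitor(title, index):
                  hwnd = user32.FindWindowW(None, title)
                  if not hwnd or index >= len(monitors):
                      return
                  left, top, right, bottom = monitors[index]
                  user32.MoveWindow(hwnd, left, top, right - left, bottom - top, True)

              place_on_monitor("3D Graphics Test", 1)  # hypothetical title, 2nd monitor
              ```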



              • #22
                Feature request (maybe wrong settings?):
                Push the graphics card to full power consumption and max out its capabilities while testing

                Before I make my request, I just want to mention how AWESOME the 3D test is now. Really well done! Great job to those who created it!

                I test a lot of Quadro graphics cards, and I was wondering if there will be more options in BurnInTest v9 to push cards like that to their max. So far, even though I turn all the graphics card settings in BurnInTest up to max, the Windows performance tabs and GPU-Z report that the card is not using its full power or potential (see the BurnInTest screenshot results).

                However, when I run a program like FurMark with its settings at max, I'm able to get the graphics card to pull more power and a little more utilization out of a card like the Quadro P6000. More options to push these cards to their max would be really useful for us, as we often use BurnInTest to try to pull as much power as we can during our tests.

                Note: I had much better screenshots, but they were too big for the forum photo limit, so I had to cut them down to size. Hopefully the pictures still make sense. It was a comparison of running BIT v9 with all the graphics card tests and settings at max versus FurMark running at max. In case it's too small, here's the comparison:

                Running BIT v9 at max: Quadro P6000 graphics card power consumption 64.2% TDP.

                Running FurMark at max: Quadro P6000 graphics card power consumption 91.3% TDP.
                Last edited by Petermann; Feb-21-2018, 11:27 PM.



                • #23
                  Bring back a feature
                  Test Selection and Duty Cycles combined window for mouse-based users

                  I really like how you're designing BurnInTest with touchscreens in mind. The tile idea, with the levels and settings accessible per test item/tile, is brilliant and very convenient for those who use tablets and touchscreens. I would like to put in a little request to bring back the combined Test Selection and Duty Cycles window for those of us who are still using a mouse and keyboard and testing enterprise equipment like servers. The column-based list view was very useful for adjusting the testing level and time with a mouse and keyboard, and I noticed it is no longer present in version 9. If it's not too much trouble, it would be nice to have that window option back.
                  Last edited by Petermann; Feb-21-2018, 11:28 PM.



                  • #24
                    Beta release 5 was made public today.

                    Changes between beta 4 and beta 5 are,
                    • Fixed a bug where the MP3 test would cause the main BurnInTest window to freeze for short periods.
                    • Fixed bug with global error count value not being reset on test start
                    • Increased the USB3 thread synchronization timeout from 1 minute to 2 minutes to avoid a synchronization timeout when multiple USB3 plugs (more than 4) are connected. This is because the Open function occasionally fails, which causes a 10-second delay in the enumeration of each plug and leads to a thread synchronization timeout.
                    • Added "Auto" option to the USB3 connection types
                    • Fixed a few language translation text issues.
                    • Replaced start/stop/reset icon with newer version on the Dashboard
                    • Some updates to help file text and images
                    • Improved list view mode on Dashboard, fixed icons and colours
                    • Added 'Remove' and 'Configure' links to list view in Dashboard
                    • Fixed "Tests Passed" showing when there are "No operation" errors on Dashboard
                    • Fixed individual test error count not being updated on Dashboard
                    • Fixed tooltip text for Start button when tests are started/stopped/reset on Dashboard
                    • Fixed invalid CPU temperature readings appearing when specifying a single CPU in the CPU tests



                    • #25
                      Petermann,

                      Comments on your last set of posts.

                      USB Loopbacks failing:
                      There are thread synchronization changes in the latest beta that might reduce the number of errors. Some USB host controllers have device drivers that fail under high load, so this is a workaround of sorts.

                      Multi-Process Torture Test Settings:
                      We'll alter the UI slightly to make it clearer.
                      The two numbers are:
                      The number of test processes to launch (e.g. 7)
                      The percentage of total RAM each process attempts to reserve and test (e.g. 10%)
                      So 7 x 10% means 70% of total RAM, as in the sketch below.
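
                      A minimal sketch of that arrangement (illustrative only, not BurnInTest code; it assumes the third-party psutil package for the RAM total): N worker processes, each reserving and pattern-checking its share of RAM.

                      ```python
                      # Sketch of the multi-process torture test settings: N processes,
                      # each reserving pct% of total RAM and spot-checking a pattern.
                      import multiprocessing as mp

                      import psutil  # third-party, used here only to read total RAM

                      def torture_worker(num_bytes):
                          block = bytearray(b"\xA5" * num_bytes)         # reserve and fill
                          assert block[0] == 0xA5 and block[-1] == 0xA5  # spot-check pattern

                      if __name__ == "__main__":
                          processes, pct = 7, 10  # the "7" and "10" from the settings window
                          per_process = psutil.virtual_memory().total * pct // 100
                          print(f"Testing {processes * pct}% of total RAM")  # 7 x 10% = 70%
                          workers = [mp.Process(target=torture_worker, args=(per_process,))
                                     for _ in range(processes)]
                          for w in workers:
                              w.start()
                          for w in workers:
                              w.join()
                      ```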

                      Networking test setup:
                      In some cases, users might want an error when there is a physical port but the port isn't working (e.g. due to a broken physical connection, or driver problems that result in the lack of an IP address).
                      So I don't think it is always a valid assumption that non-working ports can be ignored.
                      You could disable the ports that never get used, or use the advanced test options.
                      And maybe we could add a standard test option like "All available physical network ports which have an IP address".

                      USB3 plugs on USB2 port:
                      We added an "Auto" speed detect option in V9 Beta 5. This gives the same behaviour as V8.
                      The new V9 options are useful if you want to detect USB3 ports that are partially faulty and only run at USB2 speeds.

                      3D graphic monitor selection:
                      Yes, maybe we could add an option for placement on a 2nd monitor.

                      Graphics load:
                      The possible load on the GPU will depend on the CPU & GPU in use; a faster CPU will load the GPU more.
                      Also, if you want maximum 3D load, it is VERY important to turn off all the other tests, including video playback and 2D. In your screenshot you appear to have 3 other load tests running at the same time.
                      Running at 4K will also ramp up the GPU load compared to lower resolutions.
                      What we might be able to do is increase the multisampling level to x4, if the video card has enough RAM.

                      Test selection & Duty cycle window:
                      Easter egg: use F3 on the keyboard to access the old window.
                      Maybe we should leave the menu item as well.



                      • #26
                        Beta release 6 was made public today.

                        Changes between beta 5 and beta 6 are,
                        • Memory Test now waits for the USB3 benchmark test phase to complete before starting. This was done because the benchmark mode in the USB test can use a lot of RAM: 64 x 2MB buffers per plug. Lots of buffers are needed to get maximum speed from the USB ports, so ten USB3 plugs would use 1.3GB of RAM (which is a lot) for a short period while the benchmark runs. Previously we saw what looked like memory allocation errors in the USB3 test if the RAM test had already started and the swap file was off. Benchmark mode only runs for a few seconds at the start of the USB3 test, so the delay in starting the memory test shouldn't be too high.
                        • Added memory allocation check when creating packet buffers for USB3 benchmark & loopback tests. So failure to allocate RAM for buffers shouldn't crash BIT anymore.
                        • Changed the USB3 test output in the test window to fix the display overwriting itself in single view and when more than one plug was running in expanded view
                        • Updated help file
                        • Added 2K & 4K resolutions to 2d/3d/video playback tests
                        • The temperature tab list view is now split into three lists: CPU, GPU and HDD
                        • Network Test: added a new option to select only network cards that are connected/have an IP
                        • Video Playback test: added an option to mute audio when playing video
                        • Removed popup 'Status Window' when stopping tests. Instead, the status is displayed in the Dashboard Status Bar
                        • Added option to include the duty cycle of each test in the Report Information window
                        • Turned the optical test off by default. Optical drives are likely to disappear over time, so we are defaulting this test to off.
                        • Made some changes to the memory test preferences window tab for better readability
                        • Made DX12 test default to MSAA 4x to increase the load on the video card (which has the effect of slightly lower frame rates).
                        • Fixed square frame appearing in 'Check for new version' window
                        • Restored 'Test Selection & Duty Cycles' to the main menu



                        • #27
                          Fantastic! Many thanks! The points marked with arrows below are the ones I really appreciate!
                          • Memory Test now waits for the USB3 benchmark test phase to complete before starting. This was done because the benchmark mode in the USB test can use a lot of RAM: 64 x 2MB buffers per plug. Lots of buffers are needed to get maximum speed from the USB ports, so ten USB3 plugs would use 1.3GB of RAM (which is a lot) for a short period while the benchmark runs. Previously we saw what looked like memory allocation errors in the USB3 test if the RAM test had already started and the swap file was off. Benchmark mode only runs for a few seconds at the start of the USB3 test, so the delay in starting the memory test shouldn't be too high.
                          • Added memory allocation check when creating packet buffers for USB3 benchmark & loopback tests. So failure to allocate RAM for buffers shouldn't crash BIT anymore.
                          • Changed the USB3 test output in the test window to fix the display overwriting itself in single view and when more than one plug was running in expanded view
                          • Updated help file
                          • Added 2K & 4K resolutions to 2d/3d/video playback tests <---- Great to have these options!
                          • The temperature tab list view is now split into three lists: CPU, GPU and HDD <---- Very helpful; it makes the relevant information much better for screen captures. Thanks!
                          • Network Test: added a new option to select only network cards that are connected/have an IP <---- So far this looks much more convenient. Can't wait to test it out!!!
                          • Video Playback test: added an option to mute audio when playing video <---- Thanks. Hopefully this stops the interference with simultaneous audio testing.
                          • Removed popup 'Status Window' when stopping tests. Instead, the status is displayed in the Dashboard Status Bar
                          • Added option to include the duty cycle of each test in the Report Information window <---- This makes our documentation job much easier. Very good for historical reference when troubleshooting and for root cause analysis!!!
                          • Turned the optical test off by default. Optical drives are likely to disappear over time, so we are defaulting this test to off.
                          • Made some changes to the memory test preferences window tab for better readability
                          • Made DX12 test default to MSAA 4x to increase the load on the video card (which has the effect of slightly lower frame rates). <---- Looking forward to testing this and seeing if it pulls more power from the massive Quadro cards like the P6000.
                          • Fixed square frame appearing in 'Check for new version' window
                          • Restored 'Test Selection & Duty Cycles' to the main menu <---- Thank you!

                          Much appreciated Passmark team! I will hopefully have time soon to test it out fully and try all the new improvements and added features. Thanks for all your hard work!



                          • #28
                            We've got a few more changes coming to the DX12 test. At the moment in Beta 6 there are some issues around running multiple instances of the DX12 test on different monitors.



                            • #29
                              Looking forward to that. The graphics card test is a big part of how we use PassMark BurnInTest.



                              • #30
                                Feature request:
                                Include the "Include test duty cycle" option in the "Save results report" window as well

                                Is there a possibility of including the "Include test duty cycle" option in the "Save results report" window, alongside where it currently is in the "Report information" window? In our experience, sometimes we open and use the "Report information" window, but for some testing we only instruct the test technician to use the "Save results report" window. I can see the benefit of having it in both locations, but I understand if there is some logic I'm not aware of or if it's a lot of work to implement. Either way, thanks for already including it in this new version!

