Pre-Deployment Player Testing

January 18, 2017

In a previous post, we discussed the importance of reliability and how we monitor reliability across our displays in the field. One advantage of that monitoring is that it reveals problems that appear after a new player version is released. That's an important part of our reliability tooling, but ideally, we would catch all potential problems before releasing any new changes. There are a number of techniques we use in our effort to reach that ideal, and in this post, we'll be discussing one of them.


One of the techniques we use is a pre-deployment end-to-end test to confirm that the player to be deployed will properly display all of our most popular widgets. These core widgets are loaded into a presentation, and that presentation is run within a virtual display on our deployment server. At each stage of the end-to-end test presentation, a screenshot is captured and compared to a screenshot that we use as a validating reference. If the screenshots do not match, we abandon that deployment and begin troubleshooting before that version is ever released. Let's look at an example.
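To make the idea concrete, here is a minimal sketch of that kind of screenshot comparison in Python using the Pillow library. The file paths and the capture step are placeholders rather than our actual tooling; the point is simply that a captured screenshot is checked pixel-for-pixel against the validating reference, and any mismatch stops the deployment.

from PIL import Image, ImageChops

def screenshots_match(reference_path, captured_path):
    """Return True when the captured screenshot is pixel-identical to the reference."""
    reference = Image.open(reference_path).convert("RGB")
    captured = Image.open(captured_path).convert("RGB")
    if reference.size != captured.size:
        return False
    # ImageChops.difference is black everywhere the images agree, so getbbox()
    # returns None only when every pixel matches.
    return ImageChops.difference(reference, captured).getbbox() is None

if not screenshots_match("expected/stage1.png", "captured/stage1.png"):
    raise SystemExit("Screenshot mismatch at stage 1 - abandoning deployment")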

The three images below show the expected view, the captured view, and the difference between the two. The expected view is our validating reference, and we know that a properly running player will show exactly that screen within our deployment server environment. The captured view is the image captured during the pre-deployment test, and we expect it to match the validating reference. On the right, we have the difference image, which highlights any variance between the validating image and the captured test image. In this case, the difference image is all gray, indicating that the captured view is identical to the expected view, and this part of our test passes.

[Image: expected view, captured view, and all-gray difference image for a passing test stage]
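Although our pipeline generates these difference images for us, the following hypothetical Pillow sketch shows one way such an image could be produced: pixels that match are rendered gray, and any variance is painted red so it stands out at a glance.

from PIL import Image, ImageChops

def difference_image(reference_path, captured_path, output_path):
    """Write an image that is gray where the screenshots match and red where they differ."""
    reference = Image.open(reference_path).convert("RGB")
    captured = Image.open(captured_path).convert("RGB")
    # Any non-zero pixel in the raw difference marks a mismatch.
    delta = ImageChops.difference(reference, captured).convert("L")
    mask = delta.point(lambda value: 255 if value else 0)
    # Start from an all-gray canvas and paint red wherever the mask is set.
    output = Image.new("RGB", reference.size, (128, 128, 128))
    output.paste((255, 0, 0), mask=mask)
    output.save(output_path)

difference_image("expected/stage2.png", "captured/stage2.png", "diff/stage2.png")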


The top left section of the test presentation is a video that plays while the test is running. After a few seconds, the video transitions to a different scene, and we see the result below. In this case, the expected view differs very slightly from the captured view. The running presentation did not render the scene exactly as expected, and the difference image highlights the variance in red. This results in an error condition in the automated test command and causes the build to fail. Once the build fails, we can examine the difference output and begin an investigation.

[Image: expected view, captured view, and difference image for the failing test stage, with the variance highlighted in red]
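Turning a mismatch into a failed build is mostly a matter of exit codes: if any stage does not match its reference, the test command exits non-zero, which stops the deployment. The rough sketch below assumes the helper functions from the earlier examples live in a hypothetical compare module, and the stage names and paths are illustrative, not our actual configuration.

import sys

# Helpers sketched earlier in this post, assumed to live in a hypothetical compare.py.
from compare import difference_image, screenshots_match

# Illustrative stage names, e.g. before and after the video scene transition.
STAGES = ["stage1", "stage2"]

def run_pre_deployment_test():
    """Compare every stage's screenshot and return the list of stages that failed."""
    failed = []
    for stage in STAGES:
        expected = f"expected/{stage}.png"
        captured = f"captured/{stage}.png"
        if not screenshots_match(expected, captured):
            # Keep the difference output around so the failure can be investigated.
            difference_image(expected, captured, f"diff/{stage}.png")
            failed.append(stage)
    return failed

if __name__ == "__main__":
    failed_stages = run_pre_deployment_test()
    if failed_stages:
        print("Pre-deployment test failed at: " + ", ".join(failed_stages))
        sys.exit(1)  # a non-zero exit fails the build and blocks the deployment
    print("All stages matched the validating references - safe to deploy")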

This scenario occurred several weeks ago, and we traced it to a problem with one of the video libraries used in the new version. The library introduced an unexpected offset that none of our other tests had caught, and the build failed when the pre-deployment test didn't pass.

Thanks to our automated pre-deployment testing, we caught this problem before releasing the new version and avoided a major inconvenience for our customers.

As always, let us know if you have any questions. We're here to help!
