Orbiting Optimisation – Small-scale User Tests

One of the things we’ve been discussing lately is how important it is to test your user experience and gather reliable end-user feedback on your digital products. Even a test with only eight participants is worth the undertaking. Testing more participants increases the likelihood of seeing clear, recurring patterns, but even a small group can surface crucial insights that you or your product team may not be best positioned to spot by yourselves.

For example, we’ve worked on client B2C sites with build investments of over £1M and, while moderating and observing a modest number of users in testing, noted that participants found the navigation frustrating, that profitable product lines were hidden from users, and that customers far preferred the offering of the client’s competitor site. All for reasons that were glaringly obvious once the users began to work through the site, and that could not be dismissed simply because the participant numbers were small.

Experienced usability experts who advocate carrying out large-scale tests are also candid that testing a handful of users on a website will uncover more insights than not testing the site at all. If you depend on a site to generate revenue, it is worth reviewing its different aspects: content and experience, visual design, usability, analytics, or a combination of these. Finally, something that shouldn’t be undervalued is an estimate (a data-backed one is better) of how well your product harmonises with your wider customer experience, so that its potential to achieve your aspirations is maximised. And we’re not just talking about sites designed for the desktop. In a world where mobile use is accelerating beyond desktop use, any mobile and tablet sites should be tested too.

Test. Apply insights. Launch. Test and revise iteratively, all the while orienting around wider audience expectations. Getting a product’s design right, both before and after launch, is a process of continuous optimisation: one based on user behaviour on the live site and on what real users say when they sit in front of your digital offering and show you how well it works, or whether it really works at all.

Related

User testing is not the only way to evaluate your digital product. You can read more about other user experience development and evaluation methods, such as the cognitive walkthrough, in the article below:

How a Cognitive Walkthrough Can Help Your Product’s Usability
