This Monday we’ll be holding a series of playtests at our studios in Stanford. As a mobile startup creating content for young children, we’re especially sensitive to questions of usability and design.
What kind of interface do children find intuitive? Which of our games will children find engaging? What design choices can we make so that our activities don’t distract from or overwhelm the Bible stories we’re telling?
Because we always have so much to learn, we try to approach testing as effectively as possible. So in preparation for Monday, I’d like to share with you how we set up our user testing sessions:
How do we conduct user testing?
We invite parents and children to our offices and set them loose on our apps (as opposed to running surveys, interviews, or participant observation). Our usability studies can’t reach as large a sample of users as surveys or interviews, but they let us directly observe how parents and children interact with each other and with Bible Heroes. The downside is that because usability studies are conducted in our offices, they don’t necessarily reflect how our apps are actually used, whether at a restaurant, at home, or on a plane. All the same, we find usability tests the most cost-effective way to conduct user testing.
What kind of users do we test?
Our current offerings are targeted at children aged 3 to 7, so we try to invite testers in that age group. One of the surprises we had when we first released Bible Heroes: Noah was the number of downloads we got from as far away as Asia and the Middle East. Ideally we’d like to test with people from across the world, but as a matter of logistics we’re limited to children in the Bay Area.
How many users do we test?
Even if you can only test one person, it will always be worth the effort. But in some often-quoted research, Jakob Nielsen showed that testing 5 users was enough to identify about 85% of design issues (assuming the average design issue is encountered by about 31% of users). With 15 users, the percentage of design issues identified approaches 100%, so that’s a good number to aim for.
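Nielsen’s numbers come from a simple formula: with n independent testers, the expected share of issues found is 1 − (1 − L)^n, where L is the probability that any single user encounters a given issue (his classic estimate is L = 31%). A minimal sketch of that calculation, for illustration only:

```python
def problems_found(n_users, hit_rate=0.31):
    """Expected share of design issues uncovered by n independent testers,
    assuming each tester encounters a given issue with probability hit_rate
    (Nielsen's classic L = 31%)."""
    return 1 - (1 - hit_rate) ** n_users

for n in (1, 5, 15):
    print(f"{n:>2} users -> {problems_found(n):.0%} of issues found")
```

Plugging in n = 5 gives roughly 84%, matching the “about 85%” figure above, and n = 15 gives well over 99%, which is why 15 is a reasonable ceiling.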
However, as Nielsen recommends, it’s best to take an iterative approach to user testing and test 3 groups of 5 rather than 15 at once. This gives you a chance to make improvements after each group and to see if the issues are indeed resolved. Our aim at 4Soils is to test at least 5 users after each major round of revisions.
We’re excited about learning from our users this coming Monday, and in a future post I’ll share some of our observations and the design changes they lead to.