By Mentally Friendly

Tim Noonan

Voice, UX & Accessibility Consultant

We talk about the challenges of sound design, thinking outside the screen and whether it’s okay to fake a human.


Tim is a voice user interface designer and usability consultant focused on making accessibility a real, usable experience.

Having been an invisible-interface designer throughout his career, Tim reveals the commonly overlooked value of visual elements and the underestimated difficulty of creating pleasant experiences without them.



In sound design, what we are doing is making sure that we provide context to a listener the same way we do visually.
If you think about a computer screen, it is two-dimensional, so you can use your eyes to look at any part of that screen and know what is on it without any interaction with the technology. But if you are a blind person using a screen reader, you have to ask the screen reader what is at a particular point on the screen.
Because we are in the dimension of time, the sequencing of information is really important. If you put the important information too late, people have tuned out; if you put it too early, they weren't ready to listen because they did not have a marker cue.
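The sequencing principle above can be sketched in a few lines of code. This is a minimal, hypothetical illustration (the function and prompt segments are invented for this example, not taken from any real voice platform): a short marker cue primes the listener before the key information arrives, and lower-priority detail comes last.

```python
def build_prompt(marker_cue: str, key_info: str, detail: str) -> list[str]:
    """Order voice-prompt segments for a time-based interface:
    cue first (so the listener is ready), then the important
    information, then optional detail."""
    return [marker_cue, key_info, detail]


# Hypothetical banking prompt, segments ordered cue -> info -> detail.
prompt = build_prompt(
    marker_cue="Your balance:",      # primes the listener to attend
    key_info="forty-two dollars",    # the information they asked for
    detail="as of nine a.m. today",  # context they can safely tune out of
)
print(" ".join(prompt))
```

Reversing the order, leading with the detail, would place the key information exactly where the quote warns it gets lost: after the listener has tuned out.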


It’s very hard to find people, whether they’re coders or designers, who can think outside the screen … when you are moving into voice interfaces, you normally do what hasn’t been done … pair that with the fact that we notice what doesn’t work in sound interfaces, while what does work is invisible to us, and you have a paradox.
We have a big problem in the domain of sound because noise happens in time, and imagery and text happen in space … If you are embarking on a voice interface, you are actually embarking on an R&D venture rather than a design project … you need super smart people in order to conceptualise stuff.


What is happening when you are engaging with the system? How do you work out whether you are talking to a real person or to a computer? Does that distinction matter? The truth is it matters hugely.

If people think something is a human and it is technology, they are going to assume super intelligence from it, which doesn’t yet exist. If you make it too robotic, they won’t be able to relate to it … we have to experiment with that line.
While in reality, with a sound interface you can actually do much less, users expect much more. As soon as you hear a human voice, you expect human performance.

We talk back to our technology; we all swear at our computers. But we are more likely to talk back and be frustrated with a sound interface. The minute something becomes more human, we expect it to behave better, and if it does not behave, we want to let it know that it is not behaving.