Do voice assistants understand what we say to them? Or are they simply “listening” for a trigger word before beginning their search for the information you’re requesting?
According to Smartsheet, “Assistant apps or smart speakers are always listening for their wake words. By default, the words ‘Hey Siri,’ ‘OK Google,’ ‘Hey Google,’ and ‘Alexa’ are the standards on their respective devices, but users can personalize their wake words to some degree.” While we live in an era that depends on voice assistants, there is still work to be done. Every wave of technology brings bugs that need to be worked out, and context is the main issue here. Setting the appropriate tone for a conversation is hard with a monotone medium that isn’t really there for you. In the article “Alexa, Siri, and Google Don’t Understand a Word You Say,” one statement stood out:
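The “always listening for wake words” behavior Smartsheet describes can be pictured as a simple loop: buffer audio, scan each transcribed chunk for a wake phrase, and only act on the speech that follows one. This is only a rough sketch; the names (`heard_wake_word`, `wake_word_loop`) and the idea of matching on transcripts are assumptions for illustration, since real devices use on-device keyword-spotting models rather than text matching.

```python
# Hypothetical sketch of an "always listening" wake-word loop.
# Real assistants use on-device acoustic keyword spotting, not text search.

WAKE_WORDS = {"hey siri", "ok google", "hey google", "alexa"}

def heard_wake_word(transcript: str) -> bool:
    """Return True if any wake phrase appears in the transcript."""
    text = transcript.lower()
    return any(wake in text for wake in WAKE_WORDS)

def wake_word_loop(transcript_chunks):
    """Scan a stream of transcribed chunks; yield only the speech
    that follows a wake word -- everything else is ignored."""
    awake = False
    for chunk in transcript_chunks:
        if awake:
            yield chunk          # forward the actual request
            awake = False        # return to passive listening
        elif heard_wake_word(chunk):
            awake = True         # wake word detected; capture next utterance
```

In this sketch, `list(wake_word_loop(["background chatter", "alexa", "play some jazz"]))` yields only `["play some jazz"]`: the device reacts to nothing until its wake word is heard.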
“You may not need to learn a new language, but you need to learn a new dialect.”
Finding the right footing to get to the point seems to be the most difficult part, and perhaps in the near future we will find a solution to this.
With a voice interface, we expect to communicate with an assistant as seamlessly as with anyone else we know. Yet existing systems have no perfect algorithm to bridge us together. Voice assistants often respond only to rigid, almost formulaic commands. When users ask a question of a search engine, their wording becomes robotic and simple, and that habit changes how we phrase questions when speaking to assistants. For example, if you ask Siri to remind you of an event, she will say something along the lines of “OK. I’ve set a reminder.” The interaction is less fluid and does not use words the way a friend would. We have our work cut out for us to surpass these language fluidity issues and take voice assistants to full language comprehension.
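The rigid, formulaic matching described above can be sketched as a template-based command handler: only an utterance that fits the exact pattern is understood, and anything phrased more naturally falls through. The pattern and the canned reply are hypothetical, chosen to mirror the “OK. I’ve set a reminder” example; this is not how any real assistant is implemented.

```python
import re

# Hypothetical template matcher illustrating formulaic command handling:
# only "remind me to <task>" is recognized; natural phrasings fail.
REMINDER_PATTERN = re.compile(r"^remind me to (?P<task>.+)$", re.IGNORECASE)

def handle_command(utterance: str) -> str:
    """Return a canned reply for a matching command, else an error."""
    match = REMINDER_PATTERN.match(utterance.strip())
    if match:
        # The stiff, less-than-fluid confirmation the article mentions.
        return f"OK. I've set a reminder: {match.group('task')}"
    return "Sorry, I didn't understand that."
```

Here `handle_command("Remind me to call mom")` succeeds, while the perfectly natural `handle_command("Hey, don't let me forget to call mom later")` gets “Sorry, I didn't understand that.” The gap between those two phrasings is exactly the dialect users are forced to learn.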