So my idea for this is to take something we're already attempting to do and think about how we can do it better in the future. For me, that means voice control: when we use voice control, we want to be hands-free. But in reality, with devices like Siri and voice-controlled apps, you still use your hands for a lot of things. You've got to dig the phone out of your pocket, open the app you want, even read the results.
What if we had a device with no visual interface, controlled only by voice? Maybe it could sync with your phone, but in many cases, why do we need one? If we're driving, it could just tell us to go right or left; if we're hurrying to class and struggling to text and walk at the same time, it could handle that too.
Maybe some of these features would eventually translate to a visual interface, but I think a lot of things can be removed. I guess a lot of my inspiration comes from sci-fi movies in which an AI has a certain 'personality' and responds to being talked to like a person (as Siri does).
There’s a lot of room for things we can do with voice control that we aren’t doing. I want to explore that in this project.
I've been toying with how you'd interact with the device in your ear. Will it be something you press to speak while it's in your ear? Or maybe it's implanted there, so it listens whenever you say the AI's name. This is how I came across implanted hearing aids, which I thought were interesting.
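The two interaction modes I'm weighing (press-to-speak vs. an always-listening wake word) can be sketched in a few lines. This is just a toy illustration of the design choice, not a real implementation; the assistant name "Ava" and the function `handle_utterance` are made up for the example.

```python
# Hypothetical sketch of two ways to address an in-ear assistant.
# "Ava" and handle_utterance are invented names for illustration.

WAKE_WORD = "ava"  # hypothetical name for the AI

def handle_utterance(text, push_to_talk=False):
    """Return the command the assistant should act on, or None.

    In push-to-talk mode, every utterance is a command because the
    user pressed the earpiece first. In always-listening mode, the
    utterance only counts if it starts with the wake word.
    """
    text = text.strip().lower()
    if push_to_talk:
        return text or None
    if text.startswith(WAKE_WORD):
        # Drop the wake word plus any trailing comma/space.
        command = text[len(WAKE_WORD):].lstrip(" ,")
        return command or None
    return None  # not addressed to the assistant; stay silent

# Push-to-talk: the press itself signals intent.
print(handle_utterance("turn left at the next light", push_to_talk=True))
# Always listening: only wake-word utterances get through.
print(handle_utterance("Ava, what's my next turn?"))
print(handle_utterance("just chatting with a friend"))
```

The trade-off the sketch makes visible: push-to-talk still needs a hand (defeating the hands-free goal), while the wake word keeps the device listening all the time, which raises the implant-style "always on" questions above.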
I added Siri too, for the obvious connections.
And Ford SYNC, an in-car system with a ton of cool voice control options, for changing radio stations and such.