Made it! Spent most of my time trying to figure out how to visually show something like this using real pictures.
There were limitations to Google Glass, so I wanted to use an ‘extended’ version of Google Glass with two glass panels in front of your eyes so you could have a full field of augmented reality.
Anyway, here’s the final presentation: Deco Glass Presentation
So I’ve decided to run with a third-party application for Google Glass with which you can preview furniture in your apartment before buying it online, to see if it would fit, whether it works with your decor, and to get a good idea of how big things actually are. I intend for people to use this for clothes too; we might feel more inclined to buy clothes online if we could get an idea of how they look with our complexion, how they fit, etc.
So for my experience prototype I wanted to go through Amazon with some friends and talk about the things we would absolutely refuse to buy on Amazon, to get a better idea of what I should be focusing on.
So here’s a list of things we wouldn’t buy (inspired by things Amazon was trying to sell on Cyber Monday):
- Hot tubs
- Clothes, suits, jackets
- Wine (unless we knew the brand already)
- Many types of furniture
I think with some of these things, like wine and perfume, Google Glass couldn’t be much help. But for things like watches, bicycles, and furniture, I think it really could. We talked about how you can buy cheap bikes on Amazon, but that it wouldn’t make much sense unless you knew the height of the bike you were supposed to get. If you could see it in space next to you and compare it to yourself through Google Glass, you might change your mind (and see how high quality the parts are, or aren’t).
I’ve gotten started on some low-fidelity design mockups. I’m working through how I want users to navigate the interfaces. I can imagine people may want to browse Amazon on their computers as well as through Google Glass, which is where the favorites feature comes in: there they can save items they were interested in, to view on Google Glass later. But I thought they should also be able to browse without having to go back to the computer.
Since Google Glass implements a lot of voice control, I wanted to use that as the main way of navigating the interface. There would be some learned commands, such as “browse <blank>” or “next page/previous page” to page through favorites or browse results. To view a specific item, you would read aloud the name shown next to it.
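As a rough illustration (not part of the mockups), the command grammar described above could be sketched as a simple dispatcher that maps a transcribed utterance to an action. The command names and the "view <item>" form here are my assumptions, not a real Glass API:

```python
import re

# Hypothetical command set sketched from the design notes above.
# Each entry pairs a pattern with an action name; named groups
# capture the spoken argument ("browse couches" -> category="couches").
COMMANDS = [
    (re.compile(r"^browse (?P<category>.+)$"), "browse"),
    (re.compile(r"^next page$"), "next_page"),
    (re.compile(r"^previous page$"), "previous_page"),
    # Reading an item's name aloud is modeled here as "view <item>".
    (re.compile(r"^view (?P<item>.+)$"), "view_item"),
]

def dispatch(utterance):
    """Map a transcribed voice command to an (action, args) pair."""
    text = utterance.strip().lower()
    for pattern, action in COMMANDS:
        match = pattern.match(text)
        if match:
            return action, match.groupdict()
    return "unknown", {}
```

In a real app the speech recognizer would supply `utterance`, and each action name would route to a screen in the interface.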
- Describe what your interface does in 2 sentences or less.
It removes the need to use your hands and eyes for typical everyday tasks that are usually cumbersome to do on a phone when you are out and about.
- Who would use your interface?
Everyone who listens to music, makes phone calls, needs reminders, wants to play back something they heard earlier. Business people. Working professionals.
- What would they hope to gain?
Productivity, safer driving, and information that reaches them quicker. More control with less effort over their lives.
- What is the context/environment in which people will use your interface? Would it be used in public/private? Alone or in groups?
It depends on how awkward they feel talking to a computer in public. Many people would use this alone.
- What sorts of physical items might a user have to interact with?
- What questions do you need answered about your interface to see if it is necessary or effective?
How useful would it be to write emails or send texts with your voice? Does that make it more or less just a phone call or a voice message? Is there a need to translate spoken words to written text, or are we attracted to silent communication? Would we rather fumble for our phones silently with our hands full than ask “Computer, what’s my grocery list?” in the store?
How useful would it be to have all the information you need always hands free? Is that something we want?
So my idea for this would be taking something we are already attempting to do and thinking about ways we can do it better in the future. For me, this means that when we are using voice control we want to be hands free. But in reality, with devices like Siri and voice-controlled apps, you use your hands for a lot of things: you’ve got to dig the phone out of your pocket, open the app you want, even read the results.
What if we had a device that had no visual interface and was only voice controlled? Maybe it could sync with your phone, but in many cases, why would we need one? If we are driving, it could just tell us to go right or left; maybe we’re hurrying to class and struggling to text and walk at the same time.
Maybe some of these features eventually translate to a visual interface, but I think a lot of things can be removed. I guess a lot of my inspiration comes from sci-fi movies in which an AI has a certain ‘personality’ and responds to being talked to like a person (like Siri).
There’s a lot of room for things we can do with voice control that we aren’t doing. I want to explore that in this project.
I’ve been toying with how to interact with the device in your ear. Will it be something you press to speak while it’s in your ear? Or maybe it’s something that’s implanted there, so it listens when you say the name of the AI. This is where I came across implanted hearing aids and thought they were interesting.
Added Siri too, for obvious connections.
And Ford Sync, an in-car system with a ton of cool voice control options, for changing radio stations and such.
This is the final presentation 🙂
I explored three different design directions.