back @ this year’s CHI 2016. We just got 3 Late Breaking Work submissions accepted, and it seems they are currently available for free download on the ACM website
[Read More]
Lessons from the Dagstuhl Seminar on Eyewear Computing
We took a risk in organizing the Eyewear Computing Seminar, as we deviated significantly from the standard model, yet I believe it paid off.
[Read More]
Looking back on UbiComp/ISWC in Osaka
It felt nice to be back in Kansai for the biggest conferences in my field.
Overall, a very pleasant and inspiring event; it was nice to see some old friends and meet new ones.
[Read More]
After my first SIGGRAPH
I'm mostly impressed by the Hacking/Making Studios and the interactivity/demo exhibits. SIGGRAPH is my new favorite research conference.
[Read More]
Affective Wear: Recognizing facial expressions
Katsutoshi Masai, one of my Master's students, had the idea to track facial expressions using low-cost sensors in glasses. Quite nice work ;)
[Read More]
Super Human Sports: Augmenting Blind Soccer
I’m getting more and more fascinated by augmenting Blind Soccer. After three blind soccer training sessions, we have now had a couple of meetings to discuss how to extend and enhance the play experience.
For me, there are three interesting points about blind soccer:
- It’s very hard to learn. Can we make it easier for blind people to learn it? If you can play it, it’s very fast and empowering. We train with a soccer player from the Japanese national team; he plays better than I do even when I’m not blindfolded (ok … that’s maybe not really an achievement, I’m terrible at soccer).
- Can we make it easier for sighted people to learn blind soccer and, in turn, understand more about the blind and improve their own listening skills?
- Can we level the playing field, making it possible for blind and sighted players to play together using tech?
The most interesting point, however: I think blind soccer can teach us that “disability” is a question of definition and environment.
The biggest take-away for me: I rely too much on vision to make sense of my environment. The training made me more aware of sounds, and I find myself listening more and more. Sometimes, on a train or on the street, I now close my eyes and explore the environment just by sound. It’s fascinating how much we can hear. This opened a new world for me. Looking into it more, I believe sound is an underestimated modality for augmented and virtual realities, and it is worth exploring further. I stumbled upon a couple of papers about sonic interface design. I’m looking forward to applying some of the findings we get out of the blind soccer use case to our lives ;)
If I have some more time, I’ll write a bit more about the training and the ideathlons we did so far. In the meantime, I recommend you try it sometime (if you are near Tokyo, you can maybe also join our sessions).
31C3 Talk Slides: Eye Wear Computing
Here are the video and slides from my talk. I hope you like them. If you have any critique, please write me an email.
I got a tremendous amount of positive feedback. Thanks a lot! Even Heise ran a news post about it: http://www.heise.de/newsticker/meldung/31C3-Mit-smarten-Brillen-das-Gehirn-ausforschen-2507482.html (although I cannot and don’t want to read your thoughts, as the article implies ;-) ).
Video on YouTube:
Slides on Speakerdeck:
I also got some mixed feedback on Twitter. Some people mentioned that I’m working on spy wear, helping the surveillance state… I believe the research I do needs to be out in the open so that society can discuss its merits and problems. I want to make sure that we maximize the benefit of the tech I develop for the individual and minimize abuse by militaries, companies and governments. Please contact me if you want to discuss privacy issues or have found concrete problems with my work.
31C3 Aftermath
Finally, I have a little time to write about the congress. As in the last couple of years, I attended the 31st Chaos Communication Congress between Christmas and New Year.
Here’s my talk selection, in random order (I definitely forgot some, as I haven’t watched all of them yet):
- The Machine to Be Another – great research and a great talk. Mesmerizing; I’m thinking about how to use this effect for my work ;)
- From Computation to Consciousness – nice “Philosophy” talk by Joscha.
- Rocket Science – David Madlener gives a nice, entertaining intro into why it’s important to go to space and how to build rockets.
- Why are computers so @#!*, and what can we do about it?
- Traue keinem Scan, den du nicht selbst gefälscht hast (“Don’t trust any scan you didn’t forge yourself”) – in German, yet I think there’s a translated version. Extremely funny; I hope the translation captures it.
Don’t watch the keynote. I wonder who picked the speaker … terrible.
This year, I spent a substantial amount of time at the Food Hacking Base. Interesting talks. I’m thinking more and more about doing some research in this direction, especially after I listened to the new Resonator podcast about the connection between our gut and our brain (the recording is unfortunately in German).
Eye-Wear Computing
Smart glasses and, in general, eyewear are a fairly novel device class with a lot of possibilities for unobtrusive activity tracking. That's why I'm very excited to be working in the team of Masahiko Inami Sensei at Keio Media Design, doing research on J!NS MEME.
You might have seen the J!NS academic videos by now; I added embedded versions at the end of this post.
Below is the full video of the sneak peek of our work in the J!NS promotion. Special thanks to Shoya Ishimaru and Katsuma Tanaka, two talented students whom Koichi Kise Sensei (Osaka Prefecture University) and I are co-supervising. Check out their other (private) work if you are into programming for smartphones, Mac, iOS and Google Glass ;). The video is a summary of research work mostly done by Shoya.
In the video we show applications using an early prototype of J!NS MEME: smart glasses with integrated electrodes to detect eye movements (electrooculography, EOG) and motion sensors (accelerometer and gyroscope) to monitor head motion. We show several demonstrations: a simple eye movement visualization, and detection of left/right eye motion and blinks. Additionally, users can play a game, “Blinky Bird”, in which they help a bird avoid obstacles using eye movements. Using a combination of blinks, eye movements and head motion, we can also detect reading and talking behavior, and give people a long-term view of their reading, talking and walking activity over the day.
Publications are still pending, so I cannot talk about the features, algorithms used, etc. In the meantime, here is a demo we gave at UbiComp this year.
Smarter Eyewear: Using Commercial EOG Glasses for Activity Recognition. Ishimaru, Shoya and Kunze, Kai and Tanaka, Katsuma and Uema, Yuji and Kise, Koichi and Inami, Masahiko. Proceedings of UbiComp'14 Adjunct. 2014. Bibtex.
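Since I can’t go into our actual features and algorithms yet, here is a purely generic sketch of how blink detection from a vertical EOG channel is often done in principle: remove the slow drift, then look for short, large deflections, with a refractory period so a single blink isn’t counted twice. This is illustrative only and not the MEME pipeline; the sampling rate, threshold and signal names below are assumptions for the example.

```python
import numpy as np

def detect_blinks(eog_v, fs=100, threshold=150.0, refractory_s=0.25):
    """Toy blink detector over a vertical EOG channel (values are illustrative).

    Blinks show up as short, large deflections on top of a slowly drifting
    baseline, so we subtract a moving average and threshold the residual,
    enforcing a refractory period so one blink is only counted once.
    """
    window = int(fs * 1.0)  # 1 s moving average to estimate the slow drift
    baseline = np.convolve(eog_v, np.ones(window) / window, mode="same")
    residual = eog_v - baseline

    blink_times, last_blink = [], -np.inf
    for i, v in enumerate(residual):
        t = i / fs
        if v > threshold and (t - last_blink) > refractory_s:
            blink_times.append(round(t, 2))
            last_blink = t
    return blink_times

if __name__ == "__main__":
    # Synthetic example: noise floor plus three injected "blinks".
    fs = 100
    rng = np.random.default_rng(0)
    eog = 20 * rng.standard_normal(10 * fs)
    for blink_at in (2.0, 5.5, 8.2):
        idx = int(blink_at * fs)
        eog[idx:idx + 10] += 300.0  # short positive deflection
    print(detect_blinks(eog, fs=fs))  # roughly [2.0, 5.5, 8.2]
```

A real system would of course need proper band-pass filtering, per-user calibration and careful separation of blinks from saccades, but the basic idea is the same.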
J!NS Academic Video:
Oh, and if you haven’t had enough: here’s an extended interview with Inami Sensei and me. You can see me wearing the optical camouflage for the first time at 0:04 :D (very short).
Google Glass for Older Adults
As the first article about my grandparents using Google Glass received a lot of interest, I decided to delve a little bit more into the topic.
My grandparents conducted a longer Google Glass usability study for me. I’m happy they agreed to let me share their images and insights here.
## Evaluation of the Current Google Glass
My grandparents mentioned that the current functionality of the device is quite limited. This might be due to the English-only menu and the poor to non-existent Internet connectivity during usage. The experimental setup seems unobtrusive: as both of them are used to wearing glasses, they easily got accustomed to wearing Glass. Both confirmed that the head-mounted display did not hinder them in performing everyday tasks. Only my grandmother mentioned discomfort, as the device got unusually hot after a longer usage session of recording video and displaying directions.
While both were able to activate the photo functions easily, they had difficulties taking the picture they intended. Glass seems to be optimized for taking photos at a slight distance (e.g. tourist pictures). However, my grandparents mostly wanted to take pictures to remember what they were doing (e.g. what they were holding in their hands). It took a while for them to realize that the camera does not capture exactly what they see (depicted in the two pictures above).
It was difficult for the two of them to use the touch panel for navigating Glass. The activation tap and scrolling usually work, yet the cancel gesture (”swiping down”) is more problematic and often not recognized. We assume this is because fingers get drier with age.
The active card display of Glass – swiping right to see cards relevant right now (e.g. weather, appointments), swiping left to see older events (e.g. taken pictures) – was intuitive. Yet, they had difficulties using some of the hierarchical menu structures (e.g. for settings and making a video call). They mentioned that it is hard to tell in which context they are currently operating, as Glass gives no indication of the hierarchy level.
### Usage Patterns
Already during the two days of system use and the shopping tour, a number of usage patterns emerged. They used the camera feature the most. A common use case was memory augmentation: taking pictures of things they don’t want to forget, for example a picture of their medication so they can remember that it was already taken, or pictures of interesting items while shopping.
Both preferred the “hands-free” operation using the speech interface (although it was in English) over the touch interface during housework. Yet, when out in town, they switched to the touch interface. During cooking and housework, my grandmother appreciated the timer provided by Glass. However, it was difficult for them to set the timer using the current interface, as this involves a hierarchical menu; it was not clear whether they were currently changing the hours, minutes or seconds in the respective sub-menu. Wearing the timer on the body was highly appreciated, as it is impossible to miss the timer going off.
For gardening and cooking, they would like to do video chats, for example to ask friends for tips and get their advice. Unfortunately, the limited Internet reception at the participants’ place did not allow video chats during the test phase.
### Requirements
Through the initial assessment of the system’s functionality and through observation, we found a number of requirements. Readability is even more crucial for older adults. Although Glass was already designed with this in mind, it seems font size is not the only thing that matters: contrast seems equally important, as the participants found it very difficult to read the light grey fonts used on some of the screens provided by Glass.
My grandparents ask for intuitive, hands-free interactions. They appreciated the speech interface when not in public, as well as the “blink to take a picture” feature, as they don’t need to interact with the touch panel. They also did not want to make the device dirty, especially during cooking and gardening. The touch interface was sometimes difficult for them to use; a potential reason is that they are not used to capacitive touch devices such as current smartphones. Scrolling worked, yet the cancel gesture (swiping down) was difficult to perform, needing two or three tries every time they wanted to use it.
## Application Ideas
Short Term Memory Augmentation – As described above, the participants frequently took pictures to use as reminders (e.g. for taking medication). Using the timeline card interface of Glass, it was already easy to check whether they had performed the action or task in question by browsing through the taken pictures. Each picture also carries a timestamp (see Figure 2). However, this only works for the last couple of days, as otherwise the user needs to scroll too far back. The most requested feature was zoom for images. While shopping, the users took pictures to remember interesting items. Prices of items can be recorded, yet as the device does not support zooming into pictures, it is impossible to read the price on the head-mounted display (see Figure 5).
Long Term Capture and Access – The participants saw potential in a long-term capture-and-access interface: checking how and what they worked on or did a couple of months or even years back. For access, it should be possible to search for specific activities (e.g. baking an apple cake). The participants thought other types of indexing (e.g. by location or time) would be less useful.
Timer and Reminders – Although the interface was not optimal for them, the users already found the timer application useful. They raised the need for several simultaneous timers and reminders; right now, the installed timer application only supports a single task.
Instructions – For the gardening, cooking and workshop scenarios, my grandparents would like to get instructions (e.g. ingredient lists, work steps) for more complex tasks they do not perform often. They prefer the Glass display over paper or instruction manuals, as they don’t need to switch context, clean their hands, or stop what they are doing. Yet, they emphasized that the instructions need to be easily browsable. They would prefer the ”right” information at the ”right” time. Participant 2: ”Can’t Glass infer that I’m baking a cake right now and show me the ingredients I need for it?”
## Summary
These application scenarios are very similar to applications discussed for maintenance and, more generally, industrial domains. Yet, as seen from the requirements, the usability and interface need to be adjusted significantly before wearable computing can be used by older adults without help.
## Finally, see you @ UbiComp
This is part of a series of articles about our UbiComp submissions. If you want to read more, check out the poster paper: Wearable Computing for Older Adults – Initial Insights into Head-Mounted Display Usage. Kunze, Kai and Henze, Niels and Kise, Koichi. Proceedings of UbiComp'14 Adjunct. 2014. Bibtex.
If you are attending UbiComp/ISWC this year, please drop by our poster. Oh, and if you found this write-up useful, please cite us or share the article :)