Super Human Sports: Augmenting Blind Soccer

I’m getting more and more fascinated by augmenting blind soccer. After three blind soccer training sessions, we have now had a couple of meetings to discuss how to extend and enhance the play experience.

stretch kick

For me there are 3 interesting points about blind soccer:

  1. It’s very hard to learn. Can we make it easier for blind people to learn it? If you can play it, it’s very fast and empowering. We train with a soccer player from the Japanese national team. He plays better than me without being blindfolded (ok … that’s maybe not really an achievement, I’m terrible at soccer).
  2. Can we make it easier for sighted people to learn blind soccer and, in turn, understand more about the blind and improve their listening skills?
  3. Can we use technology to level the playing field, making it possible for blind and sighted players to play together?

The most interesting point, however, is that blind soccer can teach us that “disability” is a question of definition and environment.


The biggest take-away for me is that I rely too much on vision to make sense of my environment. The training made me more aware of sounds, and I find myself listening more and more. Sometimes, on a train or in the street, I now close my eyes and explore the environment by sound alone. It’s fascinating how much we can hear; this opened a new world for me. Looking into it more, I believe sound is an underestimated modality for augmented and virtual realities, one worth exploring further. I stumbled over a couple of papers about sonic interface design. Looking forward to applying some of the findings we get out of the blind soccer use case to our everyday lives ;)

If I find some more time, I’ll write a bit more about the training and the ideathons we have done so far. In the meantime, I recommend you try it sometime (if you are near Tokyo, you can maybe also join our sessions).

31C3 Talk Slides: Eye Wear Computing

I got tremendous, positive feedback. Thanks a lot! Even Heise had a news post about it: http://www.heise.de/newsticker/meldung/31C3-Mit-smarten-Brillen-das-Gehirn-ausforschen-2507482.html (although I cannot and don’t want to read your thoughts, as the article implies ;-) ).

Video on Youtube:

Slides on Speakerdeck:

I got some mixed feedback on Twitter. Some people mentioned that I’m working on spy wear, helping the surveillance state… I believe that the research I do needs to be out in the open, so that society can discuss its merits and problems. I want to make sure that we maximize the benefit of the technology I develop for the individual and minimize abuse by the military, companies and governments. Please contact me if you want to discuss privacy issues or have found concrete problems with my work.

31C3 Aftermath

Here’s my talk selection, in random order (I definitely forgot some, as I haven’t watched them all):

The Machine to Be Another – great research and talk. Mesmerizing; I’m thinking about how to use this effect for my work ;)

From Computation to Consciousness – a nice “philosophy” talk by Joscha.

Rocket Science – David Madlener gives a nice, entertaining intro into why it’s important to go to space and how to build rockets.

Why are computers so @#!*, and what can we do about it?

Traue keinem Scan, den du nicht selbst gefälscht hast (“Don’t trust any scan you haven’t forged yourself”) – in German, yet I think there’s a translation. Extremely funny; I hope the translation captures it.

Don’t watch the keynote. I wonder who picked the speaker … terrible.

This year, I spent a substantial amount of time at the Food Hacking Base. Interesting talks. I’m thinking more and more about doing some research in this direction, especially after listening to the new Resonator podcast about the connection between our gut and our brain (the recording is unfortunately in German).

Eye-Wear Computing

You might have seen the J!NS academic videos by now; I added embedded versions at the end of the post.

Below is the full video of the sneak peek of our work in the J!NS promotion. Special thanks to Shoya Ishimaru and Katsuma Tanaka, two talented students Koichi Kise Sensei (Osaka Prefecture University) and I are co-supervising. Check out their other (private) work if you are into programming for smartphones, Mac, iOS and Google Glass ;). The video is a summary of research work mostly done by Shoya.

In the video, we show applications using an early prototype of J!NS MEME, smart glasses with integrated electrodes to detect eye movements (electrooculography, EOG) and motion sensors (accelerometer and gyroscope) to monitor head motion. We show several demonstrations: a simple eye movement visualization, detection of left/right eye movements, and blink detection. Additionally, users can play a game, “Blinky Bird”, in which they need to help a bird avoid obstacles using eye movements. Using a combination of blinks, eye movements and head motion, we can also detect reading and talking behavior, and we can give people a long-term view of their reading, talking, and walking activity over the day.
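To give a rough idea of the kind of signal processing involved (and to be clear: this is not the J!NS MEME algorithm, which is still under review), a blink shows up as a short, sharp spike in the vertical EOG channel, so even a simple threshold over a drift-corrected signal gets you surprisingly far. Below is a minimal, hypothetical Java sketch of that idea; the sample rate, threshold, and refractory period are made-up values for illustration.

```java
/**
 * Minimal, hypothetical sketch of threshold-based blink detection on a
 * vertical EOG channel. NOT the (unpublished) J!NS MEME algorithm;
 * sample rate, threshold and refractory period are illustrative values.
 */
public class BlinkDetector {

    private static final double SAMPLE_RATE_HZ = 100.0;  // assumed EOG sample rate
    private static final double BLINK_THRESHOLD = 150.0; // spike height above baseline (arbitrary units)
    private static final double REFRACTORY_SEC = 0.3;    // ignore new spikes right after a blink

    private double baseline = 0.0;     // slow running average, removes electrode drift
    private long lastBlinkSample = -1;
    private long sampleCount = 0;

    /** Feed one vertical-EOG sample; returns true if this sample triggers a blink. */
    public boolean onSample(double verticalEog) {
        sampleCount++;
        // Slow exponential average tracks the drifting baseline of the electrodes.
        baseline = 0.995 * baseline + 0.005 * verticalEog;

        double deviation = verticalEog - baseline;
        boolean refractoryOver = lastBlinkSample < 0
                || (sampleCount - lastBlinkSample) / SAMPLE_RATE_HZ > REFRACTORY_SEC;

        if (deviation > BLINK_THRESHOLD && refractoryOver) {
            lastBlinkSample = sampleCount;
            return true;  // sharp upward spike -> count it as a blink
        }
        return false;
    }
}
```

In practice you would adapt the threshold per user and combine blink frequency with head motion from the IMU; that combination is roughly what the reading/talking detection in the video builds on, though the actual features and algorithms are part of the pending publications.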

Publications are still pending, so I cannot talk about the features, algorithms used, etc. yet. In the meantime, here is a demo we gave at UbiComp this year.

J!NS Academic Video:

Oh, and if you haven’t had enough: here’s an extended interview with Inami Sensei and me. You can see me wearing the optical camouflage for the first time at 0:04 :D (very short).

Google Glass for Older Adults

My grandparents conducted a longer Google Glass usability study for me. I’m happy they agreed to let me share their images and insights here.

oma opa

##Evaluation of the Current Google Glass

My grandparents mentioned that the current functionality of the device is quite limited. This might be due to the English-only menu and the poor to non-existent Internet connectivity during usage. The experimental setup seems unobtrusive: as both of them are used to wearing glasses, they got easily accustomed to carrying Glass. Both confirmed that the head-mounted display did not hinder them in performing everyday tasks. Only my grandmother mentioned discomfort, as the device got unusually hot after a longer usage session of recording video and displaying directions. During the simple reading test, they could read text at font sizes of 40 px and higher on the 640 x 360 screen if a white font on a black background was used (best contrast). However, other font colors were problematic and needed larger sizes to be legible. Especially the light grey font color used by some standard Glass applications was hard to read. For example, they could not read the night temperature on the weather card. Both were able to use most speech commands and the “head wake” feature (tilting the head back to activate Glass) without problems. Only the “google search” speech command was not recognized, probably due to their German accent ;)
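As a side note, hitting those readability numbers in your own Glassware is straightforward. Below is a minimal, hypothetical Android sketch (a plain full-screen TextView, not the styling of the built-in Glass cards) that uses the white-on-black, 40 px-and-up settings that were legible in our reading test.

```java
// Hypothetical Android/Glassware snippet: render text with the settings that
// were legible in the reading test (white on black, >= 40 px on the 640x360 display).
// This is an illustration, not the styling used by the built-in Glass cards.
import android.app.Activity;
import android.graphics.Color;
import android.os.Bundle;
import android.util.TypedValue;
import android.view.Gravity;
import android.widget.TextView;

public class HighContrastCardActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        TextView text = new TextView(this);
        text.setText("Night temperature: 3°C");               // example content
        text.setTextSize(TypedValue.COMPLEX_UNIT_PX, 48f);     // >= 40 px was readable in the test
        text.setTextColor(Color.WHITE);                        // best contrast in the test
        text.setBackgroundColor(Color.BLACK);
        text.setGravity(Gravity.CENTER);

        setContentView(text);                                  // fills the 640x360 screen
    }
}
```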

oma opa

While both were able to activate the photo functions easily, they had difficulty taking the pictures they intended. Glass seems to be optimized for taking photos at a slight distance (e.g. tourist pictures). However, my grandparents mostly wanted to take pictures to remember what they were doing (e.g. what they were holding in their hands). It took a while for them to realize that the camera does not take a photo of exactly what they see (depicted in the two pictures above).

It was difficult for the two of them to use the touch panel for navigating Glass. The activation tap and scrolling usually worked, yet the cancel gesture (“swiping down”) was more problematic and often not recognized. We assume this is due to fingers getting drier with age.

The active card display of Glass – swiping right to see cards relevant right now (e.g. weather, appointments), swiping left to see older events (e.g. taken pictures) – was intuitive. Yet, they had difficulties using some of the hierarchical menu structures (e.g. for settings or starting a video call). They mentioned that it is hard to tell in which context they are currently operating, as Glass gives no indication of the hierarchy level.

###Usage Patterns

Already during the two days of system use and the shopping tour, a number of usage patterns emerged. They used the camera feature the most. A common use case was memory augmentation: taking pictures of things they don’t want to forget. For example, taking a picture of medication so they can remember that it was already taken, or taking pictures of interesting items while shopping.

Both preferred the “hands-free” operation using the speech interface (although it was in English) over the touch interface during housework. Yet, when in town, they switched to the touch interface. During cooking and housework, my grandmother appreciated the timer provided by Glass. However, it was difficult for them to set the timer using the current interface, as this involves a hierarchical menu. It was not clear whether they were currently changing the hours, minutes, or seconds while in the respective submenu. Wearing the timer on the body was highly appreciated, as it’s not possible to miss the timer going off.

For gardening and cooking, they would like to do video chats, for example to ask friends for tips and get their advice. Unfortunately, the limited Internet reception at their place did not allow for video chats during the test phase.

###Requirements

Through the initial assessment of the system’s functionality and through observation, we found a number of requirements. The focus on readability is even more crucial for older adults. Although Glass was already designed with this in mind, it seems font size is not the only thing that matters. Contrast seems equally important, as both found it very difficult to read the light grey fonts used in some of the screens provided by Glass.

My grandparents request intuitive, hands-free interactions. They appreciated the speech interface when not in public, as well as the “blink to take a picture” feature, as they don’t need to interact with the touch panel. They also did not want to make the device dirty, especially during cooking and gardening. The touch interface was sometimes difficult for them to use. A potential reason is that they are not used to capacitive touch devices such as current smartphones. Scrolling worked, yet the cancel gesture (swiping down) was difficult to perform. It took two or three tries every time they wanted to use it.

##Application Ideas

Short Term Memory Augmentation – As described above, my grandparents frequently took pictures to use them as reminders (e.g. of taking medication). Using the time card interface of Glass, it was already easy to check whether they had performed the action or task in question by browsing through the taken pictures; each picture also comes with a timestamp. However, this only works for the last couple of days, as otherwise the user needs to scroll too far back. The most requested feature was zoom for images. While shopping, they took pictures to remember interesting items. Prices of items can be recorded, yet as the device does not support zoom for pictures, it’s impossible to read the price on the head-mounted display.

Long Term Capture and Access – The participants saw potential in having a long-term capture and access interface: checking how and what they worked on or did a couple of months or even years back. Search for specific activities (e.g. baking an apple cake) should be possible for access. They thought other types of indexing (e.g. location or time) would not be as useful.

Timer and Reminders – Although the interface was not optimal for them, the users already found the timer application useful. They raised the need for several simultaneous timers and reminders. Right now, the installed timer application supports just a single timer.

Instructions – For the gardening, cooking and workshop scenarios, my grandparents would like to get instructions (e.g. ingredient lists, work steps) for more complex tasks they do not perform often. They prefer the Glass display over paper or instruction manuals, as they don’t need to switch context, clean their hands, or stop what they are doing. Yet, they emphasized that the instructions need to be easily browsable. They would prefer the “right” information at the “right” time. Participant 2: “Can’t Glass infer that I’m baking a cake right now and show me the ingredients I need for it?”
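That wish maps quite naturally onto the activity recognition work described in the eye-wear section above: once a recognizer emits an activity label, showing the matching instructions is mostly a lookup. Below is a tiny, hypothetical Java sketch of that glue; the activity labels and ingredient lists are made up for illustration.

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/** Hypothetical glue between an activity recognizer and an instruction display. */
public class ContextInstructions {

    // Made-up mapping from recognized activities to instruction steps.
    private static final Map<String, List<String>> INSTRUCTIONS = new HashMap<>();
    static {
        INSTRUCTIONS.put("baking_apple_cake",
                Arrays.asList("4 apples", "200 g flour", "150 g sugar", "3 eggs", "Preheat oven to 180 °C"));
        INSTRUCTIONS.put("gardening_pruning",
                Arrays.asList("Disinfect shears", "Cut above an outward-facing bud", "Remove dead wood"));
    }

    /** Called whenever the (hypothetical) recognizer reports a new activity label. */
    public List<String> onActivityRecognized(String activityLabel) {
        // Fall back to an empty list when we have no instructions for this context.
        return INSTRUCTIONS.getOrDefault(activityLabel, Arrays.asList());
    }
}
```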

##Summary

These application scenarios are very similar to applications discussed for maintenance and, in general, industrial domains. Yet, as seen from the requirements, usability and the interface need to be adjusted significantly before wearable computing can be used by older adults without help.

##Finally, see you @ UbiComp

This is part of a series of articles about our UbiComp submissions. If you want to read more, check out the poster paper: Wearable Computing for Older Adults - Initial Insights into Head-Mounted Display Usage. Kunze, Kai and Henze, Niels and Kise, Koichi. Proceedings of UbiComp'14 Adjunct. 2014. Bibtex.

If you are attending UbiComp/ISWC this year, please drop by our poster. Oh, and if you found the write-up useful, please cite us or share the article :)

Looking forward to ISWC/Ubicomp 2014

I hope to see you in Seattle. This year we again have a couple of papers outlining work from our students. Attached is a list with draft versions of the papers. I will write a bit about each topic in the coming weeks.

Oh, and if you attend, please consider stopping by our Workshop on Ubiquitous Technologies for Augmenting the Human Mind.

Implicit Gaze based Annotations to Support Second Language Learning. Okoso, Ayano and Kunze, Kai and Kise, Koichi. Proceedings of UbiComp'14 Adjunct. 2014. Bibtex.

Wearable Computing for Older Adults - Initial Insights into Head-Mounted Display Usage. Kunze, Kai and Henze, Niels and Kise, Koichi. Proceedings of UbiComp'14 Adjunct. 2014. Bibtex.

Memory Specs-An Annotation System on Google Glass using Document Image Retrieval. Tanaka, Katsuma and Kunze, Kai and Iwata, Motoi and Kise, Koichi. Proceedings of UbiComp'14 Adjunct. 2014. Bibtex.

Smarter Eyewear - Using Commercial EOG Glasses for Activity Recognition. Ishimaru, Shoya and Kunze, Kai and Tanaka, Katsuma and Uema, Yuji and Kise, Koichi and Inami, Masahiko. Proceedings of UbiComp'14 Adjunct. 2014. Bibtex.


Position Paper: Brain Teasers - Toward Wearable Computing that Engages Our Mind. Ishimaru, Shoya and Kunze, Kai and Kise, Koichi and Inami, Masahiko. Proceedings of UbiComp'14 Adjunct. 2014. Bibtex.

Augmented Human 2014

Ok, I’m “a bit” biased, as I’m one of the conference co-chairs. Still, I enjoyed this year’s Augmented Human. Below is the tag cloud from all abstracts, to give you a brief overview of the topics.

Tag Cloud

Considering the small size of the conference, the quality of the work is exceptional. It’s not one of the conferences that gets the rejected papers from CHI, Ubicomp, PerCom, etc. The steering committee really set up a venue for far-out, novel ideas. It’s also a good opportunity to meet great researchers up close: last year, for example, Thad Starner and Albrecht Schmidt; this year, Jun Rekimoto, Masahiko Inami, Paul Lukowicz and especially Yoshiyuki Sankai … pretty impressive, if you ask me. They might be around at other, bigger events, yet trying to catch them and talk to them there is nearly impossible. At AH, it’s very easy. I recommend that any young researcher interested in these research topics attend next year’s AH. Surely, I will try to get some papers accepted ;)

I believe we will see a lot of the work presented at AH2014 at CHI or Ubicomp next year. Yet, decide for yourself.

In the following, I’ll show you just a couple of highlights. I’m sorry I cannot mention all of the cool work (while writing, I realized the blog post kept getting bigger and bigger, so I decided to stop at some point so I could finally publish it …).

#Sports#

As the tag cloud already suggested, augmenting sports was a hot topic at the conference.

So, just in case you want to play a round of Quidditch or Shaolin Soccer in the real world, we might have the tech for it: Rekimoto’s research on “Hover Ball”. This topic has also been picked up by the New Scientist. I also recommend checking out some work by Takuya Nojima Sensei (TAMA and PhotoelasticBall).

The best paper award also went to a sports-themed paper: “Around Me: A System for Providing Sports Player’s Self-images with an Escort Robot”. Nice!

#Around the Eye#

As you might know, I have a personal interest in eye tracking and related research, as I think it’s a very promising direction (especially for inferring types of information that you otherwise cannot easily get hold of). So I was very curious about the related work presented at AH on the topic and was not disappointed.

I’m wondering whether I would feel comfortable sharing my sad emotions, as suggested by Tearsense (Marina Mitani, Yasuaki Kakehi). Maybe in a dark cinema this is alright. As part of life logging it might also be interesting. We also had a couple of interesting discussions about the technology during the social.

Asako Hosobori and Yasuaki Kakehi want to support face-to-face interaction with Eyefeel & EyeChime. Although the setup still seems a bit unnatural, I love the direction of the research: using technology to enrich our social lives and make us focus more on the things that are important (away from looking at smartphone screens). Yet, judge for yourself.

#Haptics#

The most far-out work regarding output devices was definitely “A Haptic Foot Interface for Language Communication” by Erik Hill et al. They use vibration motors to convey text messages via your foot. It made me wonder why we don’t use our feet more for HCI, considering how sensitive our feet are and that a large part of our brain is dedicated to sensing with the feet.

Max Peiffer et al. showed how to make free-hand interactions (e.g. with a Kinect or similar body-tracking system) more realistic using haptic feedback. Nice work!

The half-implant device on a fingernail by Emi Tamaki and Ken Iwasaki was also nice, especially considering that it’s already (or soon will be) a commercial product.

As always, I particularly liked Inami-Sensei’s work. Suzanne Low presented “Pressure Detection on Mobile Phone By Camera and Flash”. Very innovative use of the camera and nice demonstrations.

#Our work#

We had three papers and one poster at the conference.


On the Tip of my Tongue - A Non-Invasive Pressure-Based Tongue Interface. Cheng, Jingyuan and Okoso, Ayano and Kunze, Kai and Henze, Niels and Schmidt, Albrecht and Lukowicz, Paul and Kise, Koichi. Proceedings of the 5th Augmented Human International Conference. 2014. Bibtex.


In the Blink of an Eye - Combining Head Motion and Eye Blink Frequency for Activity Recognition with Google Glass. Ishimaru, Shoya and Kunze, Kai and Kise, Koichi and Weppner, Jens and Dengel, Andreas and Lukowicz, Paul and Bulling, Andreas. Proceedings of the 5th Augmented Human International Conference. 2014. Bibtex.


What’s on your mind? Mental Task Awareness Using Single Electrode Brain Computer Interfaces. Shirazi, Alireza Sahami and Hassib, Mariam and Henze, Niels and Schmidt, Albrecht and Kunze, Kai. Proceedings of the 5th Augmented Human International Conference. 2014. Bibtex.


Haven’t we met before? - A Realistic Memory Assistance System to Remind You of The Person in Front of You. Iwamura, Masakazu and Kunze, Kai and Kato, Yuya and Utsumi, Yuzuko and Kise, Koichi. Proceedings of the 5th Augmented Human International Conference. 2014. Bibtex.

I’m particularly proud of Okoso’s and Shoya’s work. They are both still bachelor students, and their research is already published at an international conference.

As Shoya was still visiting DFKI in Germany, he sadly could not attend. Okoso gave the tongue interface presentation, and I was impressed by her. It was her first talk at a conference, and she is a third-year bachelor student. Her English was perfect, and the talk was easy to understand and entertaining. Well done!

#Concluding#

The full program can be found at the AH website in case you’re looking for the references. See you next year at AH in Singapore.

Beyond FuturICT

Although the FuturICT project has not received funding from the EU so far (I still believe this was a grave mistake), I can see that its spirit and our ideas live on. The Japanese COI-T Program focuses on the same issues and problems as FuturICT.

FuturICT

Dirk Helbing’s presentation gave an overview of the FuturICT effort and the refined research agenda. I really enjoyed seeing how the material has matured. The talks by Shunri Oda and Maso Fukuma also addressed similar problems and presented similar conclusions.

In the afternoon, Cornelius Herstatt shared some interesting observations about the future potential of the Japanese market. In particular, I liked his conclusions about the use of robots in society: seeing robots not as replacements for workers but as complements, allowing more independence and privacy for older adults.

#Take Home Message#

I’m not sure if I got it right, as social and system sciences are quite new to me, but this is what I took home from the symposium: system behavior is determined by connectivity patterns, coupling strength, interactions, etc., and small changes in any of them can lead to dramatic changes in the overall system. Technology today has fundamentally changed connectivity and coupling in human interactions. Yet, so far we develop technology in a standalone, uncontrolled way, and we have little idea how it influences society.

The Internet made any piece of digital, archival knowledge instantly and globally available. We are now on the verge of every real-life event becoming instantly and globally connected to the digital domain. With this “Internet of Things” we might have the perfect substrate and basis to explore these effects. A key phrase seems to be “participatory social sensing”.

#Discussions and Plenary Summary#

The discussions centered around the big picture of society and how to induce beneficial change (as well as how to prevent negative effects). Most strikingly, the panel members also mentioned some concrete ideas about change and addressed the research community in particular. There was a long discussion about how to accelerate research, in which Dirk Helbing again emphasized his concept of an “Idea Github for researchers”: a platform to share ideas, implementations, and research results more freely, with easy reproducibility and attribution to the corresponding inventors. The main premise is that it should not take us two to three years to finally publish our findings.

I have run into this problem in the wearable computing field as well. It is hard for “outsiders” (researchers not in the community) to enter the field. If they just read papers and build on the published research, they can never work on bleeding-edge research. You have to meet people from the different labs and know what they are working on to find interesting topics (and, more importantly, they have to trust you …).

#Concluding#

I’m very happy that FuturICT lives on. In general, I find the Japanese research community very open towards social computing, especially the idea of participatory social sensing. Maybe it’s more fruitful to continue the project ideas here in Japan first. It’s a bit sad that Europe has given away the chance of being an innovation leader in this field. Yet meeting Dirk again and seeing how his ideas and research towards social computing have matured and developed was also great. Europe might have missed a chance, yet there’s still hope ;)

If you are interested in research about society and social science, I can recommend “Society is a Complex Matter” by Philip Ball. It’s an engaging read for novices to the topic. I got a copy at the event.

Glass Talk at Hacker News Kansai

Glass 1 Glass 2

I’m sorry it took so long to put this online. Too much to do, especially due to Augmented Human 2014 (really looking forward to the beginning of March) :)

As part of our monthly Hacker News meetup in Kansai, I gave a small introduction on how to use (and hack for) Google Glass.

The slides are also available as a PDF download (7 MB).

The “Hacks” are fairly simple. I more or less show how to develop for Glass using the Native Development Kit, along with some demonstrations of how to read out sensors and other demo code I scraped from GitHub and adjusted (see the slides for links to the sources). The examples include CameraZoom, Face Detection and Blink Recognition.
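If you want to try the sensor read-out yourself, the standard Android SensorManager API works on Glass just like on a phone. Below is a minimal sketch of my own (an illustration, not the exact code from the slides) that logs accelerometer values while the app is in the foreground.

```java
// Minimal sketch of reading out the accelerometer on Glass via the standard
// Android SensorManager API. Illustration only; see the slides for the
// original demo code and sources.
import android.app.Activity;
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;
import android.os.Bundle;
import android.util.Log;

public class AccelLoggerActivity extends Activity implements SensorEventListener {

    private SensorManager sensorManager;
    private Sensor accelerometer;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        sensorManager = (SensorManager) getSystemService(SENSOR_SERVICE);
        accelerometer = sensorManager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER);
    }

    @Override
    protected void onResume() {
        super.onResume();
        // Register for updates while the activity is in the foreground.
        sensorManager.registerListener(this, accelerometer, SensorManager.SENSOR_DELAY_UI);
    }

    @Override
    protected void onPause() {
        super.onPause();
        sensorManager.unregisterListener(this);   // stop updates to save battery
    }

    @Override
    public void onSensorChanged(SensorEvent event) {
        // event.values = {x, y, z} acceleration in m/s^2
        Log.d("GlassAccel", String.format("x=%.2f y=%.2f z=%.2f",
                event.values[0], event.values[1], event.values[2]));
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) {
        // not needed for this demo
    }
}
```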

Glass 3 Glass 4