Augmented Human 2014

Ok, I’m “a bit” biased, as I’m one of the conference co-chairs. Still, I enjoyed this year’s Augmented Human. Below is the tag cloud from all abstracts, to give you a brief overview of the topics.

[Image: tag cloud of all AH2014 abstracts]
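
In case you wonder how such a cloud is generated, here is a minimal sketch: count word frequencies over all abstracts and scale each term by its count. The file name and stop-word list are placeholders for illustration, not what was actually used.

```python
# Minimal sketch: build tag-cloud data from a plain-text dump of all
# abstracts. The file name and stop-word list are placeholders.
import re
from collections import Counter

STOP_WORDS = {"the", "and", "for", "with", "that", "this", "are", "our", "can"}

with open("ah2014_abstracts.txt", encoding="utf-8") as f:
    words = re.findall(r"[a-z]{3,}", f.read().lower())

counts = Counter(w for w in words if w not in STOP_WORDS)

# Scale the 30 most frequent terms to font sizes between 10 and 40 pt.
top = counts.most_common(30)
max_count = top[0][1]
for word, count in top:
    size = 10 + 30 * count / max_count
    print(f"{word}: {count} -> {size:.0f} pt")
```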

Considering the small size of the conference, the quality of the work is exceptional. It’s not one of those conferences that gets the rejected papers from CHI, UbiComp, PerCom, etc. The steering committee really set up a venue for far-out, novel ideas. It’s also a good opportunity to meet great researchers up close: last year, for example, Thad Starner and Albrecht Schmidt; this year, Jun Rekimoto, Masahiko Inami, Paul Lukowicz and especially Yoshiyuki Sankai … pretty impressive if you ask me. They might be around at other, bigger events, but trying to catch them there for a chat is nearly impossible. At AH, it’s very easy. I recommend that any young researcher interested in these topics attend next year’s AH. Surely, I will try to get some papers accepted ;)

I believe we will see a lot of the work presented at AH2014 at CHI or UbiComp next year. But decide for yourself.

In the following, I’ll show you just a couple of highlights. I’m sorry I cannot mention all of the cool work (while writing, the blog post got bigger and bigger, and I decided to stop at some point so I could finally publish it …).

# Sports

As the tag cloud already suggested, augmenting sports was a hot topic at the conference.

So, just in case you want to play a round of Quidditch or Shaolin Soccer in the real world, we might have the tech for it: Rekimoto’s group presented their research on “Hover Ball”. The topic has also been picked up by the New Scientist. I also recommend checking out some work by Takuya Nojima-sensei (TAMA and PhotoelasticBall).

The best paper award also went to a sports-themed paper: “Around Me: A System for Providing Sports Player’s Self-images with an Escort Robot”. Nice!

# Around the Eye

As you might know, I have a personal interest in eye tracking and related research, as I think it’s a very promising direction (especially for inferring types of information that you otherwise cannot easily get hold of). So I was very curious about the work presented at AH on the topic, and I was not disappointed.

I wonder whether I would feel comfortable sharing my sad emotions, as suggested by Tearsense (Marina Mitani, Yasuaki Kakehi). In a dark cinema, maybe. As part of a life-logging setup it could also be interesting. We also had a couple of interesting discussions about the technology during the social.

Asako Hosobori and Yasuaki Kakehi want to support face-to-face interaction with Eyefeel & EyeChime. Although the setup still seems a bit unnatural, I love the direction of this research: using technology to enrich our social lives and make us focus more on the things that are important (away from looking at smartphone screens). But judge for yourself.

# Haptics

The most far-out work regarding output devices was definitely “A Haptic Foot Interface for Language Communication” by Erik Hill et al. They use vibration motors to convey text messages on your foot. It made me wonder why we don’t use our feet more in HCI, given how sensitive they are and how large a part of the brain is dedicated to sensing on the feet.
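
Just to illustrate the general idea (my own toy sketch, not Hill et al.’s actual encoding), conveying text through a handful of foot-mounted motors could look roughly like this:

```python
# Toy sketch (NOT Hill et al.'s actual encoding): spell out a message
# through a small set of vibration motors on the foot, one pattern per
# character. buzz() is a stand-in for driving real motor hardware.
import time

PATTERNS = {  # hypothetical character-to-motor mapping
    "h": {0},
    "i": {0, 1},
    "!": {2, 3},
}

def buzz(motors, duration=0.3):
    # In a real device this would pulse the motors, e.g. over a serial
    # link to a microcontroller; here we just print and wait.
    print(f"vibrating motors {sorted(motors)} for {duration}s")
    time.sleep(duration)

def send_text(message):
    for ch in message.lower():
        if ch in PATTERNS:
            buzz(PATTERNS[ch])
            time.sleep(0.1)  # gap so consecutive characters stay distinct

send_text("hi!")
```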

Max Pfeiffer et al. showed how to make free-hand interactions (e.g. with a Kinect or a similar body-tracking system) more realistic using haptic feedback. Nice work!

The half-implant device on a fingernail by Emi Tamaki and Ken Iwasaki was also nice, especially considering that it’s already (or will soon be) a commercial product.

As always, I particularly liked Inami-sensei’s work. Suzanne Low presented “Pressure Detection on Mobile Phone by Camera and Flash”. A very innovative use of the camera, with nice demonstrations.
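
As I understand it, the trick is that a fingertip pressed over the camera and flash changes how much light reaches the sensor. A rough sketch of that idea (my assumption, not the authors’ implementation), using mean frame brightness as a crude relative pressure signal:

```python
# Rough sketch of the idea as I understand it (not the authors' code):
# with a finger covering camera and flash, the light reaching the sensor
# changes with pressure, so mean frame brightness can serve as a crude
# relative pressure signal. Sign and scale would need per-device calibration.
import cv2

cap = cv2.VideoCapture(0)  # stand-in for the phone camera, flash on
baseline = None

for _ in range(100):  # sample ~100 frames
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    brightness = float(gray.mean())
    if baseline is None:
        baseline = brightness  # calibrate on the first (light-touch) frame
    print(f"brightness deviation from baseline: {brightness - baseline:+.1f}")

cap.release()
```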

# Our work

We had three papers and one poster at the conference:

On the Tip of my Tongue - A Non-Invasive Pressure-Based Tongue Interface. Jingyuan Cheng, Ayano Okoso, Kai Kunze, Niels Henze, Albrecht Schmidt, Paul Lukowicz and Koichi Kise. In Proceedings of the 5th Augmented Human International Conference, 2014.

In the Blink of an Eye - Combining Head Motion and Eye Blink Frequency for Activity Recognition with Google Glass. Shoya Ishimaru, Kai Kunze, Koichi Kise, Jens Weppner, Andreas Dengel, Paul Lukowicz and Andreas Bulling. In Proceedings of the 5th Augmented Human International Conference, 2014.

What’s on your mind? Mental Task Awareness Using Single Electrode Brain Computer Interfaces. Alireza Sahami Shirazi, Mariam Hassib, Niels Henze, Albrecht Schmidt and Kai Kunze. In Proceedings of the 5th Augmented Human International Conference, 2014.

Haven’t we met before? - A Realistic Memory Assistance System to Remind You of The Person in Front of You. Masakazu Iwamura, Kai Kunze, Yuya Kato, Yuzuko Utsumi and Koichi Kise. In Proceedings of the 5th Augmented Human International Conference, 2014.

I’m particularly proud of Okoso’s and Shoya’s work. They are both still bachelor students, and their research is already published at an international conference.

As Shoya was still visiting DFKI in Germany, he sadly could not attend. Okoso gave the tongue-interface presentation, and I was impressed by her: it was her first talk at a conference, and she is a third-year bachelor student, yet her English was perfect and the talk was easy to understand and entertaining. Well done!

# Concluding

The full program can be found on the AH website, in case you’re looking for the references. See you next year at AH in Singapore.