Ubicomp ISWC Impressions

Usually I’m not a big fan of conference openings, but Friedemann Mattern provided a great intro, giving an overview of the origins of Pervasive and Ubicomp, mentioning all the important people and showing nice vintage pictures of Hans Gellersen, Alois Ferscha, Marc Langheinrich, Albrecht Schmidt, Kristof Van Laerhoven etc.

I was a bit sceptical before the merger of Pervasive and Ubicomp and the collocation with ISWC, yet my scepticism was completely unfounded: I was deeply impressed by the organization, the social events and the general talk quality.

We got some great feedback on Kazuma’s and Shoya’s demos. They both did a great job introducing their work about:

We also got a lot of interest in and feedback on Andreas Bulling’s and my work about recognizing document types using only eye gaze. By the way, below are the talk slides and the abstract of the paper.

##ISWC Talk Slides##

##Abstract##

Reading is a ubiquitous activity that many people even perform in transit, such as while on the bus or while walking. Tracking reading enables us to gain more insights about expertise level and potential knowledge of users – towards a reading log tracking and improve knowledge acquisition. As a first step towards this vision, in this work we investigate whether different document types can be automatically detected from visual behaviour recorded using a mobile eye tracker. We present an initial recognition approach that combines special purpose eye movement features as well as machine learning for document type detection. We evaluate our approach in a user study with eight participants and five Japanese document types and achieve a recognition performance of 74% using user-independent training.
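To make the “user-independent training” part of the abstract concrete, here is a minimal sketch of such an evaluation: per-segment gaze features, one classifier, and leave-one-participant-out cross-validation. All data, feature dimensions and the random-forest choice are my own toy stand-ins for illustration, not the features or classifier from the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(0)

# Toy stand-in data: 8 participants x 5 document types x 4 reading segments,
# 6 gaze features per segment (think fixation durations, saccade lengths ...).
n_participants, n_types, n_segments, n_features = 8, 5, 4, 6
X, y, groups = [], [], []
for participant in range(n_participants):
    for doc_type in range(n_types):
        for _ in range(n_segments):
            # class-dependent mean so the toy problem is learnable
            X.append(rng.normal(loc=doc_type, scale=1.0, size=n_features))
            y.append(doc_type)
            groups.append(participant)
X, y, groups = np.array(X), np.array(y), np.array(groups)

# "User-independent": each fold trains on 7 participants and tests on the
# held-out one, so the classifier never sees the test person's gaze data.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, groups=groups, cv=LeaveOneGroupOut())
print(f"mean accuracy over {len(scores)} folds: {scores.mean():.2f}")
```

The key point is splitting by participant id rather than randomly; a random split would leak person-specific gaze patterns into training and inflate the reported accuracy.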

Full paper link: I know what you are reading – Recognition of Document Types Using Mobile Eye Tracking

Excited about Ubicomp and ISWC

This year I’m really looking forward to Ubicomp and ISWC. It’s the first time that Ubicomp and Pervasive have merged into one conference, and the first time the venue has sold out, with 700 participants.

I cannot wait to chat with old friends and experts (most are both :)).

The field is slowly maturing. Wearable research, especially, is really pushing towards prime time. Most prominently, Google Glass is getting a lot of attention, including discussions about its impact on privacy. There is also more and more talk about fitness bracelets/trackers and smart watches. I expect we will see more intelligent clothing and activity recognition work in commercial products in the coming years.

By the way, we have 3 poster papers and 2 demos at Ubicomp and a short paper at ISWC.

###Ubicomp Demos and Posters###

###ISWC paper###

Drop by the demo and poster sessions and/or see my talk on Thursday.

On a side note, Ubicomp really picks great locations. This year it’s Zurich, next year Seattle and the year after it will be in Osaka. Seems I might be staying in Japan longer than I originally planned ;)

Some of my favorites from the 29c3 recordings

Over the last weeks, I finally got around to watching some of the 29c3 recordings. Here are some of my favorites. I will update the list accordingly.

I link to the official recordings available from the CCC domain. The talks are also on YouTube; just search for the talk title.

In general, I found that most talks focused on security, which sadly is not really my main interest. I missed the research and culture talks that were present in previous years, for example the awesome Data Mining for Hackers talk or one of the Bicyclemark episodes. Bicyclemark, we miss you :)

English

Out of the hacking talks, by far the most entertaining for me was Hacking Cisco Phones. Scary and so cool. Ang Cui and Michael Costello are also quite good presenters, and the hand-drawn slides give the visuals a nice touch. I won’t spoil the contents; just watch it.

So far my favorite talk is Romantic Hackers by Anne Marggraf-Turley and Prof. Richard Marggraf-Turley, about surveillance and hackers in the Romantic period. I was not aware that privacy problems and ideas about pervasive surveillance had been discussed and encountered so early in history. Very insightful and fun.

The Tamagotchi talk was fun. Although the speaker seemed a bit nervous (judging by her voice), she gave some great insights into how Tamagotchis work and how to hack them.

The keynote from Jacob Appelbaum, Not My Department, is a call to action for the tech community, discussing the responsibilities we have regarding our research and how it might be used. Although Appelbaum is a great public speaker and the topic is of utmost importance, people new to the discussion might find it a bit out of context and difficult to follow.

German

Of course, the usual suspects Fnord News Show and Security Nightmares are always great candidates to watch. If you speak German or want to practice it, check them out.

I’m also always looking forward to the yearly Martin Haase talk, especially interesting for language geeks. Unfortunately, the official release is not online yet.

The talk Are fair computers possible? explores what needs to change in manufacturing standards etc. to produce computers without child labor and with fair employment conditions for all workers involved.

ACM Multimedia 2012 Tutorials and Workshops

I attended the tutorials “Interacting with Image Collections – Visualisation and Browsing of Image Repositories” and “Continuous Analysis of Emotions for Multimedia Applications” on the first day.

On the last day I went to the “Workshop on Audio and Multimedia Methods for Large Scale Video Analysis” and the “Workshop on Interactive Multimedia on Mobile and Portable Devices”.

This is meant as a scratchpad … I’ll add more later if I have time.

Interacting with Image Collections – Visualisation and Browsing of Image Repositories

Schaefer gave an overview of how to browse large-scale image repositories. Interesting, yet not really related to my research interests. He showed three approaches for retrieval: mapping-based, clustering-based and graph-based. I would have loved for him to go into a bit more detail in the mobile section at the end.

Continuous Analysis of Emotions for Multimedia Applications

Hatice Gunes and Bjoern Schuller introduced the state of the art in emotion analysis. Their problems seem very similar to the ones we cope with in activity recognition, especially in terms of segmentation and continuous recognition. Their inference pipeline is comparable to ours in context recognition.

Where affective computing seems to have an edge is in standardized data sets. There are already quite a lot (mainly focusing on video and audio). I guess it’s also easier compared to the highly multi-modal datasets we deal with in activity recognition.

Hatice Gunes showed videos of two girls, one faking a laugh and the other laughing authentically. Interestingly enough, the whole audience was wrong in picking the authentic laugh. The girl faking it was overdoing it and laughed constantly, whereas authentic laughter has a time component, coming in waves: increasing, decreasing, increasing again etc.

The tools section contained the obvious candidates (OpenCV, Kinect, Weka …). Sadly, they did not mention the newer set of tools I love to use: check out pandas and IPython.
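As a quick illustration of why I like pandas for this kind of work (this example is mine, not from the tutorial, and the annotation data is made up): summarizing per-label statistics over an annotation log is a one-liner.

```python
import pandas as pd

# Hypothetical annotation log: per-segment emotion labels with annotator
# confidence values, the kind of table emotion studies produce.
df = pd.DataFrame({
    "participant": ["p1", "p1", "p2", "p2", "p2"],
    "label":       ["joy", "neutral", "joy", "joy", "neutral"],
    "confidence":  [0.9, 0.6, 0.8, 0.7, 0.5],
})

# One line gives mean confidence and segment count per label, which takes
# noticeably more code with plain Python lists and dicts.
summary = df.groupby("label")["confidence"].agg(["mean", "count"])
print(summary)
```

Inside IPython you can then inspect `summary` interactively, which is exactly the exploratory workflow that makes the pair so pleasant.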

A good overview of the state of the art. I would have loved to get more information about the subjective nature of emotion. For me it’s not as obvious as activity (where there is already a lot of room for ambiguity). Also, depending on personal experience and cultural background, the emotional response to a specific stimulus can be diverse.

Semaine Corpus

Media Eval

EmoVoice Audio Emotion classifier

qsensor

London eye mood