Ubicomp ISWC Impressions

Usually I’m not such a big fan of conference openings, yet Friedemann Mattern provided a great intro, giving an overview of the origins of Pervasive and Ubicomp, mentioning all the important people, and showing nice vintage pictures of Hans Gellersen, Alois Ferscha, Marc Langheinrich, Albrecht Schmidt, Kristof Van Laerhoven, etc.

I was a bit sceptical before the merger of Pervasive and Ubicomp and the collocation with ISWC, yet my scepticism was completely unfounded: I’m deeply impressed by the organization and by the quality of the social events and the talks.

We got some great feedback on Kazuma’s and Shoya’s demos. They both did a great job introducing their work:

We also got a lot of interest in and feedback on Andreas Bulling’s and my work on recognizing document types using only eye gaze. Below are the talk slides and the abstract of the paper.

##ISWC Talk Slides##

##Abstract##

Reading is a ubiquitous activity that many people even perform in transit, such as while on the bus or while walking. Tracking reading enables us to gain more insights about the expertise level and potential knowledge of users – towards reading log tracking and improved knowledge acquisition. As a first step towards this vision, in this work we investigate whether different document types can be automatically detected from visual behaviour recorded using a mobile eye tracker. We present an initial recognition approach that combines special-purpose eye movement features and machine learning for document type detection. We evaluate our approach in a user study with eight participants and five Japanese document types and achieve a recognition performance of 74% using user-independent training.

Full paper link: I know what you are reading – Recognition of Document Types Using Mobile Eye Tracking
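Out of curiosity, here’s a minimal sketch of what such a user-independent (leave-one-participant-out) evaluation can look like in scikit-learn. The features, labels, and classifier are placeholders for illustration, not the actual pipeline from the paper:

```python
# Minimal sketch of user-independent evaluation: per-sample eye-movement
# features, a standard classifier, leave-one-participant-out splits.
# All data below is synthetic; feature semantics are made up.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(0)

n_samples = 8 * 5 * 4                      # 8 participants x 5 doc types x 4 samples
X = rng.normal(size=(n_samples, 6))        # e.g. saccade lengths, fixation durations ...
y = rng.integers(0, 5, size=n_samples)     # document type labels (5 classes)
groups = np.repeat(np.arange(8), n_samples // 8)  # participant id per sample

clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, groups=groups, cv=LeaveOneGroupOut())
print(f"user-independent accuracy: {scores.mean():.2f}")
```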

Excited about Ubicomp and ISWC

This year I’m really looking forward to Ubicomp and ISWC: it’s the first time Ubicomp and Pervasive have merged into one conference, and the first time the venue has sold out, with 700 participants.

I cannot wait to chat with old friends and experts (most are both :)).

The field is slowly maturing. Wearable research in particular is really pushing towards prime time. Most prominently, Google Glass is getting a lot of attention, including discussions of its impact on privacy. There is also more and more talk about fitness bracelets/trackers and smartwatches. I expect we will see more intelligent clothing and activity recognition work in commercial products in the coming years.

By the way, we have 3 poster papers and 2 demos at Ubicomp and a short paper at ISWC.

###Ubicomp Demos and Posters###

###ISWC paper###

Drop by the demo and poster sessions and/or see my talk on Thursday.

On a side note, Ubicomp really picks great locations. This year it’s Zurich, next year Seattle, and the year after it will be in Osaka. Seems I might be staying in Japan longer than I originally planned ;)

ACM Multimedia 2012 Tutorials and Workshops

On the first day, I attended the tutorials “Interacting with Image Collections – Visualisation and Browsing of Image Repositories” and “Continuous Analysis of Emotions for Multimedia Applications”.

On the last day, I went to the “Workshop on Audio and Multimedia Methods for Large Scale Video Analysis” and the “Workshop on Interactive Multimedia on Mobile and Portable Devices”.

This is meant as a scratchpad … I’ll add more later if I have time.

Interacting with Image Collections – Visualisation and Browsing of Image Repositories

Schaefer gave an overview of how to browse large-scale image repositories. Interesting, yet not really related to my research interests. He showed three approaches for retrieval: mapping-based, clustering-based, and graph-based (a toy sketch of the clustering idea is below). I would have loved it if he had gone into a bit more detail in the mobile section at the end.
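For my own notes, here is a toy sketch of the clustering-based idea: cluster image feature vectors and use each cluster’s most central image as a browsing entry point. The features and parameters are made up, not taken from the tutorial slides:

```python
# Toy sketch of clustering-based image browsing: k-means over (fake) image
# feature vectors; the image nearest each cluster centre becomes a browsing
# entry point (e.g. a thumbnail on the overview screen).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import pairwise_distances_argmin

features = np.random.rand(1000, 64)   # stand-in for colour/texture descriptors
km = KMeans(n_clusters=10, n_init=10, random_state=0).fit(features)

# index of the image closest to each cluster centre
representatives = pairwise_distances_argmin(km.cluster_centers_, features)
print(representatives)
```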

Continuous Analysis of Emotions for Multimedia Applications

Hatice Gunes and Bjoern Schuller introduced the state of the art in emotion analysis. Their problems seem very similar to the ones we cope with in activity recognition, especially in terms of segmentation and continuous recognition. Their inference pipeline is comparable to ours in context recognition.
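To make the parallel concrete, here’s a rough sketch of the shared pipeline shape: segment a continuous signal into sliding windows, compute per-window features, then classify each window. Window length and features are my own illustrative choices, not anything from the tutorial:

```python
# Sliding-window segmentation + per-window features, the common front end
# of both emotion analysis and activity recognition pipelines.
import numpy as np

def sliding_windows(signal, win, step):
    """Yield overlapping windows over a 1-D signal."""
    for start in range(0, len(signal) - win + 1, step):
        yield signal[start:start + win]

def features(window):
    """Toy per-window features: mean, std, zero-crossing rate."""
    zcr = np.mean(np.abs(np.diff(np.sign(window))) > 0)
    return np.array([window.mean(), window.std(), zcr])

signal = np.random.randn(1000)   # stand-in for an audio or sensor stream
X = np.array([features(w) for w in sliding_windows(signal, win=100, step=50)])
print(X.shape)   # (n_windows, n_features) -> feed to any classifier
```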

Where affective computing seems to have an edge is in standardized datasets. There are already quite a lot (mainly focusing on video and audio). I guess it’s also easier compared to the very multi-modal datasets we deal with in activity recognition.

Hatice Gunes showed videos of two girls, one faking a laugh and the other laughing authentically. Interestingly enough, the whole audience was wrong in picking the authentic laugh. The faking girl was overdoing it and laughed constantly, whereas authentic laughter has a temporal component (it comes in waves: increasing, decreasing, increasing again, etc.).
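Just to illustrate the “waves” idea with a toy example of my own (not from the tutorial): given an intensity envelope for each laugh, the authentic one oscillates while the constant, overdone one does not:

```python
# Toy illustration: count peaks in a smoothed laugh-intensity envelope.
# Authentic laughter oscillates (several peaks); a constant fake laugh
# has none. Signals are synthetic stand-ins.
import numpy as np
from scipy.signal import find_peaks

t = np.linspace(0, 5, 500)
authentic = np.abs(np.sin(2 * np.pi * 0.8 * t))   # intensity coming in waves
fake = np.full_like(t, 0.8)                       # constant, overdone intensity

for name, env in [("authentic", authentic), ("fake", fake)]:
    peaks, _ = find_peaks(env, prominence=0.2)
    print(name, "intensity peaks:", len(peaks))   # authentic: several, fake: 0
```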

The tools section contained the obvious candidates (OpenCV, Kinect, Weka …). Sadly, they did not mention the newer set of tools I love to use: check out pandas and IPython.
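For instance, a few lines of pandas are enough to resample a (hypothetical) sensor log and summarize it per label:

```python
# Quick taste of pandas on a made-up accelerometer log: resample to
# 1-second means and summarize per activity label.
import numpy as np
import pandas as pd

idx = pd.date_range("2012-11-01 10:00", periods=600, freq="100ms")
df = pd.DataFrame({
    "accel": np.random.randn(600).cumsum(),
    "label": ["walk"] * 300 + ["sit"] * 300,
}, index=idx)

per_second = df["accel"].resample("1s").mean()    # downsample the signal
print(per_second.head())
print(df.groupby("label")["accel"].agg(["mean", "std"]))
```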

A good overview of the state of the art. I would have loved to get more information about the subjective nature of emotion. To me it’s not as clear-cut as activity (where there is already a lot of room for ambiguity). Also, depending on personal experience and cultural background, the emotional response to a specific stimulus can vary widely.

SEMAINE Corpus

MediaEval

EmoVoice audio emotion classifier

Affectiva Q Sensor

London Eye mood

Workshop on Audio and Multimedia Methods for Large Scale Video Analysis

Workshop on Interactive Multimedia on Mobile and Portable Devices