Laughing Faces App in the App Store

Over the last couple of weeks, I have been getting settled into my new job. As I’m working with computer vision researchers now, I started playing with the camera API for the iPhone.

Again, I’m very surprised by the accessibility and quality of Apple’s APIs and their sample code.

Laughing Face

As a start, this little app is a “privacy enhanced” camera app for entertainment purposes. It uses face detection and draws a little laughing face on top of each recognized head in real time. I hesitated to put it in the store, but some friends asked me to do so (I had to exchange the laughing face due to copyright constraints).
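If you’re curious how such an overlay works in principle, here is a minimal browser-based sketch of the same idea. To be clear, this is not the app’s actual (native iOS) implementation: it assumes the experimental FaceDetector shape-detection API, which is only available behind flags in some Chromium browsers, and a hypothetical laughing-face.png overlay asset.

```typescript
// Minimal sketch: detect faces in a live camera stream and draw an
// overlay image on each one. NOT the app's native iOS implementation;
// relies on the experimental Shape Detection API (window.FaceDetector).

const video = document.createElement("video");
const canvas = document.createElement("canvas");
const ctx = canvas.getContext("2d")!;
document.body.append(video, canvas);
video.style.display = "none"; // only the composited canvas is shown

// Hypothetical overlay asset standing in for the laughing face PNG.
const overlay = new Image();
overlay.src = "laughing-face.png";

async function run(): Promise<void> {
  if (!("FaceDetector" in window)) {
    throw new Error("FaceDetector API not supported in this browser");
  }
  const detector = new (window as any).FaceDetector({ fastMode: true });

  const stream = await navigator.mediaDevices.getUserMedia({ video: true });
  video.srcObject = stream;
  await video.play();
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;

  const tick = async () => {
    ctx.drawImage(video, 0, 0); // draw the current camera frame
    const faces = await detector.detect(canvas);
    for (const face of faces) {
      const { x, y, width, height } = face.boundingBox;
      ctx.drawImage(overlay, x, y, width, height); // cover each head
    }
    requestAnimationFrame(tick);
  };
  requestAnimationFrame(tick);
}

run();
```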

Grab it while it’s hot … it’s quite popular in Japan (understandable given the background, see below), China, and Saudi Arabia (of all places … if somebody can tell me why, please send me a mail): Laughing Faces App Store Link

By the way, I had over 250 downloads on the first day :) Oh, if you wonder, the inspiration came from Ghost in the Shell: Stand Alone Complex.

If people bug me enough, I will make the PNG exchangeable. I can’t tell you too much yet, but expect an update when iOS 6 hits.

AAAI activity context workshop notes

I enjoyed the AAAI activity context workshop a lot.

The keynote How to make Face Recognition work (pdf) by Ashish Kapoor showed how to improve face recognition by introducing very simple “context” constraints (e.g., two people in the same image cannot be the same person). Very interesting work; I wonder how much better you can get by introducing some more dynamic context recognition into the face recognition task.
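To make the constraint idea concrete, here is a toy sketch of my own; the identities and scores are purely illustrative and not from the keynote. Within a single image, each face greedily takes its best-scoring candidate identity that no other face in the image has claimed yet.

```typescript
// Toy illustration of the "same image => different people" constraint.
// Each detected face has a similarity score per candidate identity; we
// greedily assign identities without reusing one within an image.

type Scores = Record<string, number>; // identity -> similarity score

function labelImage(faces: Scores[]): (string | null)[] {
  const used = new Set<string>();
  return faces.map((scores) => {
    // Rank candidates by score; skip identities already used in this image.
    const best = Object.entries(scores)
      .sort(([, a], [, b]) => b - a)
      .find(([id]) => !used.has(id));
    if (!best) return null;
    used.add(best[0]);
    return best[0];
  });
}

// Without the constraint, both faces would be labeled "alice".
console.log(
  labelImage([
    { alice: 0.9, bob: 0.4 },
    { alice: 0.8, bob: 0.7 }, // constrained to "bob"
  ])
); // -> ["alice", "bob"]
```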

Gail Murphy gave the other keynote, Task Context for Knowledge Workers (pdf). She introduced context modelling for tasks in GTD scenarios. Also quite interesting, as it is completely complementary to my work (no mobile clients, sensors, etc.).

A lot of people were aware of our efforts during the Opportunity Project and the standard datasets we want to put out.

Rim Helaoui presented work about using Probabilistic Description Logics (pdf) for activity recognition, an interesting approach that tries to combine data-driven and rule-based activity inference. They used the Opportunity dataset ;)

Bostjan Kaluza repeated the call for more standardized datasets in context recognition in his talk about The Activity Recognition Repository (pdf). It’s a very important endeavor, one I have also discussed several times; I think a broad effort in the field is necessary.

All the final papers are up on the workshop website.

Towards Dynamically Configurable Context Recognition Systems

Here’s a draft version of my publication for the Activity Context Workshop in Toronto. Below is the abstract.

Here’s the link to the source code for snsrlog for iPhone (which I mentioned during my talk).


Abstract

General representation, abstraction, and exchange definitions are crucial for dynamically configurable context recognition. However, to evaluate potential definitions, suitable standard datasets are needed. This paper presents our effort to create and maintain large-scale, multimodal standard datasets for context recognition research. We used these datasets in previous research to deal with placement effects, and we presented low-level sensor abstractions for motion-based on-body sensing. Researchers conducting novel data collections can rely on the toolchain and the low-level sensor abstractions summarized in this paper. Additionally, they can draw from our experiences in developing and conducting context recognition experiments. Our toolchain is already a valuable rapid prototyping tool. Still, we plan to extend it to crowd-based sensing, enabling the general public to gather context data, learn more about their lives, and contribute to context recognition research. Applying higher-level context reasoning to the gathered context data is an obvious extension of our work.

Some of my publications are online

I am currently in the process of uploading my research publications to this website. In the publications section, you’ll find select papers, including PDF drafts of my work. I will be regularly updating this collection with additional publications and their corresponding BibTeX citations.

Compensating for On-body Placement Effects in Activity Recognition

Ph.D. thesis LaTeX sources on GitHub


I’m pleased to announce the completion of my Ph.D. from the University of Passau under the guidance of my advisors:

Principal Advisor: Prof. Dr. Paul Lukowicz
Secondary Advisor: Prof. Dr. Hans Gellersen

My doctoral thesis has been published and is available at Opus Bayern. The PDF is open access, so feel free to read it (careful, it’s a 19 MB PDF): Compensating for On-Body Placement Effects in Activity Recognition as PDF

However, the sources were not available until now. I finally got around to pushing the LaTeX sources of my dissertation to GitHub.


Device Motion in HTML/JavaScript

A simple demo

This project demonstrates real-time accelerometer data streaming from mobile devices using WebSocket technology. The system combines a Node.js web server with a Processing.org visualization interface. When a mobile device connects to the server through its browser, the visualization displays a distinctive red cube among randomly positioned background cubes. The cube features a dynamic transparent aura that responds to the intensity of the device’s movement: the more vigorously you shake your phone, the more pronounced the visual effect becomes.
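For reference, the mobile-browser side of such a setup fits in a few lines. The WebSocket endpoint and JSON message format below are assumptions for illustration, not necessarily the project’s actual protocol.

```typescript
// Sketch of the mobile-browser side: read accelerometer samples from the
// devicemotion event and stream them to the visualization server over a
// WebSocket. Endpoint URL and message format are illustrative assumptions.

const socket = new WebSocket("ws://example.local:8080"); // hypothetical server

window.addEventListener("devicemotion", (event: DeviceMotionEvent) => {
  const acc = event.accelerationIncludingGravity;
  if (!acc || socket.readyState !== WebSocket.OPEN) return;

  // One JSON sample per event; devicemotion typically fires at ~60 Hz.
  socket.send(JSON.stringify({ x: acc.x, y: acc.y, z: acc.z }));
});
```

On the receiving end, a shake intensity like the one driving the cube’s aura can be derived from the acceleration magnitude sqrt(x² + y² + z²) relative to gravity (~9.81 m/s²).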
