Laughing Faces App in the App Store

Over the last couple of weeks, I have been getting settled in my new job. Since I’m working with computer vision researchers now, I started playing with the camera API on the iPhone.

Once again, I’m pleasantly surprised by the accessibility and quality of Apple’s APIs and their sample code.

Laughing Face

As a start, this little app is a “privacy-enhanced” camera app for entertainment purposes. It uses face detection and draws a little laughing face on top of each recognized head in real time. I hesitated to put it in the store, but some friends asked me to (I had to swap out the laughing face due to copyright constraints).
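The app itself is native iOS and I’m not posting its code here, but the detect-and-overlay loop can be sketched in the browser too. This is just a minimal sketch of the same idea, assuming the experimental Shape Detection API (FaceDetector), getUserMedia, and a hypothetical laughing-face.png overlay:

```javascript
// Minimal sketch: detect faces in a camera stream each frame and draw an
// overlay image over every detected face. FaceDetector is experimental and
// not available in all browsers; laughing-face.png is a placeholder asset.
const video = document.querySelector('video');
const canvas = document.querySelector('canvas');
const ctx = canvas.getContext('2d');
const overlay = new Image();
overlay.src = 'laughing-face.png'; // hypothetical overlay image

const detector = new FaceDetector({ fastMode: true });

async function drawFrame() {
  ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
  const faces = await detector.detect(canvas);
  for (const face of faces) {
    const { x, y, width, height } = face.boundingBox;
    ctx.drawImage(overlay, x, y, width, height); // cover each head
  }
  requestAnimationFrame(drawFrame);
}

navigator.mediaDevices.getUserMedia({ video: true }).then((stream) => {
  video.srcObject = stream;
  video.play();
  requestAnimationFrame(drawFrame);
});
```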

Grab it while it’s hot … it’s quite popular in Japan (understandable given the background, see below), China, and Saudi Arabia (of all places … if somebody can tell me why, please send me an email): Laughing Faces AppStore Link

By the way, I had over 250 downloads on the first day :) In case you’re wondering, the inspiration came from Ghost in the Shell: Stand Alone Complex.

If people bug me enough, I will make the PNG exchangeable. I can’t tell you too much yet, but expect an update when iOS 6 hits.

AAAI activity context workshop notes

I enjoyed the AAAI activity context workshop a lot.

The keynote How to make Face Recognition work (pdf) by Ashish Kapoor showed how to improve face recognition by introducing very simple “context” constraints (e.g., two people in the same image cannot be the same person). Very interesting work; I wonder how much better you could get by introducing more dynamic context into the face recognition task.
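To make the constraint idea concrete, here is a toy sketch (all faces, identities, and scores are invented, not taken from the talk): a greedy label assignment where two faces in the same image may never receive the same identity.

```javascript
// Toy "cannot-link" context constraint: two faces in the same image
// must not get the same identity label.
// scores[face][identity] = similarity score (invented numbers).
const scores = {
  faceA: { alice: 0.9, bob: 0.85 },
  faceB: { alice: 0.8, bob: 0.4 },
};

// Flatten to (face, identity, score) triples, best score first.
const triples = Object.entries(scores)
  .flatMap(([face, ids]) =>
    Object.entries(ids).map(([id, score]) => ({ face, id, score })))
  .sort((a, b) => b.score - a.score);

const assignment = {};
const usedIds = new Set();
for (const { face, id } of triples) {
  if (assignment[face] || usedIds.has(id)) continue; // enforce constraint
  assignment[face] = id;
  usedIds.add(id);
}

// Without the constraint, both faces would match "alice";
// with it: { faceA: 'alice', faceB: 'bob' }.
console.log(assignment);
```

Even this naive greedy pass shows the effect: the second face is forced to fall back to its next-best identity instead of duplicating the first.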

Gail Murphy gave the other keynote, Task Context for Knowledge Workers (pdf). She introduced context modelling for tasks in GTD scenarios. Also quite interesting, as it is completely complementary to my work (no mobile clients, sensors, etc.).

A lot of people were aware of our efforts in the Opportunity project and of the standard datasets we want to put out.

Rim Helaoui presented work on using Probabilistic Description Logics (pdf) for activity recognition, an interesting approach that tries to combine data-driven and rule-based activity inference. They used the Opportunity dataset ;)

Bostjan Kaluza shared the call for more standardized datasets in context recognition in his talk about The Activity Recognition Repository (pdf). A very important endeavor, and one I have raised several times myself; I think a broad effort in the field is necessary.

All the final papers are up on the workshop website.

Towards Dynamically Configurable Context Recognition Systems

Here’s a draft version of my publication for the Activity Context Workshop in Toronto. Below is the abstract.

Here’s the link to the source code for snsrlog for iPhone (which I mentioned during my talk).


Abstract

General representation, abstraction, and exchange definitions are crucial for dynamically configurable context recognition. However, to evaluate potential definitions, suitable standard datasets are needed. This paper presents our effort to create and maintain large-scale, multimodal standard datasets for context recognition research. We used these datasets in previous research to deal with placement effects and presented low-level sensor abstractions for motion-based on-body sensing. Researchers conducting novel data collections can rely on the toolchain and the low-level sensor abstractions summarized in this paper. Additionally, they can draw from our experience in developing and conducting context recognition experiments. Our toolchain is already a valuable rapid prototyping tool. Still, we plan to extend it to crowd-based sensing, enabling the general public to gather context data, learn more about their lives, and contribute to context recognition research. Applying higher-level context reasoning to the gathered context data is an obvious extension of our work.

Some of my publications are online

I’m slowly uploading a couple of references along with their PDF draft versions. You can find some of my publications in the corresponding section of this website.

Stay tuned for the BibTeX entries and some more papers.

Compensating for On-body Placement Effects in Activity Recognition

PhD thesis LaTeX sources on GitHub


I finally finished my PhD last year at the University of Passau. My advisor was Prof. Dr. Paul Lukowicz; the second advisor was Prof. Dr. Hans Gellersen. The thesis is already published at Opus Bayern. The PDF is open access, so feel free to read it (careful, it’s a 19 MB PDF): Compensating for On-Body Placement Effects in Activity Recognition as pdf

However, the sources were not available until now. I finally got around to pushing the LaTeX sources of my dissertation to GitHub.

[Read More]

Device Motion in HTML/JavaScript

A simple demo

A while ago, I built a simple demonstration of how to stream accelerometer data from a mobile device over WebSockets to a server using just HTML and JavaScript. It consists of a Node.js web server and a Processing.org visualization. As soon as a mobile browser connects to the server, a new red cube is shown on the screen (placed among randomly generated cubes). The transparent area around the cube changes depending on how strongly one shakes the phone.
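The full source is linked in the post; as a rough, minimal sketch of the streaming part (the server address, message format, and the use of the ws package are my assumptions, not necessarily what the original demo used), the browser side just forwards devicemotion samples over a WebSocket:

```javascript
// Browser side: forward accelerometer samples to the server.
// 'ws://example.com:8080' is a placeholder address.
const socket = new WebSocket('ws://example.com:8080');

window.addEventListener('devicemotion', (event) => {
  const a = event.accelerationIncludingGravity;
  if (!a || socket.readyState !== WebSocket.OPEN) return;
  socket.send(JSON.stringify({ x: a.x, y: a.y, z: a.z, t: Date.now() }));
});
```

And a matching Node.js receiver that derives the shake magnitude that could drive the cube’s transparent halo:

```javascript
// Node.js side, assuming the "ws" package: receive samples and compute
// an overall shake magnitude from the acceleration vector.
const { WebSocketServer } = require('ws');
const wss = new WebSocketServer({ port: 8080 });

wss.on('connection', (client) => {
  client.on('message', (data) => {
    const { x, y, z } = JSON.parse(data.toString());
    const magnitude = Math.sqrt(x * x + y * y + z * z);
    console.log(`shake magnitude: ${magnitude.toFixed(2)}`);
  });
});
```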

[Read More]