Recognizing Reading Activities

Just finished my keynote talk at CBDAR (a workshop of ICDAR) — I got a lot of questions and came away with a lot of new research ideas :)

I’m pretty ignorant about Document Analysis (and Computer Vision in general), so it was great to talk to some experts in the field. Pervasive Computing and Document Analysis are very complementary, which makes them interesting to combine.

Here are my talk slides, followed by the talk abstract.

## Real-life Activity Recognition - Talk Abstract

Most applications in intelligent environments so far strongly rely on specific sensor combinations at predefined positions, orientations etc. While this might be acceptable for some application domains (e.g. industry), it hinders the wide adoption of pervasive computing. How can we extract high level information about human actions and complex real world situations from heterogeneous ensembles of simple, often unreliable sensors embedded in commodity devices?

This talk mostly focuses on how to use body-worn devices for activity recognition in general, and on how to combine them with infrastructure sensing and computer vision approaches for a specific high-level human activity, namely better understanding knowledge acquisition (e.g. recognizing reading activities).

We discuss how placement variations of electronic appliances carried by the user influence the possibility of using sensors integrated in those appliances for human activity recognition. I categorize possible variations into four classes: environmental placements, placement on different body parts (e.g. a jacket pocket on the chest vs. a hip holster vs. the trousers pocket), small displacements within a given coarse location (e.g. a device shifting in a pocket), and different orientations. For each of these variations, I give an overview of our efforts to deal with it.
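As a toy illustration of the last of these classes (orientation variations) — not the specific methods from the talk — a common trick is to work with orientation-invariant features such as the acceleration magnitude, so that how the device happens to be rotated inside a pocket matters less. The feature choices and window size below are illustrative assumptions:

```python
import numpy as np

def orientation_invariant_features(acc_xyz, window=50):
    """Simple orientation-robust features from a 3-axis accelerometer
    stream (rows = samples, columns = x, y, z).

    The per-sample magnitude is unchanged by device rotation, which
    sidesteps the 'different orientations' placement variation.
    """
    acc_xyz = np.asarray(acc_xyz, dtype=float)
    magnitude = np.linalg.norm(acc_xyz, axis=1)   # rotation-invariant signal
    n_windows = len(magnitude) // window
    feats = []
    for i in range(n_windows):
        seg = magnitude[i * window:(i + 1) * window]
        # Summarize each fixed-size window with a few basic statistics.
        feats.append([seg.mean(), seg.std(), seg.max() - seg.min()])
    return np.array(feats)

if __name__ == "__main__":
    # Example: 10 s of synthetic 50 Hz accelerometer data with gravity on z.
    rng = np.random.default_rng(0)
    fake_acc = rng.normal(0.0, 1.0, size=(500, 3)) + np.array([0.0, 0.0, 9.81])
    print(orientation_invariant_features(fake_acc).shape)  # -> (10, 3)
```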

In the second part of the talk, we combine several pervasive sensing approaches (computer vision, motion-based activity recognition, etc.) to tackle the problem of recognizing and classifying knowledge acquisition tasks, with a special focus on reading. We discuss which sensing modalities can be used for digital and offline reading recognition, as well as how to combine them dynamically.
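To make "combining them dynamically" concrete, here is a minimal late-fusion sketch — an assumption on my part, not the actual system from the talk — in which each modality reports class probabilities plus a confidence, so modalities that are currently unreliable (camera occluded, device not carried) contribute little to the combined decision:

```python
from dataclasses import dataclass

# Illustrative labels, not the taxonomy used in the talk.
LABELS = ("reading", "writing", "other")

@dataclass
class ModalityEstimate:
    """One modality's estimate for the current time window."""
    probs: dict          # label -> probability
    confidence: float    # 0.0 (unreliable/unavailable) .. 1.0

def fuse(estimates):
    """Confidence-weighted late fusion over several sensing modalities."""
    scores = {label: 0.0 for label in LABELS}
    total = sum(e.confidence for e in estimates) or 1.0
    for e in estimates:
        for label in LABELS:
            scores[label] += e.confidence * e.probs.get(label, 0.0)
    return {label: s / total for label, s in scores.items()}

if __name__ == "__main__":
    vision = ModalityEstimate({"reading": 0.7, "writing": 0.2, "other": 0.1}, 0.9)
    motion = ModalityEstimate({"reading": 0.4, "writing": 0.1, "other": 0.5}, 0.3)
    fused = fuse([vision, motion])
    print(max(fused, key=fused.get), fused)
```

The weighting here is deliberately simple; in practice the per-modality confidences could themselves be learned or driven by sensor availability.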
