Towards Dynamically Configurable Context Recognition Systems

Here’s a draft version of my publication for the Activity Context Workshop in Toronto. Below is the abstract.

Here’s the link to the source code for snsrlog for iPhone (which I mentioned during my talk).


Abstract

General representation, abstraction and exchange definitions are crucial for dynamically configurable context recognition. However, to evaluate potential definitions, suitable standard datasets are needed. This paper presents our effort to create and maintain large-scale, multimodal standard datasets for context recognition research. We have used these datasets in previous research to deal with placement effects and to present low-level sensor abstractions for motion-based on-body sensing. Researchers conducting novel data collections can rely on the toolchain and the low-level sensor abstractions summarized in this paper. Additionally, they can draw from our experiences in developing and conducting context recognition experiments. Our toolchain is already a valuable rapid prototyping tool. Still, we plan to extend it to crowd-based sensing, enabling the general public to gather context data, learn more about their lives and contribute to context recognition research. Applying higher-level context reasoning to the gathered context data is an obvious extension to our work.
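
To give a rough idea of what a low-level, placement-robust sensor abstraction can look like (the paper itself defines the actual abstractions), here is a minimal sketch. It uses the acceleration magnitude, a common orientation-invariant feature in on-body sensing, summarized as per-window mean and standard deviation; the window size, the mean/std summary, and the function name are my own illustrative choices, not taken from the paper.

```python
import numpy as np

def magnitude_feature(acc_xyz: np.ndarray, window: int = 128) -> np.ndarray:
    """Illustrative placement-robust abstraction of a 3-axis accelerometer
    stream: per-window mean and standard deviation of the acceleration
    magnitude, which does not depend on sensor orientation.

    acc_xyz: array of shape (n_samples, 3) with raw x, y, z readings.
    Returns an array of shape (n_windows, 2) with [mean, std] per window.
    """
    # Orientation-invariant signal: Euclidean norm of each sample.
    magnitude = np.linalg.norm(acc_xyz, axis=1)

    # Split into non-overlapping windows and summarize each one.
    n_windows = len(magnitude) // window
    trimmed = magnitude[: n_windows * window].reshape(n_windows, window)
    return np.column_stack([trimmed.mean(axis=1), trimmed.std(axis=1)])


if __name__ == "__main__":
    # Synthetic example: ten seconds of 100 Hz accelerometer noise.
    rng = np.random.default_rng(0)
    fake_acc = rng.normal(0.0, 1.0, size=(1000, 3))
    print(magnitude_feature(fake_acc).shape)  # (7, 2) with window=128
```

Features of this kind are one way to make recognition less sensitive to where a sensor is worn, which is the placement-effect problem the abstract refers to.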