Looking forward to ISWC/Ubicomp 2014

With roughly one month to go, we are busy with demo preparations and the like.

I hope to see you in Seattle. This year we again have a couple of papers outlining work from our students. Below is a list with draft versions of the papers; I will write a bit about each topic in the coming weeks.

Oh, and if you attend, please consider stopping by our Workshop on Ubiquitous Technologies for Augmenting the Human Mind.

Implicit Gaze based Annotations to Support Second Language Learning. Okoso, Ayano and Kunze, Kai and Kise, Koichi. Proceedings of UbiComp'14 Adjunct. 2014. Bibtex.

Wearable Computing for Older Adults - Initial Insights into Head-Mounted Display Usage. Kunze, Kai and Henze, Niels and Kise, Koichi. Proceedings of UbiComp'14 Adjunct. 2014. Bibtex.

Memory Specs - An Annotation System on Google Glass using Document Image Retrieval. Tanaka, Katsuma and Kunze, Kai and Iwata, Motoi and Kise, Koichi. Proceedings of UbiComp'14 Adjunct. 2014. Bibtex.

Smarter Eyewear - Using Commercial EOG Glasses for Activity Recognition. Ishimaru, Shoya and Kunze, Kai and Tanaka, Katsuma and Uema, Yuji and Kise, Koichi and Inami, Masahiko. Proceedings of UbiComp'14 Adjunct. 2014. Bibtex.


Position Paper: Brain Teasers - Toward Wearable Computing that Engages Our Mind. Ishimaru, Shoya and Kunze, Kai and Kise, Koichi and Inami, Masahiko. Proceedings of UbiComp'14 Adjunct. 2014. Bibtex.

Augmented Human 2014

Innovative research that first makes you laugh and then think. The conference is developing into one of my favorite venues.

Ok, I'm "a bit" biased as I'm one of the conference co-chairs. Still, I enjoyed this year's Augmented Human. Below is the tag cloud from all abstracts, to give you a brief overview of the topics.

Tag Cloud
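(In case you are wondering how such a cloud is built: the word weights are essentially just term frequencies over the abstracts. A minimal command-line sketch, assuming the abstracts are concatenated into abstracts.txt and a stop-word list lives in stopwords.txt:)

# print the 30 most frequent non-stop-words and their counts
tr -cs '[:alpha:]' '\n' < abstracts.txt | tr '[:upper:]' '[:lower:]' \
  | grep -vwFf stopwords.txt | sort | uniq -c | sort -rn | head -30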

Considering the small size of the conference, the quality of the work is exceptional. It's not one of the conferences that gets the rejected papers from CHI, Ubicomp, PerCom etc. The steering committee has really set up a venue for far-out, novel ideas. It's also a good opportunity to meet great researchers up close: last year, for example, Thad Starner and Albrecht Schmidt; this year, Jun Rekimoto, Masahiko Inami, Paul Lukowicz and especially Yoshiyuki Sankai ... pretty impressive if you ask me. They might be around at other, bigger events, but catching them there for a chat is nearly impossible. At AH, it's very easy. I recommend any young researcher interested in these topics to attend next year's AH. I will certainly try to get some papers accepted ;)

I believe we will see a lot of the work presented at AH2014 at CHI or Ubicomp next year. Yet, decide for yourself.

In the following, I'll show you just a couple of highlights. I'm sorry I cannot mention all of the cool work (while writing, the blog post got bigger and bigger, so I decided to stop at some point so I could finally publish it ...).

#Sports# As the tag cloud already suggested, augmenting sports was a hot topic at the conference.

So, just in case you want to play a round of Quidditch or Shaolin Soccer in the real world, we might have the tech for it. See Rekimoto's research on "Hover Ball"; the topic has also been picked up by the New Scientist. I also recommend checking out some work by Takuya Nojima Sensei (TAMA and PhotoelasticBall).

The best paper award also went to a sports-themed paper: "Around Me: A System for Providing Sports Player's Self-images with an Escort Robot". Nice!

#Around the Eye# As you might know, I have a personal interest in eye tracking and related research, as I think it's a very promising direction (especially for inferring types of information that you otherwise cannot easily get hold of). So I was very curious about the related work presented at AH on the topic, and I was not disappointed.

I'm not sure I would feel comfortable sharing my sad emotions, as suggested by Tearsense (Marina Mitani, Yasuaki Kakehi). Maybe in a dark cinema this is alright; as part of life logging it might also be interesting. We also had a couple of interesting discussions about the technology during the social.

Asako Hosobori and Yasuaki Kakehi want to support face-to-face interaction with Eyefeel & EyeChime. Although the setup still seems a bit unnatural, I love the direction of the research: using technology to enrich our social life and make us focus more on the things that are important (instead of looking at smartphone screens). But judge for yourself.

#Haptics#

The most far-out work on output devices was definitely "A Haptic Foot Interface for Language Communication" by Erik Hill et al. They use vibration motors to convey text messages via your foot. It made me wonder why we don't use our feet more in HCI, given how sensitive they are and how much of our brain is dedicated to sensing with them.

Max Pfeiffer et al. showed how to make free-hand interactions (e.g. with a Kinect or a similar body-tracking system) more realistic using haptic feedback. Nice work!

The half-implant device on a fingernail by Emi Tamaki and Ken Iwasaki was also nice, especially considering that it is already (or will soon be) a commercial product.

As always, I particularly liked Inami-Sensei’s work. Suzanne Low presented “Pressure Detection on Mobile Phone By Camera and Flash”. Very innovative use of the camera and nice demonstrations.

The multi-touch car steering wheel presented by Shunsuke Koyama (you can do gestures anywhere on the wheel) was also really well thought out. Cool research.

#Our work# We had 3 papers and 1 poster at the conference.


On the Tip of my Tongue - A Non-Invasive Pressure-Based Tongue Interface. Cheng, Jingyuan and Okoso, Ayano and Kunze, Kai and Henze, Niels and Schmidt, Albrecht and Lukowicz, Paul and Kise, Koichi. Proceedings of the 5th Augmented Human International Conference. 2014. Bibtex.


In the Blink of an Eye - Combining Head Motion and Eye Blink Frequency for Activity Recognition with Google Glass. Ishimaru, Shoya and Kunze, Kai and Kise, Koichi and Weppner, Jens and Dengel, Andreas and Lukowicz, Paul and Bulling, Andreas. Proceedings of the 5th Augmented Human International Conference. 2014. Bibtex.


What’s on your mind? Mental Task Awareness Using Single Electrode Brain Computer Interfaces. Shirazi, Alireza Sahami and Hassib, Mariam and Henze, Niels and Schmidt, Albrecht and Kunze, Kai. Proceedings of the 5th Augmented Human International Conference. 2014. Bibtex.


Haven’t we met before? - A Realistic Memory Assistance System to Remind You of The Person in Front of You. Iwamura, Masakazu and Kunze, Kai and Kato, Yuya and Utsumi, Yuzuko and Kise, Koichi. Proceedings of the 5th Augmented Human International Conference. 2014. Bibtex.

I'm particularly proud of Okoso's and Shoya's work. They are both still Bachelor students, and their research is already published at an international conference.

As Shoya was still visiting DFKI in Germany, he sadly could not attend. Okoso gave the tongue interface presentation, and I was impressed by her. It was her first talk at a conference, and she is a third-year Bachelor student. Her English was perfect, and the talk was easy to understand and entertaining. Well done!

#Concluding#

The full program can be found at the AH website in case you’re looking for the references. See you next year at AH in Singapore.

Beyond FuturICT

Attending the International Symposium on Service Systems Science on February 26, 2014 in Tokyo, I got a glimpse of the next steps of FuturICT and of how similar projects and efforts are under way in Japan.

Although the FuturICT project has not received EU funding so far (I still believe this was a grave mistake), I can see that its spirit and our ideas live on. The Japanese COI-T program focuses on the same issues and problems as FuturICT.

FuturICT

Dirk Helbing's presentation gave an overview of the FuturICT effort and the refined research agenda. I really enjoyed seeing how the material has matured. The talks by Shunri Oda and Masao Fukuma also addressed similar problems and presented similar conclusions.

In the afternoon, Cornelius Herstatt presented some interesting observations about the future potential of the Japanese market. I especially liked his conclusions about the use of robots in society: seeing robots not as replacements for workers but as complements, allowing more independence and privacy for older adults.

#Take Home Message# I'm not sure I got everything right, as social and system sciences are quite new to me, but here is what I took home from the symposium: system behavior is determined by connectivity patterns, coupling strength, interactions etc., and small changes in any of them can lead to dramatic changes in the overall system. Technology today has fundamentally changed connectivity and coupling in human interactions. Yet so far we develop technology in a standalone, uncontrolled way, and we have little idea how it influences society.

The Internet made any piece of digital, archival knowledge instantly and globally available. We are now on the verge of any real-life event becoming instantly and globally connected to the digital domain. With this "Internet of Things" we might have a perfect substrate for exploring these effects. A key phrase seems to be "participatory social sensing".

FuturICT

#Discussions and Plenary Summary# The discussions centered on the big picture of society and how to induce beneficial change (as well as how to prevent negative effects). Most strikingly, the panel members also mentioned some concrete ideas for change, addressed especially to the research community. There was a long discussion about how to accelerate research, in which Dirk Helbing again emphasized his concept of an "idea GitHub for researchers": a platform to share ideas, implementations and research results more freely, with easy reproducibility and attribution to the original inventors. The main premise is that it should not take us 2-3 years to finally publish our findings.

I have run into this problem in the wearable computing field as well. It is hard for "outsiders" (researchers not in the community) to enter the field. If they just read papers and build on published research, they can never work on bleeding-edge topics. You have to meet people from the different labs and know what they are working on to find interesting topics (and, more importantly, they have to trust you ...).

#Concluding# I'm very happy that FuturICT lives on. In general, I find the Japanese research community very open towards social computing, especially the idea of participatory social sensing. Maybe it is more fruitful to continue the project ideas here in Japan first. It's a bit sad that Europe has given away the chance to be an innovation leader in this field. Still, meeting Dirk again and seeing how his ideas and research on social computing have matured and developed was great. Europe might have missed a chance, yet there's still hope ;)

If you have become interested in research on society and social science, I can recommend "Society is a Complex Matter" by Philip Ball. It's an engaging read for novices to the topic. I got a copy at the event.

Glass Talk at Hacker News Kansai

Introducing Google Glass to the Hacker News Kansai community was a lot of fun.

Glass 1 Glass 2

I'm sorry it took so long to put this online. Too much to do, especially due to Augmented Human 2014 (really looking forward to the beginning of March) :)

As part of our monthly Hacker News meetup in Kansai, I gave a small introduction on how to use (and hack for) Google Glass.

The slides are also available as a PDF download (7 MB).

The "hacks" are fairly simple. I more or less show how to develop for Glass using the Native Development Kit, along with some demonstrations of how to read out sensors and other demo code I scraped from GitHub and adjusted (see the slides for links to the sources). The examples include CameraZoom, Face Detection and Blink Recognition.
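If you want to try the demos on your own Glass, sideloading works as on any other Android device via adb once debug mode is enabled on the device. A minimal sketch (the APK file name below is just a placeholder for whichever demo you build from the linked sources):

adb devices                               # Glass should show up once debug mode is on
adb install -r BlinkRecognitionDemo.apk   # sideload the demo build (placeholder name)
adb logcat                                # watch the log output, e.g. sensor readings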

Glass 3 Glass 4

Looking back at 30C3

Impressions and Talk Recommendations. Finally, a couple of days with normal people ...

Honestly, I was impressed by the professionalism of the 30th Chaos Communication Congress. It has changed a lot since the last time I visited (25C3): it grew bigger without losing its atmosphere. With the assemblies and workshops, the event is getting more and more interactive.

##Talk Recommendations##

I link to the YouTube streams of the recordings, but you can also get them over at media.ccc.de.

Due to recent events, there were a lot of Snowden/NSA-themed talks. I skipped most of them (some were plainly there to catch attention, others were simply too depressing).

Seeing The Secret State: Six Landscapes is a must-watch on the topic. I don't want to spoil too much, just watch it.

Machines that make: DIY is always fun. However, after talking with Nadya Peek following the presentation, I think she could have given a better talk (the chat was more interesting).

How to Build a Mind: a relatively high-level talk about artificial intelligence and related topics, quite entertaining.

FPGA 101: Karsten Becker gives a good overview of why and when you do or don't want to use FPGAs. I don't have much experience, but programming FPGAs was usually painful the few days I tried it; the toolchain he introduces sounds cool. As a friend of mine said, it had the right level of nerdiness: enough to be interesting and not so much as to lose the audience.

SD-Card Exploits: So SD cards have microprocessors, and guess what, you can program them yourself. Quite scary :)

##My Talk Feedback## I was a bit surprised by how positive it was, as the talk is kind of scary. If we really are able to infer how much somebody understands of a text using eye gaze, who should be allowed to access this information? Is it alright to trust Google, Apple, Samsung etc. with this type of information? It might no longer be "just" our personal communication that can be recorded and analyzed, but also our reading habits and comprehension level.

If you haven’t seen it yet, feel free to watch it. You can also give feedback on the talk if you want. Highly appreciated.

30c3 

30C3 Toward a Cognitive Quantified Self

Discussing activity recognition for the Mind and its potential applications.

Here are the slides for my 30c3 talk ... Sorry it took so long. Thanks for the great feedback; I will write a bit more when I have some time ;)

The video is also online. This happened very quickly after the event; I'm impressed by the 30c3 content team.

Thanks a lot for the great reviews and suggestions. They are highly appreciated. You can still review the talk.

##Talk Abstract##

The talk gives an overview of our work on quantifying knowledge acquisition tasks in real-life environments. We combine several pervasive sensing approaches (computer vision, motion-based activity recognition etc.) to tackle the problem of recognizing and classifying knowledge acquisition tasks, with a special focus on reading.

Hacking Glass

So I got my hands on another Glass device and can now play with it a bit longer.

Disclaimer: Rooting and flashing your device voids your warranty and can brick your Glass. Also, you won't receive OTA updates afterwards. This is not an instruction manual; I just use it as a scratch pad to keep a record of what I did and what worked for me. The commands below will erase all data on your device. Proceed at your own risk.

##Rooting and Flashing Images## To get root, follow the instructions from Google. Unfortunately, fastboot under Mac OS did not work for me, so I used a virtual machine with Ubuntu on my Mac to get root and flash images.
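Inside the Ubuntu VM you need adb and fastboot. A minimal setup sketch, assuming a reasonably recent Ubuntu (otherwise the platform tools from the Android SDK work as well):

sudo apt-get install android-tools-adb android-tools-fastboot
# then pass the Glass USB device through from the host to the VM (e.g. VirtualBox: Devices -> USB)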

adb devices             # Glass should show up as an attached device
adb reboot-bootloader   # reboot into the bootloader
fastboot devices        # check that fastboot sees the device

fastboot oem unlock     # unlock the bootloader (this wipes the device)

I needed to execute the last command twice. The first time, it just asked me whether I was sure I wanted to void my warranty etc.

Next I flashed the boot image from the Glass developer page.

fastboot flash boot boot.img   # flash the rooted boot image
fastboot reboot
adb root                       # restart adbd with root permissions
adb shell                      # open a root shell on Glass

If you want to update a rooted device to a new OTA release (in my case XE12), you can download a zip with all the necessary images from Google. It's cool that they support rooting and flashing (even if it voids your warranty).

fastboot flash boot boot.img
fastboot flash system system.img
fastboot flash recovery recovery.img
fastboot flash userdata userdata.img
fastboot erase cache
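After flashing, a quick sanity check is to reboot and read the build identifier (these are standard Android system properties, which Glass exposes as well, as far as I can tell):

fastboot reboot                         # boot into the freshly flashed system
adb shell getprop ro.build.display.id   # should report the new build, e.g. XE12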

##Reading out the Proximity Sensor## I'm most interested in accessing the proximity sensor facing the eye. Thanks to Philip Scholl's and Shoya's help, I was able to do it. The sensor is exposed as a device file under /sys:

adb root
adb shell
> cat /sys/bus/i2c/devices/4-0035/proxraw

This gives back one raw proximity value from the sensor, unfortunately without a timestamp. If you want to read out the proximity data from Android apps etc., you need to change the access rights:

> chmod 664 /sys/bus/i2c/devices/4-0035/proxraw
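If you just want a quick log without writing an app, a rough sketch is to poll from the host over adb (slow, only a few samples per second due to the adb round trip, with timestamps taken from the host clock):

# run on the host (e.g. the Ubuntu VM), not on Glass itself
while true; do
  echo "$(date +%s.%N) $(adb shell cat /sys/bus/i2c/devices/4-0035/proxraw | tr -d '\r')"
  sleep 0.1
done > proximity_log.txt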

##Privacy Enhancement##

As I will be visiting the Chaos Communication Congress next weekend, I wanted to "privacy enhance" Glass for the event: I want to wear Glass but don't really need the camera functionality.

So I used my 3Doodler to make a simple attachment that blocks the camera off.

Doodler

The tricky part is that the light sensor for adjusting screen brightness sits directly under the camera; if it's blocked, the screen becomes very dark. Here's the "privacy-enhanced" Google Glass:

Glass Enhanced

And here is a picture taken with it. It's not completely black due to the light sensor issue, yet I think it's a start ;)

Glass Enhanced

Here is the very basic stencil I used to build the attachment.

glass 

Bits and Bytes instead of a Bookshelf

An interview made me wonder about how reading habits are changing and how we will narrate stories in the future.

Recently, I gave an interview for the German online issue of Scientific American (Spektrum der Wissenschaft) for a special about reading habits (in German, paywall).

As I'm interested in the topic, I often hear that "endless scrolling" is bad because it destroys the mental map we make of books and pages, or that reading from backlit screens strains the eyes and induces headaches. Personally, I cannot really relate to these complaints: I do most of my reading on tablets or computer screens and have never experienced these problems directly.

However, active reading - the process of working with the text through highlights, notes and marks - is still better on paper. In human-computer interaction terms, people talk about affordances, and the affordance of paper for active reading is very high. So I still find myself printing out drafts or papers to review, especially if it's a close call and I need to concentrate on the contents. In the latter case, I even like to change the reading environment, moving away from my laptop/desktop to a meeting table or a bench outside. I believe this helps me concentrate by deliberately shutting out distractions.

We have just started to modify the reading experience using electronic devices. So far, most applications and reading devices directly mimic the book: we have "ebooks", and "e-reading" software uses pages, page turns etc. I believe there is a lot of room for improvement in reading on screens.

Given the possibility of assessing the user's mental state using cognitive activity recognition, we can change the content, structure and style of reading materials dynamically. Most straightforwardly, if an application detects that a reader loses interest, it could prompt her with an interactive challenge, a video or similar. Changing fonts, colors and lettering according to mood and context could also be interesting. There is a fairly new playground opening up for anybody curious about defining new forms of reading.

Interesting further reading:

Schilit et al. Beyond Paper: Supporting Active Reading with Free Form Digital Ink Annotations

Hartson. Cognitive, physical, sensory, and functional affordances in interaction design

Piper et al. Tabletop Displays for Small Group Study: Affordances of Paper and Digital Materials

Amazing Okinawa - Attending the ASVAI Workshop

Cool research discussions at a nice location. The workshop was a perfect fit for my research interests.

The ASVAI workshop gave a good overview of several research efforts that are part of, or related to, the JST CREST and JSPS Core-to-Core Sanken programs.

Prof. Yasushi Yagi showed how to infer intention from gait analysis. Interestingly, he also presented research on the relationship between gaze and gait.

Dr. Alireza Fathi presented cool work on egocentric cameras. He showed how to estimate gaze using egocentric cameras during cooking tasks and psychological studies.

Prof. Hanako Yoshida explores social learning in infants (equipping children with mobile eye trackers ... awesome!), inferring developmental stages and giving more insight into the learning process.

Prof. Masahiro Shiomi spoke about his research on adapting robot behavior to fit into social public spaces (videos of people running away from a robot included ;) ). Currently, they focus on service robots and model their behavior on successful human service personnel.

Prof. Yoichi Sato presented work on detecting visual attention. They use visual saliency in video to train appearance-based eye tracking. Really interesting work; I had a chance to talk a bit more with Yusuke Sugano. Cool research :)

Of course, Koichi also gave an overview of our work. If you want to read more, check out the IEEE Computer article.

I’m looking forward to the main conference. Here’s a tag cloud using the abstracts of ACPR and ASVAI papers:

Tag cloud

We will present demonstrations and new results on eye tracking with commodity tablets/smartphones, as well as a sharing infrastructure for our document annotation system on smartphones.

A Week with Glass

How my grandparents interacted with Glass showed me that Google seems to be onto something.

As a lot of researchers were visiting for the Ubicomp/ISWC conferences, I could grab a Google Glass from one of the participants for a week. Thanks to the anonymous donor (to avoid speculation: it was none of the people mentioned below). Below are some unsorted thoughts ... Sorry for typos etc.; it's more of a scratch pad so I don't forget the impressions I had.

##First Impressions## The Glass device feels expensive and a little bit futuristic. I'm impressed by the build quality and design. It also has a "Google" feel to it, e.g. "funny" jokes in the manual ("don't use Glass for scuba diving ..."). The display works extremely well, and although Glass is made for micro-interactions (quickly checking an email/SMS, Google Now updates, taking pictures), I could watch videos and read longer emails/documents on it without trouble or any sight problems (I experienced no headaches, as happened with other setups; see below). It would be perfect for boring meetings if other people could not see what you are doing ... I assume the Glass design team made the conscious decision to let other people know when you interact with Glass: people can see if the screen is on and can even recognize what's on the screen if they get close enough.

##Grandparents and Mother with Google Glass##

I have a basic test for technology or research topics in general: I try to explain them to my grandparents and mother to see if they understand them and find them interesting. Various head-mounted displays, tablets and activity recognition algorithms have been tested this way ... For example, they were not big fans of tablets/slates or smartphones until they played with an iPhone and iPad.

oma opa

Surprisingly, my grandparents did not have the reservations they have towards other computing devices. Usually, they feel that they could break something and are extra careful/hesitant. Yet Google Glass looks like glasses, so it was easy for them to set up and use. The system worked quite well (although so far only English is supported); the speech recognition and touch interface were simple to learn after a quick 5-minute introduction. I was surprised myself ...

Sadly, the speech interface does a poor job with German words; e.g. googling for "Apfelkuchen Rezept" (apple cake recipe) did not work as intended.

Yet both of them saw potential in Glass and could imagine wearing it during the day. I was most astounded by the application cases they came up with.

opa pills

My grandfather took a picture of the pills he needs to take after each meal. He told me he always wonders whether he has taken them or not, and sometimes checks 2-3 times after a meal to be certain. By taking a picture and using the touch panel to browse recent pictures (with timestamps), he can easily figure out when he last took them.

My grandmother would love to use Glass for gardening. Sometimes she gets a phone call during garden work and then has to change shoes, take off her gloves etc. and hurry to the portable phone. Additionally, she likes to get advice from my mum or friends about where to put which flower seeds etc., so she asked me whether it's possible to show the video stream from Glass to other people over the Internet :)

We also did a practical test: my grandmother and mother wore Glass while shopping in Karlsruhe. Both of them wear glasses, so not too many people noticed or looked at them. I think people assumed it was some kind of medical or sight-improvement device.

oma ka mum ka

My mother used the timeline in Glass to track when she took the pictures, tracing back to figure out at which store she had seen something nice. She tried taking pictures of price tags; unfortunately, the resolution on the screen is not high enough to read the price, but this could easily be fixed with a zoom function for pictures. Interestingly, she also carries a smartphone, yet it never occurred to her to use it for shopping the way she used Glass.

##Public Reactions##

As mentioned, my mum and grandmother wore Glass nearly unnoticed. This is quite different from my experience ... When I wore it in public, most people in Karlsruhe and Mannheim (the two cities I tried) eyed me with wary faces (you could see the questions in their eyes: "What is he wearing?? Some medical device?? NERD!!"). This was particularly bad when I spoke with a clerk or another person directly, as they kept staring at Glass instead of looking into my eyes ;) Social reception was better when I was with my family. Strangely, people mostly asked my grandmother what I was wearing; very few approached me directly. Reactions fell into three broad categories:

  1. "WOW, cool ... Glass! How is it? Can I try??" – Note: before it's released to the public, I strongly recommend not wearing it on any campus with a larger IT faculty. I did not account for that, and it was quite difficult to get across the Karlsruhe University campus :)
  2. "Stop violating my privacy!" – During the week, only one person directly approached me about privacy concerns. He was quite angry at first. I believe this is mostly due to misinformation (something Google needs to take seriously), as he believed Glass would automatically stream everything to Google and listen to all conversations. After I showed him the functionality of the device, how to use it and how to tell whether somebody is using it, he calmed down and actually liked it (he could see the potential of a wearable display).
  3. "What's wrong with this guy?" – Especially when I was traveling alone, people stared at me. I asked one or two of the most obnoxious starers about it, and they answered that they thought I was wearing a medical device and wondered "what's wrong with me", as I looked otherwise "normal".

##Some Issues##

The three biggest issues I had with it:

  1. Weight and placement - You need to get used to its weight. As I don't wear prescription glasses, it feels strange to have something sitting on my nose, and it's definitely heavier than regular glasses. After a couple of hours it's ok. Also, it's always in your peripheral view, which you need to get used to.
  2. Battery life - Ok, I played with it a lot, given that I could use Glass only for a week. Towards the end (when I played with it less), I could barely get a day of usage. I expect that's something they can easily fix. Psst ... you can also plug in a portable USB battery to charge it during usage :)
  3. Social acceptance - This is the hardest one to crack. Having used Glass, I don't understand most of the privacy fears people raise. It's very obvious when a person is using the device or taking a picture. If I wanted to take covert pictures or videos of people, I believe it would be easier with today's smartphones or spy cameras (available on Amazon, for example) ...

##Some more Context##

When I unboxed Glass, I remembered how Paul, my PhD advisor, and Thad (Glass project manager) chatted about how in the future everybody would wear some kind of head-mounted display and a computing device always connected to the Internet, helping us with everyday tasks - augmentations to our brain.

In the past, Paul was not a huge enthusiast of wearable displays, and I agreed with him. I tried using the MicroOptical (the display Thad uses) several times and always had terrible headaches afterwards ... Just not for me.

Me

Around 2004-2010, during my PhD, I played with various wearable setups for everyday life, each only for a week or a couple of days. If you work on wearable computing, you have to at least try. As seen in the picture above, the only setup that worked for me was a prototype HMD from Zeiss together with the QBIC, an awesome belt-integrated Linux PC by ETH (the black belt buckle in the picture), and a Twiddler 2. Yet I stopped using it, as the glasses were quite heavy, maintaining and adjusting the software was a hassle (compared to the advantages) and, I have to admit, due to social pressure; imagine living as a cyborg in a small Bavarian town mostly populated by law and business students ... I found my small, black, analog notebook handier and less intimidating to other people. Today, I'm an avid iPhone user (Things, Clear, Habit List, Textastic, Prompt and Lendromat ...).

##To sum up## All in all, I was quite sceptical at first; the design reminded me too much of the MicroOptical and the headaches I got using it. Completely unfounded! Even given the social acceptance issue, I cannot wait to get Glass for a longer test. However, I really need a good note-taking app; running vim on Glass would already be a selling point for me, replacing my black notebook (and maybe my smartphone?). I dusted off my Twiddler 2 (it took a long time to find it in the cellar) with a hacked Bluetooth connection, started practicing again, and hope I can soon try it with vim on Glass :D This is definitely not an application case for the mass market ... My grandparents told me they believe there is broader demand for such a device among "normal" people too (they actually want to use it!). So let's see.

Plus, the researcher in me cannot wait to get easily accessible motion sensors onto the heads of a lot of people. Combined with the sensors in your pocket, it's activity recognition heaven!

Let’s discuss on Hacker News if you want.

glass