I’m getting more and more fascinated by augmenting blind soccer. After 3 blind soccer trainings, we have now had a couple of meetings to discuss how to extend and enhance the play experience.
For me, there are 3 interesting points about blind soccer:
- It’s very hard to learn. Can we make it easier for blind people to learn it? Once you can play it, it’s very fast and empowering. We train with a soccer player from the Japanese national team. He plays better than me without being blindfolded (ok … that’s maybe not really an achievement, I’m terrible at soccer).
- Can we make it easier for sighted people to learn blind soccer and, in turn, understand more about the blind and improve their listening skills?
- Can we use tech to level the playing field, making it possible for blind and sighted people to play together?
The most interesting point, however, is what blind soccer can teach us: “disability” is a question of definition and environment.
The biggest take-away for me: I rely too much on vision to make sense of my environment. The training made me more aware of sounds, and I find myself listening more and more. Sometimes, on a train or on the street, I now close my eyes and explore the environment just by sound. It’s fascinating how much we can hear. This opened a new world for me. Looking into it more, I believe sound is an underestimated modality for augmented and virtual realities, one that is worth exploring further. I stumbled upon a couple of papers about sonic interface design. Looking forward to applying some of the findings from the blind soccer use case to our daily lives ;)
If I have some more time, I’ll write a bit more about the training and the ideathons we have done so far. In the meantime, I recommend you try it sometime (if you are near Tokyo, you can maybe also join our sessions).