Last week I had the amazing opportunity to participate in the summer school for NSF IRES students at the Bauhaus University in Weimar, Germany. I recognized the name Weimar from my European history education, but I had never known about the rich tradition of Bauhaus architecture and design that was born there. The Bauhaus school was founded on the principle that design, technology, and function inform each other and do not need to be separate areas of study. This idea is also the foundation of the Media Arts and Sciences major at Wellesley. The intersection of technology and design was on full display throughout the week in Weimar as we attended lectures and participated in design exercises and team projects.
Several prominent members of the HCI community took the time to describe their research and their contributions to the field. Dr. Katherine Isbister from UC Santa Cruz introduced us to the Mechanics-Dynamics-Aesthetics framework for game design and to designing for proxemics. Her lecture got me thinking differently about the implicit biases baked into technology as we know it today: that our devices are individual, personal, and private. Afterward, I found myself considering how technology can both prevent and catalyze interactions in physical space. Dr. Florian Echtler walked us through a depth camera exercise and spoke about the technical challenges associated with different types of depth cameras, a technology I was very unfamiliar with. It was exciting to play around with one of these cameras.
Dr. Alex Kulik presented his research on group-to-group telepresence, and I got to try out the technology, which presents unique opportunities for collaboration and communication in augmented reality. It was incredible to see the same 3D model from multiple perspectives at the same time as several other people. Spatialized audio made the environment feel even more realistic; it felt like I could really high-five the person on the other side! I am excited to see how this technology progresses.
Apparently I suffer from visual change blindness, as I learned from Dr. Sara Riggs from Clemson University, who presented her research on another version of this phenomenon: tactile change blindness. Tactile change blindness affects multimodal displays of information, which are growing increasingly popular in environments like hospitals. Sara’s lecture inspired me to learn more about multimodal interfaces, a topic I was previously unfamiliar with. I learned all about affordances from Don Norman’s The Design of Everyday Things and in my Human-Computer Interaction class last fall, but Pedro Lopes from HPI Potsdam/UChicago introduced me to a whole new way of thinking about them: instead of an object’s design communicating how it should be used, a user can perceive an affordance through electrical impulses sent to their arm. Pedro’s research showed that by using electrical impulses to move a user’s wrist back and forth while they hold a spray can, a system can communicate that the can needs to be shaken before use.
Dr. Lewis Chuang from Ludwig-Maximilians-Universität München gave a lecture on how the eye works, something that is especially important to consider when designing visual displays in augmented and virtual reality. It was very interesting to hear how his background in psychology has influenced his work in technology, underscoring the importance of interdisciplinary study. Thank you to all of these lecturers for taking the time to tell us about your work!
In addition to hearing others present, I also presented some of my own research progress on how self-driving cars interact with pedestrians. I got some great feedback and suggestions from everyone that I am excited to incorporate into my project––thank you to Eva and Lewis for reminding me about red-green colorblindness and to Sara and Pedro for your suggestions on heightening risk and quantifying risk-taking behavior!
After a much-needed break from the Microsoft HoloLens, I had the opportunity to jump back into it thanks to Marion Koelle and Uwe Grünefeld from the University of Oldenburg. I learned about working with visual markers in augmented reality and about combining the HoloLens with an Arduino for tactile interactions. It was very exciting to use the HoloLens in a different way!
On our last full day, we split up into groups and visited the Goethe museum to develop a concept for an augmented/virtual reality museum experience. Up until this summer, all of my research had been in augmented reality museum experiences, so this was a unique challenge for me! I am very proud of my group for coming up with a very interesting idea. We noticed that Goethe’s house has a lot of stairs and other obstacles that would prevent elderly or differently-abled people from enjoying the museum. How unfortunate, we realized, that it is often older people who can most appreciate a museum, yet younger people who are better equipped to visit one. Our solution was AvatAR, which fits a young museum visitor with a head-mounted camera and a wearable device through which another user can “control” them remotely by sending vibration cues. In this way, we hoped to make the museum experience more interesting and interactive for younger museum-goers, and more accessible for people who want to visit museums but are prevented from doing so by inaccessible environments. I enjoyed seeing what other groups came up with for making the museum experience more interactive; I’m certainly bringing some new ideas and a fresh perspective on museums back to the HoloMuse project at the Wellesley HCI Lab!
A huge thank you to Bauhaus University for hosting us last week and to organizers Wilko Heuten, Eva Hornecker, Orit Shaer, Andrew Kun, and Albrecht Schmidt. A special thanks as well to the NSF IRES program and ERC Amplify for funding this fantastic learning experience. I can’t wait to apply what I’ve learned to my own research!