2016 UNH IRES student research – part 2

In its third and final year, the UNH International Research Experiences for Students (IRES) program has selected eight students to conduct research in the HCI Lab at the University of Stuttgart, under the supervision of my colleague Albrecht Schmidt. The UNH IRES program is funded by the National Science Foundation, Office of International and Integrative Activities, and we are grateful for their support. The eight students were each assigned to a group within the HCI Lab and participated in the research activities of that group.

I asked each of the students to write a one-paragraph report on their summer experience in Stuttgart, focusing on their research, and on their cultural experience. This is the second installment of these brief reports, where we look at some of the research conducted by the students. (You can see the first installment here.)

Natalie Warren worked with EEG recording devices:

Learning about EEG during the past two months under the supervision of Valentin and Jakob has been very rewarding. I’ve learned a huge amount about signal processing, experiment design, MATLAB, coding stimulus presentations, and brain activity, not to mention using EEG recording systems! We also got to put our knowledge to use early in the program by measuring electrical activity generated by the eye movement of some of our colleagues (like Anna, pictured here).
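
For readers who want a feel for what this kind of signal processing involves, here is a minimal Python sketch (not the lab’s actual pipeline): it band-pass filters a synthetic channel and flags large deflections of the kind produced by eye movements. The sampling rate, filter band, and threshold are illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def bandpass(signal, low_hz, high_hz, fs, order=4):
    """Zero-phase Butterworth band-pass filter (second-order sections)."""
    nyq = fs / 2.0
    sos = butter(order, [low_hz / nyq, high_hz / nyq], btype="band", output="sos")
    return sosfiltfilt(sos, signal)

fs = 250  # sampling rate in Hz (illustrative assumption)
t = np.arange(0, 10, 1 / fs)
# Synthetic channel: a slow eye-movement-like wave plus noise (in volts).
raw = 50e-6 * np.sin(2 * np.pi * 0.5 * t) + 5e-6 * np.random.randn(t.size)

# Eye movements are slow, so a low-frequency band such as 0.1-10 Hz is typical.
eog = bandpass(raw, 0.1, 10, fs)

# Crude event marker: samples where the filtered amplitude is large.
events = np.flatnonzero(np.abs(eog) > 30e-6)
print(f"{events.size} supra-threshold samples")
```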

Whitney Fahnbulleh worked on augmenting human memory:

This summer I have been developing a photo gallery application for the “recall” project, which explores ways to augment human memory. I have been implementing various ways users can interact with the gallery: touch gestures, mid-air gestures, speech recognition, and keyboard input. My end goal for this project is to flesh out the user interface design and run user studies on the application. I have learned so much about computer vision this summer, and I look forward to working on future projects for recall.
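
As a rough illustration of what supporting several input modalities can look like, here is a hypothetical Python sketch that routes touch, mid-air, speech, and keyboard events through a single binding table to the same gallery commands. All names here are invented for illustration; they are not from the recall codebase.

```python
from dataclasses import dataclass
from typing import Callable, Dict

# Hypothetical gallery actions; names are illustrative only.
def next_photo(): print("show next photo")
def prev_photo(): print("show previous photo")
def zoom_in():    print("zoom in")

@dataclass(frozen=True)  # frozen -> hashable, so events can be dict keys
class InputEvent:
    modality: str  # "touch", "mid_air", "speech", or "keyboard"
    token: str     # gesture name, spoken keyword, or key name

# One table maps every modality onto the same set of commands.
BINDINGS: Dict[InputEvent, Callable[[], None]] = {
    InputEvent("touch", "swipe_left"):    next_photo,
    InputEvent("touch", "swipe_right"):   prev_photo,
    InputEvent("mid_air", "push"):        zoom_in,
    InputEvent("speech", "next"):         next_photo,
    InputEvent("keyboard", "ArrowRight"): next_photo,
}

def dispatch(event: InputEvent) -> None:
    action = BINDINGS.get(event)
    if action:
        action()

dispatch(InputEvent("speech", "next"))  # -> show next photo
```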

Aditi Joshi worked on visualizing uncertainty:

For the past two months, I have been working on designing and implementing a study investigating uncertainty visualizations. In the future, we will have access to ever more uncertain information, and the sources will often conflict. With this study, we are trying to understand how people aggregate uncertain information so we can build these insights into future technologies. In this picture Anna is participating in the study and providing us with some great data.
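
One classic, normative way to aggregate conflicting uncertain estimates is inverse-variance weighting of Gaussian estimates; whether people actually behave this way is exactly the kind of question such a study probes. A minimal sketch, with invented numbers:

```python
import numpy as np

def combine_gaussian(means, sds):
    """Fuse independent Gaussian estimates by inverse-variance weighting."""
    means = np.asarray(means, dtype=float)
    var = np.asarray(sds, dtype=float) ** 2
    w = 1.0 / var                                # precision weights
    fused_mean = np.sum(w * means) / np.sum(w)   # precision-weighted average
    fused_sd = np.sqrt(1.0 / np.sum(w))          # fused estimate is more certain
    return fused_mean, fused_sd

# Two conflicting forecasts of the same quantity (illustrative values).
mean, sd = combine_gaussian([10.0, 16.0], [2.0, 4.0])
print(f"fused estimate: {mean:.2f} +/- {sd:.2f}")
```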

Donovan O.A. Toure worked on how the realism of virtual faces affects the human observer:

This summer, I worked on the perception of computer-generated (virtual) faces within the Uncanny Valley, analyzing brain waves as an individual is presented with virtual faces at varying levels of detail. In addition to learning about EEG, digital signal processing, and the uncanny valley, I worked on stimulus creation, including 3D modelling, to help carry out the experiment design.
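
A core step in this kind of EEG analysis is cutting the recording into stimulus-locked epochs and averaging them into an event-related potential (ERP). Here is a minimal Python sketch with synthetic data; the sampling rate and epoch window are illustrative assumptions, not the lab’s actual parameters.

```python
import numpy as np

def epoch_average(eeg, onsets, fs, tmin=-0.2, tmax=0.8):
    """Cut stimulus-locked epochs, baseline-correct, and average them."""
    pre, post = int(-tmin * fs), int(tmax * fs)
    epochs = []
    for onset in onsets:
        if onset - pre < 0 or onset + post > eeg.size:
            continue  # skip epochs that run off the recording
        ep = eeg[onset - pre : onset + post].copy()
        ep -= ep[:pre].mean()  # baseline: mean of the pre-stimulus interval
        epochs.append(ep)
    return np.mean(epochs, axis=0)  # the event-related potential (ERP)

fs = 250                                 # Hz (assumed)
eeg = np.random.randn(60 * fs)           # one synthetic channel, 60 s
onsets = np.arange(2 * fs, 58 * fs, fs)  # e.g., one face stimulus per second
erp = epoch_average(eeg, onsets, fs)
print(erp.shape)  # samples spanning -200 ms to +800 ms around stimulus onset
```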

2016 UNH IRES student research – part 1

In its third and final year, the UNH International Research Experiences for Students (IRES) program has selected eight students to conduct research in the HCI Lab at the University of Stuttgart, under the supervision of my colleague Albrecht Schmidt. The UNH IRES program is funded by the National Science Foundation, Office of International and Integrative Activities, and we are grateful for their support. The eight students were each assigned to a group within the HCI Lab and participated in the research activities of that group.

I asked each of the students to write a one-paragraph report on their summer experience in Stuttgart, focusing on their research, and on their cultural experience. Here’s the first installment of these brief reports, where we look at some of the research conducted by the students.

Taylor Gotfrid worked on using augmented reality in assistive systems:

During my time here I learned about experiment design and augmented reality. Over the summer I’ve been designing and conducting a user study to determine whether picture instructions or projected instructions lead to better recall of assembly tasks over a long period of time. The experiment assesses which form of media leads to fewer errors, faster assembly times, and better recall over a span of three weeks. The picture above shows the projector system indicating where the next LEGO piece needs to be placed to complete the next step of the assembly process.
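
Once assembly times are collected, a comparison between the two instruction formats might look like the following sketch. The numbers are invented, and Welch’s t-test is just one plausible analysis, not necessarily the one used in the study.

```python
import numpy as np
from scipy import stats

# Hypothetical assembly times in seconds for the two instruction formats.
picture   = np.array([182, 195, 171, 204, 188, 176])
projected = np.array([160, 149, 171, 155, 167, 158])

# Independent-samples t-test (Welch's variant: no equal-variance assumption).
t, p = stats.ttest_ind(picture, projected, equal_var=False)
print(f"t = {t:.2f}, p = {p:.3f}")
```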

Dillon Montag worked on tactile interfaces for people with visual impairments:

HyperBraille: I am working with Mauro Avila on developing tactile interfaces for people with visual impairments. The tool we developed this summer lets users explore scientific papers while receiving both audio and tactile feedback. We hope it will help people with visual impairments better understand and navigate papers.
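
To make the idea concrete, here is a hypothetical sketch of the kind of lookup such a tool needs: each region of a laid-out paper is paired with an audio description and a tactile pattern, and a touch position selects which region to render. None of these names come from the actual HyperBraille tool.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class Region:
    name: str                              # e.g. "abstract", "figure 1"
    bounds: Tuple[int, int, int, int]      # (x0, y0, x1, y1) on the display
    audio: str                             # text to speak when touched
    tactile: str                           # id of a tactile pattern to render

def region_under_touch(regions: List[Region], x: float, y: float) -> Optional[Region]:
    """Return the document region under the user's finger, if any."""
    for r in regions:
        x0, y0, x1, y1 = r.bounds
        if x0 <= x <= x1 and y0 <= y <= y1:
            return r
    return None

regions = [Region("abstract", (0, 0, 100, 20), "Abstract: ...", "dots_dense")]
hit = region_under_touch(regions, 50, 10)
if hit:
    print(f"speak: {hit.audio}; render pattern: {hit.tactile}")
```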

Anna Wong worked on touch recognition:

For my project with the University of Stuttgart lab, I was tasked with using images like the one on the left to detect the user’s hand, and then to classify the finger being used to touch a touch screen. This involved transforming the images in a variety of ways, such as finding the edges with a Canny edge detector as in the top image, and then using machine learning algorithms to classify the finger.
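
The described pipeline (edge detection followed by supervised classification) can be sketched in a few lines of Python with OpenCV and scikit-learn. The thresholds, feature encoding, and classifier choice below are illustrative assumptions, and the training data is synthetic.

```python
import cv2
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def edge_features(image_bgr, size=(64, 64)):
    """Canny edge map of the image, flattened into a feature vector."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200)       # common default thresholds
    return cv2.resize(edges, size).ravel() / 255.0

# Synthetic stand-ins for the real data: one image per labeled touch event,
# where the label encodes which finger touched (0 = thumb, ..., 4 = pinky).
images = [np.random.randint(0, 255, (480, 640, 3), np.uint8) for _ in range(20)]
labels = np.random.randint(0, 5, 20)

X = np.stack([edge_features(img) for img in images])
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)
print(clf.predict(X[:1]))  # predicted finger for the first image
```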

Elizabeth Stowell worked on smart notifications:

I worked with smart objects and the use of notifications to support aging-in-place. I enjoyed building prototypes for a smart pillbox, sketching designs for a smart calendar, and exploring how people who are elderly interact with such objects. In these two months, I learned a lot about notification management within the context of smart homes.
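
As a toy example of notification management for a smart pillbox, here is a hypothetical Python sketch that reminds the user only once a dose is overdue past a grace period, a simple way to keep notifications from becoming a nuisance. All names and values are invented, not taken from the prototypes.

```python
import datetime as dt
from dataclasses import dataclass
from typing import List

@dataclass
class Dose:
    medication: str
    due: dt.time
    taken: bool = False  # would be set by the pillbox's lid sensor

def pending_reminders(doses: List[Dose], now: dt.datetime,
                      grace_minutes: int = 30) -> List[str]:
    """Remind only when a dose is overdue past a grace period."""
    out = []
    for d in doses:
        due_dt = dt.datetime.combine(now.date(), d.due)
        overdue = (now - due_dt) > dt.timedelta(minutes=grace_minutes)
        if overdue and not d.taken:
            out.append(f"Reminder: take {d.medication}")
    return out

doses = [Dose("heart medication", dt.time(8, 0)),
         Dose("vitamin D", dt.time(20, 0))]
print(pending_reminders(doses, dt.datetime(2016, 7, 1, 9, 15)))
```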

Announcement: Lars Lischke talk on May 5, 2016

Large Display Interaction
Lars Lischke

Thursday, May 5, 2016, 11 AM
Location: Kingsbury N129


Abstract. Mark Weiser’s vision “The Computer for the 21st Century” introduces three classes of devices for interacting with digital content: “tabs,” “pads” and “boards.” “Tabs” and “pads” have already become commonplace in the form of smartphones and tablet computers. In contrast, digital “boards” are still rarely used. However, there is a good chance that wall-sized display “boards” will become commonplace within the next decade. Today, wall-sized displays are mainly used to visualize large and complex data. This is particularly beneficial because humans can quickly scan large areas for objects and visual cues.

In the future, wall-sized displays will not only be used for professional visualizations and public displays; they will also become commonplace in office and home environments. The success of wall-sized display installations depends heavily on well-designed input techniques and appropriate UI design guidelines. In this talk I will present the latest research on wall-sized display interaction, including eye-gaze-based interaction and mid-air gestures. Furthermore, multi-device interaction in combination with wall-sized displays enables novel concepts for individual and collaborative work.

Besides appropriate input techniques, successful wall-sized display systems need new graphical user interfaces. I will therefore discuss what interfaces for wall-sized displays could look like.

Bio. Lars Lischke is a third-year PhD student at the HCILab at the University of Stuttgart, Germany. He studied computer science (Diploma, MSc equivalent) at the University of Stuttgart and Chalmers University of Technology in Gothenburg, Sweden. His research interests are in the field of human-computer interaction, with a focus on interaction with large high-resolution displays in office environments and for data exploration.

Announcement: Bastian Pfleging talk on April 29, 2016

My car as an interactive computing environment: Supporting non-driving-related activities
Bastian Pfleging

Friday, April 29, 2016, 2 PM
Kingsbury S320

Abstract. Today, driving a modern car is much more than sitting in a vehicle to get to a distant location. Drivers face the challenge of simultaneously maneuvering the car and operating in-car computer systems. They perform non-driving-related activities such as adjusting the air conditioning, selecting the next playlist, or communicating with family and friends. Performing such activities while driving often distracts the driver and puts the driver and the environment at risk. Providing car user interfaces that offer safe, diverse, exciting, and easy-to-use means of performing a multitude of non-driving-related activities is thus a challenge for research and development. Especially with the transition towards assisted and automated driving, the car will turn into a “computing platform, living room, and office on wheels.” Here, enabling non-driving-related activities becomes even more important, and supporting them will be crucial for commercial success. In my talk, I will present examples of how to support the design and development of automotive user interfaces that enable safe non-driving-related activities. This includes interfaces that improve driving safety while communicating and that help understand the driver’s state.

Bio. Bastian Pfleging is a senior researcher at the Human-Machine Interaction Group at the University of Munich (LMU), Germany. His research interests are automotive user interfaces, with a focus on multimodal interaction and the support of non-driving-related activities in the car (e.g., communication). Before joining LMU Munich, he was a researcher and PhD student at the Institute for Visualization and Interactive Systems at the University of Stuttgart. From 2010 to 2011 Bastian was a visiting researcher at the BMW Technology Office USA in California. He holds a Diploma in Computer Science from TU Dortmund, Germany.

In the HCI community, Bastian is involved in many scientific activities. These include co-organizing conferences such as AutomotiveUI (Work-in-Progress & Demo Chair, Publication Chair), MobileHCI, and Augmented Human. Additionally, he co-organizes various workshops (e.g., the Workshop on Automotive Natural User Interfaces and the Workshop on Practical Experiences in Measuring and Modeling Drivers and Driver-Vehicle Interaction, both co-located with AutomotiveUI). He also serves as a reviewer and program committee member for various HCI-related journals, magazines, conferences, and workshops.
