All posts by Andrew Kun

2016 UNH IRES student research – part 2

In its third and final year, the UNH International Research Experiences for Students (IRES) program has selected eight students to conduct research in the HCI Lab at the University of Stuttgart, under the supervision of my colleague Albrecht Schmidt. The UNH IRES program is funded by the National Science Foundation, Office of International and Integrative Activities, and we are grateful for their support. The eight students were each assigned to a group within the HCI Lab and participated in the research activities of that group.

I asked each of the students to write a one-paragraph report on their summer experience in Stuttgart, focusing on their research, and on their cultural experience. This is the second installment of these brief reports, where we look at some of the research conducted by the students. (You can see the first installment here.)

Natalie Warren worked with EEG recording devices:

Learning about EEG during the past two months under the supervision of Valentin and Jakob has been very rewarding. I’ve learned a huge amount about signal processing, experiment design, MATLAB, coding stimulus presentations, and brain activity, not to mention using EEG recording systems! We also got to put our knowledge to use early in the program by measuring electrical activity generated by the eye movement of some of our colleagues (like Anna, pictured here).
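(A minimal illustration, not from Natalie's project: one common way to spot eye-movement activity in a single frontal EEG/EOG channel is to band-pass filter the signal and flag samples whose amplitude crosses a threshold. The sampling rate, threshold, and synthetic data below are my own assumptions, sketched in Python rather than the MATLAB the students mention.)

# Sketch: flag eye-movement-like deflections in one EEG/EOG channel.
# Sampling rate, band, and threshold are illustrative assumptions.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 250.0  # assumed sampling rate in Hz

def detect_eye_movements(channel, fs=FS, threshold_uv=100.0):
    """Return sample indices where the band-passed amplitude exceeds the threshold."""
    # 1-10 Hz band-pass (normalized cutoff frequencies)
    b, a = butter(2, [1.0 / (fs / 2), 10.0 / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, channel)
    return np.where(np.abs(filtered) > threshold_uv)[0]

# Synthetic example: a flat signal with one large blink-like deflection.
signal = np.zeros(int(5 * FS))
signal[500:550] = 300.0  # simulated eye-movement artifact, in microvolts
print(detect_eye_movements(signal)[:5])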

Whitney Fahnbulleh worked on augmenting human memory:

This summer I have been developing a photo gallery application for the “recall” project, a project that explores ways to augment human memory. I have been implementing various ways users can interact with the gallery through touch gestures, mid-air gestures, speech recognition, and keyboard input. My end goal for this project is to flesh out the user interface design and run user studies on the application. I have learned so much about computer vision this summer, and I look forward to working on future projects for recall.

Aditi Joshi worked on visualizing uncertainty:

For the past two months, I have been working on designing and implementing a study investigating uncertainty visualizations. In the future, the amount of uncertain information we have access to will increase, and these sources will often conflict with one another. With this study, we are trying to understand how people aggregate uncertain information so we can implement these techniques in future technologies. In this picture Anna is participating in the study and providing us with some great data.

Donovan O.A. Toure worked on how the realism of virtual faces affects the human observer:

This summer, I worked on the perception of computer-generated (virtual) faces within the uncanny valley, analyzing brain waves as an individual is presented with virtual faces of varying levels of detail. In addition to learning about EEG, digital signal processing, and the uncanny valley, I worked on stimulus creation, including 3D modelling, to help carry out the experiment design.

2016 UNH IRES student research – part 1

In its third and final year, the UNH International Research Experiences for Students (IRES) program has selected eight students to conduct research in the HCI Lab at the University of Stuttgart, under the supervision of my colleague Albrecht Schmidt. The UNH IRES program is funded by the National Science Foundation, Office of International and Integrative Activities, and we are grateful for their support. The eight students were each assigned to a group within the HCI Lab and participated in the research activities of that group.

I asked each of the students to write a one-paragraph report on their summer experience in Stuttgart, focusing on their research, and on their cultural experience. Here’s the first installment of these brief reports, where we look at some of the research conducted by the students.

Taylor Gotfrid worked on using augmented reality in assistive systems:

During my time here I learned about experiment design and augmented reality. Over this summer I’ve been working on designing and conducting a user study to determine whether picture instructions or projected instructions lead to better recall for assembly tasks over a long period of time. This experiment assesses which form of media leads to fewer errors, faster assembly times, and better recall over a span of three weeks. The picture above shows the projector system indicating where the next LEGO piece needs to be placed to complete the next step in the assembly process.

Dillon Montag worked on tactile interfaces for people with visual impairments:

HyperBraille: I am working with Mauro Avila on developing tactile interfaces for people with visual impairments. The tool we developed this summer allows users to explore scientific papers while receiving both audio and tactile feedback. We hope this new tool will help people with visual impairments better understand and navigate papers.

Anna Wong worked on touch recognition:

For my project with the University of Stuttgart lab, I was tasked with using images like the one on the left to detect the user’s hand, and then classify the finger being used to touch a touch screen. This involved transforming the images in a variety of ways, such as finding the edges using a Canny edge detector as in the top image, and then using machine learning algorithms to classify the finger.
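(As a rough sketch of the kind of pipeline Anna describes, and not the lab's actual code: compute a Canny edge map, turn it into a fixed-length feature vector, and train a simple classifier to predict which finger touched the screen. The file names, labels, parameters, and classifier choice below are illustrative assumptions, written in Python with OpenCV and scikit-learn.)

# Sketch: Canny edge features plus a simple classifier for finger identification.
# Image paths and labels are hypothetical placeholders.
import cv2
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def edge_features(image_path, size=(32, 32)):
    """Canny edge map, downsampled and flattened into a feature vector."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    edges = cv2.Canny(gray, 100, 200)   # binary edge map
    small = cv2.resize(edges, size)     # fixed-size representation
    return small.flatten() / 255.0

# Hypothetical training data: hand images and the finger used (0 = thumb, 1 = index, ...)
paths = ["hand_001.png", "hand_002.png", "hand_003.png"]
labels = [1, 0, 1]

X = np.array([edge_features(p) for p in paths])
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, labels)

# Classify the finger in a new (hypothetical) image.
print(clf.predict(edge_features("hand_004.png").reshape(1, -1)))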

Elizabeth Stowell worked on smart notifications:

I worked with smart objects and the use of notifications to support aging in place. I enjoyed building prototypes for a smart pillbox, sketching designs for a smart calendar, and exploring how older adults interact with such objects. In these two months, I learned a lot about notification management within the context of smart homes.

Announcement: Lars Lischke talk on May 5, 2016

Large Display Interaction
Lars Lischke

Thursday, May 5, 2016, 11 AM
Location: Kingsbury N129


Abstract. Mark Weiser’s vision “The Computer for the 21st Century” introduces three classes of devices for interacting with digital content: “tabs,” “pads” and “boards.” “Tabs” and “pads” have already become commonplace in the form of smartphones and tablet computers. In contrast, digital “boards” are still rarely used. However, there is a good chance that wall-sized display “boards” will become commonplace within the next decade. Today, wall-sized displays are mainly used to visualize large and complex data. This is particularly beneficial because humans are able to scan large areas quickly for objects and visual cues.

In the future, wall-sized displays will not only be used in the context of professional visualizations and public displays, they will also become commonplace in office and home environments. The success of wall-sized display installations is highly dependent on well-designed input techniques and appropriate UI design guidelines. In this talk I will present the latest research on wall-sized display interaction, including eye-gaze-based interaction and mid-air gestures. Furthermore, multi-device interaction in combination with wall-sized displays enables novel concepts for individual and collaborative work.

Besides appropriate input techniques, new graphical user interfaces are needed for successful wall-sized display systems. I will therefore also discuss what interfaces for wall-sized displays could look like.

Bio. Lars Lischke is a third-year PhD student at the HCILab at the University of Stuttgart, Germany. He studied computer science (Diploma, MSc equivalent) at the University of Stuttgart and at Chalmers University of Technology in Gothenburg, Sweden. His research interests are in the field of human-computer interaction, with a focus on interacting with large high-resolution displays in office environments and for data exploration.

Announcement: Bastian Pfleging talk on April 29, 2016

My car as an interactive computing environment: Supporting non-driving-related activities
Bastian Pfleging

Friday, April 29, 2016, 2 PM
Kingsbury S320

Abstract. Today, driving a modern car is much more than sitting in a vehicle to get to a distant location. Drivers face the challenge of simultaneously maneuvering the car and operating in-car computer systems. They perform non-driving-related activities such as adjusting the air conditioning, selecting the next playlist, or communicating with family and friends. Performing such activities while driving often distracts the driver and puts the driver and the environment at risk. Providing car user interfaces that offer safe, diverse, exciting, and easy-to-use ways to perform a multitude of non-driving-related activities is thus a challenge for research and development. Especially with the transition towards assisted and automated driving, the car will turn into a “computing platform, living room, and office on wheels”. Here, enabling non-driving-related activities becomes even more important, and supporting them will be crucial for commercial success. In my talk, I will present examples of how to support the design and development of automotive user interfaces that enable safe non-driving-related activities. This includes interfaces that improve driving safety while communicating, as well as approaches to understanding the driver’s state.

Bio. Bastian Pfleging is a senior researcher at the Human-Machine Interaction Group at the University of Munich (LMU), Germany. His research interests are automotive user interfaces, with a focus on multimodal interaction and the support of non-driving-related activities in the car (e.g., communication). Before joining LMU Munich, he was a researcher and PhD student at the Institute for Visualization and Interactive Systems at the University of Stuttgart. From 2010 to 2011 Bastian was a visiting researcher at the BMW Technology Office USA in California. He holds a Diploma in Computer Science from TU Dortmund, Germany.

In the HCI community, Bastian is involved in many scientific activities. These include co-organizing conferences such as AutomotiveUI (Work-in-Progress & Demo Chair, Publication Chair), MobileHCI, and Augmented Human. Additionally, he co-organizes various workshops (e.g., the Workshop on Automotive Natural User Interfaces and the Workshop on Practical Experiences in Measuring and Modeling Drivers and Driver-Vehicle Interaction, both co-located with AutomotiveUI). He also serves as a reviewer or program committee member for various HCI-related journals, magazines, conferences, and workshops.

Introducing the 2016 UNH IRES team

We are pleased to introduce the eight students who will participate in the 2016 UNH IRES program. The program is funded by the National Science Foundation, Office of International and Integrative Activities. We are grateful for the support.

This year we received a large number of exceptionally strong applications. After careful deliberation, we selected the eight students listed below to participate in the program. This summer they will conduct research at the HCILab at the University of Stuttgart under the supervision of Albrecht Schmidt. Congratulations to all eight! We are looking forward to a productive and fun summer.

Whitney Fahnbulleh is a junior at Wellesley College majoring in Media Arts and Sciences and minoring in Chinese. Whitney is spending her junior year studying data analytics and visualization and human-computer interaction, and is self-studying game design. She is most excited about the possibilities of virtual and augmented reality for creating immersive environments for gaming and knowledge delivery. She looks forward to graduate studies in HCI and game design.

Taylor Gotfrid is a senior double majoring in Computer Engineering and Cognitive Science at the University of California, Santa Cruz. She is greatly interested in user experience research and in making technology more accessible for people with developmental disabilities. She currently conducts research in the Interactive Systems for Individuals with Special Needs lab under Professor Sri Kurniawan, developing games for individuals with developmental disabilities that assess their understanding of basic concepts, such as object relations, and their problem-solving abilities. After she graduates, she intends to pursue a Ph.D. in HCI or Interaction Design.

Aditi Joshi is a senior at Olin College majoring in Engineering Design. She is especially interested in human-centered design and in connecting actual products and features to the people on the other end of the screen. She believes it is important to treat design as a continuous process: starting with user research and talking to real people, co-designing with them and getting feedback, and finally implementing these ideas. In her professional life she hopes to create products that make a social impact, using engineering to help empower, educate, and assist the very different kinds of people in today’s world.

Dillon Montag is a senior mathematics and computer science double major at Westmont College. Previously, Dillon conducted research in the fields of network science and big data, where he analyzed student performance within the UCLA mathematics department. He is interested in the intersection of computer science and the social sciences and is excited to be transitioning into industry.

Elizabeth Stowell is a PhD student in Personal Health Informatics at Northeastern University in Boston. She earned her B.A. in Health and Society at Wellesley College. She currently works in the Wellness Technology Lab, which creates and evaluates wellness technologies to address health inequities. Elizabeth is interested in using technology to empower people to make informed decisions about their own health, and in using technology to facilitate health activism.

Donovan Toure is a recent graduate of New York University, where he earned a Master of Science in Integrated Digital Media. His focus was on Virtual and Augmented Reality, Game Design, and Human Factors Engineering. He is interested in how Virtual and Augmented Reality technologies can be used in mission planning and training, and as psychological countermeasures in long-term space missions for humans living off world.

Natalie Warren is a junior Cognitive Science major at Yale University. As a member of the Yale Social Robotics Lab, she enjoys exploring technology’s role in improving cognitive functioning and social interaction. Natalie is interested in studying the effects of digital media on attention and performance, especially in children.

Anna Wong is a sophomore studying at Carnegie Mellon University, where she is a Statistics and Machine Learning major. Anna entered the HCI field as a research assistant in CMU’s Human-Computer Interaction Institute, where she has worked with data collected from wearable technology. Currently she is fascinated by big data and biometric data. She is most interested in developing tools that will allow users to interact with their personal data in a direct and accessible way.