Category Archives: hci

Introducing the 2017 IRES team

We are pleased to introduce the six students who will participate in the 2017 HCI in Ubicomp IRES program. The program is a collaborative effort between Andrew Kun of UNH and Orit Shaer of Wellesley College, and it is funded by the National Science Foundation, Office of International Science and Engineering (OISE). We are grateful for the support.

This year we received a number of exceptionally strong applications. After careful deliberation, we selected the six students listed below to participate in the program. This summer three of them will conduct research at the HCILab at the University of Stuttgart under the supervision of Albrecht Schmidt, and three will work at the University of Oldenburg under the supervision of Susanne Boll. Congratulations to all six! We are looking forward to a productive and fun summer.

Lauren Futami is a junior majoring in Media Arts and Sciences at Wellesley College. She is greatly interested in human-computer interaction, product design, and video production. She is also excited to participate in research to discover how large displays and augmented reality can combine to engage people in new learning techniques.

Dana Hsiao is a senior at Wellesley College majoring in Computer Science. She is excited about the potential that Augmented and Virtual Reality have in both video games and practical pursuits. She is also interested in the processes and methods of computer security.

Maleah Maxie is a junior at Wellesley College. She is majoring in Cognitive & Linguistic Sciences and Music. Next year, she will be studying the effectiveness of digital technology in the classroom. She is interested in the safety implications of user interface design in autonomous vehicles and other technology critical to our society’s infrastructure.

Calvin Liang is a master’s student in Human Factors at Tufts University where he also earned his B.S. in Engineering Psychology. He currently conducts Brain-Computer Interaction research under Professor Rob Jacob. Calvin is most interested in using technology as a way to optimize the human experience and hopes to pursue a PhD in HCI in the future. In his free time, he enjoys swimming, reading, and listening to podcasts.

Michelle Quin is a sophomore at Wellesley College double majoring in Media Arts & Sciences and English. She is currently focusing on HCI and front-end web development, and will be studying Computer Science with an emphasis on Machine Learning at the University of Oxford during her junior year. Michelle hopes to pursue graduate studies in HCI and is interested in working to make user interfaces more intuitive as well as reflective of today’s diverse society.

Midori Yang is a sophomore at Wellesley College majoring in Media Arts and Sciences. She currently works at the college’s HCI lab designing applications for large touchscreen surfaces, but wants to branch out into interaction design for mixed/virtual reality. She is interested in designing interfaces that facilitate digital design experiences for artists without technical backgrounds.

Announcement: Orit Shaer talk on March 30, 2017

Designing multi-device environments to enhance collaborative decision making

Orit Shaer

12:30 PM, Thursday, March 30, 2017
Location: IOL Training Room
Please register.

Abstract. Large multi-touch displays are becoming increasingly available, offering the promise of enhancing co-located collaboration by enabling multiple users to manipulate information using natural interactions such as touch and gestures. Combining a number of multi-touch displays, large and small, facilitates the development of interactive spaces where users can move freely across tasks and working styles.

However, the availability of these exciting devices is not enough to design effective collaborative environments. We also need a deep understanding of how different design characteristics of the environment affect users’ ability to collaborate. To date, little work has examined co-located collaboration in multi-device environments that involve large-scale displays. We are leveraging infrastructure at Wellesley College, consisting of a large-scale interactive tabletop surface and a data wall, to investigate co-located collaboration in medium-size teams of eight working on decision-making tasks. To gain a deep understanding of individual and group behaviors while using the collaborative environment, we augment traditional measures such as completion time, performance, user satisfaction, and NASA TLX with new computational methods for objective real-time measurements that combine input from multiple eye trackers with logging of user actions.
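As a rough illustration of what combining eye-tracker input with logged user actions can look like in practice, here is a minimal Python sketch that aligns gaze samples with action events by timestamp. The column names, sample data, and 50 ms tolerance are illustrative assumptions, not the actual study pipeline.

```python
# A minimal sketch of aligning eye-tracker samples with logged user
# actions by timestamp. All names and values here are illustrative
# assumptions, not the lab's actual instrumentation.
import pandas as pd

# Gaze samples from one eye tracker: timestamp (ms) and screen coordinates.
gaze = pd.DataFrame({
    "t_ms": [0, 20, 40, 60, 80],
    "x": [512, 515, 900, 905, 910],
    "y": [300, 302, 410, 412, 415],
    "tracker": "tracker_1",
})

# Logged user actions (e.g., touches on the tabletop).
actions = pd.DataFrame({
    "t_ms": [38, 79],
    "action": ["tap_chart", "drag_item"],
})

# For each action, find the nearest preceding gaze sample within 50 ms,
# so we know where the user was looking when they acted.
merged = pd.merge_asof(actions, gaze, on="t_ms",
                       direction="backward", tolerance=50)
print(merged)
```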

Bio. Orit Shaer is the Class of 1966 Associate Professor of Computer Science and co-director of the Media Arts and Sciences Program at Wellesley College. She founded and directs the Wellesley College Human-Computer Interaction (HCI) Lab. Her research focuses on next generation user interfaces including virtual and augmented reality, tangible, gestural, tactile, and multi-touch interaction. Current projects funded by the National Science Foundation (NSF) and by industry grants include the design and evaluation of smart environments for collaborative decision-making, the design and evaluation of novel interactive visualizations for personal genomics, the development of computational tools for enhancing learning and innovation in bio-design, and the conceptualization and prototyping of interactive STEM exhibits for discovery museums. Shaer received her PhD and MSc in Computer Science from Tufts University. She has been a research fellow in the Design Machine Group at the University of Washington and in the University College London Interaction Centre.

Dr Shaer is a recipient of several NSF and industry awards, including the prestigious NSF CAREER Award, the Agilent Technologies Research Award, and the Google App Engine Education Award. At Wellesley she was awarded the Pinanski Prize for Excellence in Teaching. Dr Shaer has served on dozens of program committees, editorial boards, and review panels, including the NSF Directorate for Computer and Information Science and Engineering, the ACM CHI, CSCW, UIST, and TEI conferences, and the editorial board of Foundations and Trends in Human-Computer Interaction. She currently serves as co-Program Chair for ACM TEI 2017. She chaired the ACM Interactive Tabletops and Surfaces conference (ITS 2012).

2016 UNH IRES student research – part 2

In its third and final year, the UNH International Research Experiences for Students (IRES) program has selected eight students to conduct research in the HCI Lab at the University of Stuttgart, under the supervision of my colleague Albrecht Schmidt. The UNH IRES program is funded by the National Science Foundation, Office of International and Integrative Activities, and we are grateful for their support. The eight students were each assigned to a group within the HCI Lab and participated in the research activities of that group.

I asked each of the students to write a one-paragraph report on their summer experience in Stuttgart, focusing on their research, and on their cultural experience. This is the second installment of these brief reports, where we look at some of the research conducted by the students. (You can see the first installment here.)

Natalie Warren worked with EEG recording devices:

Learning about EEG during the past two months under the supervision of Valentin and Jakob has been very rewarding. I’ve learned a huge amount about signal processing, experiment design, MATLAB, coding stimulus presentations, and brain activity, not to mention using EEG recording systems! We also got to put our knowledge to use early in the program by measuring electrical activity generated by the eye movement of some of our colleagues (like Anna, pictured here).
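The students worked in MATLAB, but to give a flavor of the signal processing involved, here is a minimal Python sketch that band-pass filters a synthetic trace and flags the large, slow deflections that eye movements produce. The sampling rate, cutoffs, and threshold are illustrative assumptions, not the lab’s actual parameters.

```python
# A minimal sketch of flagging eye-movement activity in an EEG-like
# trace. Sampling rate, cutoffs, and threshold are assumed values.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 250.0                          # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)        # 10 seconds of data
rng = np.random.default_rng(0)

# Synthetic "EEG": background noise plus two blink-like deflections.
trace = 5 * rng.standard_normal(t.size)
for blink_at in (2.0, 6.5):
    idx = int(blink_at * fs)
    width = int(0.3 * fs)
    trace[idx:idx + width] += 120 * np.hanning(width)

# Eye movements concentrate at low frequencies: isolate 0.5-10 Hz.
b, a = butter(4, [0.5, 10.0], btype="bandpass", fs=fs)
low_freq = filtfilt(b, a, trace)

# Flag samples whose low-frequency amplitude exceeds a fixed threshold.
blink_samples = np.flatnonzero(np.abs(low_freq) > 60)
print("blink activity near t =", np.unique(np.round(blink_samples / fs, 1)))
```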

Whitney Fahnbulleh worked on augmenting human memory:

This summer I have been developing a photo gallery application for the “recall” project, a project that explores ways to augment human memory. I have been implementing various ways users can interact with the gallery through touch gestures, mid-air gestures, speech recognition, and keyboard input. My end goal for this project is to flesh out the user interface design and run user studies on the application. I have learned so much about computer vision this summer, and I look forward to working on future projects for recall.

Aditi Joshi worked on visualizing uncertainty:

For the past two months, I have been working on designing and implementing a study investigating uncertainty visualizations. In the future, the amount of uncertain information we have access to will increase, and these sources will often conflict with one another. With this study, we are trying to understand how people aggregate uncertainty information so we can implement these techniques in future technologies. In this picture Anna is participating in the study and providing us with some great data.

Donovan O.A. Toure worked on how the realism of virtual faces affects the human observer:

This summer, I worked on the perception of computer-generated/virtual faces within the uncanny valley by analyzing brain waves as an individual is presented with virtual faces at varying levels of detail. In addition to learning about EEG, digital signal processing, and the uncanny valley, I worked on stimulus creation, including 3D modelling, to help carry out the experiment.

2016 UNH IRES student research – part 1

In its third and final year, the UNH International Research Experiences for Students (IRES) program has selected eight students to conduct research in the HCI Lab at the University of Stuttgart, under the supervision of my colleague Albrecht Schmidt. The UNH IRES program is funded by the National Science Foundation, Office of International and Integrative Activities, and we are grateful for their support. The eight students were each assigned to a group within the HCI Lab and participated in the research activities of that group.

I asked each of the students to write a one-paragraph report on their summer experience in Stuttgart, focusing on their research, and on their cultural experience. Here’s the first installment of these brief reports, where we look at some of the research conducted by the students.

Taylor Gotfrid worked on using augmented reality in assistive systems:

During my time here I learned about experiment design and augmented reality. Over this summer I’ve been working on designing and conducting a user study to determine whether picture instructions or projected instructions lead to better recall for assembly tasks over a long period of time. This experiment assesses which form of media leads to fewer errors, faster assembly times, and better recall over a span of three weeks. The picture above is of the projector system indicating where the next LEGO piece needs to be placed to complete the next step in the assembly process.

Dillon Montag worked on tactile interfaces for people with visual impairments:

HyperBraille: I am working with Mauro Avila on developing tactile interfaces for people with visual impairments. The tool we developed this summer allows users to explore scientific papers while receiving both audio and tactile feedback. We hope this new tool will help people with visual impairments better understand and navigate papers.

Anna Wong worked on touch recognition:

For my project with the University of Stuttgart lab, I was tasked with using images like the one on the left to detect the user’s hand, and then classify the finger being used to touch a touch screen. This involved transforming the images in a variety of ways, such as finding the edges using a Canny edge detector as in the top image, and then using machine learning algorithms to classify the finger.
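As a rough sketch of this kind of pipeline, the Python snippet below runs a Canny edge detector over a hand image and feeds the resulting edge map to a classifier. The feature choice, random forest model, and placeholder data are illustrative assumptions; the actual project may have used different features and algorithms.

```python
# A minimal sketch of edge-based finger classification: Canny edge
# features fed to a classifier. Data and model choices are assumed.
import cv2
import numpy as np
from sklearn.ensemble import RandomForestClassifier

FINGERS = ["thumb", "index", "middle", "ring", "pinky"]

def edge_features(gray_image):
    """Canny edge map, downsampled and flattened into a feature vector."""
    edges = cv2.Canny(gray_image, threshold1=100, threshold2=200)
    small = cv2.resize(edges, (32, 32))
    return small.flatten() / 255.0

# Placeholder training data: in practice these would be labeled frames
# of the user's hand captured at the touch screen.
rng = np.random.default_rng(0)
images = [rng.integers(0, 256, (240, 320), dtype=np.uint8) for _ in range(50)]
labels = rng.integers(0, len(FINGERS), 50)

X = np.array([edge_features(img) for img in images])
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)

# Classify the finger in a new frame.
new_frame = rng.integers(0, 256, (240, 320), dtype=np.uint8)
print("predicted finger:", FINGERS[clf.predict([edge_features(new_frame)])[0]])
```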

Elizabeth Stowell worked on smart notifications:

I worked with smart objects and the use of notifications to support aging-in-place. I enjoyed building prototypes for a smart pillbox, sketching designs for a smart calendar, and exploring how people who are elderly interact with such objects. In these two months, I learned a lot about notification management within the context of smart homes.

Announcement: Lars Lischke talk on May 5, 2016

Large Display Interaction
Lars Lischke

Thursday, May 5, 2016, 11 AM
Location: Kingsbury N129


Abstract. Mark Weiser’s vision “The Computer for the 21st Century” introduces three classes of devices for interacting with digital content: “tabs,” “pads,” and “boards.” “Tabs” and “pads” have already become commonplace in the form of smartphones and tablet computers. In contrast, digital “boards” are still rarely used. However, there is a good chance that wall-sized display “boards” will become commonplace within the next decade. Today, wall-sized displays are mainly used to visualize large and complex data. This is particularly beneficial because humans are able to scan large areas quickly for objects and visual cues.

In the future, wall-sized displays will not only be used for professional visualizations and public displays; they will also become commonplace in office and home environments. The success of wall-sized display installations depends heavily on well-designed input techniques and appropriate UI design guidelines. In this talk I will present the latest research on wall-sized display interaction, including eye-gaze-based interaction and mid-air gestures. Furthermore, multi-device interaction in combination with wall-sized displays enables novel concepts for individual and collaborative work.

Besides appropriate input techniques, new graphical user interfaces are needed for successful wall-sized display systems. I will therefore also discuss what interfaces for wall-sized displays could look like.

Bio. Lars Lischke is a third-year PhD student at the HCILab at the University of Stuttgart, Germany. He studied computer science (Diploma, MSc equivalent) at the University of Stuttgart and at Chalmers University of Technology in Gothenburg, Sweden. His research interests are in the field of human-computer interaction, with a focus on interacting with large high-resolution displays in office environments and for data exploration.