All posts by Andrew Kun

Announcement: Orit Shaer talk on March 30, 2017

Designing multi-device environments to enhance collaborative decision making

Orit Shaer

12:30 PM, Thursday, March 30, 2017
Location: IOL Training Room
Please register.

Abstract. Large multitouch displays are becoming increasingly available, offering the promise of enhancing co-located collaboration by enabling multiple users to manipulate information using natural interactions such as touch and gestures. Combining a number of multitouch displays, large and small, facilitates the development of interactive spaces where users can move freely across tasks and working styles.

However, the availability of these exciting devices is not enough to design effective collaborative environments. We also need a deep understanding of how different design characteristics of the environment affect users’ ability to collaborate. To date, little work has examined co-located collaboration in multi-device environments that involve large-scale displays. We are leveraging infrastructure at Wellesley College, consisting of a large-scale interactive tabletop surface and a data wall, to investigate co-located collaboration in medium-size teams of eight working on decision-making tasks. To gain a deep understanding of individual and group behaviors while using the collaborative environment, we augment traditional measures such as completion time, performance, user satisfaction, and NASA TLX with new computational methods for objective real-time measurements that combine input from multiple eye trackers with logging of user actions.

Bio. Orit Shaer is the Class of 1966 Associate Professor of Computer Science and co-director of the Media Arts and Sciences Program at Wellesley College. She founded and directs the Wellesley College Human-Computer Interaction (HCI) Lab. Her research focuses on next-generation user interfaces, including virtual and augmented reality, tangible, gestural, tactile, and multitouch interaction. Current projects funded by the National Science Foundation (NSF) and by industry grants include the design and evaluation of smart environments for collaborative decision-making, the design and evaluation of novel interactive visualizations for personal genomics, the development of computational tools for enhancing learning and innovation in bio-design, and the conceptualization and prototyping of interactive STEM exhibits for discovery museums. Shaer received her PhD and MSc in Computer Science from Tufts University. She has been a research fellow in the Design Machine Group at the University of Washington and in the University College London Interaction Center.

Dr. Shaer is a recipient of several NSF and industry awards, including the prestigious NSF CAREER Award, the Agilent Technologies Research Award, and the Google App Engine Education Award. At Wellesley she was awarded the Pinanski Prize for Excellent Teaching. Dr. Shaer has served on dozens of program committees, editorial boards, and review panels, including the NSF Directorate for Computer and Information Science and Engineering, the ACM CHI, CSCW, UIST, and TEI conferences, and the editorial board of Foundations and Trends in Human-Computer Interaction. She currently serves as co-Program Chair for ACM TEI 2017. She chaired the ACM conference on Interactive Tabletops and Surfaces (2012).

Thanks UNH IRES cohort of 2016!

I received a nice hand-made card from the UNH IRES cohort of 2016. The students express their thanks to Kelly Shaw of the UNH CEPS Business Services, and Patty Cook at University Travel. Kelly was in charge of paperwork for student stipends, while Patty helped with travel schedules. It’s really nice to receive a card like this – we would like to thank the students for representing us so well in Germany!

2016 UNH IRES student research – part 2

In its third and final year, the UNH International Research Experiences for Students (IRES) program has selected eight students to conduct research in the HCI Lab at the University of Stuttgart, under the supervision of my colleague Albrecht Schmidt. The UNH IRES program is funded by the National Science Foundation, Office of International and Integrative Activities, and we are grateful for their support. The eight students were each assigned to a group within the HCI Lab and participated in the research activities of that group.

I asked each of the students to write a one-paragraph report on their summer experience in Stuttgart, focusing on their research, and on their cultural experience. This is the second installment of these brief reports, where we look at some of the research conducted by the students. (You can see the first installment here.)

Natalie Warren worked with EEG recording devices:

Learning about EEG during the past two months under the supervision of Valentin and Jakob has been very rewarding. I’ve learned a huge amount about signal processing, experiment design, MATLAB, coding stimulus presentations, and brain activity, not to mention using EEG recording systems! We also got to put our knowledge to use early in the program by measuring electrical activity generated by the eye movement of some of our colleagues (like Anna, pictured here).

Whitney Fahnbulleh worked on augmenting human memory:

This summer I have been developing a photo gallery application for the “recall” project, a project that explores ways to augment human memory. I have been implementing various ways users can interact with the gallery through touch gestures, mid-air gestures, speech recognition, and keyboard input. My end goal for this project is to flesh out the user interface design and run user studies on the application. I have learned so much about computer vision this summer, and I look forward to working on future projects for recall.

Aditi Joshi worked on visualizing uncertainty:

For the past two months, I have been working on designing and implementing a study investigating uncertainty visualizations. In the future, the amount of uncertain information we have access to will increase, and these sources will often conflict. With this study, we are trying to understand how people aggregate uncertainty information so we can apply these techniques in future technologies. In this picture Anna is participating in the study and providing us with some great data.

Donovan O.A. Toure worked on how the realism of virtual faces affects the human observer:

This summer, I worked on the perception of computer-generated/virtual faces within the uncanny valley by analyzing brain waves as an individual is presented with virtual faces with varying levels of detail. In addition to learning about EEG, digital signal processing, and the uncanny valley, I worked on stimulus creation, including 3D modelling, to help carry out the experiment design.

2016 UNH IRES student research – part 1

In its third and final year, the UNH International Research Experiences for Students (IRES) program has selected eight students to conduct research in the HCI Lab at the University of Stuttgart, under the supervision of my colleague Albrecht Schmidt. The UNH IRES program is funded by the National Science Foundation, Office of International and Integrative Activities, and we are grateful for their support. The eight students were each assigned to a group within the HCI Lab and participated in the research activities of that group.

I asked each of the students to write a one-paragraph report on their summer experience in Stuttgart, focusing on their research, and on their cultural experience. Here’s the first installment of these brief reports, where we look at some of the research conducted by the students.

Taylor Gotfrid worked on using augmented reality in assistive systems:

During my time here I learned about experiment design and augmented reality. Over this summer I’ve been working on designing and conducting a user study to determine whether picture instructions or projected instructions lead to better recall for assembly tasks over a long period of time. This experiment assesses which form of media would lead to fewer errors, faster assembly times, and better recall over a span of three weeks. The picture above is of the projector system indicating where the next LEGO piece needs to be placed to complete the next step in the assembly process.

Dillon Montag worked on tactile interfaces for people with visual impairments:

HyperBraille: I am working with Mauro Avila on developing tactile interfaces for people with visual impairments. The tool we developed this summer allows users to explore scientific papers while receiving both audio and tactile feedback. We hope this new tool will help people with visual impairments better understand and navigate papers.

Anna Wong worked on touch recognition:

For my project with the University of Stuttgart lab, I was tasked with using images like the one on the left to detect the user’s hand, and then classify the finger being used to touch a touch screen. This involved transforming the images in a variety of ways, such as finding the edges using a Canny edge detector as in the top image, and then using machine learning algorithms to classify the finger.
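Anna’s pipeline (an edge map extracted from the camera image, fed to a learned classifier) can be sketched in a few lines. This is a hypothetical, simplified illustration, not the lab’s actual code: it substitutes a plain Sobel gradient threshold for the Canny detector, a tiny nearest-centroid classifier for whatever model the lab used, and entirely synthetic “touch” images (a vertical ridge for an index finger, a horizontal one for a thumb) for the real hand photos.

```python
import numpy as np

def edge_map(img, thresh=0.25):
    """Gradient-magnitude edge detector (a simplified stand-in for Canny).

    Convolves the image with Sobel kernels and thresholds the
    normalized gradient magnitude.
    """
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    padded = np.pad(img.astype(float), 1, mode="edge")
    h, w = img.shape
    gx = np.empty((h, w))
    gy = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            patch = padded[i:i + 3, j:j + 3]
            gx[i, j] = (patch * kx).sum()
            gy[i, j] = (patch * ky).sum()
    mag = np.hypot(gx, gy)
    if mag.max() > 0:
        mag = mag / mag.max()
    return (mag > thresh).astype(float)

class NearestCentroid:
    """Minimal nearest-centroid classifier over flattened edge maps."""

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.centroids_ = np.stack([X[y == c].mean(axis=0) for c in self.classes_])
        return self

    def predict(self, X):
        # Distance from each sample to each class centroid.
        d = np.linalg.norm(X[:, None, :] - self.centroids_[None, :, :], axis=2)
        return self.classes_[d.argmin(axis=1)]

# Synthetic "touch" images: invented stand-ins for real camera frames.
rng = np.random.default_rng(0)

def synth_touch(finger, n=6, size=16):
    samples = []
    for _ in range(n):
        img = rng.normal(0.0, 0.05, (size, size))
        if finger == "index":
            img[:, 7:9] += 1.0   # vertical ridge
        else:
            img[7:9, :] += 1.0   # horizontal ridge
        samples.append(edge_map(img).ravel())
    return np.array(samples)

X_train = np.vstack([synth_touch("index"), synth_touch("thumb")])
y_train = np.array(["index"] * 6 + ["thumb"] * 6)
clf = NearestCentroid().fit(X_train, y_train)

X_test = np.vstack([synth_touch("index", n=2), synth_touch("thumb", n=2)])
pred = clf.predict(X_test)
```

In a real system one would use an actual Canny implementation (e.g. OpenCV’s) and a stronger classifier, but the two-stage structure — hand-crafted edge features, then a learned finger label — is the same.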

Elizabeth Stowell worked on smart notifications:

I worked with smart objects and the use of notifications to support aging-in-place. I enjoyed building prototypes for a smart pillbox, sketching designs for a smart calendar, and exploring how people who are elderly interact with such objects. In these two months, I learned a lot about notification management within the context of smart homes.