Exploring the Gallery Through Voice: Using Amazon Alexa and Visual Descriptions to Connect Remote Users to the Museum

Professional Forum

Tim Schwartz, Alley Interactive, USA, Pamela Horn, Cooper Hewitt, Smithsonian Design Museum, USA, Sina Bahram, Prime Access Consulting, Inc., USA

Voice presents an exciting new frontier in human-computer interaction and in how we experience the physical world. Digital voice assistants like Alexa, Siri, and Google Assistant have become mainstream. Children today will grow up feeling just as comfortable talking to computers as typing on a keyboard. The growth of voice-enabled computing provides a powerful way for all people, even those unable to type, to interact meaningfully with technology.

Cooper Hewitt, Smithsonian Design Museum, Prime Access Consulting, and Alley Interactive were awarded a Knight Foundation prototype grant and built a voice application for Amazon Alexa that extends the museum exhibition experience to remote users and to those with physical or visual impairments.

The project is a “choose your own adventure” voice application, where users can virtually move around the gallery, “look” at artworks, and explore information linked to those objects. The application uses visual descriptions (wording that conveys the objects’ appearance) to verbally portray the exhibition to visitors. Interaction is based on a digital replica of the gallery’s physical layout, enabling remote visitors to navigate the space the same way they would in person. This project offers an enhanced gallery visit, not simply a replacement.
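As a rough illustration of this interaction model, the gallery layout might be represented as a grid of positions, with spoken “move” and “look” commands resolved against it. The sketch below is purely hypothetical: the object names, descriptions, and function names are illustrative assumptions, not details of the actual application.

```python
# Hypothetical sketch: a gallery modeled as a grid so that voice intents
# ("move left", "look") can be resolved against the physical layout.
# All objects and descriptions here are invented for illustration.

GALLERY = {
    (0, 0): ("Armchair", "A molded plywood armchair with a gently curved seat."),
    (1, 0): ("Poster", "A screen-printed poster in bold red and black."),
    (0, 1): ("Teapot", "A squat silver teapot with an angular spout."),
}

MOVES = {"left": (-1, 0), "right": (1, 0), "forward": (0, 1), "back": (0, -1)}

def handle_move(position, direction):
    """Return the new position if the move stays inside the gallery; else stay put."""
    dx, dy = MOVES[direction]
    candidate = (position[0] + dx, position[1] + dy)
    return candidate if candidate in GALLERY else position

def handle_look(position):
    """Compose the spoken response: the object's name plus its visual description."""
    name, description = GALLERY[position]
    return f"You are looking at {name}. {description}"
```

In a deployed Alexa skill, functions like these would sit behind intent handlers, with the visitor's current position carried in session state between turns.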

The team will present its process for envisioning and building this project, including human-centered design methodologies and iterative agile development strategies.