Inclusive Design Incubator
Friday, April 05, 2019: 9:00am - 10:30am - Back Bay AB
Ryan King, Smithsonian Freer|Sackler Asian Art Museum, USA
Telling a good story and creating an engaging environment often involves compelling, rich media. We invest a lot of effort in producing such media and want it to be accessible to our ever-growing and diverse audiences. How do we deliver the essence of our content to our visitors in meaningful and intuitive modes? Captioning, verbal description, and language offerings begin to bridge existing gaps, but these are often static, printed solutions that give a visitor who enters a space mid-video no sense of where she is in the piece.
Our solution so far:
We have created a pan-institutional collaborative project at the Smithsonian to develop and test real-time captioning for time-based media art (i.e., video art) that preserves the integrity of the artwork while also enabling audiences who are deaf, hard of hearing, or non-native English speakers to experience the piece. We are currently exploring technology-based solutions with an eye toward creating an easy-to-adapt, open-source model that can be adopted widely by other museums and cultural institutions.
Strength in numbers:
We are presenting our proposals and beta products to the larger community in this session as a springboard for discussion and to workshop our shared ideas. We have convened a pan-institutional task force at the Smithsonian and, through AAM, hope to expand this conversation with the larger museum community in this session. This group will benefit from the input of individuals from a variety of backgrounds, including exhibition designers, curators, and visitor services managers.
Background and Origins:
The premise of the proposed project is to use a Single-Board Computer (SBC) such as a Raspberry Pi or Asus Tinker Board to play time-based media in-gallery and report the start time of every loop to a web server hosting an open-source content management system (CMS). A visitor would see that captions are available via a URL or QR code, and upon visiting that page would see captions synchronized to the video's current position in its loop.
Examined [vtt.js](https://github.com/mozilla/vtt.js/tree/master), Mozilla's WebVTT parser and renderer, as a basis for the frontend caption implementation.