Summary of Access App Evaluation Efforts


The accompanying documents summarize the front-end, formative, and summative evaluation efforts conducted by the Access App team over the course of the three-year IMLS grant. The reports detail the multiple methods the Access App team used to explore core project research questions, iterate on the mobile app interface design, and garner critical feedback from our target audiences. Appendices include examples and templates of the instruments and protocols used to conduct evaluation efforts. These resources are provided as an aid to other cultural organizations interested in conducting their own accessibility-centered evaluations.

“How can museums provide equal access to their content and collections in the format and through the means that each individual prefers?”

-Access App Key Research Question

The Access App project focused on two target audiences: (1) cultural organizations that want to build accessible mobile apps and experiences; and (2) their visitors, who exhibit a wide range of needs and preferences in how they engage with content, as well as in their ability and desire to contribute content of their own.

A key underlying premise of the Access App project is that crowdsourced audio content enables everyone, including people who are blind or have low vision, to “see” through the eyes of others. The project team postulated that by crowdsourcing audio descriptions, the Access App framework would not only provide basic accessibility, but also weave participatory involvement and universal design into a holistic experience that is rewarding for content creators and consumers alike.

In this vein, the Access App aimed to transform the nature and structure of experiences in museums from unidirectional broadcasts of knowledge from museum expert to visitor, to a rich dialogue achieved through peer-to-peer crowdsourced contributions. We believe that crowdsourced descriptions help to enrich the visitor experience by providing a breadth and diversity of voices and perspectives for audiences to encounter.

Throughout the three-year period of the IMLS grant, the Access App team integrated evaluation into its process to test the core hypotheses and research questions surrounding the Access App framework. From early focus groups with peer cultural institutions, to prototyping the app interface with blind and low-vision users, to testing crowdsourcing prompts with general audiences, the Access App team compiled a rich body of data to inform and inspire the direction of our efforts.

Front-end Evaluation

Focus Groups with Cultural Institutions (November 2014-January 2015)

Formative Evaluation

Paper Prototyping with Sighted Testers (February 2016)

Paper Prototyping with Blind/Low-Vision Testers (March 2016)

Prompt Testing with Sighted Testers (December 2016-March 2017)

Focus Group with Blind/Low-Vision Testers (April 2017)

Summative Evaluation

Access App Summative Evaluation (June-August 2017)