December 2016 – March 2017
A study was conducted to test two crowdsourcing prompts designed to engage museum visitors and to generate content that could be used by blind and low vision visitors. The study addressed the following questions:
- Do different prompts affect the audio descriptions provided? Does the prompt used elicit different responses?
- How interested are visitors in contributing content? How interested are visitors in listening to crowdsourced content?
- How do visitors feel about creating crowdsourced visual descriptions?
Method 1: Recording Visual Descriptions
The study compared the crowdsourced descriptions created from two different prompts:
GROUP A: “How would you describe this to a friend who is not in the room?”
GROUP B: “How would you describe this to a person who is blind?”
In each group, subjects were asked to describe three different artworks so that they could become more comfortable with recording their descriptions. See Appendices for Group A and B Protocols.
Method 2: Follow-up interview
Immediately following the visual description exercise, the data collector asked the subject a set of follow-up interview questions. See Appendix for Interview Protocol.
Data collection: Both visual descriptions and interviews were recorded using digital recorders or the iPhone’s voice recording feature. All recordings were transcribed.
Sample size: The team collected 19 interviews: 9 for the “friend” protocol (A) and 10 for the “blind low vision” protocol (B).
The results suggest few significant differences between the two protocols in the type of content created. However, participants using the “blind low vision” protocol were, on average, more interested in contributing to the project than those using the “friend” protocol, and they also tended to be more interested in listening to this type of commentary. Participants who described themselves as more familiar with art or public speaking were more comfortable providing the content.