Paper prototyping was conducted with blind and low vision participants from the Perkins School for the Blind in Watertown, MA. Founded in 1829, Perkins was the first school for the blind in the United States. Today, Perkins serves approximately 200 students on campus each year and works with organizations in 67 countries to improve the lives of the 4.5 million children around the world without access to education due to blindness. The institution is internationally renowned for its work on behalf of the blind and deafblind, making it an ideal partner to serve as both advisor and end-user testing site. Throughout the grant period, Perkins administration, staff, and students have been incredibly supportive of our efforts. Their contributions have been vital to the project.
- Following the Paper Prototyping with Sighted Testers, refine the Access App user interface with Blind and Low Vision Testers in preparation for beta test development
- Inform early planning stages for layout and navigation
- Define end-user perception (willingness to adopt and understand the purpose)
- Uncover issues beyond the user interface
Image: Raised line drawing of app screen using the inTact Sketchpad
The Access App evaluation team chose paper prototyping with raised-line drawings as the primary testing method because it elicited thoughtful feedback quickly, without requiring a physical build for this interim round of testing.
The PEM evaluation team conducted one tactile paper prototyping session with a group of three staff from Perkins, focusing on user interface design and user perception using the most recent (V5) project wireframes. Wireframes are a set of images that display the functional elements of a mobile application; they are typically used for planning a mobile app's structure and functionality. The PEM team translated printouts of the project's V5 wireframes into raised line drawings by hand. The team used an inTact Sketchpad to simplify designs down to their basic functional elements so that blind and low vision users could feel these designs as tactile maps. For more information on the method of tactile paper prototyping, please reference Appendix F: Miao, Mei, Wiebke Köhlmann, Maria Schiewe, and Gerhard Weber. "Tactile Paper Prototyping with Blind Subjects." Haptic and Audio Interaction Design (2009): 81-90.
Prior to the evaluation session, the facilitators briefly described the project and explained how they would simulate the app's dynamic behavior and act as a screen reader for all written content absent from the raised line drawings. Please see Appendix E: Paper Prototyping Materials Toolkit for Blind/Low Vision Testers for a detailed walkthrough of the screens/flows.
In addition to the wireframes, the “audio stream experience” was simulated by playing a clip of an object audio description, a traditional curatorial description, and an example from sound artist Halsey Burgund’s user-generated content. Links to these audio types can be found below:
Formal Audio Description Example
Umberto Boccioni – Unique Forms of Continuity in Space. 1913 (cast 1931)
Traditional Curatorial Description Example
Umberto Boccioni – Unique Forms of Continuity in Space. 1913 (cast 1931)
Crowdsourced Stream example from Halsey Burgund
ROUND: Cambridge Audio Sample
Feedback on raised line drawings method:
Participants were initially confused about the raised line buttons and asked, "Are these outlines for buttons or just lines?" They also asked, "Where is the physical indication of text placement in relation to overall design?" As the session progressed, participants adapted quickly to sensing the drawings in relation to the screen descriptions.
Below is one participant’s verbal description of one screen (Flow 6) through touch observation of raised line content:
“There’s a button top right and left
Then, 2 buttons side by side, vertical, another button on the left side, vertical
There’s a long button with a round button on the right side
Then 1, 2, 3 buttons
There’s a circle down at the bottom…”
- General note: the simplicity of these screens is great, except for screen 6, which is too busy.
- Where is this content stored?
- How does the museum know there’s been newly contributed content to vet/approve? Does the content automatically get fed in? How can the museum maintain a bit of control?
Screen-Specific Feedback:
- I like that they're horizontal buttons that fit the screen because they're harder to miss, especially if you're swiping.
- One thing I would urge you to do is to use "Peabody Essex Museum" instead of "PEM"; better for screen readers and for branding
- Logo should be described in alt text
- All testers felt location sequence was out of whack
- Wouldn’t this be done before you’re in the app? Wouldn’t it automatically be done with download so information on screen 1 is customized?
- Push notifications and locations should be the first thing that happens
- Don’t understand this extra layer. Suggestion: let Apple do its thing at the beginning with its own prompt; customize language to say “This app uses your current location to recognize cultural institutions…”
- Is there a reason not to enable location?
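The testers' suggestion to "let Apple do its thing" with a customized message maps to the standard iOS mechanism: the explanation string shown inside Apple's own location-permission prompt is set in the app's Info.plist. A minimal sketch of that configuration, using the testers' suggested wording (the key shown is the standard iOS key for when-in-use location access; the final phrasing is an assumption based on their feedback):

```xml
<!-- Info.plist fragment: customizes the text iOS displays inside its
     own location-permission dialog, as the testers suggested, rather
     than adding an extra in-app permission layer. -->
<key>NSLocationWhenInUseUsageDescription</key>
<string>This app uses your current location to recognize cultural institutions near you.</string>
```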
- Noted that this screen feels very similar to the home screen—like the familiarity of orientation
- Would want to see language like ‘Select an exhibit’
- All testers agreed: Provide a tutorial integrated into the beginning experience that will let people know all their options right off the bat versus the current staggered orientation that feels more like a hurdle.
- This coaching screen seems out of place
- Be careful with overlays; they can be tricky with VoiceOver, and for users with cognitive disabilities and low vision, text over text is very confusing.
- Recommended standard sizes for all content buttons versus buttons that vary in size based on word length; maintaining consistency is important when you’re scanning with your finger
- All agreed too much stuff is happening on this screen. All recommended simpler screens for this core experience, even if it means more screens (i.e., have separate screen for each subtab)
- If you activate button B, it should open a separate screen (with a back button) that allows me to play that specific content; otherwise there’s a lot going on in any given space.
- Don’t have buttons present when they’re not needed—e.g., only have the scrub bar when audio content is playing.
- If I selected round “T” button, I would expect something to be read to me.
- We like the sequencing of gallery overview first (big picture context is nice), then objects/subthemes.
- Audio Description (AD) content – Advocate for ability to filter out content if you’re sighted, or aren’t interested in hearing this content; people should have the option of turning it off.
- You really should give the user the option to choose what kind of content they want to hear.
- How can I jump directly into crowd-sourced content versus user comments?
- I don’t think a user would want to wade through all the AD and museum contributed content just to get to other users’ perspectives.
- I wanted to hear a description/introduction to what each type of content was, e.g., "physical description" or "curator's description." Participants recommended labeling the various types of descriptions.
- Crowd-sourced content: if someone contributes, where does it go and how does it fit into all the other content?
- I liked the music, but it was at the same volume as the spoken content instead of serving as a backdrop, so it competed too much with the content pieces
- I liked the sound/music—gave it some style
- Maybe an additional label or recording could tell the user, "All content is moderated, etc.…" or "At the sound of the tone make your contribution."
- Blind user expected to be able to press the mic icon to record and to double tap the “T” or Mic icon to make their selection for contribution
- What’s the difference between the “X” button and the “<” button? Why don’t you just have one option to exit?
- Recommended 30-second limit for audio contributions
- Overall recommendation to reduce the number of steps to record
- One person said, “What’s the usefulness of these prompts? Why are they buttons? I should be able to contribute what I want.”
- Another said, “I’m not sure these are the right prompts, but I like the idea”
- Suggestions for new prompts/questions to help capture points of comparison:
- What thoughts does this provoke in you?
- How does this make you feel?
- What does this remind you of?
- What runs through your head when you see this?
- Once you have someone record something, make sure they can review it; one participant compared this to the "Audioboo" app they use, like audio tweeting
- A stop button? To stop what? You need a start button and a stop button, especially for users who aren't familiar with this kind of thing, so there's more control over the recording process and you're not unexpectedly thrown into recording
- Instead of “Upload”, which is too vague a prompt, I would have these 3 prompts: “Review,” “Rerecord,” and “Submit.”
- Since the contribute icon is always there, I would hesitate to say "while you're on a roll," and to prompt people to contribute to another bucket of content/area of the exhibition they might not have experienced yet
- Suggestion to instead say “Please note you can always contribute another one. The contribute button is always in the bottom right.”
- I would just take people back to the core experience after contribution.
- All testers said they would be inclined to contribute audio content
- Suggestion to limit any text contributions to 500 characters