Image: Paper Prototyping in action at the Peabody Essex Museum
- Refine Access App user interface in preparation for beta test development
- Inform early planning stages for layout and navigation
- Define end-user perception (willingness to adopt and understand the purpose)
- Uncover issues beyond the user interface
The Access App evaluation team chose paper prototyping as the primary testing method because it yielded thoughtful comments in a rapid timeframe. In addition, the paper materials allowed users to imagine, and even add to, the experience when something appeared confusing. Prior to each session, the facilitator described the project and explained how he or she would reenact the dynamic behavior of the app. Please see Appendix C: Paper Prototyping Toolkit Materials and Appendix D: Observation Sheet for a detailed account of the paper prototyping process.
As a means to inform beta app development, each evaluation team member conducted paper prototyping sessions focused on user interface design and user perception, using the most recent (V5) project wireframes. Wireframes are a set of images that display the functional elements of a mobile application; they are typically used to plan a mobile app’s structure and functionality.
Image: Beta App V5 Wireframes
In addition to the wireframes, the “audio stream experience” was simulated by playing a clip of an object audio description, a traditional curatorial description, and an example from sound artist Halsey Burgund’s user-generated content. Links to these audio types can be found below:
Formal Audio Description Example
Umberto Boccioni – Unique Forms of Continuity in Space. 1913 (cast 1931)
Traditional Curatorial Description Example
Umberto Boccioni – Unique Forms of Continuity in Space. 1913 (cast 1931)
Crowdsourced Stream example from Halsey Burgund
ROUND: Cambridge Audio Sample
Feedback below represents comments from Peabody Essex Museum (PEM) and Kennedy Center (KC) users.
General Usability Comments by Category
- Location aware prompt:
- PEM: Most expected to see the standard Apple/iOS language asking them to turn on location services. One asked, “Wouldn’t it be something you enabled when you downloaded the app, like any other app?”
- PEM: One person was concerned about anonymity: If I don’t want to be tracked, I expect to be able to still access the content manually.
- KC: What content can you access if you don’t give the app permission to access your location?
- For Discussion/Recommendation: When testing the beta version, we should be mindful of this flow and of whether the location prompt would seem more intuitive earlier in the experience.
- Exhibition choices:
- PEM: Ropes: “Am I physically there when I’m clicking on it? If I’m at the museum, how do I know if it’s off-site?” Users expected general information when they clicked on Ropes (e.g., hours, how to walk there, a note that it’s in another location).
- For Discussion/Recommendation: Most users expected to access directions to a house or exhibition within the building. They often interpreted the “map” icon as something that would guide them to or within the experience.
- KC: For Live events—Can the content be synched with the performance (so someone could use Audio Description [AD] or captions during a show)?
- Not really designed for a live event
- Could be used to deliver educational content
- Coaching mark (“Add Your Voice! Press chat bubble to contribute your thoughts and observations about the experience”):
- PEM: All expected that pressing it would let them contribute something at that moment / would activate something at that stage, and that you could X out of it if you didn’t want to participate.
- For Discussion/Recommendation: We want to study the perception of this coaching mark during beta testing.
- “T” text icon next to track:
- PEM: Most expected that this text was a formal translation of the audio content (another mode of experiencing the same material) rather than independent or contributed content in its own right.
- For Discussion/Recommendation: This is something to keep in mind as we onboard institutions—How to use text if users expect to see translations, transcripts, and/or labels.
- General text and image icons:
- PEM/KC: Users had difficulty differentiating the “T” and “camera” icons for contributing versus absorbing/accessing content.
- For Discussion/Recommendation: Testers associated the “T” and especially the “camera” icon with social media—they assumed that selecting those icons, rather than the “contribute” button at the bottom, would let them contribute content.
- Advancing within stream:
- PEM: Users were consistently confused about how to advance within the stream. Can you click on a different topic, or simply advance using the controls below? What if I wanted to get out of this bucket of content (i.e., the kitchen) altogether?
- KC: Users had difficulty distinguishing between advancing content within a bucket and moving from bucket to bucket—most didn’t understand, from a museum user’s perspective, that there are multiple tracks within each bucket.
- For Discussion/Recommendation: We’ll need more clarity on how to advance within the stream so we can test it effectively. I imagine it will become clearer when viewing the beta app.
- Contribute audio prompts:
- PEM: Some expected the prompts to be buttons; all expected to be able to record by tapping the microphone icon at the bottom (in addition to selecting prompts, or in lieu of them if they wanted to contribute unrelated content).
- For Discussion/Recommendation: Some testers appreciated the written prompts, but most wanted to contribute without tailoring their response to a question.
- Content uploading:
- PEM: Some confusion about the process of starting/stopping, recording/redoing, and erasing/saving content.
- PEM: Most were confused about where the audio content goes / where it gets uploaded to, and had difficulty understanding what they’re actually contributing to and how it connects to content in the stream.
- KC: Users want to know how often the stream is updated and how soon they can hear/read their contribution.
- For Discussion/Recommendation: Let’s discuss how we alert users to the final location of their contribution—is it immediately uploaded? If it’s written text, is it anonymous?
- PEM: Two were concerned with maintaining anonymity/obtaining consent when it comes to audio and/or text contributions
- KC: Proprietary information/recording:
- Can the content be controlled so patrons can only access some things when they are in the theater?
- Can you set it up so patrons can only access content if they provide you with information (like a ticket number) so you aren’t giving everything away?
- Can you turn off the contribute feature during a show?
- Can the content be downloaded?
- For Discussion/Recommendation: A few individuals mentioned they wouldn’t want to be recorded without their knowledge. From a guest services standpoint, we’ll need to keep this in mind when onboarding institutions.
- Impact on other Patrons/Audiences:
- KC: How bright is this going to be? The blue/green might make for a lot of glowing faces…What are the implications for other patrons with a bunch of glowing screens?
- KC: Could it include an event reminder? Performance/lecture/film starts in 10 minutes? Live notifications?
- KC: How easy is it to add content? We have a lot of stuff and sometimes it’s not here very long…
- KC: Who manages the contributed content? Is it approved by someone? Can anyone just upload anything at any time? Can you hold things for approval on the backend before it gets uploaded into the stream?
- KC: Is the content downloaded on the phone in advance or does it only work when people are in the building? Can you make different things available in different ways? (Some content only when you are in the building and other content outside of it?)
- KC: What sort of infrastructure is needed to support this? Do we have enough Wi-Fi coverage?
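Two of the KC questions above—holding contributed content for backend approval before it enters the stream, and making some content available only to patrons who are in the building—boil down to simple gating rules. The following is a minimal sketch of how such rules might look; every name in it (`Track`, `visible_tracks`, the flag names) is hypothetical and not drawn from the Access App itself.

```python
# Illustrative sketch only: models two questions raised by KC testers --
# holding contributions for moderation, and restricting some tracks to
# patrons who are physically on site. All names here are invented.

from dataclasses import dataclass

@dataclass
class Track:
    title: str
    approved: bool = False     # held on the backend until approved
    onsite_only: bool = False  # e.g., only playable inside the theater

def visible_tracks(tracks, on_site):
    """Return the tracks a patron could access, given moderation and location."""
    return [
        t for t in tracks
        if t.approved and (on_site or not t.onsite_only)
    ]

stream = [
    Track("Curatorial description", approved=True),
    Track("Visitor contribution"),  # still awaiting approval
    Track("Live-show captions", approved=True, onsite_only=True),
]

# Off-site, only the curatorial description is visible;
# in the building, the onsite-only captions appear as well.
print([t.title for t in visible_tracks(stream, on_site=False)])
print([t.title for t in visible_tracks(stream, on_site=True)])
```

In practice, the "on site" signal could come from the same location permission discussed earlier, and the approval flag from whatever backend moderation workflow the institution adopts; the sketch only shows that the two gates can be evaluated independently per track.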