RuslanD - 1 year ago
iOS Question

Letting VoiceOver users know what a popover is pointing to

I'm writing an iOS app where the first time a user sees a particular control, I show them a popover that explains what the control does.

With VoiceOver on, I've made the popover work like an alert: it receives accessibility focus, and its text is read out to the user. What that doesn't do, however, is give the user a specific spatial hint about where the actual control is on the screen. Yes, it's "above" the popover, but even if I said so in the VoiceOver text, there would still be room for trial and error as the user repeatedly taps where they think the control should be.
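For context, the alert-like behavior described above can be approximated as follows. This is a minimal sketch; the controller and label names are my own placeholders, not from an actual app:

```swift
import UIKit

// Hypothetical content controller for the first-run tip popover.
final class FirstRunTipViewController: UIViewController {
    let messageLabel = UILabel()

    override func viewDidAppear(_ animated: Bool) {
        super.viewDidAppear(animated)
        // Keep VoiceOver swipe navigation confined to the popover while
        // it is up, the way a system alert behaves.
        view.accessibilityViewIsModal = true
        // Move VoiceOver focus to the tip text so it is read aloud.
        UIAccessibility.post(notification: .screenChanged, argument: messageLabel)
    }
}
```

Posting `.screenChanged` with the label as the argument is what makes VoiceOver jump to the popover and announce it when it appears.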

I was thinking of solving that with gestures. One of Apple's sample apps is a dating-type app, where you can "like" or "pass" a potential match's profile by using the single-finger swipe-up or swipe-down gesture. I like that because it's unambiguous to the user, and quick because they don't have to try multiple times to do what they want.

I could reuse the swipe-up to let users interact with the control directly, and swipe-down to dismiss the popover, but that doesn't feel right because it's not how you typically dismiss popovers while in VoiceOver. Does anyone have any recommendations for making this smoother? I'm assuming I'm not the only person who's done accessibility for popovers :)
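One VoiceOver-native alternative to taking over the raw swipe gestures is custom actions: with the popover focused, a one-finger swipe up or down cycles through the available actions, and a double-tap performs the selected one, while the standard two-finger scrub gesture still dismisses the popover. A sketch, with the controller and handler names being my own assumptions:

```swift
import UIKit

// Sketch: expose "use the control" and "dismiss" as VoiceOver custom
// actions on a hypothetical tip popover, instead of overriding the
// one-finger swipe gestures directly.
final class TipPopoverViewController: UIViewController {
    // Hypothetical callback that activates the control being explained.
    var activateControl: (() -> Void)?

    override func viewDidLoad() {
        super.viewDidLoad()
        // Requires iOS 13+ for the closure-based initializer.
        view.accessibilityCustomActions = [
            UIAccessibilityCustomAction(name: "Try the new control") { [weak self] _ in
                self?.activateControl?()
                return true
            },
            UIAccessibilityCustomAction(name: "Dismiss this tip") { [weak self] _ in
                self?.dismiss(animated: true)
                return true
            },
        ]
    }
}
```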

EDIT: Someone made the useful observation that it might be too heavy-handed to override a system gesture for something that's only shown once, rather than for a repeated user action in the UI. An alternative suggestion was to modify the accessibility text to provide a spatial cue as to where the control is, since users will have to learn where to find it anyway once the first-time popover is gone. What do you think?

Answer Source

The solution I went with, after consulting with people who work on making mobile apps accessible, was to include the spatial hint in the VoiceOver accessibility text, rather than using a gesture.

The rationale was that:

  • the popover only appears once, as opposed to being a common, repeated action in the app
  • blind and low-vision users will have to develop an intuition for where the control sits on screen anyway, because the first-time experience won't be around to help them beyond, well, the first time.
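As a concrete illustration of this approach, the spatial cue can simply be baked into the string VoiceOver reads for the popover. The helper and wording below are my own sketch, not code from the actual app:

```swift
// Hypothetical helper: builds the first-run text VoiceOver reads,
// including a plain-language hint about where the control lives.
enum ScreenLocation: String {
    case topRight = "in the top-right corner of the screen"
    case bottomCenter = "at the bottom center of the screen"
}

func firstRunTipText(controlName: String, location: ScreenLocation) -> String {
    return "\(controlName), \(location.rawValue). Double-tap it to use it."
}
```

For example, `firstRunTipText(controlName: "Filter button", location: .topRight)` produces "Filter button, in the top-right corner of the screen. Double-tap it to use it.", which would then be assigned to the popover's `accessibilityLabel`.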