At Redweb, we always strive to make our websites, internal tools and processes as accessible as possible. With an estimated 15% of the world’s population having some degree of disability, accessibility is more important than ever before – and creating with accessibility in mind can have beneficial side effects for everyone, in a phenomenon known as the “curb-cut effect”.
This leads us neatly onto the subject of this insight: making augmented or virtual reality experiences within Unity accessible – or more broadly, how to make anything built within Unity accessible.
I’ve previously dabbled with Unity, the game development platform behind titles such as Pokémon Go, Cuphead, and Monument Valley. However, when the question of accessibility arose during a review of some augmented reality experiments, I found myself in uncharted territory.
To answer the question, I promptly dived into Unity’s extensive documentation, only to find nothing. Unity didn’t offer anything to aid with the process of making things accessible, let alone any kind of acknowledgement of accessibility. Slightly confused by this, I started to look around Unity’s vast community resources and discovered many people recommending a third-party extension called ‘U.A.P’, or Unity Accessibility Plugin.
After reading into U.A.P, I found it offered a wide range of options for making 2D applications and user interfaces (UI) accessible, but it lacked support for – or even awareness of – 3D scenes. That prompted me to explore how companies building augmented reality applications and experiences were making them accessible.
Surprisingly, I found even the largest players – such as Google, Niantic, and IKEA – didn’t offer much in the way of support for accessibility, with their applications failing to expose crucial functionality and UI to accessibility services.
With that in mind, I decided to do something about it.
Diving into the deep end
As someone who is visually impaired, I found myself focusing solely on how people with visual impairments would access these applications and enjoy them to the same extent that someone with sight would.
I made countless prototypes and explored different methods to get data from Unity. I had to consider what kind of information a user would need to have a good idea of what they’re interacting with, and then what the best method would be to present this information.
After numerous iterations, I settled on using features of Unity’s physics system, specifically raycasting, to get an idea of the virtual environment. I paired this up with some custom scripts to turn the angles of rotation on a particular axis into something more tangible – in this case, time on a clock face (for example, 0 degrees would be 12 o’clock, whilst 120 degrees would be 4 o’clock).
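The clock-face mapping itself is simple arithmetic: with twelve hours around a full turn, each hour covers 30 degrees. The extensions are written in C# inside Unity, but the idea is language-agnostic – here’s a minimal sketch in Python (the function name and rounding behaviour are illustrative assumptions, not the extensions’ actual API):

```python
def angle_to_clock(degrees: float) -> int:
    """Map a rotation angle (clockwise, 0 = straight ahead) to the
    nearest hour on a clock face: 0 degrees is 12 o'clock, 30 degrees
    is 1 o'clock, 120 degrees is 4 o'clock, and so on."""
    hour = round((degrees % 360) / 30) % 12
    return 12 if hour == 0 else hour
```

Rounding to the nearest hour keeps the feedback coarse enough to be spoken quickly, at the cost of some precision.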
With data from Unity being captured and presented neatly by the extensions in a way that’s understandable, I needed to explore how best to convey this information to the user. As mentioned earlier, Unity doesn’t expose itself at all to any on-device accessibility services, placing the onus on the developer to implement this functionality.
Unity, thankfully, can be extended via native code, which interacts directly with the underlying operating system. This allowed me to implement hooks into native functionality for text-to-speech, device locale detection and accessibility service status. I then used the native text-to-speech hooks to convey all of this data to the user.
To ensure a user wouldn’t get overwhelmed with information, I implemented an event queue, which prioritises which bits of information get spoken first.
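A prioritised event queue of this kind can be sketched with a standard binary heap. This Python illustration uses hypothetical event types and priority values – the extensions’ actual categories and ordering may differ:

```python
import heapq
import itertools

# Hypothetical priorities: lower numbers are spoken first.
# The extensions' real event types and weightings may differ.
PRIORITY = {"warning": 0, "interaction": 1, "look-at": 2}

class SpeechQueue:
    """Queue speech events so the most important information is spoken first."""

    def __init__(self):
        self._heap = []
        self._counter = itertools.count()  # tie-breaker: FIFO within a priority

    def push(self, event_type, text):
        priority = PRIORITY.get(event_type, 99)
        heapq.heappush(self._heap, (priority, next(self._counter), text))

    def pop(self):
        """Return the next utterance to speak, or None if the queue is empty."""
        if not self._heap:
            return None
        _, _, text = heapq.heappop(self._heap)
        return text
```

With this structure, a warning about the edge of the play-space would interrupt the backlog of lower-priority “look-at” announcements rather than queueing behind them.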
Now over to you
Today, we’re releasing an alpha version of our own, aptly named ‘Unity Accessibility Extensions’ for developers and creators to play about with and explore how they can make their AR, VR and other mediums accessible. Currently, the extensions focus on those with visual impairments – specifically, giving them context and awareness of the virtual world, or ‘play-space’.
The version of the extensions that we’re releasing today includes the following functionality:
- Text-to-speech and locale detection on iOS, macOS, Windows and Android
- Raycast-based object feedback (e.g. “You’re currently looking at Cube, which is x metres away from you”)
- Rotation feedback (e.g. “You’re currently at 12 o’clock”)
- Description components that can be added to objects
- Editor GUI and feedback systems to allow for debugging and aiding production
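The raycast-based feedback above boils down to assembling a spoken message from the hit object’s name, its distance, and any description component attached to it. This Python sketch is only an illustration of that assembly – the function name and the one-decimal rounding are assumptions, not the extensions’ actual C# API:

```python
def object_feedback(name, distance_m, description=None):
    """Build the spoken feedback for whatever the raycast hit.
    Rounding the distance to one decimal place is an assumption
    made for this sketch."""
    message = (f"You're currently looking at {name}, "
               f"which is {distance_m:.1f} metres away from you")
    if description:
        # Append the object's description component, if one was added.
        message += f". {description}"
    return message
```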
There are, however, a few known issues and caveats with the extensions in their current state, including:
- The priority queue doesn’t yet remove stale events of a certain type, leaving the user with an audible, occasionally conflicting trail of where they were looking. We have attempted to automate pruning of the queue; however, this hasn’t been fully implemented yet.
- The text-to-speech functionality doesn’t always neatly integrate into services such as TalkBack, meaning that if TalkBack speaks whilst the TTS in Unity is speaking, the user will have to relaunch the Unity application to continue using the TTS functionality.
- macOS text-to-speech is dependent on the say command, rather than fully native hooks into NSSpeechSynthesizer.
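For the curious, the say-based approach on macOS amounts to shelling out to the system binary. Sketched in Python (the extensions do this from native code; the helper names here are hypothetical):

```python
import subprocess

def build_say_command(text, voice=None):
    """Assemble the argument list for macOS's `say` command.
    The -v flag selects a named system voice."""
    command = ["say"]
    if voice:
        command += ["-v", voice]
    command.append(text)
    return command

def speak(text, voice=None):
    """Shell out to `say` (macOS only); blocks until speech finishes."""
    subprocess.run(build_say_command(text, voice), check=True)
```

The trade-off is simplicity over control: a subprocess is easy to invoke from anywhere, but offers none of the interruption, rate, or callback handling a native speech synthesis API provides.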
Within the package we’re releasing, we’re also including documentation and a few example scenes to showcase how exactly things work – so you can get going right away!
Please share any work or experiences you develop using these tools with email@example.com – we’d love to see what you do with them and how far you can push the extensions.
I hope that these extensions, and the idea of building out accessibility features where they’re lacking, serve as an inspiration for you in your own future projects.