On the Product Detail Page, we saw how fixing VoiceOver issues in SwiftUI almost automatically produced the expected behavior for keyboard and voice interaction. In the Product List Page and Wishlist, SwiftUI and UIKit fixes targeting VoiceOver similarly translated to other assistive technologies.
However, the Product List Page exposed an important trade-off. To make the Cart and Wishlist buttons reachable by Full Keyboard Access and Voice Control, we had to change how VoiceOver navigated the list. Instead of a single swipe per product with contextual actions, VoiceOver users now had to swipe through three separate elements per item.
As I wrote in the previous article:
“Using VoiceOver, navigation now requires more swipes per product because each element is treated separately. From my perspective as a VoiceOver user, although the interface is functionally correct, I prefer the original single-element approach with accessibility actions, which allows quicker navigation and easier access to all available options.”
Conditionally Grouping PLP Elements (Actions Included)
At this point, we can treat the accessible implementation from the previous article as our baseline, and focus on improving the VoiceOver experience further, but only when VoiceOver is actually running.
SwiftUI exposes several accessibility-related environment values that describe the current accessibility context. Many of these relate to visual presentation, such as Increased Contrast or Differentiate Without Color. There’s no equivalent signal for Voice Control or Full Keyboard Access, but there is one for VoiceOver.
We can read it directly from the environment:
@Environment(\.accessibilityVoiceOverEnabled)
private var isVoiceOverEnabled
Because this value is a simple Bool, we can use it to conditionally apply VoiceOver-specific enhancements, without changing how the UI behaves for other assistive technologies.
With that information available, we can start adapting how elements are grouped depending on whether VoiceOver is running.
In the earlier articles, we used .accessibilityElement(children: .ignore) to prevent VoiceOver from navigating individual decorative elements (such as stars). This effectively hides the children from the accessibility tree, allowing us to expose a single parent element and manually provide its accessibility label and traits.
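As a reminder, here is a minimal sketch of that pattern for a star rating; the rating property is an assumption for illustration:

HStack {
    // Five decorative stars that carry no individual meaning.
    ForEach(0..<5) { index in
        Image(systemName: index < rating ? "star.fill" : "star")
    }
}
.accessibilityElement(children: .ignore)
.accessibilityLabel("Rated \(rating) out of 5 stars")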
SwiftUI provides two additional grouping strategies particularly useful in this context:
- .combine, which makes VoiceOver merge all child elements into a single accessibility element
- .contain, which keeps inner elements independent but ensures they are navigated consecutively
By reading @Environment(\.accessibilityVoiceOverEnabled), the PLP can adapt its child grouping behavior at runtime. When VoiceOver is running, the product information and its actions are presented as a single combined element. When it is not, the elements remain separate and independently focusable for Full Keyboard Access and Voice Control.
HStack {
    NavigationLink(destination: ProductDetailView(product: product)) {
        ProductCellNoButtons(product: product)
    }
    VStack {
        // Cart and Wishlist Buttons
    }
}
.accessibilityElement(children: isVoiceOverEnabled ? .combine : .contain)
Now that the elements are grouped so VoiceOver can navigate each product as a single unit without affecting other assistive technologies, the question becomes: where should the accessibility actions live? In this case, nowhere at all.
When using .combine, buttons are not merged into the spoken label; instead, they are automatically exposed as accessibility actions. We just need to provide them with the correct label beforehand and the magic happens.
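For example, the Cart and Wishlist buttons inside the VStack might look like this; this is a sketch, and the icon names and action handlers are assumptions:

Button(action: addToCart) {
    Image(systemName: "cart")
}
.accessibilityLabel("Add to Cart")

Button(action: toggleWishlist) {
    Image(systemName: "heart")
}
.accessibilityLabel("Add to Wishlist")

With .combine, each labeled button then surfaces as a correspondingly named VoiceOver action.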
The result is a single, efficient element with contextual actions when VoiceOver is enabled, and three distinct, independently focusable elements when it is not.
In this case, optimizing for multiple assistive technologies resulted not only in a more accessible interface overall, but also in the same VoiceOver navigation with less code.
Once we accept conditional accessibility as a valid pattern, it opens the door to more ambitious VoiceOver optimizations elsewhere in the app.
PDP Options Becoming Adjustable
One advantage of native mobile development over the web is the richer toolkit it offers to assistive technology users, especially when it comes to optimizing navigation for screen readers.
SwiftUI offers the accessibilityRepresentation modifier, which lets us expose an alternative accessibility-only control for a complex UI.
For color and storage selection, a picker could be much faster for VoiceOver users to navigate: a single element to focus on, whose value can be changed by flicking up and down. So, at the end of the VStack containing the color label and options, we can add:
.accessibilityRepresentation {
    Picker("Please choose a color", selection: $selectedColor) {
        // Tag each row with its index so it matches the Int selection state.
        ForEach(colors.indices, id: \.self) { index in
            Text(colors[index].description)
        }
    }
    .pickerStyle(.wheel)
}
This works beautifully once focused, but in iOS 18 it doesn’t always appear in the expected place in the swipe order.
To fix the order while preserving the same interaction model, we can manually implement adjustable behavior using accessibilityAdjustableAction. This lets us define what happens when the user performs an increment or decrement gesture. We also set an accessibility label, ignore children, and provide a meaningful value for VoiceOver to report when the user performs the adjust gesture.
So, instead of accessibilityRepresentation, the color selection VStack gets these modifiers:
.accessibilityElement(children: .ignore)
.accessibilityLabel("Available colors")
.accessibilityValue(colors[selectedColor].description.capitalized)
.accessibilityAdjustableAction { direction in
    switch direction {
    case .decrement:
        if selectedColor > 0 { selectedColor -= 1 }
    case .increment:
        if selectedColor < (colors.count - 1) { selectedColor += 1 }
    default:
        break
    }
}
How does VoiceOver behave now?
The UI doesn’t visually change, but VoiceOver now announces the whole group as “Available colors: Blue, adjustable.” Flicking up or down steps through the color options, and the visual selection updates accordingly. We implemented the same logic for the storage options.
When One Fix Breaks Another
What happens to Full Keyboard Access once we introduce the adjustable elements used to improve VoiceOver navigation?
Full Keyboard Access now treats the color and storage selectors as single focusable elements, just like VoiceOver does. Using the left and right arrow keys adjusts their values.
Unlike VoiceOver, however, there’s no spoken feedback when the value changes. Whether this is an optimal experience for keyboard-only users is debatable.
This highlights an important point: improving one assistive technology can subtly affect others, sometimes positively, sometimes in ways that need further iteration.
And how does Voice Control react to these VoiceOver enhancements?
This is where things start to fall apart.
By turning the color and storage options into adjustable elements for VoiceOver, Voice Control effectively stops working for those controls. What were previously exposed as individual, actionable buttons can no longer be reliably targeted using voice commands once they become a single adjustable element.
Nothing changed visually and the VoiceOver experience improved significantly, but for Voice Control users these controls effectively disappeared.
Optimizing VoiceOver Without Collateral Damage
Once again, we can rely on @Environment(\.accessibilityVoiceOverEnabled) to apply VoiceOver-specific enhancements, without negatively affecting Full Keyboard Access or, especially, Voice Control users.
When VoiceOver is enabled, we use adjustable actions to make navigation faster and more efficient. When it is not, we fall back to the original implementation, exposing individual buttons that work well with Voice Control and Full Keyboard Access.
To avoid duplicating layout code, we can extract the original color options into a @ViewBuilder property and reuse it in both cases. The same pattern applies to the storage options.
if isVoiceOverEnabled {
    colorOptions // our @ViewBuilder property
        .accessibilityElement(children: .ignore)
        // plus all previously used VoiceOver-specific adjustable modifiers
} else {
    colorOptions // just the normal View
}
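For reference, the extracted property could look roughly like this, assuming colors is an array of SwiftUI Color values and selectedColor is the selected index:

@ViewBuilder
private var colorOptions: some View {
    HStack {
        ForEach(colors.indices, id: \.self) { index in
            Button {
                selectedColor = index
            } label: {
                Circle()
                    .fill(colors[index])
                    .frame(width: 32, height: 32)
            }
            .accessibilityLabel(colors[index].description)
        }
    }
}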
Does this approach restore Voice Control?
In the video, I turn VoiceOver off and use Voice Control to tap individual color and storage options, exactly as before introducing the VoiceOver enhancements. When VoiceOver is turned back on, those same controls are once again exposed as adjustable elements.
Each assistive technology now gets the interaction model it expects.
What about Full Keyboard Access?
Here too, the behavior returns to what keyboard users expect. I can navigate between individual buttons using the arrow keys, and selections only change when I explicitly activate a control, rather than adjusting values automatically with every left or right press.
Navigation is now accessible to Full Keyboard Access and Voice Control users, and when VoiceOver is running, the already accessible interface becomes even more efficient for VoiceOver users.
Same Pattern Added to the Wishlist
The same conditional accessibility pattern used in SwiftUI can also be applied to the Wishlist, which is implemented in UIKit using a UICollectionView.
UIKit doesn’t expose accessibility state through an environment like SwiftUI does. Instead, we need to query VoiceOver directly and react to changes imperatively.
We can check whether VoiceOver is currently running using UIAccessibility.isVoiceOverRunning. And because UIKit is notification-driven, we should also observe changes to that state:
NotificationCenter.default.addObserver(
    self,
    selector: #selector(voiceOverStatusDidChange),
    name: UIAccessibility.voiceOverStatusDidChangeNotification,
    object: nil
)
This allows us to switch between two accessibility configurations at runtime: one optimized for VoiceOver, and one that preserves the expected behavior for Full Keyboard Access and Voice Control.
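A minimal sketch of that switch follows; the method names here are assumptions rather than the demo’s actual code:

@objc private func voiceOverStatusDidChange() {
    // Reload so every visible cell re-applies the right configuration.
    collectionView.reloadData()
}

// In the cell, branch on the current VoiceOver state:
func configureAccessibility() {
    if UIAccessibility.isVoiceOverRunning {
        applyVoiceOverConfiguration() // single element + custom actions, shown below
    } else {
        // Default: the product stack and buttons stay individually focusable.
        isAccessibilityElement = false
        accessibilityCustomActions = nil
    }
}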
When VoiceOver is running, instead of exposing the product information stack and the Cart/Wishlist buttons as separate elements, we make the entire cell accessible as a single element and attach accessibility actions.
For VoiceOver users, this results in:
- One swipe per product
- Product information read as a single coherent summary
- A standard double-tap that opens the Product Detail Page
- Cart and Wishlist actions available via the rotor
To do this, we mark the entire cell as an accessibility element and provide a meaningful label, hint, and traits, and then we add the Cart and Wishlist actions:
self.isAccessibilityElement = true
self.accessibilityLabel = "\(product.name), \(product.price), rated \(product.ratingAsString)"
self.accessibilityHint = "Swipe up or down to select a custom action"
self.accessibilityTraits = [.button]

// Capture self weakly so the cell and its custom actions don't form a retain cycle.
let addToCartAction = UIAccessibilityCustomAction(name: "Add to Cart") { [weak self] _ in
    self?.onAddToCartButtonTap?()
    return true
}

let wishlistAction = UIAccessibilityCustomAction(name: "Remove from Wishlist") { [weak self] _ in
    self?.onWishlistButtonTap?()
    return true
}

self.accessibilityCustomActions = [addToCartAction, wishlistAction]
Finally, we restore the default VoiceOver behavior of opening the Product Detail Page when the user double-taps the element. In UIKit, this is done by overriding accessibilityActivate:
override func accessibilityActivate() -> Bool {
    self.onAccessibilityActivate?()
    return true
}
The collection view controller already knows how to open the PDP via didSelectItemAt, so it simply provides the same logic through the onAccessibilityActivate closure.
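A sketch of that wiring in the data source; the cell class and navigation helper are assumed names:

func collectionView(_ collectionView: UICollectionView,
                    cellForItemAt indexPath: IndexPath) -> UICollectionViewCell {
    let cell = collectionView.dequeueReusableCell(
        withReuseIdentifier: "WishlistCell", for: indexPath) as! WishlistCell
    cell.onAccessibilityActivate = { [weak self] in
        // Same navigation logic as collectionView(_:didSelectItemAt:).
        self?.openProductDetail(at: indexPath)
    }
    return cell
}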
One Wishlist, Various Wishes Fulfilled
So how does VoiceOver navigate now, and how does this affect the other assistive technologies?
In the video, I start with VoiceOver enabled. The Wishlist behaves as intended: each product is a single element, with custom actions for adding to the cart and removing from the wishlist, and a standard activate action that opens the Product Detail Page.
I then turn VoiceOver off and back on again. Because of an intentional bug in the demo, the VoiceOver-optimized configuration is not re-enabled, and the interface remains in its original accessible configuration even with VoiceOver running.
In this state, the product information and buttons are exposed as separate elements. This allows Full Keyboard Access users to navigate between the product information, the Cart button, and the Wishlist button using arrow keys or Control–Tab, and to activate each one directly. This mirrors the original behavior that also allows Voice Control users to reliably target and activate individual buttons.
If the VoiceOver-style custom actions were always enabled, Full Keyboard Access users would need to press Tab + Z to reach the Cart and Wishlist actions, and Voice Control users would lose access to those controls.
By switching accessibility behavior based on whether VoiceOver is running, each assistive technology gets the interaction model it expects, without forcing compromises on the others.
Final Thoughts
Improving the VoiceOver experience is second nature to me as a VoiceOver user, and I usually know early what kind of navigation and interaction I want.
In the previous articles, I faced a familiar challenge: optimizing for VoiceOver can sometimes conflict with Full Keyboard Access or Voice Control. That’s why I first made the PDP, PLP, and Wishlist accessible to additional assistive technologies, and only then introduced VoiceOver-specific refinements in this article.
With the foundation in place, we could optimize the interface for VoiceOver users. In SwiftUI, this meant reading environment values to detect when VoiceOver was active; in UIKit, we queried UIAccessibility.isVoiceOverRunning and responded to state changes. We also implemented several strategies to streamline navigation and interaction: adjustable controls for faster option selection, .combine to merge elements while exposing buttons as actions, and custom accessibility actions in UIKit.
Nothing changed visually, and nothing was removed. Yet each assistive technology now receives an interaction model tailored to how it works. Different tools, different APIs, same result: a more efficient and intuitive VoiceOver experience, fully accessible to Full Keyboard Access and Voice Control users.
I’m deeply passionate about both iOS development and accessibility. If your iOS team needs someone who truly cares about accessibility and great user experience, I am open to new assignments. Drop me an email at diogo@axesslab.com.