Tag: Accessibility

  • Visual Accessibility in iOS

    Visual Accessibility in iOS

    Accessibility in iOS apps is a powerful way to highlight the importance of inclusive design while showcasing the robust accessibility tools Apple provides, such as VoiceOver, Dynamic Type, and Switch Control. It not only helps fellow developers understand how to create apps that are usable by everyone, including people with disabilities, but also demonstrates professionalism, empathy, and technical depth, while promoting best practices and raising awareness.

    For this post, we will focus only on visual accessibility aspects. Interaction and media-related topics will be covered in a future post. As we go through this post, I will also provide the project along with the source code used to explain the concepts.

    Accessibility Nutrition Labels

    Accessibility Nutrition Labels in iOS are a developer-driven concept inspired by food nutrition labels, designed to provide a clear, standardized summary of an app’s accessibility features. They help users quickly understand which accessibility tools, such as VoiceOver, Dynamic Type, or Switch Control, are supported, partially supported, or missing, making it easier for individuals with disabilities to choose apps that meet their needs. Though not a native iOS feature, these labels are often included in the app’s product page. Even though accessibility is supported on almost all Apple platforms, some accessibility labels aren’t available on every platform; check here.

    They can be set when you upload a new app version to the App Store.

    Sufficient contrast

    Users can increase or adjust the contrast between text or icons and the background to improve readability. Adequate contrast benefits users with reduced vision due to a disability or temporary condition (e.g., glare from bright sunlight). You can indicate that your app supports “Sufficient Contrast” if its user interface for performing common tasks—including text, buttons, and other controls—meets general contrast guidelines (typically, most text elements should have a contrast ratio of at least 4.5:1). If your app does not meet this minimum contrast ratio by default, it should offer users the ability to customize it according to their needs, either by enabling a high-contrast mode or by applying your own high-contrast color palettes. If your app supports dark mode, be sure to check that the minimum contrast ratio is met in both light and dark modes.
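To make the 4.5:1 guideline concrete, the contrast ratio can be computed from the WCAG relative-luminance formula. The helper below is a minimal sketch of that calculation (the function names are ours, not an Apple API):

```swift
import Foundation

// Relative luminance per WCAG 2.1, from sRGB components in 0...1.
func relativeLuminance(r: Double, g: Double, b: Double) -> Double {
    func linearize(_ c: Double) -> Double {
        c <= 0.03928 ? c / 12.92 : pow((c + 0.055) / 1.055, 2.4)
    }
    return 0.2126 * linearize(r) + 0.7152 * linearize(g) + 0.0722 * linearize(b)
}

// Contrast ratio (L1 + 0.05) / (L2 + 0.05), always >= 1.
func contrastRatio(_ l1: Double, _ l2: Double) -> Double {
    let (lighter, darker) = l1 >= l2 ? (l1, l2) : (l2, l1)
    return (lighter + 0.05) / (darker + 0.05)
}

let white = relativeLuminance(r: 1, g: 1, b: 1)  // ~1.0
let black = relativeLuminance(r: 0, g: 0, b: 0)  // 0.0
let ratio = contrastRatio(white, black)          // ~21.0
print(ratio >= 4.5)  // black on white easily meets the guideline
```

Gray text on a gray background, by contrast, can easily fall below 4.5:1, which is exactly what Accessibility Inspector flags in the audit below.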

    We have prepared the following screen, which clearly does not meet this nutrition label:

    Deploy the app in the simulator, present the screen to audit, and open Accessibility Inspector:


    With Accessibility Inspector open, select the simulator as the target and press Run Audit:

    An important point: only visible view layers are audited:

    Build and run on a simulator to check that everything works as expected:

    Dark mode

    Dark Mode in SwiftUI is a user interface style that uses a darker color palette to reduce eye strain and improve visibility in low-light environments. SwiftUI automatically supports Dark Mode by adapting system colors like .primary and .secondary based on the user’s system settings. You can customize your UI to respond to Dark Mode using the @Environment(\.colorScheme) property.
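As a quick illustration, a view can read the environment value and adapt any custom colors manually (this sketch uses our own names and color values, not code from the sample project):

```swift
import SwiftUI

struct AdaptiveCardView: View {
    // Reflects the user's current Light/Dark Mode setting.
    @Environment(\.colorScheme) private var colorScheme

    var body: some View {
        Text("Nutrition Facts")
            .padding()
            // Custom colors must be adapted by hand; system colors
            // such as .primary adapt automatically.
            .background(colorScheme == .dark ? Color(white: 0.15) : Color(white: 0.95))
            .foregroundColor(.primary)
            .cornerRadius(12)
    }
}
```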

    [Simulator screenshots: iPhone 16 Pro]

    We’ve designed our app to support Dark Mode, but to ensure full compatibility, we’ll walk through some common tasks. We’ll also test the app with Smart Invert enabled—an accessibility feature that reverses interface colors.

    [Simulator screenshots: iPhone 16 Pro]

    Once Smart Invert is activated, every color is replaced by its opposite. This is something we have to avoid in some components of our app, such as images.

    AsyncImage(url: URL(string: "https://www.barcelo.com/guia-turismo/wp-content/uploads/2022/10/yakarta-monte-bromo-pal.jpg")) { image in
        image
            .resizable()
            .scaledToFit()
            .frame(width: 300, height: 200)
            .cornerRadius(12)
            .shadow(radius: 10)
    } placeholder: {
        ProgressView()
            .frame(width: 300, height: 200)
    }
    .accessibilityIgnoresInvertColors(isDarkMode)

    For images and videos, we have to prevent color inversion; we can achieve this by adding the accessibilityIgnoresInvertColors modifier.

    [Simulator screenshots: iPhone 16 Pro]

    It’s important to verify that media elements, like images and videos, aren’t unintentionally inverted. Once we’ve confirmed that our app maintains a predominantly dark background, we can confidently include Dark Interface in our Accessibility Nutrition Labels.

    Larger Text

    In iOS, “Larger Text” under Accessibility refers to the Dynamic Type system, which allows users to increase text size across apps for better readability. When building a nutrition label UI, developers should support these settings by using Dynamic Type-compatible fonts (like .body, .title, etc.), enabling automatic font scaling (adjustsFontForContentSizeCategory in UIKit or .dynamicTypeSize(...) in SwiftUI), and ensuring layouts adapt properly to larger sizes. This ensures the nutrition label remains readable and accessible to users with visual impairments, complying with best practices for inclusive app design.

    You can increase Dynamic Type in the simulator in two ways: by using Accessibility Inspector, or by opening the device Settings, Accessibility, Display & Text Size, Larger Text:

    [Simulator screenshot: iPhone 16 Pro]

    When we set the text to the largest size, we observe the following:

    [Simulator screenshot]

    Only the navigation title is resized; the rest of the content keeps the same size. This is not very accessible!

    When we run Accessibility Inspector on this screen, it also complains.

    Fix this by replacing fixed font sizes with Dynamic Type text styles (.largeTitle, .title, .title2, .title3, .headline, .subheadline, .body, .callout, .footnote, and .caption). Also remove any fixed frame sizes that could clip the contained text.

     func contentAccessible() -> some View {
            VStack(alignment: .leading, spacing: 8) {
                Text("Nutrition Facts")
                    .font(.title)
                    .bold()
                    .accessibilityAddTraits(.isHeader)
    
                Divider()
    
                HStack {
                    Text("Calories")
                        .font(.body)
                    Spacer()
                    Text("200")
                        .font(.body)
                }
    
                HStack {
                    Text("Total Fat")
                        .font(.body)
                    Spacer()
                    Text("8g")
                        .font(.body)
                }
    
                HStack {
                    Text("Sodium")
                        .font(.body)
                    Spacer()
                    Text("150mg")
                        .font(.body)
                }
            }
            .padding()
            .navigationTitle("Larger Text")
        }

    [Simulator screenshot]

    We can observe how the view behaves when Dynamic Type is adjusted from the minimum to the maximum size. Notice that when the text “Nutrition Facts” no longer fits horizontally, it wraps onto two lines. The device is limited in horizontal space, but never vertically, as vertical overflow is handled by implementing a scroll view.
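Putting the pieces together, wrapping the accessible content in a ScrollView lets it grow vertically at the largest accessibility sizes instead of being clipped. A minimal sketch of the idea:

```swift
import SwiftUI

struct NutritionFactsScreen: View {
    var body: some View {
        // The ScrollView handles vertical overflow when Dynamic Type
        // grows the text beyond the visible area.
        ScrollView {
            VStack(alignment: .leading, spacing: 8) {
                Text("Nutrition Facts")
                    .font(.title)  // text style, not a fixed size
                    .bold()
                    .accessibilityAddTraits(.isHeader)
                HStack {
                    Text("Calories").font(.body)
                    Spacer()
                    Text("200").font(.body)
                }
            }
            .padding()
        }
    }
}
```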

    Differentiate without color alone

    Let’s discuss color in design. It’s important to remember that not everyone perceives color the same way. Many apps rely on color—like red for errors or green for success—to convey status or meaning. However, users with color blindness might not be able to distinguish these cues. To ensure accessibility, always pair color with additional elements such as icons or text to communicate important information clearly to all users.

    Accessibility Inspector, as well as any color-blind person, will complain. To fix this, use a shape or icon in addition to the color.

    [Simulator screenshots: iPhone 16 Pro]
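One way to apply this fix in SwiftUI is to pair every color with an SF Symbol and a text label, so the state remains clear without color perception. A hypothetical sketch (the Status type and symbol choices are ours, not code from the sample project):

```swift
import SwiftUI

enum Status {
    case success, error

    // Each state carries a shape and a label in addition to a color,
    // so meaning does not depend on color alone.
    var symbolName: String {
        self == .success ? "checkmark.circle.fill" : "xmark.octagon.fill"
    }
    var label: String { self == .success ? "Success" : "Error" }
    var color: Color { self == .success ? .green : .red }
}

struct StatusBadge: View {
    let status: Status

    var body: some View {
        Label(status.label, systemImage: status.symbolName)
            .foregroundColor(status.color)
    }
}
```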

    Reduced motion

    Motion can enhance the user experience of an app. However, certain types of motion—such as zooming, rotating, or peripheral movement—can cause dizziness or nausea for people with vestibular sensitivity. If your app includes these kinds of motion effects, make them optional or provide alternative animations.

    Let’s review the code:

    struct ReducedMotionView: View {
        @Environment(\.accessibilityReduceMotion) var reduceMotion
        @State private var spin = false
        @State private var scale: CGFloat = 1.0
        @State private var moveOffset: CGFloat = -200
    
        var body: some View {
            NavigationView {
                ZStack {
                    // Background rotating spiral
                    Circle()
                        .strokeBorder(Color.purple, lineWidth: 10)
                        .frame(width: 300, height: 300)
                        .rotationEffect(.degrees(spin ? 360 : 0))
                        .animation(reduceMotion ? nil : .linear(duration: 3).repeatForever(autoreverses: false), value: spin)
    
                    // Scaling and bouncing circle
                    Circle()
                        .fill(Color.orange)
                        .frame(width: 100, height: 100)
                        .scaleEffect(scale)
                        .offset(x: reduceMotion ? 0 : moveOffset)
                        .animation(reduceMotion ? nil : .easeInOut(duration: 1).repeatForever(autoreverses: true), value: scale)
                }
                .onAppear {
                    if !reduceMotion {
                        spin = true
                        scale = 1.5
                        moveOffset = 200
                    }
                }
                .padding()
                .overlay(
                    Text(reduceMotion ? "Reduce Motion Enabled" : "Extreme Motion Enabled")
                        .font(.headline)
                        .padding()
                        .background(Color.black.opacity(0.7))
                        .foregroundColor(.white)
                        .cornerRadius(12)
                        .padding(),
                    alignment: .top
                )
            }
            .navigationTitle("Reduced motion")
        }
    }

    The ReducedMotionView SwiftUI view creates a visual demonstration of motion effects that adapts based on the user’s accessibility setting for reduced motion. It displays a rotating purple spiral in the background and an orange circle in the foreground that scales and moves horizontally. When the user has Reduce Motion disabled, the spiral continuously rotates and the orange circle animates back and forth while scaling; when Reduce Motion is enabled, all animations are disabled and the shapes remain static. A label at the top dynamically indicates whether motion effects are enabled or reduced, providing a clear visual contrast for accessibility testing.

    Reduce Motion accessibility is not about removing animations from your app, but about disabling them when the user has enabled the Reduce Motion device setting.


    Pending

    Yes, this post is not complete yet. There are two more families of Accessibility Nutrition Labels: Interaction and Media. I will cover them in a future post.

    Conclusions

    Apart from the benefits that accessibility provides to a significant group of people, let’s not forget that disabilities are diverse, and as we grow older, sooner or later we will likely need to use accessibility features ourselves. Even people without disabilities may, at some point, need to focus on information under challenging conditions—like poor weather—which can make interaction more difficult. It’s clear that this will affect us as iOS developers in how we design and implement user interfaces in our apps.

    You can find the source code we used for this post in the following GitHub repository.

    References

  • Seamless Text Input with Your Voice on iOS

    Seamless Text Input with Your Voice on iOS

    Most likely, you have faced a situation where you’re enjoying the seamless flow of an application—for instance, while making a train or hotel reservation. Then, suddenly—bam!—a never-ending form appears, disrupting the experience. I’m not saying that filling out such forms is irrelevant for the business—quite the opposite. However, as an app owner, you may notice in your analytics a significant drop in user conversions at this stage.

    In this post, I want to introduce a more seamless and user-friendly text input option to improve the experience of filling out multiple fields in a form.

    Base project

    To help you understand this topic better, we’ll start with a video presentation. Next, we’ll analyze the key parts of the code. You can also download the complete code from the repository linked below.

    To begin entering text, long-press the desired text field. When the bottom line turns orange, it indicates that speech-to-text mode has been activated. Release your finger once you see the text correctly transcribed. If the transcribed text is valid, the line will turn green; otherwise, it will turn red.

    Let’s dig into the code…

    The view is built with a language picker, which is a crucial feature. It allows you to select the language you will use later, especially when interacting with a form containing multiple text fields.

    struct VoiceRecorderView: View {
        @StateObject private var localeManager = appSingletons.localeManager
        @State var name: String = ""
        @State var surname: String = ""
        @State var age: String = ""
        @State var email: String = ""

        var body: some View {
            Form {
                Section {
                    Picker("Select language", selection: $localeManager.localeIdentifier) {
                        ForEach(localeManager.locales, id: \.self) { Text($0).tag($0) }
                    }
                    .pickerStyle(SegmentedPickerStyle())
                }

                Section {
                    TextFieldView(textInputValue: $name,
                                  placeholder: "Name:",
                                  invalidFormatMessage: "Text must be longer than 6 characters!") { textInputValue in
                        textInputValue.count > 6
                    }

                    TextFieldView(textInputValue: $surname,
                                  placeholder: "Surname:",
                                  invalidFormatMessage: "Text must be longer than 6 characters!") { textInputValue in
                        textInputValue.count > 6
                    }

                    TextFieldView(textInputValue: $age,
                                  placeholder: "Age:",
                                  invalidFormatMessage: "Age must be between 18 and 65") { textInputValue in
                        if let number = Int(textInputValue) {
                            return number >= 18 && number <= 65
                        }
                        return false
                    }
                }

                Section {
                    TextFieldView(textInputValue: $email,
                                  placeholder: "Email:",
                                  invalidFormatMessage: "Must be a valid email address") { textInputValue in
                        let emailRegex = #"^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$"#
                        let emailPredicate = NSPredicate(format: "SELF MATCHES %@", emailRegex)
                        return emailPredicate.evaluate(with: textInputValue)
                    }
                }
            }
            .padding()
        }
    }

    For every text field, we need a binding variable to hold the text field’s value, a placeholder for guidance, and an error message to display when the acceptance criteria function is not satisfied.

    When we examine the TextFieldView, we see that it is essentially a text field enhanced with additional features to improve user-friendliness.

    struct TextFieldView: View {
        
        @State private var isPressed = false
        
        @State private var borderColor = Color.gray
        @StateObject private var localeManager = appSingletons.localeManager
    
        @Binding var textInputValue: String
        let placeholder: String
        let invalidFormatMessage: String?
        var isValid: (String) -> Bool = { _ in true }
        
        var body: some View {
            VStack(alignment: .leading) {
                if !textInputValue.isEmpty {
                    Text(placeholder)
                        .font(.caption)
                }
                TextField(placeholder, text: $textInputValue)
                    .accessibleTextField(text: $textInputValue, isPressed: $isPressed)
                    .overlay(
                        Rectangle()
                            .frame(height: 2)
                            .foregroundColor(borderColor),
                        alignment: .bottom
                    )
                .onChange(of: textInputValue) { oldValue, newValue in
                        borderColor = getColor(text: newValue, isPressed: isPressed )
                }
                .onChange(of: isPressed) {
                        borderColor = getColor(text: textInputValue, isPressed: isPressed )
                }
                if !textInputValue.isEmpty,
                   !isValid(textInputValue),
                    let invalidFormatMessage {
                    Text(invalidFormatMessage)
                        .foregroundColor(Color.red)
                }
            }
        }
        
        func getColor(text: String, isPressed: Bool) -> Color {
            guard !isPressed else { return Color.orange }
            guard !text.isEmpty else { return Color.gray }
            return isValid(text) ? Color.green : Color.red
        }
        
    }

    The key point in the above code is the modifier .accessibleTextField, where all the magic of converting voice to text happens. We have encapsulated all speech-to-text functionality within this modifier.

    extension View {
        func accessibleTextField(text: Binding<String>, isPressed: Binding<Bool>) -> some View {
            self.modifier(AccessibleTextField(text: text, isPressed: isPressed))
        }
    }
    
    struct AccessibleTextField: ViewModifier {
        @StateObject private var viewModel = VoiceRecorderViewModel()
        
        @Binding var text: String
        @Binding var isPressed: Bool
        private let lock = NSLock()
        func body(content: Content) -> some View {
            content
                .onChange(of: viewModel.transcribedText) {
                    guard viewModel.transcribedText != "" else { return }
                    self.text = viewModel.transcribedText
                }
                .simultaneousGesture(
                    DragGesture(minimumDistance: 0)
                        .onChanged { _ in
                            lock.withLock {
                                if !isPressed {
                                    isPressed = true
                                    viewModel.startRecording(locale: appSingletons.localeManager.getCurrentLocale())
                                }
                            }
                            
                        }
                        .onEnded { _ in
                            
                            if isPressed {
                                lock.withLock {
                                    isPressed = false
                                    viewModel.stopRecording()
                                }
                            }
                        }
                )
        }
    }

    The voice-to-text functionality is implemented in the VoiceRecorderViewModel. In the view, it is controlled by detecting a long press from the user to start recording and releasing to stop the recording. The transcribed voice text is then forwarded upward via the text Binding attribute.

    Finally, here is the view model that handles the transcription:

    import Foundation
    import AVFoundation
    import Speech

    class VoiceRecorderViewModel: ObservableObject {
        @Published var transcribedText: String = ""
        @Published var isRecording: Bool = false

        private var audioRecorder: AVAudioRecorder?
        private let audioSession = AVAudioSession.sharedInstance()
        // Recreated on every recording: a request cannot be reused after endAudio().
        private var recognitionRequest = SFSpeechAudioBufferRecognitionRequest()
        private var recognitionTask: SFSpeechRecognitionTask?
        private var audioEngine = AVAudioEngine()

        var speechRecognizer: SFSpeechRecognizer?

        func startRecording(locale: Locale) {
            do {
                self.speechRecognizer = SFSpeechRecognizer(locale: locale)

                recognitionTask?.cancel()
                recognitionTask = nil
                recognitionRequest = SFSpeechAudioBufferRecognitionRequest()

                try audioSession.setCategory(.record, mode: .measurement, options: .duckOthers)
                try audioSession.setActive(true, options: .notifyOthersOnDeactivation)

                guard let recognizer = speechRecognizer, recognizer.isAvailable else {
                    transcribedText = "Speech recognition is not available for the selected language."
                    return
                }

                let inputNode = audioEngine.inputNode
                let recordingFormat = inputNode.outputFormat(forBus: 0)
                inputNode.installTap(onBus: 0, bufferSize: 1024, format: recordingFormat) { buffer, _ in
                    self.recognitionRequest.append(buffer)
                }

                audioEngine.prepare()
                try audioEngine.start()

                recognitionTask = recognizer.recognitionTask(with: recognitionRequest) { result, error in
                    if let result = result {
                        self.transcribedText = result.bestTranscription.formattedString
                    }
                }

                isRecording = true
            } catch {
                transcribedText = "Error starting the recording: \(error.localizedDescription)"
            }
        }

        func stopRecording() {
            audioEngine.stop()
            audioEngine.inputNode.removeTap(onBus: 0)
            recognitionRequest.endAudio()
            recognitionTask?.cancel()
            isRecording = false
        }
    }

    Key Components

    1. Properties:

      • @Published var transcribedText: Holds the real-time transcribed text, allowing SwiftUI views to bind and update dynamically.
      • @Published var isRecording: Indicates whether the application is currently recording.
      • audioRecorder, audioSession, recognitionRequest, recognitionTask, audioEngine, speechRecognizer: These manage audio recording and speech recognition.
    2. Speech Recognition Workflow:

      • SFSpeechRecognizer: Recognizes and transcribes speech from audio input for a specified locale.
      • SFSpeechAudioBufferRecognitionRequest: Provides an audio buffer for speech recognition tasks.
      • AVAudioEngine: Captures microphone input.
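One prerequisite the view model does not show is authorization: speech recognition and microphone access must both be granted before recording can start, and the corresponding usage descriptions must be declared in Info.plist. A minimal sketch of that step (the requestPermissions helper name is our own):

```swift
import Speech
import AVFoundation

// Requests speech-recognition and microphone permission up front.
// NSSpeechRecognitionUsageDescription and NSMicrophoneUsageDescription
// must be present in Info.plist, or the app will crash on request.
func requestPermissions(completion: @escaping (Bool) -> Void) {
    SFSpeechRecognizer.requestAuthorization { status in
        guard status == .authorized else {
            DispatchQueue.main.async { completion(false) }
            return
        }
        AVAudioSession.sharedInstance().requestRecordPermission { granted in
            DispatchQueue.main.async { completion(granted) }
        }
    }
}
```

Calling this once, for example when the form first appears, ensures startRecording never fails silently for lack of permission.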

    Conclusions

    I encourage you to download the project from the following GitHub repository and start playing with this great technology.

    References

    • Speech (Apple Developer Documentation)