Youssef Hammoud - 1 year ago
iOS Question

AVSpeechSynthesizer does not speak after using SFSpeechRecognizer

So I built a simple app that does speech recognition using SFSpeechRecognizer and displays the converted speech as text in a UITextView on the screen. Now I'm trying to make the phone speak that displayed text. For some reason it doesn't work: the AVSpeechSynthesizer speak function works only before SFSpeechRecognizer has been used. For instance, when the app launches it has some welcome text displayed in the UITextView, and if I tap the speak button, the phone speaks the welcome text. Then if I record (for speech recognition), the recognized speech is displayed in the UITextView. Now when I want the phone to speak that text, unfortunately it doesn't.

Here is the code:

import UIKit
import Speech
import AVFoundation

class ViewController: UIViewController, SFSpeechRecognizerDelegate, AVSpeechSynthesizerDelegate {

    @IBOutlet weak var textView: UITextView!
    @IBOutlet weak var microphoneButton: UIButton!

    private let speechRecognizer = SFSpeechRecognizer(locale: Locale.init(identifier: "en-US"))!

    private var recognitionRequest: SFSpeechAudioBufferRecognitionRequest?
    private var recognitionTask: SFSpeechRecognitionTask?
    private let audioEngine = AVAudioEngine()

    override func viewDidLoad() {
        super.viewDidLoad()

        microphoneButton.isEnabled = false

        speechRecognizer.delegate = self

        SFSpeechRecognizer.requestAuthorization { (authStatus) in

            var isButtonEnabled = false

            switch authStatus {
            case .authorized:
                isButtonEnabled = true

            case .denied:
                isButtonEnabled = false
                print("User denied access to speech recognition")

            case .restricted:
                isButtonEnabled = false
                print("Speech recognition restricted on this device")

            case .notDetermined:
                isButtonEnabled = false
                print("Speech recognition not yet authorized")
            }

            OperationQueue.main.addOperation() {
                self.microphoneButton.isEnabled = isButtonEnabled
            }
        }
    }

    @IBAction func speakTapped(_ sender: UIButton) {
        let string = self.textView.text
        let utterance = AVSpeechUtterance(string: string!)
        let synthesizer = AVSpeechSynthesizer()
        synthesizer.delegate = self
        synthesizer.speak(utterance)
    }

    @IBAction func microphoneTapped(_ sender: AnyObject) {
        if audioEngine.isRunning {
            audioEngine.stop()
            recognitionRequest?.endAudio()
            microphoneButton.isEnabled = false
            microphoneButton.setTitle("Start Recording", for: .normal)
        } else {
            startRecording()
            microphoneButton.setTitle("Stop Recording", for: .normal)
        }
    }

    func startRecording() {

        if recognitionTask != nil { //1
            recognitionTask?.cancel()
            recognitionTask = nil
        }

        let audioSession = AVAudioSession.sharedInstance() //2
        do {
            try audioSession.setCategory(AVAudioSessionCategoryRecord)
            try audioSession.setMode(AVAudioSessionModeMeasurement)
            try audioSession.setActive(true, with: .notifyOthersOnDeactivation)
        } catch {
            print("audioSession properties weren't set because of an error.")
        }

        recognitionRequest = SFSpeechAudioBufferRecognitionRequest() //3

        guard let inputNode = audioEngine.inputNode else {
            fatalError("Audio engine has no input node")
        } //4

        guard let recognitionRequest = recognitionRequest else {
            fatalError("Unable to create an SFSpeechAudioBufferRecognitionRequest object")
        } //5

        recognitionRequest.shouldReportPartialResults = true //6

        recognitionTask = speechRecognizer.recognitionTask(with: recognitionRequest, resultHandler: { (result, error) in //7

            var isFinal = false //8

            if result != nil {

                self.textView.text = result?.bestTranscription.formattedString //9
                isFinal = (result?.isFinal)!
            }

            if error != nil || isFinal { //10
                self.audioEngine.stop()
                inputNode.removeTap(onBus: 0)

                self.recognitionRequest = nil
                self.recognitionTask = nil

                self.microphoneButton.isEnabled = true
            }
        })

        let recordingFormat = inputNode.outputFormat(forBus: 0) //11
        inputNode.installTap(onBus: 0, bufferSize: 1024, format: recordingFormat) { (buffer, when) in
            self.recognitionRequest?.append(buffer)
        }

        audioEngine.prepare() //12

        do {
            try audioEngine.start()
        } catch {
            print("audioEngine couldn't start because of an error.")
        }

        textView.text = "Say something, I'm listening!"
    }

    func speechRecognizer(_ speechRecognizer: SFSpeechRecognizer, availabilityDidChange available: Bool) {
        if available {
            microphoneButton.isEnabled = true
        } else {
            microphoneButton.isEnabled = false
        }
    }
}

Answer Source

The problem is that when you start speech recognition, you set the audio session category to Record (AVAudioSessionCategoryRecord). No audio can be played, including speech synthesis, while the audio session category is Record.
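One way to address this (a sketch based on the code in the question, not a verified fix for every iOS version): reconfigure the shared AVAudioSession back to a playback-capable category before handing text to the synthesizer, for example inside speakTapped:

```swift
@IBAction func speakTapped(_ sender: UIButton) {
    // startRecording() left the session in the Record category, which mutes
    // all output; switch back to Playback before speaking.
    let audioSession = AVAudioSession.sharedInstance()
    do {
        try audioSession.setCategory(AVAudioSessionCategoryPlayback)
        try audioSession.setActive(true)
    } catch {
        print("audioSession properties weren't set because of an error.")
    }

    let utterance = AVSpeechUtterance(string: self.textView.text)
    let synthesizer = AVSpeechSynthesizer()
    synthesizer.delegate = self
    synthesizer.speak(utterance)
}
```

Alternatively, you could set the session category to AVAudioSessionCategoryPlayAndRecord once, so that recording and synthesis can coexist without switching categories back and forth.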
