Delegation and Recording L2.4.2

AVFoundation and AudioEngine

The AVAudioEngine is part of the AVFoundation framework.

[Image: diagram of the app layers, showing AVAudioEngine inside the AVFoundation framework]

Implementing AVFoundation and AVAudioRecorder with code breakdown

import UIKit
import AVFoundation // 1

class ViewController: UIViewController {

    // MARK: Outlets

    @IBOutlet weak var recordingLabel: UILabel!
    @IBOutlet weak var recordButton: UIButton!
    @IBOutlet weak var stopRecordingButton: UIButton!

    // MARK: Properties

    var audioRecorder: AVAudioRecorder! // 2

    @IBAction func recordAudio(_ sender: AnyObject) {
        recordingLabel.text = "Recording in progress"
        stopRecordingButton.isEnabled = true
        recordButton.isEnabled = false

        let dirPath = NSSearchPathForDirectoriesInDomains(.documentDirectory, .userDomainMask, true)[0] as String // 3
        let recordingName = "recordedVoice.wav"
        let pathArray = [dirPath, recordingName]
        let filePath = URL(string: pathArray.joined(separator: "/")) // 4

        let session = AVAudioSession.sharedInstance() // 5
        try! session.setCategory(AVAudioSessionCategoryPlayAndRecord,
                                 with: AVAudioSessionCategoryOptions.defaultToSpeaker) // 6

        try! audioRecorder = AVAudioRecorder(url: filePath!, settings: [:]) // 7
        audioRecorder.isMeteringEnabled = true // 8
        audioRecorder.prepareToRecord()
        audioRecorder.record()
    }
}
}
  1. This imports the framework that contains AVAudioRecorder. Without it, Xcode wouldn't know about any of the AVFoundation classes and the code would not compile.
  2. This property gives the view controller the ability to use and reference the audioRecorder in multiple places. This is useful because we will need to reference the audioRecorder in different functions (to begin and stop recording).
  3. Used to get the directory path. Specifically, this line gets the application's documents directory and stores it as a string in the dirPath constant.
  4. The directory path is then combined with a file name and stored in a constant.
  5. A constant of type AVAudioSession is set up using the AVAudioSession class. An AVAudioSession is what we need to either record or play back audio, and .sharedInstance() accesses a shared instance of the class that is set up by default once the app starts running and can be used with a minimal amount of setup. Digging deeper, we find that the AVAudioSession class is essentially an abstraction over the device's audio hardware. Since a device has only one set of audio hardware, there is only one instance of AVAudioSession, which is why we use the shared instance that is used across all apps on the device. (This idea of a shared instance is a very common pattern in Apple frameworks.)
  6. This sets up the session for playing and recording audio. It is part of a try! statement; the exclamation point indicates that no errors are handled if this line were to fail.
  7. The same is true here as with line 6: we are optimistically assuming that these lines will not fail for any reason.
  8. Here, and for the rest of the lines in the function, we enable metering, prepare for recording, and then kick off the recording in the last line.
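Item 2 above mentions stopping the recording from another function. A minimal sketch of that stop action, assuming the same outlets and recorder property as in the class above (the idle label text is a hypothetical choice):

```swift
@IBAction func stopRecording(_ sender: AnyObject) {
    recordButton.isEnabled = true
    stopRecordingButton.isEnabled = false
    recordingLabel.text = "Tap to Record" // hypothetical idle label text

    // Stop the recorder, then deactivate the shared audio session
    // now that we are done with the audio hardware.
    audioRecorder.stop()
    let audioSession = AVAudioSession.sharedInstance()
    try! audioSession.setActive(false)
}
```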

Setting up a segue to send audio files between view controllers

When sending a file from one view controller to another, we have two problems to think through:

  1. We need a way to pass the audio file from the first view controller to the second so it can play it back.

  2. What happens if we need to send a really large file and transition before it has been written out to storage? We need to make sure that we only move from the first view controller to the second once the AVAudioRecorder has finished saving the file.

To do this, we should trigger the segue in code rather than from a button push. From the document outline, control-drag from the first view controller to the second and select Show from the pop-up menu.

[Image: control-dragging between the two view controllers in the document outline]

Once the two view controllers are connected via a segue, we can have the storyboard perform the segue by calling the performSegue function. To do this, we need to give the segue a unique identifier that we can reference in code. To set this up, select the segue in the storyboard, open the Attributes Inspector, and add a case-sensitive identifier string.
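Assuming the identifier was set to "stopRecording" (a hypothetical name; use whatever string you entered in the Attributes Inspector), the segue can then be triggered from code in the first view controller:

```swift
// The identifier must match the Attributes Inspector string exactly
// (it is case sensitive). The sender parameter can carry data along
// to the next view controller.
performSegue(withIdentifier: "stopRecording", sender: audioRecorder.url)
```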

[Image: the segue identifier field in the Attributes Inspector]

Setting up Delegation

Delegation is the principle of one object assigning work to another object.

One way to think about it is the relationship between a manager and an employee, where work is passed from the manager to the employee: one object is getting the other to do work for it. In iOS, delegation can occur between any types of objects. In our case, the AVAudioRecorder does not know anything about our view controllers or even the app. It does, however, know that once it finishes recording it can send out a "message" that the recording is finished, at which point our delegate (RecordSoundsViewController) can perform whatever work it would like after receiving this message. In this way, we can use AVAudioRecorder as a tool just to record the audio, and it tells its delegate (RecordSoundsViewController) when it has finished recording so the delegate can perform some actions.

To implement delegation, we first need to tell the view controller that it will conform to the delegate protocol, in our case AVAudioRecorderDelegate. So we add it to the end of the class declaration, after the parent class.

[Image: the class declaration with AVAudioRecorderDelegate added after the parent class]
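In code, the conformance is just an addition to the class declaration (the class name here is the one used in this lesson):

```swift
// Conformance is declared after the superclass, separated by a comma.
class RecordSoundsViewController: UIViewController, AVAudioRecorderDelegate {
    // outlets, properties, and actions as before
}
```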

Specifically, what this means is that we will implement a function or functions described in the delegate's protocol so that our view controller can act as the delegate for AVAudioRecorder. Again, AVAudioRecorder does not know what classes you have in your app. However, if you say that your class conforms to the AVAudioRecorderDelegate protocol, then it knows it can call a specific set of functions in your class.

The specific functions have been defined in the protocol (in this case, the AVAudioRecorderDelegate protocol). This way the view controller class and the AVAudioRecorder are loosely coupled, and they can work together without having to know much about each other. A class can conform to as many delegate protocols as we want.

To set the class as the delegate of the AVAudioRecorder, we need to do so in code, as in the highlighted line.

[Image: the highlighted line of code setting the recorder's delegate]
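A short sketch of that assignment, placed right after the recorder is created inside recordAudio:

```swift
try! audioRecorder = AVAudioRecorder(url: filePath!, settings: [:])
// Make this view controller the recorder's delegate so it receives
// callbacks such as audioRecorderDidFinishRecording(_:successfully:).
audioRecorder.delegate = self
```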

Now, because this class conforms to the AVAudioRecorderDelegate protocol, Xcode knows that and will suggest functions available in that delegate as we type (i.e. audioRecorderDidFinishRecording), which we can use to make sure the audio file is saved before performing the segue.

To take a more in-depth look at what functions are available when implementing a delegate protocol, you can select the delegate, right-click, and choose Jump to Definition.

[Image: the Jump to Definition menu option]

From Jump to Definition you can see some of the declaration code, including audioRecorderDidFinishRecording, among other functions available through the delegate.

[Image: the AVAudioRecorderDelegate protocol declaration, including audioRecorderDidFinishRecording]

Sending audio files between view controllers

To summarize what we need to do:

  • First, we need to send the file path where the recorded audio was saved, not the file in its entirety. The file path is all that's needed to play a file back
  • Perform the segue once the audio file is saved and send the file path along to the next view controller
  • Inform the view controller that we are transitioning to that it will receive the URL for the recorded audio

By adding this code we can ensure that, if the audio was successfully saved, we perform the segue and send the file path along as a URL; otherwise, we print something to the console and leave the user in the dark.

[Image: the audioRecorderDidFinishRecording implementation that performs the segue]
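A sketch of that delegate method, assuming the hypothetical segue identifier "stopRecording" from earlier:

```swift
func audioRecorderDidFinishRecording(_ recorder: AVAudioRecorder, successfully flag: Bool) {
    if flag {
        // The recorder's url property is the file path we recorded to;
        // pass it along as the segue's sender.
        performSegue(withIdentifier: "stopRecording", sender: audioRecorder.url)
    } else {
        // No error handling beyond logging; the user is left in the dark.
        print("recording was not successful")
    }
}
```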

To prepare the view controller that will receive the audio file, we add the following.

[Image: the prepare(for:sender:) implementation in the first view controller]

  • Line 1 - we make sure it's the segue that we want
  • Line 2 - we get a reference to the view controller we will transition to from the destination property on the segue. Because this property is of type UIViewController and we know it's a PlaySoundsViewController (the name of the view controller class that we are transitioning to), we can downcast it to a PlaySoundsViewController using a forced downcast with "as!"
  • Line 3 - we grab the sender, which is the recordedAudioUrl
  • Line 4 - we set recordedAudioUrl on the PlaySoundsViewController
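Putting those four lines together, prepare(for:sender:) might look like this (the segue identifier and the recordedAudioUrl property name are assumptions carried over from this lesson):

```swift
override func prepare(for segue: UIStoryboardSegue, sender: Any?) {
    if segue.identifier == "stopRecording" {                              // 1
        let playSoundsVC = segue.destination as! PlaySoundsViewController // 2
        let recordedAudioUrl = sender as! URL                             // 3
        playSoundsVC.recordedAudioUrl = recordedAudioUrl                  // 4
    }
}
```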