Voice Recorder app in SwiftUI – #1 Implementing the Audio Recorder


Welcome to a new SwiftUI tutorial. In this article, we will create our own dictation app. We will learn how to record audio, how to save the audio files, and how to play them back. In this part, we'll implement the recorder itself and learn how to save and fetch the audio files. In the next one, we'll add the playback functionality and learn how to delete individual audio files.

This is what the finished app will look like:

Preparing the audio recorder 🎤

After creating a new SwiftUI project, we start by preparing our audio recorder. We will create an ObservableObject for this, which we will use to record and save the audio. So let's create a new file (File > New > File…), select Swift File, and name it AudioRecorder.

Besides the SwiftUI and Combine frameworks, we also have to import the AVFoundation framework, which provides the recording functionality. We then create a class called AudioRecorder that conforms to the ObservableObject protocol.

import Foundation
import SwiftUI
import Combine
import AVFoundation

class AudioRecorder: ObservableObject {

}

To notify observing views about changes, for example when a recording is started, we need a PassthroughSubject.

class AudioRecorder: ObservableObject {
    
    let objectWillChange = PassthroughSubject<AudioRecorder, Never>()
    
}

Then we declare an AVAudioRecorder property within our AudioRecorder. With its help, we will run the recording sessions later.

class AudioRecorder: ObservableObject {
//...
    
var audioRecorder: AVAudioRecorder!
}

Our AudioRecorder should also keep track of whether a recording is in progress. For this purpose, we use a boolean property. Whenever this property changes, for example when a recording is finished, we update subscribing views using our objectWillChange publisher.

var recording = false {
        didSet {
            objectWillChange.send(self)
        }
    }

In order to use the app on a physical device, we will need the user's permission to access their microphone.

To get it, we go to the Info.plist file of our Xcode project and add a new entry with the key "Privacy – Microphone Usage Description" (the raw key is NSMicrophoneUsageDescription). The value is the message shown to the user explaining why we need this permission, for example: "We need access to your microphone to conduct recording sessions."
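
With this entry in place, iOS automatically shows the permission prompt the first time a recording is started. If you prefer to ask up front, you could also request the permission explicitly. The following is just a minimal sketch of this optional step (the function name requestMicrophoneAccess is only for illustration and not part of the tutorial's code):

import AVFoundation

// Optional: ask for microphone access explicitly instead of waiting
// for the automatic system prompt on the first recording.
func requestMicrophoneAccess() {
    AVAudioSession.sharedInstance().requestRecordPermission { granted in
        if granted {
            print("Microphone access granted")
        } else {
            print("Microphone access denied")
        }
    }
}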

After setting up our AudioRecorder and asking for the necessary permissions, we can now start designing the interface for our app!

Designing the voice recorder 👨‍🎨

We will use the standard ContentView file to set up the interface for our app. The ContentView will need to access an AudioRecorder instance, so we declare a corresponding ObservedObject.

struct ContentView: View {
    
    @ObservedObject var audioRecorder: AudioRecorder
    
    var body: some View {
        //...
    }
}

We need to initialize an AudioRecorder instance for our previews struct…

struct ContentView_Previews: PreviewProvider {
    static var previews: some View {
        ContentView(audioRecorder: AudioRecorder())
    }
}

… as well as for the scene(_:willConnectTo:options:) function in SceneDelegate.swift, which uses ContentView as the root view at app launch.

let contentView = ContentView(audioRecorder: AudioRecorder())
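
For context, this line sits inside the scene(_:willConnectTo:options:) method that Xcode's SwiftUI template generates in SceneDelegate.swift. Only the contentView line changes compared to the template; your generated code may differ slightly:

func scene(_ scene: UIScene, willConnectTo session: UISceneSession, options connectionOptions: UIScene.ConnectionOptions) {
    // Hand an AudioRecorder instance to the root ContentView
    let contentView = ContentView(audioRecorder: AudioRecorder())

    if let windowScene = scene as? UIWindowScene {
        let window = UIWindow(windowScene: windowScene)
        window.rootViewController = UIHostingController(rootView: contentView)
        self.window = window
        window.makeKeyAndVisible()
    }
}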

Next, we replace the standard “Hello World” Text with a VStack.

var body: some View {
        VStack {
            
        }
    }

When the audioRecorder is not recording, we want to present a button that starts a recording session. If a recording is in progress, we want to present a button that stops it instead.

VStack {
            if audioRecorder.recording == false {
                Button(action: {print("Start recording")}) {
                    Image(systemName: "circle.fill")
                        .resizable()
                        .aspectRatio(contentMode: .fill)
                        .frame(width: 100, height: 100)
                        .clipped()
                        .foregroundColor(.red)
                        .padding(.bottom, 40)
                }
            } else {
                Button(action: {print("Stop recording")}) {
                    Image(systemName: "stop.fill")
                        .resizable()
                        .aspectRatio(contentMode: .fill)
                        .frame(width: 100, height: 100)
                        .clipped()
                        .foregroundColor(.red)
                        .padding(.bottom, 40)
                }
            }
        }

Above the start/stop button, we would like to display the already created recordings in a list. Therefore, we create a new file (File > New > File…), select SwiftUI View, and call it RecordingsList. In this view, we insert an empty list for now. Later we will fill this list with the saved recordings. At this point, we also give the RecordingsList an ObservedObject property for the AudioRecorder, since we will need it later on.

import SwiftUI

struct RecordingsList: View {
    
    @ObservedObject var audioRecorder: AudioRecorder
    
    var body: some View {
        List {
            Text("Empty list")
        }
    }
}

struct RecordingsList_Previews: PreviewProvider {
    static var previews: some View {
        RecordingsList(audioRecorder: AudioRecorder())
    }
}

We can now insert the RecordingsList into our ContentView above the start/stop button and use the AudioRecorder instance of the ContentView as the RecordingsList‘s audioRecorder.

VStack {
            RecordingsList(audioRecorder: audioRecorder)
            //...
        }

Finally, we embed our ContentView in a NavigationView and provide it with a navigation bar.

NavigationView {
            VStack {
                //...
            }
                .navigationBarTitle("Voice recorder")
        }
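
Putting the pieces together, the body of our ContentView should now look like this:

var body: some View {
    NavigationView {
        VStack {
            RecordingsList(audioRecorder: audioRecorder)
            if audioRecorder.recording == false {
                Button(action: {print("Start recording")}) {
                    Image(systemName: "circle.fill")
                        .resizable()
                        .aspectRatio(contentMode: .fill)
                        .frame(width: 100, height: 100)
                        .clipped()
                        .foregroundColor(.red)
                        .padding(.bottom, 40)
                }
            } else {
                Button(action: {print("Stop recording")}) {
                    Image(systemName: "stop.fill")
                        .resizable()
                        .aspectRatio(contentMode: .fill)
                        .frame(width: 100, height: 100)
                        .clipped()
                        .foregroundColor(.red)
                        .padding(.bottom, 40)
                }
            }
        }
            .navigationBarTitle("Voice recorder")
    }
}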

Your preview should now look like this:

Starting a record session ⏺

As mentioned before, we use our AudioRecorder to start, end, and save recordings.

Let’s begin with implementing a function to start the audio recording as soon as the user taps on the record button. We call this function startRecording.

class AudioRecorder: ObservableObject {
    
    //...
    
    func startRecording() {
        
    }
    
}

Within this function, we first retrieve the shared audio session provided by the AVFoundation framework:

func startRecording() {
        let recordingSession = AVAudioSession.sharedInstance()
    }

Then we set the category of our recording session and activate it. If this fails, we print a corresponding error.

do {
            try recordingSession.setCategory(.playAndRecord, mode: .default)
            try recordingSession.setActive(true)
        } catch {
            print("Failed to set up recording session")
        }

Then we specify the location where the recording should be saved.

let documentPath = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0]

The file should be named after the date and time of the recording and have the .m4a format.

let audioFilename = documentPath.appendingPathComponent("\(Date().toString(dateFormat: "dd-MM-YY_'at'_HH:mm:ss")).m4a")

The toString function isn't implemented yet. To change this, create a new Swift file called Extensions. Then add the following extension for the Date class:

extension Date {
    func toString(dateFormat format: String) -> String {
        let dateFormatter = DateFormatter()
        dateFormatter.dateFormat = format
        return dateFormatter.string(from: self)
    }
}
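
For example, a call like the one below returns a string such as "21-04-20_at_16:03:27" (the exact value naturally depends on the current date and time):

let fileName = Date().toString(dateFormat: "dd-MM-YY_'at'_HH:mm:ss")
// e.g. "21-04-20_at_16:03:27"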

After that, we will define some settings for our recording…

func startRecording() {
        //...
        
        let settings = [
            AVFormatIDKey: Int(kAudioFormatMPEG4AAC),
            AVSampleRateKey: 12000,
            AVNumberOfChannelsKey: 1,
            AVEncoderAudioQualityKey: AVAudioQuality.high.rawValue
        ]
    }

…and start the recording with our audioRecorder property! Then we inform our ContentView that the recording is running so that it can update itself and display the stop button instead of the start button.

do {
            audioRecorder = try AVAudioRecorder(url: audioFilename, settings: settings)
            audioRecorder.record()

            recording = true
        } catch {
            print("Could not start recording")
        }
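
The complete startRecording function should now look like this:

func startRecording() {
    let recordingSession = AVAudioSession.sharedInstance()

    do {
        try recordingSession.setCategory(.playAndRecord, mode: .default)
        try recordingSession.setActive(true)
    } catch {
        print("Failed to set up recording session")
    }

    // Save the file in the documents directory, named after the current date and time
    let documentPath = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0]
    let audioFilename = documentPath.appendingPathComponent("\(Date().toString(dateFormat: "dd-MM-YY_'at'_HH:mm:ss")).m4a")

    let settings = [
        AVFormatIDKey: Int(kAudioFormatMPEG4AAC),
        AVSampleRateKey: 12000,
        AVNumberOfChannelsKey: 1,
        AVEncoderAudioQualityKey: AVAudioQuality.high.rawValue
    ]

    do {
        audioRecorder = try AVAudioRecorder(url: audioFilename, settings: settings)
        audioRecorder.record()

        recording = true
    } catch {
        print("Could not start recording")
    }
}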

We can now call this function via the start button of our ContentView.

Button(action: {self.audioRecorder.startRecording()}) {
                        Image(systemName: "circle.fill")
                            //...
                    }

Next, we implement the function to end a recording session. We name this function stopRecording. In it, we simply stop the recording session of our audioRecorder and inform all observing views by setting the recording variable back to false.

class AudioRecorder: ObservableObject {
    
    //...
    
    func stopRecording() {
        audioRecorder.stop()
        recording = false
    }
    
}

We call this function from our stop button:

Button(action: {self.audioRecorder.stopRecording()}) {
                        Image(systemName: "stop.fill")
                            //...
                    }

We can now run our app and tap the record button to start a recording session. Tapping stop ends the session. However, so far we don't see the new recording in our RecordingsList.

However, we can manually check whether the recording was stored in our app's documents directory. Note: this is only possible if the app has been run on a physical device and the device is still connected! To do this, we go to Xcode's menu bar and select "Window" and then "Devices and Simulators". Now select the device that you used for the recording. Then select the VoiceRecorder app and click on the gear symbol.

Then click on "Download Container". Right-click the downloaded file and select "Show Package Contents". Under AppData > Documents, you should now be able to see and play the file you just created!

Next, we’ll see how we can display the saved files directly in our app!

Fetching the saved recordings 🎵⬇️

We need the following information for each recording: when was it created (in order to sort the recordings) and at which file URL can we find it? For this purpose, we create a suitable data model.

Create a new file (File > New > File…), select Swift File, and name it RecordingDataModel.

In this file, we declare a struct with the corresponding properties:

struct Recording {
    let fileURL: URL
    let createdAt: Date
}

Back to our AudioRecorder. We can now create an array to hold the recordings.

 //...
    
    var audioRecorder: AVAudioRecorder!
    
    var recordings = [Recording]()
    
    var recording = false {
        didSet {
            objectWillChange.send(self)
        }
    }
    
    //...

Then we implement a function called fetchRecordings to access the stored recordings.

class AudioRecorder: ObservableObject {
    
    //...
    
    func stopRecording() {
        audioRecorder.stop()
        recording = false
    }
    
    func fetchRecordings() {
        
    }
    
}

Every time we fetch the recordings, we first have to empty our recordings array to avoid displaying recordings multiple times. Then we access the documents folder where the audio files are located and loop through all of them.

func fetchRecordings() {
        recordings.removeAll()
        
        let fileManager = FileManager.default
        let documentDirectory = fileManager.urls(for: .documentDirectory, in: .userDomainMask)[0]
        let directoryContents = try! fileManager.contentsOfDirectory(at: documentDirectory, includingPropertiesForKeys: nil)
        for audio in directoryContents {
            
        }
    }

In addition to the file path of a recording, we also need to know when it was created. For this purpose, we create a new Swift file and call it Helper. In it, we add the following function:

func getCreationDate(for file: URL) -> Date {
    if let attributes = try? FileManager.default.attributesOfItem(atPath: file.path) as [FileAttributeKey: Any],
        let creationDate = attributes[FileAttributeKey.creationDate] as? Date {
        return creationDate
    } else {
        return Date()
    }
}

This function reads the attributes of the file at the given URL and returns its creation date. If this fails, we simply return the current date.

In the fetchRecordings‘ for-in loop we can now use this function for the respective audio recording. We then create one Recording instance per audio file and add it to our recordings array.

for audio in directoryContents {
            let recording = Recording(fileURL: audio, createdAt: getCreationDate(for: audio))
            recordings.append(recording)
        }

Then we sort the recordings array by the creation date of its items and finally update all observing views, especially our RecordingsList.

func fetchRecordings() {
        //...
        
        recordings.sort(by: { $0.createdAt.compare($1.createdAt) == .orderedAscending})
        
        objectWillChange.send(self)
    }
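
Altogether, the fetchRecordings function now looks like this:

func fetchRecordings() {
    recordings.removeAll()

    let fileManager = FileManager.default
    let documentDirectory = fileManager.urls(for: .documentDirectory, in: .userDomainMask)[0]
    let directoryContents = try! fileManager.contentsOfDirectory(at: documentDirectory, includingPropertiesForKeys: nil)
    for audio in directoryContents {
        let recording = Recording(fileURL: audio, createdAt: getCreationDate(for: audio))
        recordings.append(recording)
    }

    recordings.sort(by: { $0.createdAt.compare($1.createdAt) == .orderedAscending })

    objectWillChange.send(self)
}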

The fetchRecordings function should be called every time a new recording is completed.

func stopRecording() {
        audioRecorder.stop()
        recording = false
        
        fetchRecordings()
    }

The function should also be called when the app, and thus the AudioRecorder, is launched. For this, we override the init function of the AudioRecorder accordingly. To make this work, our AudioRecorder also needs to inherit from NSObject.

class AudioRecorder: NSObject, ObservableObject {
    
    override init() {
        super.init()
        fetchRecordings()
    }
    
    //...
}
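
At this point, our AudioRecorder class should look roughly like this (method bodies collapsed):

class AudioRecorder: NSObject, ObservableObject {

    override init() {
        super.init()
        fetchRecordings()
    }

    let objectWillChange = PassthroughSubject<AudioRecorder, Never>()

    var audioRecorder: AVAudioRecorder!

    var recordings = [Recording]()

    var recording = false {
        didSet {
            objectWillChange.send(self)
        }
    }

    func startRecording() {
        //...
    }

    func stopRecording() {
        //...
    }

    func fetchRecordings() {
        //...
    }
}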

Displaying the recordings 👁

Our RecordingsList should display one row for each stored recording. Therefore, we add a RecordingRow view below our RecordingsList struct.

struct RecordingRow: View {
    
    var body: some View {
        
    }
}

Each row gets assigned the URL of its particular audio file. Within an HStack, we then display the file's name in a Text view and push it to the left side with the help of a Spacer.

struct RecordingRow: View {
    
    var audioURL: URL
    
    var body: some View {
        HStack {
            Text("\(audioURL.lastPathComponent)")
            Spacer()
        }
    }
}

In our RecordingsList, we add one RecordingRow for each object in the recordings array of the audioRecorder.

var body: some View {
        List {
            ForEach(audioRecorder.recordings, id: \.createdAt) { recording in
                RecordingRow(audioURL: recording.fileURL)
            }
        }
    }

When we run our app now, we see all saved recordings in the list above the record button. When we complete a new recording, the fetchRecordings function gets called, which clears the recordings array and refills it, including the newly saved recording.

Conclusion 🎊

Awesome! In this part, we learned how to record and save audio using an AVAudioRecorder. We also saw how to read out the already saved recordings and display them sorted by their creation dates.

In the next part, we will add the ability to play back the saved recordings. Additionally, we will learn how to delete recordings!

You can look up the current progress of the project here!

I hope you enjoyed this tutorial! If you want to learn more about SwiftUI, check out our other tutorials! Also make sure you follow us on Instagram and subscribe to our newsletter to not miss any updates, tutorials and tips about SwiftUI and more!
