<!-- go/cmark -->
<!--* freshness: {owner: 'henrika' reviewed: '2021-04-12'} *-->
# Audio Device Module (ADM)
## Overview
The ADM (AudioDeviceModule) is responsible for driving input (microphone) and
output (speaker) audio in WebRTC, and its API is defined in [audio_device.h][19].
Main functions of the ADM are:
* Initialization and termination of native audio libraries.
* Registration of an [AudioTransport object][16] which handles audio callbacks
for audio in both directions.
* Device enumeration and selection (only for Linux, Windows and Mac OSX).
* Start/Stop physical audio streams:
  * Recording audio from the selected microphone, and
  * playing out audio on the selected speaker.
* Level control of the active audio streams.
* Control of built-in audio effects (Acoustic Echo Cancellation (AEC),
  Automatic Gain Control (AGC) and Noise Suppression (NS)) for Android and iOS.
ADM implementations reside at two different locations in the WebRTC repository:
`/modules/audio_device/` and `/sdk/`. The latest implementations for [iOS][20]
and [Android][21] can be found under `/sdk/`. `/modules/audio_device/` contains
older versions for mobile platforms and also implementations for desktop
platforms such as [Linux][22], [Windows][23] and [Mac OSX][24]. This document
focuses on the parts in `/modules/audio_device/`, but implementation-specific
details such as threading models are omitted to keep the descriptions as simple
as possible.
By default, the ADM in WebRTC is created in [`WebRtcVoiceEngine::Init`][1] but
an external implementation can also be injected using
[`rtc::CreatePeerConnectionFactory`][25]. An example of where an external ADM is
injected can be found in [PeerConnectionInterfaceTest][26] where a so-called
[fake ADM][29] is utilized to avoid hardware dependency in a gtest. Clients can
also inject their own ADMs in situations where functionality is needed that is
not provided by the default implementations.
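As a rough illustration of the injection path, an external ADM is passed as the
`default_adm` argument of the factory function declared in
`api/create_peerconnection_factory.h` (exact signatures vary between WebRTC
revisions, and `CreateFakeAdm()` below is a hypothetical stand-in for any
custom `AudioDeviceModule` implementation):
```
#include "api/audio_codecs/builtin_audio_decoder_factory.h"
#include "api/audio_codecs/builtin_audio_encoder_factory.h"
#include "api/create_peerconnection_factory.h"
#include "api/video_codecs/builtin_video_decoder_factory.h"
#include "api/video_codecs/builtin_video_encoder_factory.h"
#include "modules/audio_device/include/audio_device.h"

// CreateFakeAdm() is hypothetical; any AudioDeviceModule
// implementation (e.g. a hardware-free test double) fits here.
rtc::scoped_refptr<webrtc::AudioDeviceModule> adm = CreateFakeAdm();
rtc::scoped_refptr<webrtc::PeerConnectionFactoryInterface> factory =
    webrtc::CreatePeerConnectionFactory(
        /*network_thread=*/nullptr, /*worker_thread=*/nullptr,
        /*signaling_thread=*/nullptr,
        adm,  // Injected instead of the default internal ADM.
        webrtc::CreateBuiltinAudioEncoderFactory(),
        webrtc::CreateBuiltinAudioDecoderFactory(),
        webrtc::CreateBuiltinVideoEncoderFactory(),
        webrtc::CreateBuiltinVideoDecoderFactory(),
        /*audio_mixer=*/nullptr, /*audio_processing=*/nullptr);
```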
## Background
This section contains a historical background of the ADM API.
The ADM interface is old and has undergone many changes over the years. It used
to be much more granular but it still contains more than 50 methods and is
implemented on several different hardware platforms.
Some APIs are not implemented on all platforms, and functionality can be spread
out differently between the methods.
The most up-to-date implementations of the ADM interface are for [iOS][27] and
for [Android][28]. The desktop versions have not been kept up to date to the
same extent, and more work is also needed to improve their performance and
stability.
## WebRtcVoiceEngine
[`WebRtcVoiceEngine`][2] does not utilize all methods of the ADM but it still
serves as the best example of its architecture and how to use it. For a more
detailed view of all methods in the ADM interface, see [ADM unit tests][3].
Assuming that an external ADM implementation is not injected, a default - or
internal - ADM is created in [`WebRtcVoiceEngine::Init`][1] using
[`AudioDeviceModule::Create`][4].
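In recent revisions of the API, [`AudioDeviceModule::Create`][4] takes the
audio layer to use and a `TaskQueueFactory`; a minimal sketch of creating the
platform-default ADM could look like this:
```
#include <memory>

#include "api/task_queue/default_task_queue_factory.h"
#include "modules/audio_device/include/audio_device.h"

// Create the default (platform-specific) ADM. kPlatformDefaultAudio
// selects the native audio backend for the current platform.
std::unique_ptr<webrtc::TaskQueueFactory> task_queue_factory =
    webrtc::CreateDefaultTaskQueueFactory();
rtc::scoped_refptr<webrtc::AudioDeviceModule> adm =
    webrtc::AudioDeviceModule::Create(
        webrtc::AudioDeviceModule::kPlatformDefaultAudio,
        task_queue_factory.get());
```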
Basic initialization is done using a utility method called
[`adm_helpers::Init`][5] which calls fundamental ADM APIs like the following
(a sketch of this sequence follows the list):
* [`AudioDeviceModule::Init`][6] - initializes the native audio parts required
  for each platform.
* [`AudioDeviceModule::SetPlayoutDevice`][7] - specifies which speaker to use
  for playing out audio, using an `index` retrieved by the corresponding
  enumeration method [`AudioDeviceModule::PlayoutDeviceName`][8].
* [`AudioDeviceModule::SetRecordingDevice`][9] - specifies which microphone to
  use for recording audio, using an `index` retrieved by the corresponding
  enumeration method [`AudioDeviceModule::RecordingDeviceName`][10].
* [`AudioDeviceModule::InitSpeaker`][11] - sets up the parts of the ADM needed
  to use the selected output device.
* [`AudioDeviceModule::InitMicrophone`][12] - sets up the parts of the ADM
  needed to use the selected input device.
* [`AudioDeviceModule::SetStereoPlayout`][13] - enables playout in stereo if
  the selected audio device supports it.
* [`AudioDeviceModule::SetStereoRecording`][14] - enables recording in stereo
  if the selected audio device supports it.
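Put together, the playout half of this sequence might look roughly like the
minimal sketch below. It is based on the [audio_device.h][19] API only; error
handling and the symmetric recording half (`RecordingDeviceName`,
`SetRecordingDevice`, `InitMicrophone`, `SetStereoRecording`) are omitted:
```
#include "modules/audio_device/include/audio_device.h"

// Minimal sketch: initialize the ADM, enumerate output devices,
// select one and prepare it for stereo playout where supported.
void InitPlayoutSide(webrtc::AudioDeviceModule* adm) {
  adm->Init();  // Initialize the native audio parts.

  // Enumerate the available output devices.
  char name[webrtc::kAdmMaxDeviceNameSize];
  char guid[webrtc::kAdmMaxGuidSize];
  const int16_t num_devices = adm->PlayoutDevices();
  for (int16_t i = 0; i < num_devices; ++i) {
    // `name` receives a human-readable label for device `i`.
    adm->PlayoutDeviceName(i, name, guid);
  }

  // Select a device by index (here simply the first one) and set up
  // the output side of the ADM.
  adm->SetPlayoutDevice(0);
  adm->InitSpeaker();

  // Enable stereo playout only if the selected device supports it.
  bool stereo_available = false;
  adm->StereoPlayoutIsAvailable(&stereo_available);
  adm->SetStereoPlayout(stereo_available);
}
```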
[`WebRtcVoiceEngine::Init`][1] also calls
[`AudioDeviceModule::RegisterAudioCallback`][15] to register an existing
[AudioTransport][16] implementation which handles audio callbacks in both
directions and therefore serves as the bridge between the native ADM and the
upper WebRTC layers.
Recorded audio samples are delivered from the ADM to the `WebRtcVoiceEngine`
(which owns the `AudioTransport` object) via
[`AudioTransport::RecordedDataIsAvailable`][17]:
```
int32_t RecordedDataIsAvailable(const void* audioSamples,
                                size_t nSamples,
                                size_t nBytesPerSample,
                                size_t nChannels,
                                uint32_t samplesPerSec,
                                uint32_t totalDelayMS,
                                int32_t clockDrift,
                                uint32_t currentMicLevel,
                                bool keyPressed,
                                uint32_t& newMicLevel)
```
Decoded audio samples ready to be played out are delivered by the
`WebRtcVoiceEngine` to the ADM, via [`AudioTransport::NeedMorePlayData`][18]:
```
int32_t NeedMorePlayData(size_t nSamples,
                         size_t nBytesPerSample,
                         size_t nChannels,
                         uint32_t samplesPerSec,
                         void* audioSamples,
                         size_t& nSamplesOut,
                         int64_t* elapsed_time_ms,
                         int64_t* ntp_time_ms)
```
using regular interleaving of channels within each sample.
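To make the callback directions concrete, here is a minimal sketch of an
`AudioTransport` implementation. Only the two methods discussed above are
shown; the real interface declares further pure virtual methods, so a concrete
class would have to override those as well:
```
#include <cstring>

#include "modules/audio_device/include/audio_device_defines.h"

// Sketch of an AudioTransport sink/source. The ADM calls
// RecordedDataIsAvailable with each recorded 10 ms buffer and
// NeedMorePlayData whenever it needs a 10 ms buffer to play out.
class SketchAudioTransport : public webrtc::AudioTransport {
 public:
  int32_t RecordedDataIsAvailable(const void* audioSamples,
                                  size_t nSamples,
                                  size_t nBytesPerSample,
                                  size_t nChannels,
                                  uint32_t samplesPerSec,
                                  uint32_t totalDelayMS,
                                  int32_t clockDrift,
                                  uint32_t currentMicLevel,
                                  bool keyPressed,
                                  uint32_t& newMicLevel) override {
    // Hand the interleaved recorded buffer to the send pipeline here.
    newMicLevel = currentMicLevel;  // No mic-level change requested.
    return 0;
  }

  int32_t NeedMorePlayData(size_t nSamples,
                           size_t nBytesPerSample,
                           size_t nChannels,
                           uint32_t samplesPerSec,
                           void* audioSamples,
                           size_t& nSamplesOut,
                           int64_t* elapsed_time_ms,
                           int64_t* ntp_time_ms) override {
    // Fill `audioSamples` with interleaved PCM; silence in this sketch.
    std::memset(audioSamples, 0, nSamples * nBytesPerSample);
    nSamplesOut = nSamples;
    return 0;
  }
};
```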
`WebRtcVoiceEngine` also owns an [`AudioState`][30] member, and this class is
used as a helper to start and stop audio to and from the ADM. To initialize
and start recording, it calls:
* [`AudioDeviceModule::InitRecording`][31]
* [`AudioDeviceModule::StartRecording`][32]
and to initialize and start playout:
* [`AudioDeviceModule::InitPlayout`][33]
* [`AudioDeviceModule::StartPlayout`][34]
Finally, the corresponding stop methods [`AudioDeviceModule::StopRecording`][35]
and [`AudioDeviceModule::StopPlayout`][36] are called, followed by
[`AudioDeviceModule::Terminate`][37].
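Putting it all together, the full lifecycle described in this section can be
summarized with the following minimal sketch, assuming `adm` and `transport`
were created as outlined above (return codes are ignored for brevity):
```
#include "modules/audio_device/include/audio_device.h"

// Register the callback bridge, run a capture/playout session and
// shut the ADM down again.
void RunAudioSession(webrtc::AudioDeviceModule* adm,
                     webrtc::AudioTransport* transport) {
  adm->RegisterAudioCallback(transport);

  adm->InitRecording();
  adm->StartRecording();
  adm->InitPlayout();
  adm->StartPlayout();

  // ... audio now flows through the AudioTransport callbacks ...

  adm->StopRecording();
  adm->StopPlayout();
  adm->Terminate();
}
```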