Anton Anton - 1 year ago
Android Question

Android AudioTrack.write() returns after playing the whole buffer

This question is about AudioTrack.write(). The Android documentation says that:

In streaming mode, the write will normally block until all the data has been enqueued for playback, and will return a full transfer count.

However, in my code the write() method seems to block until the whole buffer has been played through the loudspeaker. Therefore, for example, calling the stop() method afterwards or writing more data is not possible until playback finishes.

The AudioTrack is initialized by (the channel and encoding arguments were lost when posting; mono 16-bit PCM is assumed here, matching the short[] buffer):

int mBufferSize = AudioTrack.getMinBufferSize(44100,
        AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT);
AudioTrack mAudioTrack = new AudioTrack(AudioManager.STREAM_MUSIC, 44100,
        AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT,
        mBufferSize, AudioTrack.MODE_STREAM);
int duration = 44100 * 5; // 5 seconds of samples at 44100 Hz
short[] mBuffer = new short[duration];

In onCreate:

for (int i = 0; i < wavelength; i++) {
    waveform[i] = Math.sin(((double) i) / wavelength * 2 * Math.PI - Math.PI);
}

On a button click, this is called:

private void playSound(double frequency, int duration) {
    int idx = 0;
    for (int i = 0; i < duration - 1; i++) {
        idx = idx + (int) Math.ceil(frequency);
        if (idx > wavelength - 1)
            idx = idx % wavelength;
        mBuffer[i] = (short) (waveform[idx] * Short.MAX_VALUE);
    }
    long startTime = System.currentTimeMillis();
    int ret = mAudioTrack.write(mBuffer, 0, mBuffer.length);
    long runtime = System.currentTimeMillis() - startTime;
}

The timestamps show that the write() call takes exactly 5 seconds, which is the length of the audio clip, and I can't believe the transfer time alone would be exactly the same. I want to generate more data to play (perhaps at another frequency) while the previous data is still playing. I know some developers use multiple threads for this (I have no experience with threading, so I don't know how to do it), but the documentation suggests it should also be possible this way...

Answer Source

A call to AudioTrack.write(...) blocks until all of the supplied data has been copied into the AudioTrack's internal streaming buffer. That buffer's size (mBufferSize above) is set to AudioTrack.getMinBufferSize(...), so it is small, typically holding only a fraction of a second of audio.

The 5 seconds of audio data that you want to enqueue will not fit into such a small buffer, so AudioTrack.write(...) must repeatedly refill the buffer as playback drains it. Before it can return, it must enqueue the last part of your audio data, and before it can do that, it must wait until everything but the last mBufferSize worth of audio has been played.
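You can make this chunk-by-chunk behavior explicit by slicing the big buffer yourself and calling write() once per slice; each call then blocks only until its slice fits. The slicing arithmetic below is plain Java and stands alone; the AudioTrack calls it would drive are shown only in comments, and the 4410-sample chunk size (about 100 ms at 44100 Hz) is an arbitrary choice for illustration.

```java
// Sketch: split a large sample buffer into slices so each
// AudioTrack.write() call enqueues only one small chunk.
public class ChunkedWriter {
    // Returns {offset, length} pairs covering totalShorts samples,
    // chunkShorts at a time (the last slice may be shorter).
    static int[][] slices(int totalShorts, int chunkShorts) {
        int n = (totalShorts + chunkShorts - 1) / chunkShorts;
        int[][] out = new int[n][2];
        for (int i = 0; i < n; i++) {
            int off = i * chunkShorts;
            out[i][0] = off;
            out[i][1] = Math.min(chunkShorts, totalShorts - off);
        }
        return out;
    }

    public static void main(String[] args) {
        // 5 s of mono 16-bit audio at 44100 Hz, written 4410 shorts at a time
        int[][] s = slices(44100 * 5, 4410);
        System.out.println(s.length);                 // 50 slices
        System.out.println(s[49][0] + " " + s[49][1]); // 216090 4410
        // In the Android activity this would drive:
        //   mAudioTrack.play();
        //   for (int[] sl : s) mAudioTrack.write(mBuffer, sl[0], sl[1]);
        // Between slices you can check a flag, change frequency, or stop().
    }
}
```

Because control returns to your loop between slices, you can generate the next batch of samples, or react to a button press, without waiting for the full clip to finish.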

If you increase mBufferSize, you should see the AudioTrack.write(...) call block for correspondingly less time; once the buffer is large enough to hold the entire clip, write() returns almost immediately.
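To size the buffer for the whole clip, note that 16-bit PCM uses 2 bytes per sample, and the AudioTrack constructor takes a size in bytes, not shorts. A small sketch of the arithmetic (the Android call that would consume the result is shown only in a comment):

```java
// Sketch: compute the AudioTrack buffer size in bytes needed to hold
// a whole clip of 16-bit mono PCM, so write() can return without
// waiting for playback.
public class BufferSizing {
    // 16-bit PCM: each short sample occupies 2 bytes.
    static int bufferBytesFor(int samples) {
        return samples * 2;
    }

    public static void main(String[] args) {
        int duration = 44100 * 5; // 5 s of samples at 44100 Hz
        System.out.println(bufferBytesFor(duration)); // 441000 bytes
        // On Android, pass the larger of this and the minimum:
        //   int size = Math.max(bufferBytesFor(duration),
        //       AudioTrack.getMinBufferSize(44100,
        //           AudioFormat.CHANNEL_OUT_MONO,
        //           AudioFormat.ENCODING_PCM_16BIT));
        // and hand size to the AudioTrack constructor as bufferSizeInBytes.
    }
}
```

The trade-off is memory: a buffer sized for the full clip costs about 431 KB here, which is fine for short tones but wasteful for long streams, where the chunked-write or worker-thread approach is more appropriate.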
