Sharing a device
It is often desirable to be able to share a sound card among several processes running at the same time.
This requires the ability to mix the sound outputs of those processes into a single stream, that is, multiplexing.
In order to achieve this with ALSA there are several different cases and techniques.
The cases depend on whether the sound card/chipset supports hardware mixing or not, and whether the processes access the sound card/chipset via the ALSA library, a sound server or OSS emulation.
In the beginning, OSS often did not support sharing even when the hardware did. ALSA drivers, as a rule, support sharing if the hardware supports it. The ALSA library supports sharing even if the hardware does not, but this requires some explicit configuration. Applications that use OSS can often be made to use ALSA instead via the aoss wrapper, which lets them benefit from the ALSA library's software mixing.
Finally, there are applications that use sound servers like EsounD, polypaudio, aRts, or JACK; most sound servers perform software mixing and support ALSA output.
The individual cases are:
* The card supports hardware mixing.
* The card does not support hardware mixing, but all applications accessing it use the ALSA library.
* The applications use a sound server to access the card.
* The applications use the OSS API to access the card.
The card supports hardware mixing
This is the best case. Most recent cards support hardware mixing, at least for output, and when they do, the maximum number of streams is usually so high that it is unlikely ever to be a problem.
If you can, the simplest way to ensure sharing is to get a card that supports hardware mixing. Sound cards are cheap, often costing less than the time needed to implement workarounds.
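Whether a card supports hardware mixing can often be read off from the number of playback subdevices its driver exposes under /proc. A hedged sketch (the path assumes card 0, playback stream 0; a count of 1 does not strictly rule out hardware mixing, but it is a strong hint that dmix will be needed):

```shell
#!/bin/sh
# Count the playback subdevices of card 0, stream 0; drivers for cards
# with hardware mixing typically expose one subdevice per mixable stream.
info=/proc/asound/card0/pcm0p/info
count=$(awk -F': ' '/^subdevices_count/ { print $2 }' "$info" 2>/dev/null)
if [ "${count:-1}" -gt 1 ]; then
    echo "card 0: $count playback subdevices, hardware mixing likely"
else
    echo "card 0: single playback subdevice, use dmix for sharing"
fi
```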
The card does not support hardware mixing, but all applications accessing it use the ALSA library
In this case, it is fairly easy to create an ALSA library configuration file (see the .asoundrc documentation at OpenSrc.org and at ALSA-Project.org) that enables software mixing. This is achieved using the dmix (for output) and dsnoop (for input) plugins, with asym to tie them together. In ALSA library version 1.0.9rc2 and later versions this is already done in the standard configuration files.
There is an example of using them below; a much more extensive, ready-made, nearly universal one- or two-card configuration file that you can usually just drop in is also available.
The applications use a sound server to access the card
Sound servers were mainly created to premix multiple streams for OSS, where even cards that supported hardware mixing did not support multiplexing.
If your system runs a sound server like EsounD (for GNOME) or aRts (for KDE), set the sound server to use ALSA as its output, and set the applications to use the sound server.
For KDE aRts problems, try using artsdsp to make OSS applications use aRts. It is probably preferable, though, to make them use ALSA directly by using aoss.
The applications use the OSS API to access the card
Some applications cannot use ALSA or a sound server, but only the OSS API. In that case you can often make them use ALSA with the aoss wrapper; check carefully the description of the non-blocking options in its documentation.
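For example, an OSS-only program can be launched through the wrapper like this (a sketch; the helper name run_with_aoss and the program name some_oss_app are placeholders, and aoss itself comes from the alsa-oss package):

```shell
#!/bin/sh
# run_with_aoss: launch a command under the aoss wrapper when available.
# aoss preloads the ALSA OSS-compatibility library, so the program's
# /dev/dsp and /dev/mixer accesses are redirected to the ALSA library.
run_with_aoss() {
    if command -v aoss >/dev/null 2>&1; then
        aoss "$@"
    else
        echo "aoss not found, running $1 directly" >&2
        "$@"
    fi
}
# Example (some_oss_app is a placeholder for your OSS-only program):
# run_with_aoss some_oss_app
```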
Examples of ALSA lib configurations to use the software mixing plugins
Simple output only sharing example
# The top level shared pseudo device, with both PCM and CTL interfaces
# The device names "default", "dsp0", "mixer0" have conventional meanings.
# The ALSA default is "!default", but many programs like XMMS and aoss
# assume "dsp0" as default name for PCM and "mixer0" for CTL.
pcm.!default { type plug;
slave.pcm "dmix0"; }
ctl.!default { type hw; card 0; }
pcm.dsp0 { type plug;
slave.pcm "dmix0"; }
ctl.dsp0 { type hw; card 0; }
ctl.mixer0 { type hw; card 0; }
########################################################################
# Buffering (period time defaults to 125000 usecs).
# Size of a period, expressed either in usec or frame units:
# period_time USECS
# period_size FRAMES
# Size of the buffer, expressed either in period, usec, or frame units:
# periods PERIODS
# buffer_time USECS
# buffer_size FRAMES
# The ALSA docs have examples with 'period_time' set to 0,
# when 'period_size' and 'buffer_size' are used instead,
# but this can cause trouble in later releases of ALSA.
# For OSS compatibility, 'period_size' and 'buffer_size'
# should be powers of 2. Also, many cards cannot accept
# a 'period_size' much greater than 4096, so 4096 is safe.
# On my VIA 8233A, any value for 'period_time' greater than
# 85333 usecs (precisely!) causes hiccups in sound output.
# Why? At 48kHz, 85333 usecs correspond to 4096 frames.
pcm.dmix0 { type dmix;
ipc_key 13759;
slave.pcm "hw:0,0";
slave.channels 2;
slave.rate 48000;
slave.period_size 4096;
slave.buffer_size 16384;
slave.period_time 84000;
slave.buffer_time 340000;
# Map only the first two channels
bindings.0 0;
bindings.1 1; }
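The relation between the period_size and period_time values used above can be sanity-checked with a little shell arithmetic (a sketch; 48000 Hz and 4096 frames are the values from the example, and the result matches the 85333 usec threshold mentioned in the comments):

```shell
#!/bin/sh
# period_time (usec) = period_size (frames) * 1000000 / rate (frames/sec)
rate=48000
period_size=4096
period_time=$(( period_size * 1000000 / rate ))
echo "${period_size} frames at ${rate} Hz = ${period_time} usec per period"
```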
Sharing both input and output
To the output-only example, add the following (the !default and dsp0 definitions here replace the ones from the output-only example):
# The top level shared pseudo device, with both PCM and CTL interfaces
# The ALSA default is "!default", but many programs like XMMS and aoss
# assume "dsp0" as default name for PCM and "mixer0" for CTL.
# Amazingly, XMMS has problems if one defines 'pcm.dsp0' to be
# 'plug' for 'pcm.asym0' and not directly as 'asym0'.
pcm.!default { type asym;
capture.pcm "dsnoop0";
playback.pcm "dmix0"; }
ctl.!default { type hw; card 0; }
pcm.dsp0 { type asym;
capture.pcm "dsnoop0";
playback.pcm "dmix0"; }
ctl.dsp0 { type hw; card 0; }
ctl.mixer0 { type hw; card 0; }
########################################################################
pcm.asym0 { type asym;
capture.pcm "dsnoop0";
playback.pcm "dmix0"; }
pcm.dsnoop0 { type dsnoop;
ipc_key 13758;
slave.pcm "hw:0,0"; }
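If capture through dsnoop0 sounds wrong, the dsnoop slave can be pinned to the same parameters as the dmix0 slave (a sketch; the values simply mirror the output example, and 13758 is the ipc_key already used above):

```
pcm.dsnoop0 { type dsnoop;
ipc_key 13758;
slave.pcm "hw:0,0";
slave.channels 2;
slave.rate 48000;
slave.period_size 4096;
slave.buffer_size 16384; }
```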
This defines a virtual ALSA PCM device called asym0. This device is capable of mixing several playback streams and of sharing one capture stream among several applications. To get automatic sample rate conversion and the like, such a device is wrapped with ALSA's plug plugin, as the output-only example does for dmix0.
Furthermore, we defined a device called !default, which is equivalent to dsp0. The special name !default makes this device the default for all well-coded ALSA applications (sadly, not too many are well coded).
And last we defined a device called dsp0. This device is used by the aoss script from the alsa-oss package.
First of all, test this basic setup with the standard ALSA aplay tool. You will need a .wav file for this test; if you have none, create one from an MP3 with the following command:
mpg123 -w name.wav name.mp3
With this .wav file we test the dsp0 device now:
aplay -D dsp0 name.wav
This should play back the .wav file, even if you run the command in a second terminal at the same time, because the dsp0 device does the mixing. Because we also defined the default ALSA device !default the same way, you should also be able to run the command without the -D dsp0 parameter:
aplay name.wav
Not all applications honour the default device, though; MPlayer, for example, does not.
To test this setup with MPlayer use:
mplayer -ao alsa1x:dsp0 name.avi
Now is the time to test all your desired ALSA applications with this setup.
* Some will need to be told explicitly to use dsp0.
* Others will happily use the default.
If some ALSA applications behave badly with dsp0 (crackles, stutter), check the dmix plugin configuration page, which has quite a bit of troubleshooting advice. Tips: look at the sample rates of the slave, try adjusting the period_size parameter, and so on; in particular, many cards have limits on period_size, usually 4096.
Some applications use mmap'ed audio data transfer. If your application complains about not being able to use mmap, experiment with the mmap_emulation setting in the pcm.dsp0 definition. Some applications and/or some cards simply will not work in mmap mode, so try disabling it.
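A sketch of where such a setting could go, assuming an ALSA library version whose hw plugin still accepts the mmap_emulation option (it was dropped in later releases); the device name dsp0_mmap is hypothetical, and since it opens the hardware directly it is only a troubleshooting variant that gives up dmix0's software mixing:

```
# Hypothetical troubleshooting device: plain hw access with mmap
# emulation enabled (bypasses dmix0, so no software mixing).
pcm.dsp0_mmap { type hw;
card 0;
mmap_emulation true; }
```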