Background
Open Sound System (OSS) is a device driver
for accessing sound cards and other sound devices under various UNIX operating
systems. OSS has been derived from the Linux Sound Driver. The current
version supports almost all popular sound cards and sound devices integrated
on computer motherboards.
Sound cards normally have several separate devices or ports which produce
or record sound. There are differences between various cards but most of
them have the following devices.
- The digitized voice device (usually called a codec, PCM, DSP or ADC/DAC) is used for recording and playback of digitized voice.
- The mixer device is used to control various input and output volume levels. The mixer device also handles switching of the input source between the microphone, line and CD inputs.
- The synthesizer device is used mainly for playing music. It is also used to generate audio effects in games. The OSS driver currently supports two kinds of synthesizer devices. The first is the Yamaha FM synthesizer chip, which is available on most sound cards. There are two models of the FM chip: the Yamaha OPL2 is a 2 operator version which was used in the earliest sound cards such as the AdLib and SB 1.x/2.x. It had just 9 simultaneous voices and was not capable of producing realistic instrument timbres. The OPL3 is an improved version of the OPL2: it supports 4 operator voices, which gives it the capability to produce more realistic sounds. The second type of synthesizer device is the so-called wavetable synthesizer. These devices produce sound by playing back prerecorded instrument samples, a method which makes it possible to produce extremely realistic instrument timbres. The Gravis Ultrasound (GF1) is one example of a wavetable synthesizer.
- The MIDI interface is a device which is used to connect external synthesizers to the computer. Technically the MIDI interface is similar to (but not compatible with) the serial ports (RS-232) used in computers. It runs at 31.5 kbaud instead of the common serial baud rates of 9.6k, 14.4k and 28.8k baud. The MIDI interface is designed to work with on-stage equipment like stage props and lighting controllers. Synthesizers and computers communicate by sending messages through the MIDI cable.
Most sound cards also have a joystick port and some kind of interface for a CD-ROM drive. These devices are not controlled by OSS, but separate drivers are available (at least in Linux).
OSS API in General
The programming interface (API) of the OSS driver is defined in the C language header file sys/soundcard.h. (There is another include file for the Gravis Ultrasound card, sys/ultrasound.h, but normally it should not be required; ultrasound.h is actually not part of the OSS API but a hardware specific extension to it.) The header file soundcard.h is distributed with the driver. Ensure that you use the latest version of this file when compiling; otherwise you may not be able to access some recently introduced features of the API. In Linux this header file is distributed in the directory linux/include/linux of the kernel source distribution. On most other systems it is distributed in the include sub-directory of the distribution.
If you have installed a separately distributed (test) version of the driver, you have to copy soundcard.h manually from the driver distribution package to its proper place; please refer to the documentation of the driver package. If your program doesn't compile, check that you are using the latest version of soundcard.h. If there are problems that could be caused by mismatched versions of soundcard.h, recompile both the driver and the application (on Linux), just to be sure.
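A program can also check at run time that the driver it is talking to is not older than the soundcard.h it was compiled against. A minimal sketch; the OSS_GETVERSION ioctl is only available in recent driver versions, so a failing call just means the version is unknown:

    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <sys/soundcard.h>

    int main(void)
    {
        int fd, version = 0;

        if ((fd = open("/dev/dsp", O_WRONLY)) == -1) {
            perror("/dev/dsp");
            return 1;
        }

    #ifdef OSS_GETVERSION
        /* SOUND_VERSION is the compile time (header) version and
         * OSS_GETVERSION returns the run time (driver) version;
         * recent releases encode both the same way. */
        if (ioctl(fd, OSS_GETVERSION, &version) != -1 &&
            version < SOUND_VERSION)
            fprintf(stderr, "warning: driver (%#x) is older than "
                    "soundcard.h (%#x)\n", version, SOUND_VERSION);
    #endif

        close(fd);
        return 0;
    }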
Types of Device Files Supported by OSS
The OSS driver supports several different types of device files. These types are the following:
/dev/mixer
Mixer device files are used mainly for accessing the built-in mixer of most sound devices. With a mixer it is possible to adjust the playback and recording levels of various sources. This device file is also used for selecting the recording sources. Typically a mixer controls the output levels of the digital audio and synthesizer devices and also mixes them with the CD, line and microphone inputs. The OSS driver supports several mixers on the same system. Mixer devices are named /dev/mixer0, /dev/mixer1, ..., /dev/mixerN. The file /dev/mixer is a symbolic link to one of these device files (usually /dev/mixer0).
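A minimal sketch of typical mixer usage: query which channels the mixer actually implements, then read and set the PCM playback level (levels are percentages, with the left channel packed in the low byte and the right channel in the next byte):

    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <sys/soundcard.h>

    int main(void)
    {
        int fd, devmask, level;

        if ((fd = open("/dev/mixer", O_RDWR)) == -1) {
            perror("/dev/mixer");
            return 1;
        }

        /* Bit mask of the channels this mixer really has. */
        if (ioctl(fd, SOUND_MIXER_READ_DEVMASK, &devmask) == -1) {
            perror("SOUND_MIXER_READ_DEVMASK");
            return 1;
        }

        if (devmask & (1 << SOUND_MIXER_PCM)) {
            ioctl(fd, SOUND_MIXER_READ_PCM, &level);
            printf("PCM level: left %d%%, right %d%%\n",
                   level & 0xff, (level >> 8) & 0xff);

            level = 75 | (75 << 8);     /* 75% on both channels */
            ioctl(fd, SOUND_MIXER_WRITE_PCM, &level);
        }

        close(fd);
        return 0;
    }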
/dev/sndstat
This device file is just for diagnostic purposes. Use cat /dev/sndstat to print some useful information about the driver configuration. The printout lists all the ports and devices detected by the OSS driver.
/dev/dsp and /dev/audio
These are the main device files for digitized voice applications. Any data written to these devices is played with the DAC/PCM/DSP device of the sound card. Reading a device returns the audio data recorded from the current input source (the default is microphone input). The device files /dev/audio and /dev/dsp are very similar: the difference is that /dev/audio uses logarithmic mu-Law encoding by default, while /dev/dsp uses 8 bit unsigned linear encoding. With mu-Law encoding a sample recorded with 12 or 16 bit resolution is represented by an 8 bit byte. Note that the initial sample format is the only difference between these device files; both behave identically after the program selects a specific sample encoding by calling ioctl(). These device files can be used for applications such as speech synthesis and recognition and voice mail. The OSS driver supports several codec devices on the same system. Audio devices are named /dev/dsp0, /dev/dsp1, ..., /dev/dspN. The file /dev/dsp is a symbolic link to one of these device files (usually /dev/dsp0). A similar naming scheme is used for the /dev/audio devices.
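A minimal playback sketch: open the device, select the sample format, the number of channels and the sampling rate explicitly (in that order), then write audio data. The parameter values below are just examples; the program plays roughly one second of a crude square wave:

    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <sys/soundcard.h>

    int main(void)
    {
        int fd, fmt = AFMT_U8, channels = 1, speed = 8000;
        unsigned char buf[8000];
        int i, n;

        if ((fd = open("/dev/dsp", O_WRONLY)) == -1) {
            perror("/dev/dsp");
            return 1;
        }

        /* Set format, channels and rate - always in this order.
         * The driver may adjust the values, so use the returned
         * ones instead of the requested ones. */
        if (ioctl(fd, SNDCTL_DSP_SETFMT, &fmt) == -1 || fmt != AFMT_U8) {
            fprintf(stderr, "8 bit unsigned format not supported\n");
            return 1;
        }
        ioctl(fd, SNDCTL_DSP_CHANNELS, &channels);
        ioctl(fd, SNDCTL_DSP_SPEED, &speed);

        /* Fill about one second with a square wave of a few
         * hundred hertz. */
        n = (speed < (int)sizeof buf) ? speed : (int)sizeof buf;
        for (i = 0; i < n; i++)
            buf[i] = ((i / (1 + speed / 880)) & 1) ? 192 : 64;

        write(fd, buf, n);
        close(fd);
        return 0;
    }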
/dev/sequencer
This device file is intended for electronic music applications. It can also be used for producing sound effects in games. /dev/sequencer provides access to the internal synthesizer devices of the sound card. In addition, this device file can be used for accessing external music synthesizers connected to the MIDI port of the sound card, as well as General MIDI daughtercards connected to the "Wave Blaster" connector of many sound cards. The /dev/sequencer interface permits control of up to 15 synthesizer chips and up to 16 MIDI ports at the same time.
/dev/music (formerly /dev/sequencer2)
This device file is very similar to /dev/sequencer. The difference is that this interface handles both synthesizer and MIDI devices in the same way, which makes it easier to write device independent applications than with /dev/sequencer. On the other hand, /dev/sequencer permits more precise control of individual notes than /dev/music, which is based on MIDI channels.
CAUTION!
Unlike the other device files supported by OSS, both /dev/sequencer and /dev/music accept only formatted input. It is not possible to play anything with these files just by catting MIDI (or any other) files to them.
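The formatted events are normally built with the SEQ_* macros from soundcard.h rather than by hand. A minimal sketch that plays one note on the first synthesizer device; SEQ_DEFINEBUF declares the local event buffer, and the application must provide the seqbuf_dump() routine the macros call to flush it (depending on the synthesizer type, an instrument patch may have to be loaded before the note is audible):

    #include <stdio.h>
    #include <stdlib.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/soundcard.h>

    SEQ_DEFINEBUF(1024);        /* event buffer used by the macros */

    static int seqfd;

    /* Called by the SEQ_* macros to flush the local buffer. */
    void seqbuf_dump(void)
    {
        if (_seqbufptr)
            if (write(seqfd, _seqbuf, _seqbufptr) == -1) {
                perror("/dev/sequencer");
                exit(1);
            }
        _seqbufptr = 0;
    }

    int main(void)
    {
        if ((seqfd = open("/dev/sequencer", O_WRONLY)) == -1) {
            perror("/dev/sequencer");
            return 1;
        }

        SEQ_START_NOTE(0, 0, 60, 64);  /* synth 0, ch 0, middle C */
        SEQ_DELTA_TIME(100);           /* wait 100 timer ticks */
        SEQ_STOP_NOTE(0, 0, 60, 64);
        SEQ_DUMPBUF();                 /* flush events to the driver */

        close(seqfd);
        return 0;
    }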
/dev/midi
These low level MIDI ports work much like tty devices in raw mode. These device files are intended for 'non-realtime' use: there is no timing capability, so everything written to the device file is sent to the MIDI port as soon as possible. Low level MIDI devices are suitable for applications such as MIDI sysex and sample librarians. There are several MIDI device files, named /dev/midi00, ..., /dev/midi0N (note the two digit numbering). The name /dev/midi is a symbolic link to one of the actual device files (usually /dev/midi00).
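Since the device behaves like a raw tty, sending MIDI data is just a write(). A minimal sketch that sends a three byte note on message followed by the matching note off:

    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>

    int main(void)
    {
        /* 0x90 = note on (channel 1), 0x80 = note off (channel 1),
         * 60 = middle C, 100/64 = key velocities. */
        unsigned char note_on[3]  = { 0x90, 60, 100 };
        unsigned char note_off[3] = { 0x80, 60, 64 };
        int fd;

        if ((fd = open("/dev/midi", O_WRONLY)) == -1) {
            perror("/dev/midi");
            return 1;
        }

        write(fd, note_on, sizeof note_on);
        sleep(1);               /* crude timing - the port has none */
        write(fd, note_off, sizeof note_off);

        close(fd);
        return 0;
    }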
HINT!
Many of the device file categories are numbered from 0 to N. It is possible to find out the proper number with the command "cat /dev/sndstat". The printout contains a section for each device category; the devices in each category are numbered, and the number corresponds to the number in the device file name. The numbering of the devices depends on the order in which they were initialized during startup of the driver. This order is not fixed, so don't make any assumptions about device numbers.
Device Numbering Scheme
These device files share the same major device number. The major device number is 14 in Linux; on other operating systems it is probably something else. The minor number assignment is given in the table below. The four least significant bits of the minor number select the device file type or class; if there is more than one device of a class, the upper four bits select the device. For example, the class number of /dev/dsp is 3, so the minor number of /dev/dsp is 3 and the minor number of /dev/dsp1 is 16+3=19.
####Under construction. Insert table here
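The scheme itself is easy to express in code. A small illustration; snd_minor() is just a helper invented for this example, not part of the API:

    /* Minor number of the Nth device of a given class (class 3 is
     * the dsp class): the low 4 bits select the class and the
     * upper bits select the device. */
    static int snd_minor(int device, int class)
    {
        return (device << 4) | class;
    }

    /* snd_minor(0, 3) == 3  corresponds to /dev/dsp0
     * snd_minor(1, 3) == 19 corresponds to /dev/dsp1 */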
Programming Guidelines
One of the main goals of the OSS API is full portability of applications (source code) between systems supporting the OSS API. This is possible if you follow the guidelines below when designing and programming the audio portion of your application. It is even more important that the rest of your application is written portably; in practice, most portability problems in current sound applications written for Linux are in the program modules doing screen handling. Sound related portability problems are usually just endianness problems.
The term "portability" doesn't cover just the program's ability to work on different machines with different operating systems. It also covers the capability to work with different sound hardware. This is even more important than inter-system portability, since the differences between current and future sound devices are likely to be relatively big. OSS makes it possible to write applications which work with all possible sound devices by hiding device specific features behind the API. The API is based on universal "physical" properties of sound and music rather than on hardware specific properties.
- Use API macros. The macros defined in soundcard.h provide good portability, since possible future changes in the driver's internals will be handled transparently by the macros. It is possible, for example, to use /dev/sequencer by formatting the event messages in the application itself, but there is no guarantee that such an application will work on all systems. (The /dev/sequencer sketch earlier shows the macro-based approach.)
- Device numbering and naming. In some cases there may be several sound devices in the same system (e.g. a sound card and on-board audio). In such cases the user may have reasons to use different devices with different applications. This is not a major problem with freeware applications, where the user is free to change the device names in the source code, but the situation is different when the source code of the program is not available. In both cases it is very useful if the user has a chance to change the device in the application's preferences or configuration file, as sketched below. The same is true for the MIDI and synthesizer device numbers used with /dev/sequencer and /dev/music: design your application so that it is possible to select the device number(s). Don't make your program use hardcoded device names which have a numeric suffix. For example, program your application to use /dev/dsp and not /dev/dsp0. /dev/dsp is a symbolic link which normally points to /dev/dsp0, but the user may have reasons to point it at /dev/dsp1 instead by changing the link; in that case all applications using /dev/dsp0 directly will use the wrong device.
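One simple way to make the device selectable without a full configuration file is an environment variable. A minimal sketch; the variable name AUDIODEV is just an illustration here, not something defined by the OSS API:

    #include <stdio.h>
    #include <stdlib.h>
    #include <fcntl.h>

    /* Let the user override the audio device, falling back to the
     * symbolic link /dev/dsp (never to /dev/dsp0 directly). */
    int open_audio(void)
    {
        const char *dev = getenv("AUDIODEV");   /* illustrative name */

        if (dev == NULL)
            dev = "/dev/dsp";
        return open(dev, O_WRONLY);
    }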
- Endianness. This is a serious problem for applications using 16 bit sampling resolution. Most PC sound cards use little endian encoding of samples, so there is no problem in getting audio applications to work on little endian machines such as the i386 and Alpha AXP; in these environments it is possible to represent 16 bit samples as 16 bit integers (signed short). Neither is it a problem on big endian machines which have a built-in big endian codec device. However, endianness is a big problem on "mixed" endian systems. For example, many RISC systems use big endian encoding, but it is possible to use little endian ISA sound cards with them. In this case, using 16 bit integers (signed short) directly will produce just white noise with a faint audio signal mixed into it. The problem can be solved if the application takes care of endianness itself (using standard portability techniques), as sketched below.
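A minimal sketch of one such technique: detect the host byte order at run time and byte-swap the samples when the host is big endian but the device expects little endian data:

    #include <stdint.h>

    /* Returns nonzero when the host stores integers big endian. */
    static int host_is_big_endian(void)
    {
        uint16_t probe = 1;

        return *(unsigned char *)&probe == 0;
    }

    /* Convert a buffer of 16 bit samples to little endian in place
     * (a no-op on little endian hosts). */
    static void samples_to_le16(int16_t *buf, int count)
    {
        int i;

        if (!host_is_big_endian())
            return;
        for (i = 0; i < count; i++) {
            uint16_t s = (uint16_t)buf[i];

            buf[i] = (int16_t)((s >> 8) | (s << 8));
        }
    }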
- Don't trust "undefined" default conditions. For most parameters accepted by the OSS driver there is a defined default value, and these defaults are listed in this manual at the points where the specific features are discussed. However, in some cases the default condition is not fixed but depends on characteristics of the machine and the operating system where the program runs. For example, the timer rate of /dev/sequencer depends on the system's timer frequency parameter (HZ). Usually the timer frequency is 100 Hz, which gives a timer resolution of 0.01 seconds, but there are systems where the timer frequency is 60 or 1024 Hz. Programs which assume (as is very common) that the tick interval is always 0.01 seconds will not work on those systems. The proper way to handle this kind of variable condition is to use the method defined for querying the default value, as sketched below.
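For the timer rate this query looks roughly as follows. A minimal sketch, assuming SNDCTL_SEQ_CTRLRATE follows the usual OSS convention of returning the current value when called with a zero argument:

    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <sys/soundcard.h>

    int main(void)
    {
        int fd, rate = 0;       /* 0 = just query the current rate */

        if ((fd = open("/dev/sequencer", O_WRONLY)) == -1) {
            perror("/dev/sequencer");
            return 1;
        }

        if (ioctl(fd, SNDCTL_SEQ_CTRLRATE, &rate) == -1) {
            perror("SNDCTL_SEQ_CTRLRATE");
            return 1;
        }
        /* Never assume 100 here - use the queried value. */
        printf("timer rate: %d ticks/second\n", rate);

        close(fd);
        return 0;
    }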
- Don't try to open the same device twice. Most device files supported by the OSS driver have been designed to be used exclusively by one application process (/dev/mixer is the only exception). It is not possible to reopen a device while it is already open in another process. Don't try to overcome this situation by using fork() or any other "tricks"; this may work in some situations, but in general the result is undefined. (Using fork() is OK if only one process actually uses the device. The same is true of threads.)
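A second open typically fails with errno set to EBUSY, so an application should at least report the situation sensibly. A minimal sketch:

    #include <stdio.h>
    #include <errno.h>
    #include <fcntl.h>

    int open_dsp_checked(const char *dev)
    {
        int fd = open(dev, O_WRONLY);

        if (fd == -1) {
            if (errno == EBUSY)
                /* Somebody else has the device open - don't retry
                 * with fork() tricks, just tell the user. */
                fprintf(stderr, "%s is in use by another program\n",
                        dev);
            else
                perror(dev);
        }
        return fd;
    }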
- Avoid extra features and tricks. Think at least twice before adding a new feature to your application. The main problem with many programs is that they contain a lot of unnecessary features which are untested and just cause problems when used (this is actually a general problem, not one specific to sound applications). An example of a very common extra feature is including a mixer interface in an audio playback application which doesn't normally need a mixer. It is very likely that this kind of extra feature gets poorly implemented and causes trouble on systems which are somehow different.
- Don't use undocumented features (unless everybody else uses them). There are many features that are defined in soundcard.h but are not documented here. These undocumented features are left undocumented for a reason. Usually they are obsolete features which are no longer supported and will disappear in future driver versions. Some of them are features which have not been tested well enough and may cause problems on some systems. A third class of undocumented features is device dependent features which work with just a few devices (which are usually discontinued). So be extremely careful when browsing soundcard.h looking for nice features. Please consult Undocumented OSS for a list of these features.
- Avoid false assumptions. There are many common assumptions which make programs non-portable or highly hardware dependent. The following is a list of things that are commonly misunderstood:
- Mixer:
  - Not all sound cards have a mixer. This is true of 1) older sound cards, 2) sound cards that are not (yet) fully supported by the OSS driver and 3) some high end professional ("digital only") devices which are usually connected to an external mixer. Your program will not work with these cards if it requires a mixer to be present.
  - . Yes this is true. For some reason almost all mixer programs written for the OSS API make this assumption.
  - The set of available mixer controls is not fixed but varies between devices. Your application should query the available channels from the driver before attempting to use them, as in the mixer sketch earlier (alternatively the application can selectively ignore some of the error codes returned by the mixer API, but this is a really crude and semantically incorrect method).
  - Try to avoid "automatic" use of the "main volume" mixer control. This control affects the volume of all audio sources connected to the mixer. Don't use it for controlling the volume of audio playback, since it also affects the volume of an audio CD that may be playing in the background. Your program should use only the "PCM" channel to control the volume of audio playback.
- /dev/dsp and /dev/audio:
  - The default audio data format is 8 kHz/8 bit unsigned/mono (/dev/dsp) or 8 kHz/mu-Law/mono (/dev/audio). However, this is not always true: some devices simply don't support the 8 kHz sampling rate, mono mode or the 8 bit/mu-Law data formats. An application which assumes these defaults will produce unexpected results (144 dB of noise) with some (future, 24 bit only) hardware, so set the parameters explicitly as in the playback sketch earlier.
- /dev/sequencer and /dev/music:
  - Don't assume that the timer rate of /dev/sequencer is 100 Hz (0.01 sec). This is not always true; for example, Linux/Alpha uses a much higher system clock rate. Query the rate instead, as in the sketch earlier.
  - Set the timing parameters of /dev/music before using the device. There are no globally valid default values.
  - Don't assume that there is always at least one MIDI port and/or one synthesizer device. There are sound cards which have just a synthesizer device or just a MIDI port.
  - Don't try to use a MIDI port or synthesizer device before first checking that it exists.