'\" te
.\" Copyright (c) 2009, Sun Microsystems, Inc. All Rights Reserved
.\" The contents of this file are subject to the terms of the Common Development and Distribution License (the "License"). You may not use this file except in compliance with the License. You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE or http://www.opensolaris.org/os/licensing.
.\" See the License for the specific language governing permissions and limitations under the License. When distributing Covered Code, include this CDDL HEADER in each file and include the License file at usr/src/OPENSOLARIS.LICENSE. If applicable, add the following below this CDDL HEADER, with the
.\" fields enclosed by brackets "[]" replaced with your own identifying information: Portions Copyright [yyyy] [name of copyright owner]
.TH AUDIO 7D "Jan 10, 2020"
.SH NAME
audio \- common audio framework
.SH DESCRIPTION
The \fBaudio\fR driver provides common support routines for audio devices in
Solaris.
.sp
.LP
The audio framework supports multiple \fBpersonalities\fR, allowing devices to
be accessed through different programming interfaces.
.sp
.LP
The audio framework also provides a number of facilities, such as mixing of
audio streams and conversion between data formats and sample rates.
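.sp
.LP
The following is a minimal sketch of playback through the OSS-style
\fBdsp\fR(7I) personality. The device path, ioctls, and requested format are
illustrative assumptions; the framework converts the stream if the hardware
uses a different format.
.sp
.in +2
.nf
#include <sys/soundcard.h>
#include <stropts.h>            /* ioctl(2) */
#include <fcntl.h>
#include <unistd.h>

static short silence[48000 * 2];        /* one second of stereo silence */

int
main(void)
{
        int fd, fmt = AFMT_S16_LE, chan = 2, rate = 48000;

        if ((fd = open("/dev/dsp", O_WRONLY)) < 0)
                return (1);

        /* Request a format; the mixer converts as needed. */
        (void) ioctl(fd, SNDCTL_DSP_SETFMT, &fmt);
        (void) ioctl(fd, SNDCTL_DSP_CHANNELS, &chan);
        (void) ioctl(fd, SNDCTL_DSP_SPEED, &rate);

        (void) write(fd, silence, sizeof (silence));
        (void) close(fd);
        return (0);
}
.fi
.in -2
.sp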
.SS "Overview"
The audio framework provides a software mixing engine (audio mixer) for all
audio devices, allowing more than one process to play or record audio at the
same time.
.SS "Multi-Stream Codecs"
The audio mixer supports multi-stream Codecs. These devices have DSP engines
that provide sample rate conversion, hardware mixing, and other features. The
use of such hardware features is opaque to applications.
.SS "Backward Compatibility"
It is not possible to disable the mixing function. Applications must not assume
that they have exclusive access to the audio device.
.SS "Audio Formats"
Digital audio data represents a quantized approximation of an analog audio
signal waveform. In the simplest case, these quantized numbers represent the
amplitude of the input waveform at particular sampling intervals. To achieve
the best approximation of an input signal, the highest possible sampling
frequency and precision should be used. However, increased accuracy comes at a
cost of increased data storage requirements. For instance, one minute of
monaural audio recorded in u-Law format (pronounced \fBmew-law\fR) at 8 kHz
requires nearly 0.5 megabytes of storage, while the standard Compact Disc audio
format (stereo 16-bit linear PCM data sampled at 44.1 kHz) requires
approximately 10 megabytes per minute.
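.sp
.LP
These storage figures follow directly from the format parameters; a worked
example (sizes are approximate):
.sp
.in +2
.nf
u-Law, mono, 8 kHz, 1 byte per sample:
    8000 x 1 x 1 x 60 s  =    480,000 bytes  (about 0.5 MB per minute)

CD audio, stereo, 44.1 kHz, 2 bytes per sample:
    44100 x 2 x 2 x 60 s = 10,584,000 bytes  (about 10 MB per minute)
.fi
.in -2
.sp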
.sp
.LP
An audio data format is characterized in the audio driver by four parameters:
sample rate, encoding, precision, and channels. Refer to the device-specific
manual pages for a list of the audio formats that each device supports. In
addition to the formats that the audio device supports directly, other formats
provide higher data compression. Applications can convert audio data to and
from these formats when playing or recording.
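.sp
.LP
The four parameters can be pictured as a simple descriptor; the structure and
field names below are purely illustrative and are not part of the driver's
interface:
.sp
.in +2
.nf
/* Hypothetical format descriptor, for illustration only. */
struct fmt_example {
        unsigned rate;          /* samples per second, e.g. 48000 */
        int      encoding;      /* linear PCM, u-Law, or A-Law */
        unsigned precision;     /* bits per sample, e.g. 16 */
        unsigned channels;      /* 1 = mono, 2 = stereo, ... */
};
.fi
.in -2
.sp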
.SS "Sample Rate"
Sample rate is a number that represents the sampling frequency (in samples per
second) of the audio data.
.sp
.LP
The audio mixer always configures the hardware for the highest possible sample
rate for both play and record. This ensures that none of the audio streams
require compute-intensive low-pass filtering. The result is that high sample
rate audio streams are not degraded by filtering.
.sp
.LP
Sample rate conversion can be a compute-intensive operation, depending on the
number of channels and a device's sample rate. For example, an 8 kHz signal can
be converted to 48 kHz cheaply, requiring only upsampling by a factor of 6.
However, converting from 44.1 kHz to 48 kHz is compute-intensive because the
signal must be upsampled by 160 and then downsampled by 147; conversion is
performed using integer factors only.
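.sp
.LP
The following sketch shows where the 160/147 ratio comes from: the two rates
are reduced by their greatest common divisor (300). This is a worked example,
not code from the framework.
.sp
.in +2
.nf
static unsigned
gcd(unsigned a, unsigned b)
{
        while (b != 0) {
                unsigned t = a % b;
                a = b;
                b = t;
        }
        return (a);
}

int
main(void)
{
        unsigned from = 44100, to = 48000;
        unsigned g = gcd(from, to);     /* 300 */
        unsigned up = to / g;           /* 160 */
        unsigned down = from / g;       /* 147 */

        return (up == 160 && down == 147 ? 0 : 1);
}
.fi
.in -2
.sp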
.sp
.LP
Applications can greatly reduce the impact of sample rate conversion by
carefully picking the sample rate. Applications should always use the highest
sample rate the device supports. An application can also do its own sample rate
conversion (to take advantage of floating point and accelerated instructions)
or use small integers for up- and down-sampling.
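.sp
.LP
As a rough illustration of application-side conversion, the following is a
naive linear-interpolation resampler. It is a sketch only: production
resamplers also apply proper low-pass filtering, which this omits.
.sp
.in +2
.nf
#include <stddef.h>
#include <stdint.h>

/* Convert inlen samples at inrate to outrate; returns output length. */
size_t
resample_linear(const int16_t *in, size_t inlen, int16_t *out,
    unsigned inrate, unsigned outrate)
{
        size_t outlen = (size_t)((uint64_t)inlen * outrate / inrate);

        for (size_t i = 0; i < outlen; i++) {
                uint64_t pos = (uint64_t)i * inrate / outrate;
                uint64_t rem = (uint64_t)i * inrate % outrate;
                int32_t a = in[pos];
                int32_t b = (pos + 1 < inlen) ? in[pos + 1] : a;

                /* Interpolate between neighboring input samples. */
                int64_t frac = (int64_t)(b - a) * (int64_t)rem /
                    (int64_t)outrate;
                out[i] = (int16_t)(a + (int32_t)frac);
        }
        return (outlen);
}
.fi
.in -2
.sp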
.sp
.LP
All modern audio devices run at 48 kHz or a multiple thereof, hence just using
48 kHz can be a reasonable compromise if the application is not prepared to
select higher sample rates.
.SS "Encodings"
An encoding parameter specifies the audio data representation. u-Law encoding
corresponds to CCITT G.711, and is the standard for voice data used by
telephone companies in the United States, Canada, and Japan. A-Law encoding is
also part of CCITT G.711 and is the standard encoding for telephony elsewhere
in the world. A-Law and u-Law audio data are sampled at a rate of 8000 samples
per second with 12-bit precision, with the data compressed to 8-bit samples.
The resulting audio data quality is equivalent to that of standard analog
telephone service.
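.sp
.LP
The following is a simplified sketch of G.711 u-Law compression, taking a
16-bit linear sample as input, as common implementations do. It is for
illustration only and is not the framework's encoder.
.sp
.in +2
.nf
#include <stdint.h>

#define ULAW_BIAS       0x84            /* 132 */
#define ULAW_CLIP       32635

unsigned char
linear_to_ulaw(int16_t pcm)
{
        int sign = (pcm < 0) ? 0x80 : 0x00;
        int mag = (pcm < 0) ? -(int)pcm : pcm;
        int exponent = 7, mantissa, mask;

        if (mag > ULAW_CLIP)
                mag = ULAW_CLIP;
        mag += ULAW_BIAS;

        /* Find the segment (position of the highest set bit). */
        for (mask = 0x4000; (mag & mask) == 0 && exponent > 0; mask >>= 1)
                exponent--;

        mantissa = (mag >> (exponent + 3)) & 0x0F;
        return ((unsigned char)~(sign | (exponent << 4) | mantissa));
}
.fi
.in -2
.sp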
.sp
.LP
Linear Pulse Code Modulation (PCM) is an uncompressed, signed audio format in
which sample values are directly proportional to audio signal voltages. Each
sample is a two's complement number that represents a positive or negative
amplitude.
.SS "Precision"
Precision indicates the number of bits used to store each audio sample. For
instance, u-Law and A-Law data are stored with 8-bit precision. PCM data can be
stored at various precisions, though 16-bit is the most common.
.SS "Channels"
Multiple channels of audio can be interleaved at sample boundaries. A sample
frame consists of a single sample from each active channel. For example, a
sample frame of stereo 16-bit PCM data consists of two 16-bit samples,
corresponding to the left and right channel data. The audio mixer sets the
hardware to the maximum number of channels supported. If a mono signal is
played or recorded, it is mixed on the first two (usually the left and right)
channels only. Silence is mixed on the remaining channels.
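.sp
.LP
The following sketch illustrates the frame layout and the mono behavior
described above: a mono 16-bit signal is interleaved into N-channel frames,
duplicated on the first two channels, with silence elsewhere. It is
illustrative application-level code, not the mixer itself.
.sp
.in +2
.nf
#include <stddef.h>
#include <stdint.h>
#include <string.h>

void
mono_to_frames(const int16_t *mono, int16_t *frames,
    size_t nsamples, unsigned nchan)
{
        /* Start with silence on every channel of every frame. */
        (void) memset(frames, 0, nsamples * nchan * sizeof (int16_t));

        for (size_t i = 0; i < nsamples; i++) {
                frames[i * nchan + 0] = mono[i];        /* left */
                if (nchan > 1)
                        frames[i * nchan + 1] = mono[i]; /* right */
        }
}
.fi
.in -2
.sp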
.SS "Supported Formats"
The audio mixer supports the following audio formats:
.sp
.in +2
.nf
Encoding                Precision       Channels
Signed Linear PCM       32-bit          Mono or Stereo
Signed Linear PCM       16-bit          Mono or Stereo
Signed Linear PCM       8-bit           Mono or Stereo
u-Law                   8-bit           Mono or Stereo
A-Law                   8-bit           Mono or Stereo
.fi
.in -2
.sp

.sp
.LP
The audio mixer converts all audio streams to 24-bit Linear PCM before mixing.
After mixing, the stream is converted to the best format that the Codec
supports. The conversion process is not compute-intensive, and audio
applications can choose the encoding format that best meets their needs.
.sp
.LP
The mixer discards the low-order 8 bits of 32-bit Signed Linear PCM in order to
perform mixing. (This is done to allow possible overflows to fit into 32 bits
when mixing multiple streams together.) Hence, the maximum effective precision
is 24 bits.
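.sp
.LP
A minimal sketch of the headroom idea described above: 24-bit samples are
summed in a 32-bit accumulator, where the 8 spare bits absorb any overflow,
and the result is clipped back to the 24-bit range. This is illustrative only,
not the framework's mixing code.
.sp
.in +2
.nf
#include <stdint.h>

#define MAX24   ((int32_t)0x007FFFFF)   /*  8388607 */
#define MIN24   (-(int32_t)0x00800000)  /* -8388608 */

int32_t
mix24(const int32_t *samples, unsigned nstreams)
{
        int32_t sum = 0;

        for (unsigned i = 0; i < nstreams; i++)
                sum += samples[i];      /* spare bits hold the overflow */

        if (sum > MAX24)
                sum = MAX24;
        else if (sum < MIN24)
                sum = MIN24;
        return (sum);
}
.fi
.in -2
.sp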
.SH FILES
.ne 2
.na
\fB\fB/kernel/drv/amd64/audio\fR\fR
.ad
.RS 29n
Device driver (x86)
.RE

.sp
.ne 2
.na
\fB\fB/kernel/drv/sparcv9/audio\fR\fR
.ad
.RS 29n
Device driver (SPARC)
.RE

.sp
.ne 2
.na
\fB\fB/kernel/drv/audio.conf\fR\fR
.ad
.RS 29n
Driver configuration file
.RE

.SH ATTRIBUTES
See \fBattributes\fR(5) for a description of the following attributes:
.sp

.sp
.TS
box;
l | l
l | l .
ATTRIBUTE TYPE	ATTRIBUTE VALUE
_
Architecture	SPARC, x86
_
Interface Stability	Uncommitted
.TE

.SH SEE ALSO
\fBioctl\fR(2), \fBattributes\fR(5), \fBaudio\fR(7I), \fBdsp\fR(7I)