'\" te
.\" Copyright (c) 2009, Sun Microsystems, Inc. All Rights Reserved
.\" The contents of this file are subject to the terms of the Common Development and Distribution License (the "License"). You may not use this file except in compliance with the License. You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE or http://www.opensolaris.org/os/licensing.
.\" See the License for the specific language governing permissions and limitations under the License. When distributing Covered Code, include this CDDL HEADER in each file and include the License file at usr/src/OPENSOLARIS.LICENSE. If applicable, add the following below this CDDL HEADER, with the
.\" fields enclosed by brackets "[]" replaced with your own identifying information: Portions Copyright [yyyy] [name of copyright owner]
.TH AUDIO 7D "Jan 10, 2020"
.SH NAME
audio \- common audio framework
.SH DESCRIPTION
The \fBaudio\fR driver provides common support routines for audio devices in
Solaris.
.sp
.LP
The audio framework supports multiple \fBpersonalities\fR, allowing devices
to be accessed with different programming interfaces.
.sp
.LP
The audio framework also provides a number of facilities, such as mixing of
audio streams, and data format and sample rate conversion.
.SS "Overview"
The audio framework provides a software mixing engine (audio mixer) for all
audio devices, allowing more than one process to play or record audio at the
same time.
.SS "Multi-Stream Codecs"
The audio mixer supports multi-stream Codecs. These devices have DSP engines
that provide sample rate conversion, hardware mixing, and other features. The
use of such hardware features is opaque to applications.
.SS "Backward Compatibility"
It is not possible to disable the mixing function. Applications must not assume
that they have exclusive access to the audio device.
.SS "Audio Formats"
Digital audio data represents a quantized approximation of an analog audio
signal waveform. In the simplest case, these quantized numbers represent the
amplitude of the input waveform at particular sampling intervals. To achieve
the best approximation of an input signal, the highest possible sampling
frequency and precision should be used. However, increased accuracy comes at a
cost of increased data storage requirements. For instance, one minute of
monaural audio recorded in u-Law format (pronounced \fBmew-law\fR) at 8 kHz
requires nearly 0.5 megabytes of storage, while the standard Compact Disc audio
format (stereo 16-bit linear PCM data sampled at 44.1 kHz) requires
approximately 10 megabytes per minute.
.sp
.LP
An audio data format is characterized in the audio driver by four parameters:
sample rate, encoding, precision, and channels. Refer to the device-specific
manual pages for a list of the audio formats that each device supports. In
addition to the formats that the audio device supports directly, other formats
provide higher data compression. Applications can convert audio data to and
from these formats when playing or recording.
.SS "Sample Rate"
Sample rate is a number that represents the sampling frequency (in samples per
second) of the audio data.
.sp
.LP
The audio mixer always configures the hardware for the highest possible sample
rate for both play and record. This ensures that none of the audio streams
require compute-intensive low pass filtering, so high sample rate audio
streams are not degraded by filtering.
.sp
.LP
Sample rate conversion can be a compute-intensive operation, depending on the
number of channels and a device's sample rate. For example, an 8 kHz signal can
be converted to 48 kHz cheaply, requiring only a low-cost up sampling by a
factor of 6. Converting from 44.1 kHz to 48 kHz, however, is compute-intensive
because the stream must be up sampled by 160 and then down sampled by 147; the
conversion is performed using integer multipliers only.
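.sp
.LP
The integer multipliers quoted above are simply the two rates reduced by their
greatest common divisor. The following fragment is an illustration of that
arithmetic only; it is not part of the driver or of any audio interface:
.sp
.in +2
.nf
#include <stdio.h>

/* Greatest common divisor, used to reduce dst/src to smallest terms. */
static unsigned
gcd(unsigned a, unsigned b)
{
	while (b != 0) {
		unsigned t = a % b;
		a = b;
		b = t;
	}
	return (a);
}

int
main(void)
{
	unsigned src = 44100, dst = 48000;
	unsigned g = gcd(src, dst);	/* 300 */

	/* Prints "up 160, down 147" for 44.1 kHz -> 48 kHz. */
	(void) printf("up %u, down %u\n", dst / g, src / g);
	return (0);
}
.fi
.in -2
.sp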
.sp
.LP
Applications can greatly reduce the impact of sample rate conversion by
carefully picking the sample rate. Applications should always use the highest
sample rate the device supports. An application can also do its own sample rate
conversion (to take advantage of floating point and accelerated instructions)
or use small integers for up and down sampling.
.sp
.LP
All modern audio devices run at 48 kHz or a multiple thereof, hence just using
48 kHz can be a reasonable compromise if the application is not prepared to
select higher sample rates.
.SS "Encodings"
An encoding parameter specifies the audio data representation. u-Law encoding
corresponds to CCITT G.711, and is the standard for voice data used by
telephone companies in the United States, Canada, and Japan. A-Law encoding is
also part of CCITT G.711 and is the standard encoding for telephony elsewhere
in the world. A-Law and u-Law audio data are sampled at a rate of 8000 samples
per second with 12-bit precision, with the data compressed to 8-bit samples.
The resulting audio data quality is equivalent to that of standard analog
telephone service.
.sp
.LP
Linear Pulse Code Modulation (PCM) is an uncompressed, signed audio format in
which sample values are directly proportional to audio signal voltages. Each
sample is a 2's complement number that represents a positive or negative
amplitude.
.SS "Precision"
Precision indicates the number of bits used to store each audio sample. For
instance, u-Law and A-Law data are stored with 8-bit precision. PCM data can be
stored at various precisions, though 16-bit is the most common.
.SS "Channels"
Multiple channels of audio can be interleaved at sample boundaries. A sample
frame consists of a single sample from each active channel. For example, a
sample frame of stereo 16-bit PCM data consists of two 16-bit samples,
corresponding to the left and right channel data. The audio mixer sets the
hardware to the maximum number of channels supported. If a mono signal is
played or recorded, it is mixed on the first two (usually the left and right)
channels only. Silence is mixed on the remaining channels.
.SS "Supported Formats"
The audio mixer supports the following audio formats:
.sp
.in +2
.nf
Encoding              Precision    Channels
Signed Linear PCM     32-bit       Mono or Stereo
Signed Linear PCM     16-bit       Mono or Stereo
Signed Linear PCM     8-bit        Mono or Stereo
u-Law                 8-bit        Mono or Stereo
A-Law                 8-bit        Mono or Stereo
.fi
.in -2
.sp

.sp
.LP
The audio mixer converts all audio streams to 24-bit Linear PCM before mixing.
After mixing, conversion is made to the best possible Codec format. The
conversion process is not compute-intensive, and audio applications can choose
the encoding format that best meets their needs.
.sp
.LP
The mixer discards the low order 8 bits of 32-bit Signed Linear PCM in order to
perform mixing. (This is done to allow possible overflows to fit into 32 bits
when mixing multiple streams together.) Hence, the maximum effective precision
is 24 bits.
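.sp
.LP
The fragment below is a simplified sketch of that headroom arithmetic, not the
mixer's actual implementation; the names used (\fBmix2\fR, \fBPCM24_MAX\fR,
\fBPCM24_MIN\fR) are illustrative only. Each 32-bit sample is shifted down to
24 bits before streams are summed, so the sum still fits in a 32-bit
accumulator, and the result is clamped to the 24-bit range:
.sp
.in +2
.nf
#include <stdio.h>
#include <stdint.h>

#define	PCM24_MAX	0x7fffff
#define	PCM24_MIN	(-0x800000)

/*
 * Mix two 32-bit Signed Linear PCM samples at 24-bit effective
 * precision.  Dropping the low 8 bits leaves 8 bits of headroom,
 * so summing several streams cannot overflow a 32-bit accumulator.
 */
static int32_t
mix2(int32_t a, int32_t b)
{
	int32_t sum = (a >> 8) + (b >> 8);

	/* Clamp to the 24-bit range before conversion for the codec. */
	if (sum > PCM24_MAX)
		sum = PCM24_MAX;
	else if (sum < PCM24_MIN)
		sum = PCM24_MIN;
	return (sum);
}

int
main(void)
{
	/* Two near-full-scale samples clamp to the 24-bit maximum. */
	(void) printf("%d\n", mix2(0x7fffff00, 0x7fffff00));
	return (0);
}
.fi
.in -2
.sp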
.SH FILES
.ne 2
.na
\fB\fB/kernel/drv/amd64/audio\fR\fR
.ad
.RS 29n
Device driver (x86)
.RE

.sp
.ne 2
.na
\fB\fB/kernel/drv/sparcv9/audio\fR\fR
.ad
.RS 29n
Device driver (SPARC)
.RE

.sp
.ne 2
.na
\fB\fB/kernel/drv/audio.conf\fR\fR
.ad
.RS 29n
Driver configuration file
.RE

.SH ATTRIBUTES
See \fBattributes\fR(5) for a description of the following attributes:
.sp

.sp
.TS
box;
l | l
l | l .
ATTRIBUTE TYPE	ATTRIBUTE VALUE
_
Architecture	SPARC, x86
_
Interface Stability	Uncommitted
.TE

.SH SEE ALSO
\fBioctl\fR(2), \fBattributes\fR(5), \fBaudio\fR(7I), \fBdsp\fR(7I)