Audio Signal Processing is concerned with the electronic manipulation of audio signals and relies on conversion between analog and digital representations. Sound waves are the most common example of longitudinal waves. The speed of sound in a particular medium depends on the temperature and the properties of that medium. Sound waves travel through air when the air elements vibrate, producing changes in pressure and density along the direction of the wave's motion. In a typical processing chain, an Analog-to-Digital Converter (ADC) transforms the analog signal into a digital signal, and the processed digital signal is then sent to the output devices. Audio signal processing is used in many areas, such as audio reproduction, RADAR, speech processing, voice recognition, the entertainment industry, and detecting defects in machines from their audio signatures. Signals play an important role in our day-to-day communication, perception of the environment, and entertainment. A joint time-frequency (TF) approach is often the better choice for processing such signals effectively. The theory of signal processing and its application to audio was largely developed at Bell Labs in the mid-20th century. Claude Shannon and Harry Nyquist's early work on communication theory and pulse-code modulation (PCM) laid the foundations for the field.
I. INTRODUCTION
Audio Signal Processing is at the heart of recording, enhancing, storing, and transmitting audio content, and is used to convert between analog and digital formats.
It is used to cut or boost selected frequency ranges, to remove unwanted noise, to add effects and to obtain many other desired results.
Audio signals are representations of sound and exist in both analog and digital form. Their frequencies range from about 20 Hz to 20,000 Hz, the lower and upper limits of human hearing. Digital signals are stored as binary representations, while analog signals are continuous electrical signals. Audio signal processing encompasses removing unwanted noise and balancing time-frequency content while converting between analog and digital signals.
It removes or minimizes overmodulation, unwanted noise, and echo by applying various techniques. The techniques used in improving audio quality include Analog-to-Digital Conversion (ADC), audio effects (data compression/decompression, automatic gain control, acoustic echo cancellation (AEC), filtering/resampling, equalization, beamforming), and Digital-to-Analog Conversion (DAC).
Analog audio signals are more susceptible to noise and distortion. Converting analog audio signals into digital signals allows convenient storage, transmission, and manipulation without further quality degradation. The converter samples the electrical signal at a specified sampling rate and quantizes each sample into binary values at a given bit resolution. The higher the sampling rate and the measurement precision, the higher the quality. The performance of analog-to-digital conversion is characterized by its bandwidth and signal-to-noise ratio (SNR): bandwidth is determined by the sampling rate, while SNR varies with resolution, accuracy, and related factors.
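The link between bit resolution and SNR can be illustrated with a small simulation. The sketch below (plain Python; all function names are illustrative, not from any particular library) samples a sine wave, quantizes it at two bit depths, and compares the resulting quantization SNR, which rises by roughly 6 dB per added bit:

```python
import math

def sample_sine(freq_hz, rate_hz, duration_s):
    """Sample a unit-amplitude sine wave at the given sampling rate."""
    n = int(rate_hz * duration_s)
    return [math.sin(2 * math.pi * freq_hz * k / rate_hz) for k in range(n)]

def quantize(samples, bits):
    """Uniformly quantize samples in [-1, 1] to the given bit depth."""
    levels = 2 ** (bits - 1) - 1  # largest positive code for signed samples
    return [round(s * levels) / levels for s in samples]

def snr_db(clean, quantized):
    """Signal-to-quantization-noise ratio in decibels."""
    signal_power = sum(s * s for s in clean)
    noise_power = sum((s - q) ** 2 for s, q in zip(clean, quantized))
    return 10 * math.log10(signal_power / noise_power)

# A 440 Hz tone sampled at 8 kHz, quantized at 8 and 16 bits:
tone = sample_sine(440.0, 8000, 0.1)
snr_8bit = snr_db(tone, quantize(tone, 8))    # roughly 50 dB
snr_16bit = snr_db(tone, quantize(tone, 16))  # roughly 98 dB
```

For a full-scale sine wave the theoretical figure is about 6.02 dB per bit plus 1.76 dB, which the simulation reproduces closely.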
II. WORKING ARCHITECTURE
Before moving to the practical applications of Audio Signal Processing (ASP), let us first discuss the eight types of audio processing. Audio processing means changing the characteristics of an audio signal in some way. Processing can be used to fix problems, create new sounds, enhance audio, and separate sources, as well as to store, transmit, and compress data. The types are distinguished by the technique used for the processing. The eight types of audio processing are as follows:
Audio Compression is a method of reducing the dynamic range of a signal. All signal levels above a specified threshold are reduced by a specified ratio.

Audio Expansion expands the dynamic range of a signal; it is essentially the opposite of compression. Like compressors and limiters, an audio expander has an adjustable threshold and ratio. Whereas compression and limiting take effect whenever the signal goes above the threshold, expansion affects signal levels below the threshold: any signal below the threshold is expanded downwards by the specified ratio.

Audio Equalization means boosting or reducing (attenuating) the levels of different frequencies in a signal. The most basic type of equalization familiar to most people is the treble/bass control on home audio equipment. The bass control adjusts low frequencies while the treble control adjusts high frequencies. Equalization is most commonly used to correct signals that sound unnatural.
For example, if a sound was recorded in a room that accentuated high frequencies, an equalizer can reduce those frequencies to a more natural level. Common types of equalization include shelving, bell, graphic, and parametric equalization.
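To make the idea concrete, here is a minimal sketch (plain Python, not taken from any audio library) of a first-order low-pass filter, the simplest form of "treble cut" equalization: frequencies well above the cutoff are attenuated, while low frequencies pass almost unchanged.

```python
import math

def one_pole_lowpass(samples, cutoff_hz, rate_hz):
    """First-order IIR low-pass: y[n] = (1 - a) * x[n] + a * y[n-1]."""
    a = math.exp(-2 * math.pi * cutoff_hz / rate_hz)  # standard RC discretization
    out, y = [], 0.0
    for x in samples:
        y = (1 - a) * x + a * y
        out.append(y)
    return out

def peak(samples):
    """Peak absolute level of the second half of the signal (past the transient)."""
    tail = samples[len(samples) // 2:]
    return max(abs(v) for v in tail)
```

With a 500 Hz cutoff at an 8 kHz sampling rate, a 100 Hz tone keeps nearly its full level while a 3000 Hz tone is strongly attenuated; a real equalizer combines several such filter sections with adjustable gains.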
An Audio Limiter is a type of compressor designed for a specific purpose: to limit the level of a signal to a certain threshold. Whereas a compressor begins smoothly reducing the gain above the threshold, a limiter almost completely prevents any additional gain above the threshold.
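The compressor, expander, and limiter described above can all be expressed as a single static gain curve. The following sketch (plain Python; the function name and decibel conventions are illustrative, and real processors add attack/release smoothing) computes the gain change, in dB, that each processor applies for a given input level:

```python
def dynamics_gain_db(level_db, threshold_db, ratio, mode):
    """Static gain change (dB) for a given input level; no attack/release."""
    if mode == "compress":
        # Above the threshold, output rises only 1 dB per `ratio` dB of input.
        if level_db <= threshold_db:
            return 0.0
        return threshold_db + (level_db - threshold_db) / ratio - level_db
    if mode == "expand":
        # Below the threshold, levels are pushed further down by `ratio`.
        if level_db >= threshold_db:
            return 0.0
        return threshold_db + (level_db - threshold_db) * ratio - level_db
    if mode == "limit":
        # Hard ceiling: never allow the output above the threshold.
        return min(0.0, threshold_db - level_db)
    raise ValueError(f"unknown mode: {mode}")
```

For example, a 4:1 compressor with a -20 dB threshold turns a -8 dB input into -17 dB (a 9 dB reduction), while a limiter at the same threshold pulls it all the way down to -20 dB.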
Reverberation, or reverb for short, refers to the way sound waves reflect off various surfaces before reaching the listener's ear. Reverberation can be added to a sound artificially using a reverb effect, which can be generated by a stand-alone reverb unit or by audio processing software.
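A common way to approximate these reflections in software is a feedback comb filter, which re-injects a delayed, attenuated copy of the output. The sketch below (plain Python) shows only this basic building block; practical reverbs, such as Schroeder's classic design, combine several comb and all-pass filters:

```python
def comb_reverb(samples, delay_samples, feedback):
    """Feedback comb filter: y[n] = x[n] + feedback * y[n - delay]."""
    buf = [0.0] * delay_samples  # circular buffer of past outputs
    out = []
    for i, x in enumerate(samples):
        y = x + feedback * buf[i % delay_samples]
        buf[i % delay_samples] = y
        out.append(y)
    return out
```

Feeding a single impulse through the filter produces a train of echoes spaced `delay_samples` apart, each attenuated by `feedback`, which is exactly the decaying reflection pattern reverb imitates.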
Phasing, also known as Phase Shifting, is an audio effect which takes advantage of the way sound waves interact with each other when they are out of phase. By splitting an audio signal into two signals and changing the relative phase between them, a variety of interesting sweeping effects can be created. The phasing effect was first made popular by musicians in the 1960s and has remained an important part of audio work ever since.
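The underlying phenomenon can be demonstrated in a few lines: mixing a sine wave with a phase-shifted copy of itself reinforces or cancels the signal depending on the phase offset. A minimal sketch (plain Python; the function name is illustrative):

```python
import math

def mixed_peak(freq_hz, rate_hz, phase_rad, n_samples):
    """Peak level of a sine mixed equally with a phase-shifted copy of itself."""
    mixed = [0.5 * (math.sin(2 * math.pi * freq_hz * k / rate_hz)
                    + math.sin(2 * math.pi * freq_hz * k / rate_hz + phase_rad))
             for k in range(n_samples)]
    return max(abs(v) for v in mixed)
```

In phase (offset 0) the two copies reinforce to nearly full level; exactly out of phase (offset pi) they cancel almost completely. A phaser applies different phase shifts at different frequencies and sweeps them over time, producing the characteristic moving notches.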
Flanging is a type of phasing or phase shifting. It is an effect which mixes the original signal with a varying, slightly delayed version of itself, with the delayed and original signals mixed more or less equally.
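A flanger can be sketched as a delay line whose length is slowly swept by a low-frequency oscillator (LFO), with the delayed copy mixed equally with the dry signal. A minimal sketch (plain Python; the parameter names and default values are illustrative choices, not standard settings):

```python
import math

def flanger(samples, rate_hz, max_delay_ms=3.0, lfo_hz=0.5):
    """Mix the signal equally with a copy whose delay is swept by an LFO."""
    max_d = max(1, int(rate_hz * max_delay_ms / 1000))
    out = []
    for i, x in enumerate(samples):
        # The LFO sweeps the delay between 0 and max_d samples.
        lfo = math.sin(2 * math.pi * lfo_hz * i / rate_hz)
        d = int((max_d / 2) * (1 + lfo))
        delayed = samples[i - d] if i >= d else 0.0
        out.append(0.5 * (x + delayed))  # equal dry/wet mix
    return out
```

With longer delays (tens of milliseconds) and several slightly detuned copies, the same delay-line structure yields a chorus rather than a flanger.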
The Chorus effect was originally designed to make a single person’s voice sound like multiple voices saying or singing the same thing. It has since become a common effect used with musical instruments as well.
III. ACKNOWLEDGMENT
It gives us great pleasure to present this complete project report on 'Audio Signal Processing'. Firstly, we would like to express our sincere appreciation to our internal guide, Dr. M.P. Borawake, whose constant guidance and advice played a very important role in the execution of this report. She always gave us suggestions that were crucial in making this report as flawless as possible. We would like to express our gratitude towards Prof. Dr. R.V. Patil, Head of the Computer Engineering Department, and the Principal of P.D.E.A. College of Engineering for their kind co-operation and encouragement, which helped us during the completion of this report. We also wish to thank all faculty members for their whole-hearted co-operation, and our laboratory assistants for their valuable help in the laboratory. Last but not least, the backbone of our success and confidence lies in the blessings of our dear parents and friends.
IV. CONCLUSION
Audio Signal Processing provides a means of communication based on voluntary vocal or audio activity generated by the mouth, independent of the normal peripheral output pathways of speech. The vocal activity used in audio signal processing can be recorded algorithmically and stored on any medium. We can say that as detection techniques and experimental designs improve, audio signal processing will improve as well, providing a wealth of alternatives for individuals to interact with their environment.