Unit 4 Sound

AUDIO ELEMENTS .

Sound is a form of energy, just like electricity and light. A sound is made when air molecules
vibrate and move in a pattern called waves, or sound waves.

Audio, which refers to the sound portion of a television show, is necessary to give specific
information about what is said and to help set the mood of a scene.

It has the following elements:

1. Lip Synchronized Sound

It combines audio and video recording in such a way that the sound perfectly synchronizes
with the speaker’s lip movement.

Example: Dialogues spoken by animated cartoon characters, and playback songs lip synced
by actors in movies.

2. Voice Over

It is an off-screen voice (non-diegetic sound) by an unseen narrator that informs the
audience of important facts or opinions in fiction and non-fiction productions. It is often used
in documentaries and news segments.

Example: Narration in a movie or show, such as a character's inner thoughts being heard
while they appear on screen.

3. Music

Music is added in the post-production process and helps to set a particular mood and convey
or enhance emotions. A film score is the music composed for a movie; the music at the
beginning of the film, while the credits are rolling, sets the atmosphere for the whole movie.
Music also helps give the production a distinct identity.

Example: The song 'Let It Go' from Frozen.

4. Ambience

Ambience (also known as ambient audio, atmosphere, atmos, or background noise) means
the background sounds which are present in a scene or location.
Example: a scene set in a house in a village might have the ambience of birds chirping in
the background, while a house in a city would have the sounds of vehicles.

It performs a number of functions, including:

• Providing audio continuity between shots.
• Preventing an unnatural silence when no other sound is present.
• Establishing or reinforcing the mood.

5. Sound Effects

A sound effect is an artificially created or enhanced sound used to emphasize or express an
action, mood, or feeling. It is often synchronized with specific on-screen actions and is
created with Foley work and digital processing, taken from a sound-effects library, or built
from the original sound of a scene.
Foley example: the realistic sound of bacon frying can be created by crumpling cellophane.

Sound effects add depth and realism to a video production and thus significantly shape the
audience's experience.

The term also refers to a process/technique applied to a recording while editing on the
software in the post-production stage. Some typical effects used in recording and amplified
performances are:

• Echo: To simulate the effect of reverberation in a large hall or cavern, one or several
delayed signals are added to the original signal with a minimum delay of 35 milliseconds
(a code sketch follows this list).

• Phaser: To give a "synthesized" or electronic effect to natural sounds, such as human
speech, the signal is split, a portion is filtered with an all-pass filter to produce a
phase-shift, and then the unfiltered and filtered signals are mixed. The voice of C-3PO
from Star Wars was created by taking the actor's voice and treating it with a phaser.

• Equalization: Different frequency bands are attenuated or boosted to produce desired
spectral characteristics.

• Time stretching: Changes the speed of an audio signal without affecting its pitch.

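For illustration, here is a minimal Python sketch of the echo effect described above, written
with NumPy (the 48 kHz sample rate, 100 ms delay, and 0.5 decay are arbitrary example
values, not figures from this unit):

    import numpy as np

    def add_echo(signal, sample_rate=48000, delay_ms=100, decay=0.5):
        """Mix one delayed, attenuated copy of the signal back in.

        delay_ms should exceed ~35 ms so the copy is heard as a
        discrete echo rather than blending into reverberation.
        """
        delay_samples = int(sample_rate * delay_ms / 1000)
        out = np.zeros(len(signal) + delay_samples)
        out[:len(signal)] += signal            # dry (original) signal
        out[delay_samples:] += decay * signal  # delayed, quieter copy
        return out

    # One second of a 440 Hz test tone, then echoed
    t = np.linspace(0, 1, 48000, endpoint=False)
    tone = 0.5 * np.sin(2 * np.pi * 440 * t)
    echoed = add_echo(tone)
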
MICROPHONES .
A microphone is a device that captures audio by converting sound waves into an electrical
signal.

It is crucial to use a microphone to make a high-quality production, as recording clear and
crisp audio is integral to film. When shooting a scene, the camera is usually the piece of
equipment furthest from the action, which means its built-in mic will pick up a lot of
additional, unnecessary noise. This can be avoided with a separate microphone.

Depending on your needs and budget, you can choose among the following types of microphones:

1. ON THE BASIS OF PICKUP PATTERNS

 Unidirectional

Unidirectional microphones suppress sounds from the rear and sides and hear better in one
direction—the front of the mic.

Because the polar patterns of unidirectional microphones are roughly heart-shaped, they are
called cardioid.

They are good for news conferences and meetings because of their ability to minimize
audience noise. Example: Shotgun microphone.

 Bidirectional

Bidirectional microphones pick up sound from the front and back but not from the sides.

They have a figure-8 polar pattern. The user must correctly position the microphone to
record desired sound while rejecting unwanted sounds.

They can be used in an interview with two people facing each other with the mic between
them.

Example: Ribbon microphone (most ribbon mics naturally have a figure-8 pattern).

 Omnidirectional

Omnidirectional microphones hear sounds from all directions more or less equally well.

They are great for covering a group of people or someone who is moving around. This mic is
great for picking up ambient or natural (NAT) sounds.

2. ON THE BASIS OF MECHANISM

 Dynamic
Their sound pickup device consists of a diaphragm that is attached to a movable coil. As the
diaphragm vibrates with the air pressure from the sound, the coil moves within a magnetic
field, generating an electric current. Also called moving-coil microphone.

They can be worked close to the sound source and still withstand high sound levels without
damage to the microphone or excessive input overload (distortion of very high-volume
sounds). They can also withstand fairly extreme temperatures, therefore, are ideal outdoor
mics.

 Condenser

Their diaphragm consists of a condenser plate that vibrates with the sound pressure against
another, fixed condenser plate called the backplate. Also called a capacitor microphone;
the common electret microphone is a condenser subtype with a permanently charged element.

They are much more sensitive to physical shock, temperature change, and input overload,
but they usually produce higher-quality sound when used at greater distances from the
sound source.

 Ribbon

Their sound pickup device consists of a ribbon that vibrates with the sound pressures within
a magnetic field. Also called velocity microphone.

While similar in sensitivity and quality to condenser mics, ribbon microphones produce a
warmer sound that is frequently preferred by singers. They are strictly for indoor use.

AUDIO MIXERS .
The audio mixer allows us to control the volume of a limited number of sound inputs and mix
them into a single output signal. It is needed whenever there are a number of sound sources
to select, blend together, and control (such as a couple of microphones, CD, VCR audio
output, etc.). The output of this unit is fed to the recorder.

Each input source comes into the mixer through a channel (vertical columns on the board)
which contains a number of rotary potentiometer knobs and buttons, each performing a
different function.
The basic input controls of the channel are:

 Gain

The gain control is the first stage for the incoming signal: it sets the amount of
amplification (boosting the signal) or attenuation (reducing the signal).

Your level is set correctly when you have a healthy sound signal coming in that still has
enough headroom, so the loudest portions aren't overmodulated. Overmodulation occurs
when the incoming signal is too loud and your signal becomes distorted. Too much gain
equals distortion.
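
Under the hood, gain is just multiplication, and decibels map to that multiplier
logarithmically. A minimal Python sketch (the +6 dB value is only an example):

    import numpy as np

    def apply_gain(samples, gain_db):
        """Boost (positive dB) or attenuate (negative dB) a float signal."""
        factor = 10 ** (gain_db / 20)  # dB -> linear amplitude factor
        boosted = samples * factor
        # Too much gain equals distortion: anything beyond +/-1.0 clips.
        return np.clip(boosted, -1.0, 1.0)

    louder = apply_gain(np.array([0.1, -0.4, 0.3]), gain_db=6.0)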

 Equaliser

The EQ allows us to boost or cut given frequency bands. On a simple board it's broken up
into high, mid, and low; some mixing boards expand to high, high-mid, low-mid, and low
frequencies.

The best practice is to use an EQ to remove troublesome frequencies, giving a clearer, less
muddy sound. It is used for making sounds more intelligible and reducing feedback.
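
As one concrete example of removing a troublesome band, the sketch below uses a SciPy
high-pass filter to cut low-frequency rumble (the 80 Hz cutoff and 48 kHz sample rate are
illustrative assumptions, not fixed rules):

    import numpy as np
    from scipy.signal import butter, sosfilt

    def cut_low_rumble(samples, sample_rate=48000, cutoff_hz=80):
        """High-pass filter: attenuate everything below cutoff_hz."""
        sos = butter(4, cutoff_hz, btype="highpass",
                     fs=sample_rate, output="sos")
        return sosfilt(sos, samples)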

 Pan

Panning allows us to move the sound to the left or to the right, or keep it in the center.
This lets us play with the stereo image.

The stereo image is the perceived spatial location of a given sound source. If you pan an
input fully to the left, the source will seem to come from your 9 o'clock. Playing with the
panning of each channel gives us more space in the mix.
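
A standard way to implement this is the constant-power pan law, sketched below (a general
technique, not specific to any particular mixer):

    import numpy as np

    def pan(mono_samples, position):
        """position: -1.0 = hard left, 0.0 = center, +1.0 = hard right.

        Constant-power law: left/right gains follow cos/sin so the
        perceived loudness stays steady as the source moves.
        """
        angle = (position + 1) * np.pi / 4  # map [-1, 1] -> [0, pi/2]
        left = np.cos(angle) * mono_samples
        right = np.sin(angle) * mono_samples
        return np.stack([left, right], axis=-1)  # (n, 2) stereo pair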

 Fader

The fader controls the level of a given channel that is sent to the main mix. So if you have
an input that you want to be the most prominent element of the mix, you would push its fader
up to full.

 Phantom

Phantom power is a DC voltage (usually 12-48 volts) used to power the electronics of a
condenser microphone. For some (non-electret) condensers it may also be used to provide
the polarizing voltage for the element itself.

Functions of an Audio Mixer


1. Input: to preamplify and control the volume of the various incoming signals

2. Mix: to combine and balance two or more incoming signals

3. Quality control: to manipulate the sound characteristics

4. Output: to route the combined signals to a specific output

5. Monitor: to listen to the sounds before or as their signals are actually recorded or
broadcast

AUDIO CONTROL AND ADJUSTMENT .


AUDIO LEVEL

Audio levels are measured in decibels (dB), usually on a numbered minus scale with zero
being maximum volume. It is advised never to go beyond 0 dB, as that may cause your audio
signal to clip, which results in distortion.
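
In digital gear this scale is dBFS (decibels relative to full scale), where 0 dB is the
largest value the system can represent. A minimal peak-reading sketch in Python:

    import numpy as np

    def peak_dbfs(samples):
        """Peak level of a float signal (full scale = 1.0) in dBFS."""
        peak = np.max(np.abs(samples))
        if peak == 0:
            return float("-inf")    # digital silence
        return 20 * np.log10(peak)  # 0 dBFS at full scale

    print(peak_dbfs(np.array([0.5, -0.5])))  # about -6.02 dBFS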

Cameras generally allow the operator to set the audio manually or automatically.

 Auto Control

Most audio and video recording equipment include the option of automatic gain control
(AGC) to avoid loud sounds overloading the audio system and causing distortion.

It monitors the incoming audio signal, raising quiet sounds in volume and holding loud
sounds back. It does so by automatically reducing the audio input when the sound signal
exceeds a certain level.

A completely automatic gain system amplifies all incoming sounds to a specific preset level
and "irons out" sound dynamics by preventing over- or under-amplification. But because the
AGC cannot distinguish between desirable sounds and noise and amplifies both
indiscriminately, it results in inferior sound quality. No adjustments can be made, and the
camera operator must accept the results as they are.
Some auto gain systems do have manual adjustments. The idea is to ensure that the gain
control is set high enough to amplify the quietest passages without over amplifying the
loudest sounds.

There are also special electronic devices, called "limiters" or "compressors," that
automatically adjust the dynamic range of the audio signal, but these are found only in more
sophisticated systems.
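
A very crude AGC along these lines can be sketched in a few lines of Python (a toy model
only: real systems use smooth attack and release envelopes, and the target level and window
size here are arbitrary assumptions):

    import numpy as np

    def simple_agc(samples, target=0.5, window=4800):
        """Scale each block so its peak sits at the target level.

        Quiet passages are boosted and loud ones held back; like any
        AGC, it cannot tell desirable sound from noise and adjusts
        both indiscriminately.
        """
        out = np.empty_like(samples)
        for start in range(0, len(samples), window):
            block = samples[start:start + window]
            peak = max(np.max(np.abs(block)), 1e-9)  # avoid divide-by-zero
            out[start:start + window] = block * (target / peak)
        return out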

 Manual Control

The audio level can be controlled manually by continuously monitoring the program while
watching an audio level meter.

Unlike automatic control circuits, audio personnel can anticipate changes and make artistic
judgments, which can make the final audio far superior.

The drawback to this is that the audio personnel need to be vigilant all the time, ready to
make any necessary readjustments. If they are not careful, the resulting audio may be less
satisfactory than the auto circuits would have produced.

 Audio Meters

Most professional cameras include an audio level meter (volume indicator) which allows the
camera operator to monitor the audio signal.

It most commonly takes the form of visual displays using bar graphs or some type of volume
unit (VU) meters.

A bar graph has a strip made up of tiny segments that varies in length with the audio
signal's strength. Twin bar graphs are used to monitor the left and right channels.
Calibrations vary, but a typical scale runs from -50 to +10 dB, with an upper working limit
of about +2 dB.

The VU meter has two scales: a "volume unit" scale marked in decibels and another showing
"percentage modulation." The normal range used is -20 to 0 dB, typically peaking between -2
and 0 dB. Overmodulation of the signal is indicated by a different colour, usually red.

In summary, if the camera operator needs the audio system to look after itself because he or
she is preoccupied with shooting the scene or is coping with unpredictable sounds, then the
automatic gain control has its merits; it will prevent loud sounds from overloading the
system. However, if an assistant is available who can monitor the sound as it is being
recorded and adjust the gain for optimum results, then this has significant artistic
advantages.
AUDIO CHANNEL

It is the passage or communication channel through which the sound signal is transported
from the player source to the speaker.

An audio file can contain one, two or even more channels.

1. Mono

Mono is short for monophonic, meaning one sound. In this system all the audio signals are
mixed together and routed through a single audio channel.

If you are listening to mono audio, you will notice that whatever you hear in your right
earbud, you will hear in the left earbud. That is because the speakers play back the same
single channel audio file into both earbuds. You won’t hear the drums in your left ear, or the
guitar in your right. Everything will just sound like it's right in front of you, evenly dispersed
through both earbuds.

The big advantage to mono is that everyone hears the very same signal at essentially the
same sound level. This makes well-designed mono systems very well suited for speech
reinforcement as they can provide excellent speech intelligibility.

Hence, it is preferred in radiotelephone communications, telephone networks, and radio
stations dedicated to talk shows and conversations, public speeches, and hearing aids.

However, the drawback of this system is that it does not convey any sensation of depth or
location so the audience cannot easily distinguish direction and distance.

Mono sound recording is done mostly with one microphone, and only one loudspeaker is
required to listen to the sound. For headphones and multiple loudspeakers, the paths are
mixed into a single signal path and transmitted. It is cheaper and easier to record in mono
sound.
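
Downmixing a stereo recording to mono is typically just an average of the two channels, as
in this short sketch:

    import numpy as np

    def stereo_to_mono(stereo):
        """stereo: float array of shape (n_samples, 2).

        Returns one channel that both earbuds will play identically.
        """
        return stereo.mean(axis=1)

    mono = stereo_to_mono(np.array([[0.2, 0.4], [0.1, -0.1]]))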

2. Stereo

Stereo is short for stereophonic, from the Greek stereos, meaning solid or three-dimensional
sound. This system transmits two independent signals through two separate channels into a
pair of speakers.
The signals have a specific level and phase relationship to each other so that when played
back through a suitable reproduction system, there will be an apparent image of the original
sound source.

As the independent signals emphasize different instruments or sounds in the right and left
channels, stereo sound creates an illusion of space and dimension. It enhances clarity and
gives the viewer the ability to localize the direction of the sound. This localization is what
gives the audience a sense of depth, a spatial awareness of the visual image and the sound.

A good example would be watching a film in which a train becomes visible to the viewer as it
approaches from the left. If you were listening with headphones, you would hear the train
only in the left ear. As it appeared in front of you, you would hear the train in both ears,
and then gradually more in the right headphone and less in the left as it disappeared away
to the right of the screen.

Stereo sound is thus preferred for listening to music, in theaters and concert halls, radio
stations dedicated to music, and FM broadcasting.

However, in a stereo system reverberation appears more pronounced, and extraneous noises
such as wind, ventilation, and footsteps are more prominent because they have direction,
rather than merging with the overall background.

Stereo recording is done with two or more special microphones. The stereo effect is achieved
by careful microphone placement, so that each microphone receives a different sound pressure
level. The loudspeakers likewise need to be capable of reproducing stereo and must be
positioned carefully. Recording stereo sound is more expensive and requires skill.

3. Surround

Surround is a multi-channel system with 3 or more channels and speakers. The most
common surround sound has six discrete channels: left front, right front (sometimes called
stereo left and right), center, a subwoofer for low-frequency effects (LFE), left rear, and right
rear speakers (sometimes called surround left and right).

To present the feeling of depth, direction, and realism, audio personnel pan between the five
main channels and route effects to the LFE channel.

In this system, the sound appears to "surround the listener" by 360 degrees, creating an
enveloping field of sound and directional audio sources. The term surround sound has become
popular in recent years and is more commonly used since the advent of home theater systems.

Disney introduced surround sound to the cinemas with the movie Fantasia, released in 1940.
Three channels were used behind the theater screen with three additional speakers used on
either side and at the rear. However, back then, implementing this system was extremely
expensive, and the system was used in only two theaters.
Cables and Connectors

Cables are used to transfer audio signals from the source to the speakers. They can be
divided into two categories:

 Unbalanced

The cables themselves consist of two wires inside the plastic casing: a signal wire and a
ground wire. The signal wire in the center of the cable passes the audio signal through, while
the surrounding ground wire shields the main signal wire from external electronic
interference from devices such as lights, televisions, radios and transformers. It does a
decent job of rejecting noise, but unfortunately, the wire itself also acts like an antenna and
picks up noise.

Unbalanced cables should have a maximum length of 4-6 meters, especially when used in
noisy environments and with signals that are low level to begin with, such as those from
keyboards, guitars, MP3 devices, etc.

Eg: RCA cable connectors, which are most often used in consumer microphones, small
camcorders, stereo setups such as surround sound systems, turntables and older audio
systems.

 Balanced

Balanced cables have three wires inside the plastic casing: two signal wires and a ground
wire.
The two signal wires carry the same audio signal, but one copy has its polarity inverted.
Interference picked up along the cable appears identically on both wires, so when the
receiving end flips one wire back and combines the two, the noise cancels out while the
signal is preserved. The surrounding ground wire additionally shields the signal wires from
external electronic interference.

Balanced cables can support much longer distances like 15-30 meters and are used in the
wiring for microphones, the interconnect cables between consoles, etc.

Eg: XLR cables. With an XLR jack in a camcorder, you can use any professional audio cable
to connect a high-quality microphone to the camera.
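
The cancellation can be demonstrated numerically (a toy model of the idea, not of real cable
physics):

    import numpy as np

    signal = np.array([0.1, 0.4, -0.3, 0.2])    # audio on the hot wire
    noise = np.array([0.05, 0.05, 0.05, 0.05])  # interference hits both wires

    hot = signal + noise    # normal-polarity copy plus noise
    cold = -signal + noise  # inverted copy plus the same noise

    received = (hot - cold) / 2           # subtraction cancels the noise
    print(np.allclose(received, signal))  # True: signal recovered intact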

FILE FORMATS .
An audio format defines the quality and data loss of stored audio. Different audio formats
are used depending on the application. They can primarily be grouped into three categories:
1. Uncompressed

As the name suggests, these audio files are uncompressed: real-world sound waves are simply
converted into a digital format without any processing in between. This is why they occupy
more space and retain detailed information about the recorded sound.

 PCM (Pulse Code Manipulation)

PCM stands for Pulse Code Modulation. It is still commonly used to store audio files on CDs
and DVDs. It is essentially a technique for converting analog audio into a digital format:
the sound is sampled at regular intervals, and the number of samples taken per second is the
sampling rate of the file.
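
The sketch below shows the idea: an analog waveform (here a sine function) is sampled at
fixed instants and quantized to 16-bit integers, the PCM representation used on CDs
(44.1 kHz is the CD sampling rate; the 440 Hz tone is just an example):

    import numpy as np

    sample_rate = 44100                       # CD sampling rate (samples/second)
    t = np.arange(sample_rate) / sample_rate  # one second of sample instants
    analog = np.sin(2 * np.pi * 440 * t)      # the "real-world" waveform

    # Quantize each sample to a 16-bit signed integer: this is PCM.
    pcm = np.round(analog * 32767).astype(np.int16)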

 WAV (Waveform)

It is one of the most widely used uncompressed formats for audio files and was introduced in
1991 by Microsoft and IBM. Although the format is less popular at present, it is still
extensively used in recording, especially to store sound recordings on CDs. The files are
usually large, since WAV typically holds standard 16-bit PCM encoding.

Most WAV files contain uncompressed audio in PCM format; WAV itself is just a wrapper
(container) around that data. It is compatible with both Windows and Mac.
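
Because WAV is only a thin wrapper around PCM, writing one takes a few lines with Python's
standard-library wave module (this continues from the PCM sketch above; the file name is
arbitrary):

    import wave

    # Wrap the 16-bit PCM samples from the previous sketch in a WAV container.
    with wave.open("tone.wav", "wb") as f:
        f.setnchannels(1)             # mono
        f.setsampwidth(2)             # 2 bytes = 16 bits per sample
        f.setframerate(44100)         # must match the PCM sampling rate
        f.writeframes(pcm.tobytes())  # raw PCM payload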

 AIFF (Audio Interchange File Format)

It was developed by Apple for Mac systems in 1988, which is why it is mainly used on Apple
devices. Like WAV files, AIFF files can contain multiple kinds of audio, but they typically
hold uncompressed audio in PCM format; AIFF, too, is just a wrapper for the PCM encoding. It
is compatible with both Windows and Mac.

2. Lossy Compressed

This is a lossy technique, in which the size as well as the quality of the audio is reduced;
the original sound of the music might therefore be altered. These formats are mostly used
for data transfer and streaming.

 MP3 (MPEG Audio Layer-3 - Moving Picture Experts Group)

It is the most popular audio format for music files. MP3's main aim is to remove sounds that
are inaudible or barely noticeable to human ears, which can reduce the size of the audio
file by 75 to 90% compared to the original. It is known as a universal music format, as it
is compatible with almost every device.

 AAC (Advanced Audio company)

AAC stands for Advanced Audio Coding. It was developed in 1997, after MP3. The compression
algorithm used by AAC is more complex and advanced than MP3's, so when the same audio file
is encoded in MP3 and AAC at the same bitrate, the AAC version will generally have better
sound quality. It is the standard audio compression method used by YouTube, Android, iOS,
iTunes, and PlayStation.

 WMA- Lossy (Windows Media Audio)

It was released in 1999 by Microsoft. It was designed to remove some of the flaws of the
MP3 compression method. The format can follow both lossy as well as lossless compression
techniques. It can drastically reduce the size of an audio file while retaining most of the data.
In terms of quality it is better than MP3, but it didn't gain popularity as it wasn't open-source.

3. Lossless Compressed

These compression techniques are recommended to maintain a balance between the quality of
the music and the size of the audio files. While the size will be smaller than the original
file, the quality is maintained.

 FLAC (Free Lossless Audio Codec)

It is open source and features an efficient compression algorithm, which can typically
shrink a file to around 50-70% of its original size. This format is popular among
audiophiles as a way to store collections of music in their highest-quality form.

 ALAC (Apple Lossless Audio Codec)

Developed by Apple, it was first introduced in 2004. The compression retains the metadata,
and the files are usually about half the size of WAV audio. It is the native lossless
compression for iOS and Mac. Since iOS devices don't natively support FLAC, users rely on
ALAC by default.

 WMA

The lossless WMA compression might not compress files as well as FLAC or ALAC, but it is
recommended for its DRM support. Also, it is mostly used by native Windows users. It is a
proprietary compression technique and thus is not recommended for data transfer or
distribution.

IN-CAMERA EDITING .
In-camera editing was used before editing had developed into what it is today. In this
technique, instead of editing the shots of a film into sequence after shooting, the director
or cinematographer shot the sequences in strict order. The resulting "edit" was therefore
already complete when the film was developed.
This process takes a lot of planning, so that the shots are filmed in the precise order in
which they will be presented. However, some of this time is reclaimed, as there is no
editing, cutting, or reordering of scenes later on.
