We all have a fairly simple and straightforward
definition of what a fake is: a fake closely resembles
something but usually lacks its intrinsic value.
There are many forms of fakes in our lives. We could
imagine for instance a person wearing a wig and
false eyelashes and fake fingernails eating margarine
on a hot «dog», while pouring saccharin into their
coffee, then reaching for it with a hand whose wrist
is garnished with a fake Rolex, while the fingers of
their other hand twitch away at a false diamond in
their left earlobe, all of which may have been paid
for by counterfeit money, which is then placed
under a false name in a fictitious account somewhere on
an island.
In music there are other notions of fake, for
instance what used to be called fake sheets, used
by musicians as a quicker approach to a song
they may not have learned before; or playback,
as was often the case on television when bands
would be up on stage, having to look pretty but not
necessarily able to sing their hits perfectly
while facing a camera. The voices were actually the
voices from the record, with the singers miming the
mouth motions. If you walk through street markets in
many countries,
you will very often see CDs or DVDs at ridiculously
low prices that were probably cloned.
In this chapter, I’ll take as our «fake» the idea
of a sampler - a machine, or software, that uses real
sounds to bluff us in one way or another. A clone
substituted for a baby will not fool its parents, and
probably not even its surrounding family members. A
sampled instrument will not fool someone who plays
the instrument, but comes close to fooling those
who are less familiar with it. As a parallel example,
think of your favorite fruit or vegetable, and the
texture and flavor it had in your youth. Is it now
as pleasing, or has some of its essential character
been slowly modified over the years for commercial
reasons? Do you want to retain the heart of your
sample in the musical context?
If you record one second at the beginning of
one of your favorite songs, and one second at the
end, you have effectively sampled it, but unless you
have a very good «phonographic» memory, you may
not remember much of what happened in between
these sampled points. If you take 100 different one
second samples coming to a total of 100 seconds, in
chronological order, you are getting a much better
representation of the song,
and can fill in the blanks easily. The highest
frequencies usually heard over a sound system
reach up to around 20,000 cps ( cycles per second ).
That means that the wave, if you look at it in that
way, goes up and down 20,000 times per second
for the highest pitches that we hear. This implies,
as has been mentioned earlier in “SOUND” and
“RECORD”, that to accurately portray your sound
you will need a sampler capable of working at high
sample rates.
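That requirement can be put in numbers. A minimal sketch of the Nyquist rule, which says the sample rate must exceed twice the highest frequency you want to capture:

```python
# The Nyquist rule: a digital system must sample at more than twice
# the highest frequency it is meant to capture.
def min_sample_rate(highest_freq_hz: float) -> float:
    """Lowest sample rate that can still represent highest_freq_hz."""
    return 2.0 * highest_freq_hz

# To capture the ~20,000 cps upper limit of our hearing:
print(min_sample_rate(20_000))  # 40000.0 -- hence the 44,100 Hz CD standard
```

Real converters also need some headroom above this theoretical floor, which is one reason 44,100 Hz rather than exactly 40,000 Hz became standard.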
As a film is only a series of sequential photos
that blur into what is perceived as smooth movement,
a sampled sound wave can only capture snips
(samples) of what was in the air at the moment the
sampling was done. When sampling an instrument,
these samples are usually quite short, representing
individual notes being played at different strengths
or as we say, velocities. Nevertheless, we can
compensate somewhat with a few tricks. A struck
string, or membrane, will have more tension in the
first part of its note than in the following part, and
the harder it is struck, the higher the pitch will
rise before settling into the sustained tone of the
instrument. With samplers that employ synthesizer
techniques of envelope control,
and responsiveness to what is called velocity, we can
improve the versatility of this static recording of our instrument.
Before getting into a more in-depth discussion
of what samplers do and what you can do with them,
just think ahead as we did for the recording sessions
to what you would like to do. Do you want your
samples to be employed in a very narrow context,
or would you want what you captured to be able to
be manipulated for any sort of future context that
might surprise you and tantalize you later down
the line? If this is the case, then you must think
of recording your sample in what they call a dry
situation, without too much sound from the room or
space that it was recorded in. You can add that in
later if you wish as you can add in many effects such
as distortion, saturation, etc. Before you record,
also think of being patient and precise, because the
task of creating an authentic-sounding sampled
instrument is very daunting; it is not for the faint of
heart (ear). As with recording outdoor sounds of
any sort or musicians in a studio, the techniques for
recording your samples will be similar, and you can
refer to “RECORD” if necessary. One last thing
before diving into what a sampler is
and does, we can think not only of individual sounds
like each note played by the instrument, but we
can also think in terms of sitting and getting into
a groove that allows you to play riffs that come
in a spontaneous manner. You can record all of
these as you would record anything, and later use
your sampling techniques and software to snip up
the best parts and then organize them onto your
sampler’s sample map. The advantage this would
have is that you capture spontaneity and surprise
elements that come with the interaction between the
player and the instrument in the moment it is played.
This is not something that is quite as simple to
do when playing the individual notes later from a
keyboard or other controller. This is also true of
effects used on the sound while the sound is being
played as opposed to being added to it later.
When playing something like an electric guitar or
any amplified instrument going through pedals or
other effect boxes, the effect can bring a swing to
the musical lines that would not be there if those
same lines were played without effects. This is a
very interesting way of capturing focused types of
sounds - very true sounding “fakes”.
Now it is time to get into the insides, the guts, of
a typical sampler. But first, let me simply say that in
certain circles it is better to call modern software
samplers «sample players», because in most cases
they do not record the sound that they will treat,
whereas in the earliest hardware samplers, there
was a setup which allowed an audio signal to go in
and be recorded by the machine before further treatment.
Usually, the earliest samplers, with or without
keyboards, had a computer system somewhere
doing the brunt of the work. We are now most often
using the power of our ubiquitous computers and
their large screens to handle samplers. This allows
a frankly vast number of choices to those trying to
capture sound for future personalized productions.
The earliest samplers were innovative, and since they
were new technology, were extremely expensive, and
not to be found in most home studios. They also had
much stronger limitations than we are used to now,
since they could mostly only capture the sound and
do transpositions which brought you up to the land
of Mickey or down to the land of Goliath depending
upon which side of the keyboard you played in
relation to the note that
held the original pitch of the sample. Samplers
began to drop in price, to have more memory, and
more unique manipulation tools, very often coming
from the world of synthesizers. Synthesizers had
been developed as a way of bringing new sounds
into our audio palette by using simple sound waves,
(see “Wave Forms”), that would be manipulated in
many ways through filtering, distortion, time delays,
etc. When the samplers became a common thing as
software on our computers, they benefited from a
huge jump in memory and processing capabilities
available. I will not speak of any specific sampler,
although as I describe certain things in this
discussion, some of you may begin to perceive
which sampler I tend to use the most. The point is
not about comparing sampler software, but about
what general features are to be found and
how these features fit into our sound sculpting work.
When you open an instrument in a software
sampler, it can be folded out as if it were a series of
modules in the kind of 19 inch rack that has become
prevalent in most studios, professional or otherwise.
The first time that you look at the façade on your
computer screen of this complex
instrument, it seems as if there are too many knobs
to turn and too many decisions to make. In time,
everything becomes more familiar, and it is obvious
that you do not need to use all features in every
instrument you design. A good starting point would
be to take a very simple sound, to record it on to
a single key, and then begin to experiment with it.
Choose a sound that you can live with because you
will hear it over and over as you play with tuning,
amplitude, or pitch envelopes, reverberation and
delays, and the many other options in your software
sampler. In the rack that holds your instrument, you
can actually add several instruments, each with its
own MIDI channel or channels, and with outputs
that have their own routing systems.
This will come up later in your work, but for
now it would be a good idea to only work with a
single instrument and perhaps try to turn it into
two or three or more very differently functioning
instruments all based upon a single sample sound.
This tedious exercise can pay off in the future
because you could save these instruments as
templates into which you will later drop much more
complicated sampled sounds, thus saving much time.
One possible setup of a folded out instrument
in the rack would be, from top to bottom, the
instrument and its tuning, panoramic, volume, and
MIDI choices. This would then be followed in the
lower sections of the rack by blocks that could
inform you as to whether you are routing the sound
directly to the outputs, or sending some by auxiliary
outputs to treatment such as reverberation etc.
Another block that is very important in the rack
would be the sample map indicating where on the
keyboard the sound would be placed, and whether
it would be playable on more than one key, and
with perhaps more than one velocity setting. And,
last but not least, there would be a loop editor
which would allow you to take a short sound bite
and let it have a wrap around section that would
permit continuous sound as long as the key was
held down on the keyboard. There is much more
sophistication to all of these things I have just
mentioned, and that is what we will try to get into next.
So, buckle your seat belts, put on your helmet,
and let’s crash into the world of software samplers!
There on the computer screen in front of you lies
the interface that you will be using as you develop
your sampled sounds into instruments that can be
played from whatever MIDI controller you decide
to use. This will usually be presented in the form
of a rack of modules, as in studios which use this
convenient system of stacking electronic boxes
- anything from synthesizers to effects units or
recorders etc. Also on the screen there will probably
be a browser with the hierarchy of folders and files
that your computer uses. The browser will mostly be
used to find elements that you will add to the rack.
It is a very good idea from the beginning to think
long term about how you would handle hundreds or
thousands of sampled sounds and the instruments
they compose. Your sampler probably was sold with
a set of professionally designed instruments, and
you can look at the system used to organize them,
and apply it to your own method of cataloging all of
your work. Depending upon the sophistication and
the efficiency of your software sampler, you should
be able to use search functions to find things, and
probably you can also build a system of shortcuts
that allows you to very quickly find what you need
for any situation when you are into the phase of
playing your sampler as opposed to building and
editing the sounds you have recorded.
We will now look more into the rack part of the
interface since I will presume that you can handle
the browser and your own organization. The rack
can have different forms, but we will presume that
its basic composition will be a series of rectangles
that represent instruments that can be folded out
into various editing and controller blocks. If we
leave an instrument block closed to its smallest
representation, it will be a very thin rectangle, called
the header, which would only have very essential
information - perhaps solo, mute, pitch, panoramic
and volume as well as the MIDI channel it is using.
As you fold it out to find other information, a likely
candidate in the upper part will be the source
module of the sound and what method of sampling
will be used. Different sampling methods can include
direct from disk playing, or manipulations of time
or tone in real time. These may have similarities but
are usually separately functioning systems feeding
the sound into its following modulators. A second
rectangle that should be visible would concern the
amplifier and its modulators.
Other rectangles could be concerned with effects
and their position in the flow of the signal. Insert
effects are those that would receive the entire audio
signal and transform it in one way or another before
passing it on. Send effects receive a feed from
different sound sources and handle them according
to the dosage that you desire; the result is then
mixed back with the original sound source before
going out of the outputs. Other blocks could be
the modulation sections concerning various parts of
sound control. These are real time controls that free
up your hands so that while you play the keyboard
your sound can be modified in real-time according
to the modulators you choose to use. Another
important block that could be visible might contain
the auxiliary sends similar to those seen on mixing
boards, that allow parts of the signal to be sent
to external effects. This would concern the whole
instrument, whereas the earlier send effects that we
had discussed concerned individual parts of your
sounds inside the individual instrument.
The starting point for building an instrument
would be bringing an audio signal, a sampled sound,
into your instrument and placing it on what is called
the sample map.
In the folded out instrument, you will somewhere
see something that could be called the mapping
editor. When you open this up you will see the
representation of a keyboard on the bottom, and
when you drag your sound on to the upper part
of this surface you will see that it can be restricted
to a single key or widened out to cover several
keys. There will be some indication for the original
key showing which note will play the sound with no
transposition. If you allow key follow, the sound
that you recorded will rise or fall in pitch depending
upon the note on the keyboard that you play.
We are starting with a single sound, so you could
stretch it across the entire keyboard if you like.
When you make more complex instruments, you will
set up key zones that show which sound plays on
which set of keys. And these zones can allow or
forbid transposition. But for now, the rest of this
description will be based upon the idea of what you
can do with a single recorded sound, be it a note, or
a phrase from speech, or thunder in the distance.
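Key follow works by resampling: a key above the original key plays the sound faster and higher, a key below plays it slower and lower. A minimal sketch of the equal-tempered ratio, using MIDI note numbers as an illustrative convention:

```python
def playback_rate(played_note: int, root_note: int) -> float:
    """Resampling ratio for equal-tempered key-follow transposition.

    Notes are MIDI note numbers; root_note is the key that holds the
    original pitch of the sample (no transposition).
    """
    return 2.0 ** ((played_note - root_note) / 12.0)

# A sample whose original key is middle C (MIDI 60):
print(playback_rate(72, 60))  # 2.0 -- one octave up, twice the speed
print(playback_rate(48, 60))  # 0.5 -- one octave down, half the speed
```

This speed change is exactly what produced the "land of Mickey" and "land of Goliath" effects of the early machines: pitch and duration were locked together.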
Early samplers were quite restricted in the
memory that they could use; they could only employ
short sound recordings.
To get around this limitation, they invented a system
called looping, which permitted the sampler to read
over and over again a portion of the recorded sound
while you held the key down on your keyboard.
Then the sound would go to its logical conclusion
when you released the key. Even though we no
longer have this limitation, there are many creative
reasons to use loops. We now should open up the
sample editor or wave editor and look at the sound
we have placed upon the keyboard. The first thing
is to see if the sound when you hit the key actually
starts where you want it to, and when you release
the key to see if there may be empty space (wasted
space) at the end of the sample. What we do in this
situation is called truncating, or cutting off anything
at the end that you do not need. And we can look
at the start point of the sample by zooming in and
placing the start marker as close as possible to the
first transient or first musical sound that you want to keep.
CAVEAT: If your sound has noise that needs to
be reduced, the beginning of the sample is a likely
place to find a good «noiseprint», so clean it up if
necessary before truncating.
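The truncating step can be sketched as scanning for the first and last samples above a silence threshold; the threshold value here is an arbitrary assumption, and real editors let you place the markers by eye and ear:

```python
def trim(samples, threshold=0.001):
    """Return the slice from the first to the last sample whose absolute
    value exceeds the silence threshold (normalized -1.0 to 1.0 audio)."""
    above = [i for i, s in enumerate(samples) if abs(s) > threshold]
    if not above:
        return []
    return samples[above[0]:above[-1] + 1]

raw = [0.0, 0.0, 0.0002, 0.5, -0.3, 0.1, 0.0004, 0.0, 0.0]
print(trim(raw))  # [0.5, -0.3, 0.1]
```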
That was the first step, simply making sure
that when you hit the key you have an immediate
reaction from your sampler, and that you are not
wasting your CPU on nonexistent sound at the end
of the sample. The loop is the part that we will look
at now and your creative decisions are necessary
here. The idea is that somewhere in the body of
the sound there is a part that you want to repeat.
The difficulty is that when the sound loops over
and over you may hear clicks, the result of
mismatched levels where the end of the loop joins
its beginning. Again we must zoom in
and use the sampler’s tools and our patience to find
a match that is as close to being natural sounding
as possible. A graphical representation of this is
usually possible where you will see the tail that must
join the beginning of the loop and you will work until
you find the smoothest join of these two parts. When
a loop has been defined you will probably have
several options as to how it will play. The obvious
possibilities are that it plays forward and continues
to loop as long as you hold the key down, or that it
plays in reverse order, also continuing while you hold
the key down. But depending upon your software,
you may have other more
interesting possibilities to explore. One thing that I
did not mention in the looping process is a technique
called crossfade, which blurs the joint
between the two portions of sound, thus smoothing out
the transition. I’m sure your sampler will have this.
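A crossfade loop can be sketched as blending the tail of the loop with the audio that leads into the loop start, so the jump back is seamless. This is a bare linear fade over an in-memory list; real samplers offer equal-power curves and adjustable fade lengths:

```python
def crossfade_loop(samples, loop_start, loop_end, fade_len):
    """Blend the fade_len samples before loop_end with the fade_len
    samples before loop_start, so wrapping back to loop_start is smooth.
    Requires loop_start >= fade_len."""
    out = list(samples)
    for i in range(fade_len):
        t = i / fade_len                          # 0 -> 1 across the fade
        a = samples[loop_end - fade_len + i]      # tail of the loop
        b = samples[loop_start - fade_len + i]    # audio leading into the start
        out[loop_end - fade_len + i] = (1 - t) * a + t * b
    return out
```

The idea is that by the last looped sample, what you hear already sounds like the material just before the loop start, so the wrap-around point no longer clicks.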
We should now have a sound that is transposed
across the keyboard while we play and that can be
prolonged by holding the keys down. Most default
systems in a sampler will already have a mapping
of velocity that can allow the sound to become
louder as you strike harder upon the key, acting
on the amplifier of the source sound. There can be
ways of scaling this velocity to better fit your own
playing techniques, including even a reverse velocity,
where the harder you play, the softer the sound will
become. Most instruments sound slightly brighter
when they are struck harder. We can use this same
controller called velocity - the speed at which the
key goes down - to control the pitch of the sound,
but here we should only use a small amount of
pitch modulation to avoid playing out of tune with
every different stroke on the keyboard. These are
elementary controls that take the originally very
static sound recording a little closer to lifelike behavior.
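That velocity scaling can be sketched as a simple power curve; the curve parameter and the reverse option are illustrative assumptions, not any particular sampler's controls:

```python
def velocity_to_gain(velocity, curve=1.0, reverse=False):
    """Map MIDI velocity (1-127) to an amplitude gain between 0 and 1.

    curve > 1 softens the response, curve < 1 makes it more sensitive;
    reverse=True gives the 'harder is softer' mapping mentioned above.
    """
    v = velocity / 127.0
    if reverse:
        v = 1.0 - v
    return v ** curve

print(round(velocity_to_gain(127), 2))            # 1.0
print(round(velocity_to_gain(64, curve=2.0), 2))  # 0.25 -- quieter than linear
```

The same normalized value could feed a small amount of pitch or filter modulation, which is how the "brighter when struck harder" behavior is usually faked.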
Both of these, volume and pitch, can also be
affected by another controller which is called the
envelope, or breakpoint envelope, which is not
obvious to those who have never used a synthesizer
or a sampler before. The idea is that each point of
the breakpoint envelope represents a change of
value that can be either represented by straight lines
between points, or curves on more sophisticated
software. This allows you to dramatically change the
sound by softening or hardening its initial part, and
changing its volume throughout its duration. This
also allows you to do similar modifications to pitch,
and if the breakpoint envelope has enough points,
you can provoke rhythmic changes in volume, pitch,
or many other aspects of your final sound.
Tremolo and vibrato are repeating changes of volume
and pitch which come from the musician’s actual
movements, and are very spontaneous and sensual.
A machine cannot get to this level of subtlety, but
by using either the envelopes or an LFO - a low
frequency oscillator - you can add these aspects
as well to your sound. The LFO will not be heard
because its frequencies are much below the limits
that we can recognize as tone. The idea of a cycling
LFO is to simulate a controller
such as a finger on a fretboard rocking back and
forth to change the tension on the string.
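Both devices can be sketched in a few lines. A minimal illustration, assuming linear envelope segments and a sine-shaped LFO (times in seconds, vibrato depth in arbitrary pitch units):

```python
import math

def envelope(points, t):
    """Linear breakpoint envelope: points are sorted (time, value) pairs."""
    if t <= points[0][0]:
        return points[0][1]
    for (t0, v0), (t1, v1) in zip(points, points[1:]):
        if t0 <= t <= t1:
            return v0 + (v1 - v0) * (t - t0) / (t1 - t0)
    return points[-1][1]

def lfo(rate_hz, depth, t):
    """Sub-audio sine oscillator, e.g. for vibrato (pitch) or tremolo (volume)."""
    return depth * math.sin(2 * math.pi * rate_hz * t)

# A fast attack then a decay to a sustain level, plus a gentle 5 Hz vibrato:
vol_points = [(0.0, 0.0), (0.05, 1.0), (0.3, 0.7)]
print(envelope(vol_points, 0.025))  # 0.5 -- halfway up the attack
print(lfo(5.0, 10.0, 0.0))          # 0.0 -- vibrato starts at center pitch
```

With enough breakpoints, the same evaluator produces the rhythmic volume or pitch changes mentioned above; the LFO handles the steady cyclic part.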
Our sound now can be modulated by velocity,
and envelopes or LFOs, but it still can seem very static.
It would be too long to go into the large number
of controllers possible in modern day samplers. The
most common situation you will find is that you will
have a matrix of some sort that allows you to link
what is called a MIDI (Musical Instrument Digital
Interface) controller to any variable parameter of
the sounds on your sampler. Visually, this may be
represented by the controller numbers appearing
in your browser that you can then drop onto the
controlling knobs that you see on the various
modules of your instrument. A very efficient method
is what is called MIDI learn, which allows you to link
a controller on your MIDI instrument to the software
sampler by simply clicking on the knob on the
graphical interface, and then moving the controller
on your instrument. The sophistication of MIDI
matrices is gigantic, and MIDI has been very useful since
its beginnings in 1983, for synthesizers, samplers,
lighting for shows, etc. Suffice it to say that it would
be a good idea to get used to
linking your most ergonomic controllers to the more
important parameters of your instrument. And don’t
forget that usually any source controller can control
more than one target modulation, and one target can
have more than one source. The principle will be the
same, think and link.
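A MIDI-learn style matrix can be sketched as a table from controller numbers to parameter setters. All the names here are hypothetical, not any particular sampler's API; the point is the many-to-many linking:

```python
# Sketch of a MIDI-learn style controller matrix (names are illustrative).
class ControlMatrix:
    def __init__(self):
        self.links = {}  # CC number -> list of parameter-setting callbacks

    def learn(self, cc_number, set_parameter):
        """Link an incoming CC number to a target parameter; one source may
        drive several targets, and a target may have several sources."""
        self.links.setdefault(cc_number, []).append(set_parameter)

    def handle_cc(self, cc_number, value):
        for setter in self.links.get(cc_number, []):
            setter(value / 127.0)  # normalize 0-127 to 0.0-1.0

state = {}
matrix = ControlMatrix()
matrix.learn(74, lambda v: state.__setitem__("cutoff", v))
matrix.learn(74, lambda v: state.__setitem__("brightness", v))
matrix.handle_cc(74, 127)
print(state)  # {'cutoff': 1.0, 'brightness': 1.0}
```

The "learn" step in real software simply fills such a table by watching which controller moves after you click a knob.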
As said before with the idea of organizing your
future instruments and sound sets, all the work you
do on this individual simple instrument could actually
serve as a template for future instruments that you
will design. You can build a set of templates and save
them to have easy access, which will then allow you
to call up a template, and then simply fill the sample
map, and from that point on revise according to the
new sounds you have just placed on the key map of
the new instrument.
Back again to our single sound project. The
advantage, if you are not yet bored to death by the
sound, is that, as you attack the world of effects,
and the possibility of chaining effects in series or in
parallel systems, you will clearly hear the influence
on your sound. Always keep in mind that this work
can lead to your own signal processing setups, like
a group of foot pedal effects, that you can save for
other sample map treatment.
In the chain, you can use bypass on certain effects,
or you can switch the order of effect units. All this
can be done on the computer screen, without the
spaghetti tangle of wires, and nothing impedes your experimentation.
Reverberation can be left out of the chain for
now, since it is more important in the mix stage of
a project (see “MIX”). At this point we are still trying to
bring life to static samples, and give them character
and a reason to be recognized when mixed with
other sounds in an overall audio collage. Bear in
mind that we will be dealing with effects included
in the software of your sampler, which are digital
jugglers of zeros and ones. It is unlikely that these
effects can rival external effects, but they
can certainly help in a pinch, and may even be
more sonically correct in your own context. As a
simple image, think of these as the stage lighting
shining down onto your sampled sounds, and
think of them as being able to change with time as
the tune progresses. This brings us back to the
matrix ( Hello, Neo! ) and to the linking of external
controllers to various parameters ( knobs ) of the
effects units. Never neglect the fact that your
sharply honed effect becomes
much more lively if it is cleverly varied over time in
accordance with the feeling of the tune.
The difference between parallel, and series, in
an effects chain is to be kept in mind as well, with
separate parallel treatment of a sound by 2 effects
sometimes giving more distinct results than the same
sound going in series from one effect to the next.
You decide the processing order of your FX boxes,
and can mix between parallel and series as your ears dictate.
General families of effects exist.
Volume, or level, manipulations concern
compressors, expanders, limiters, etc.
Frequency-based effects include equalization,
filters, exciters, and harmonic doublers etc.
Time-based effects include chorus, flanging,
echo and delay ( each working within its own span
of time, dealing with our ear-mind perception of
distance and location ).
Timbral-based effects are numerous and include
some of the above, but also overdrive, saturation,
distortion, bit crushing, etc.
Spatial-based effects can be static as in the left
right position in the stereo field, or can be dynamic,
as with the rotor principle in
the Leslie cabinets used with Hammond organs, or
the flight of an insect in a surround sound cinema
etc. Voice related effects are numerous as well, but
often are a combination of some of the above.
Using our single sound, let’s start with the
time-based family of effects, and take them in the
following order: flanger, chorus, delay, echo, and
a bit of reverberation. Why in this order? Because
our perception of sound seems to follow a series
of distinctions based upon the differences in
time between the onset of a sound and that of
a repeated onset of the same sound. The time
intervals determining our perception are relative,
but seem to fall into fairly clear slices. Other ideas
related to this can be found in “SOUND”, or
much more precise information can be had from
discovering what the Haas effect is by researching
on the Internet.
Don’t forget the general distinction of insert
effects ( treating the entire sound and passing the
modified version on down the chain ), and effects
used by auxiliary sends ( treating a feed from the
original sound and then folding its results back into
the overall signal chain ).
This will be very important for the clarity of your
work. Inserts gobble up CPU as a snack, while
auxiliary sends are more community oriented,
sharing their modifying algorithms with whomever
knocks at their door.
Dear ear, the gang’s all ear! And our sound has
now been digitally cloned and is being repeated
(one or more times) in the time-based algorithm. If
the delay of the doubled sound lies within the 0 to
15 ms range, it doesn’t get perceived as a second
coming, but gets blurred somewhat to appear as
what is usually called the jet airplane sound. This is
the usual working range of the flanger and as with all
the other time-based effects, the actual delay can
be modulated with time and the doubled sound may
be treated by equalization, giving a larger variation
as a result.
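As a rough illustration of that principle, a flanger can be sketched as a short delay swept by a slow LFO and mixed back with the dry signal. This version uses an integer-sample delay only; real flangers add interpolation, feedback, and equalization of the doubled path:

```python
import math

def flanger(samples, rate=44100, depth_ms=5.0, lfo_hz=0.5, mix=0.5):
    """Mix each sample with a copy delayed by a slowly swept 0-to-depth_ms
    time, producing the classic 'jet airplane' comb-filter sweep."""
    out = []
    for n, x in enumerate(samples):
        sweep = 0.5 * (1 + math.sin(2 * math.pi * lfo_hz * n / rate))
        delay = int(sweep * depth_ms / 1000.0 * rate)
        delayed = samples[n - delay] if n >= delay else 0.0
        out.append((1 - mix) * x + mix * delayed)
    return out
```

The comb-filter cancellations that cause the characteristic sound come precisely from that sub-15 ms delay being mixed with the original.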
The next perceived range is from 10 ms to 25
ms ( slightly overlapping the flanger range ), and
this is called chorus. Common occurrences of this
would be the richness of backing vocals singing
with precision and feeling, or several of the same
instruments playing in unison. Nothing is identical
in nature, and by feeding back our cloned sound in
this range we can make it more
effective by varying the delay time subtly over the
duration of the sound.
When we get past the 25 ms range ( between 25
and 50 ms ) we perceive a separation between the
source sound and its double. In this range we have
a “slap back” effect often heard in Elvis recordings
and on John Lennon’s Rock ’n’ Roll album. It is very
effective on percussion and other instruments as
well, a sort of tight space with instant rebound.
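A slap-back is simpler still: one fixed delayed copy in the 25 to 50 ms range, mixed back at a lower level. A minimal sketch:

```python
def slapback(samples, rate=44100, delay_ms=40.0, level=0.6):
    """Add a single delayed copy in the 25-50 ms 'slap back' range,
    mixed in below the dry signal."""
    d = int(delay_ms / 1000.0 * rate)
    return [x + level * (samples[n - d] if n >= d else 0.0)
            for n, x in enumerate(samples)]
```

The delay time and level here are typical illustrative values; shorten the delay toward 25 ms for a tighter room feel, lengthen it toward 50 ms for a looser bounce.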
Stretching to 50 ms and beyond, the
separation between our two signals begins to
suggest greater spaces between the origin of the
sound and the solid objects surrounding it. Sound
travels fast, but the echo from a canyon wall does
take more time to reach you than that from the
parking structure walls. So much of our knowledge
of our surroundings depends upon the ear-brain
analysis of sound separation, that we have a nearly
unconscious mapping of space going on at all
times. This is what we are playing with in time-based
modifications of sound. As sound travels through
air, it is also modified by the absorption of certain
frequencies, and equalization of the doubled sound
can help us be even more convincing about the
soundscape we are creating.
Reverberation is the overall return of all the
echoes from all surrounding reflecting surfaces, and
the interplay of all the signals ( filtering, reinforcing,
etc. ) is immensely complex and CPU hungry. This
is why reverberation is generally used via auxiliary
sends to capture all the sources, more than as an
insert on individual sound sources. But, you’re the
chef, and you can set the processing price.
A final thought on delays, since you now know
the main ideas, is that by mixing spatial information
and the number and timing of repeats, your single
sampled sound gains a much larger playground
surface. Also, I hope some of you will delve deeper
into the phenomenon of hearing which is so essential
to our joy, and so taken for granted.
In the frequency-based effect family we primarily
are dealing with changes in perceived pitch of
sounds, and often using a filtering system of some
sort. This can affect the phase of a signal as well,
but let’s look into the more obvious sound changes.
Your coffee filter can remove some or all of the
residue as the water passes through the coffee
grounds. Filters in audio are selective in several ways,
and have fairly self explanatory family names:
low pass, high pass, band pass, band reject, notch,
etc. Your sampler should have all of these and
several more, with detailed explanations of their
functions in its documentation. I will only give a few
outlines of what to look for, and let your fingers and
ears do the rest. In the digital world, the common
method of describing a filter would be its family name
( for instance, low pass ), as well as its number of
poles, which indicates, in 6 dB steps ( one pole being
6 dB per octave ), how steep the slope of the filter would be. In
the mathematical world, we could imagine that if we
used a low pass filter at 10,000 Hz, then exactly
at that frequency there would be a lessening of all
frequencies beyond that point. The truth is that
filtering presents serious problems when it comes
into play, and the search for solutions has
given several results, each with its own character.
While cutting down on the amplitude of frequencies
beyond a chosen point, there may actually be a
boost near and around the peak frequency - the
cutoff point. The solutions in hardware filters can
be emulated somewhat in digital filters, and digital
algorithms may provide some new approaches. It’s
good to know that we can learn by listening how to
set the cutoff points
and how steep a slope we require for our filters. We
can even experiment with one filter after another
for more drastic results. Filtering, equalization, and
other frequency based effects can be corrective, or
creative, depending upon the situation.
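As a minimal illustration of the pole idea, a single-pole (6 dB/octave) low-pass filter can be written as a one-line recurrence. The coefficient formula is a common textbook approximation, not any specific sampler's filter design:

```python
import math

def one_pole_lowpass(samples, cutoff_hz, rate=44100):
    """One-pole (6 dB/octave) low-pass filter; steeper slopes come from
    cascading more poles, roughly 6 dB per octave per pole."""
    a = math.exp(-2.0 * math.pi * cutoff_hz / rate)
    y, out = 0.0, []
    for x in samples:
        y = (1 - a) * x + a * y  # smooth the input toward the output
        out.append(y)
    return out
```

Running two of these in series gives an approximately 12 dB/octave slope, which is one way to hear the "one filter after another" experiment suggested above.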
Dynamics processors, working in the volume
domain, are among the more difficult to feel your
way into. When you’ve spent time and money on
an external compressor, and can’t seem to hear it
working, it’s sometimes a problem of knowing what
to listen for. As much as possible, if using a bypass
to compare with the original sound, assure yourself
that the listening levels of the two compared sounds
are similar. Then start thinking in terms of weight
perception: does the treated sound seem more
compact, dense, and present? Listen to the onset
of the transients ( percussion, etc. ) and the body
of the tonal part of the sound. Listen with your ears
zoomed in, but also in a slightly distracted manner.
Know your goals when dealing with dynamics
processors, and slowly train your ear to hear the
subtle differences.
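The static part of what a compressor does can be sketched as a simple gain curve; the threshold and ratio values are arbitrary examples, and real compressors add attack and release timing on top of this:

```python
def compress_db(level_db, threshold_db=-20.0, ratio=4.0):
    """Static compressor curve: levels above the threshold are reduced so
    that 'ratio' dB of input yields 1 dB of output above the threshold."""
    if level_db <= threshold_db:
        return level_db
    return threshold_db + (level_db - threshold_db) / ratio

print(compress_db(-30.0))  # -30.0 -- below threshold, untouched
print(compress_db(0.0))    # -15.0 -- 20 dB over threshold squeezed to 5 dB
```

Comparing the squeezed output at matched listening levels is exactly the "does it seem more compact and dense" test described above.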
I needn’t go any further into the effects part of
your samplers, since it is a very hands on learning
process, so I’ll now just wind up with a few remarks
concerning present and future developments. The
early samplers had their computation behind the
scenes, but we all live with computers now and it is all
on the screens. Presets can simplify all our choices,
but the adventure starts when you twiddle the
knobs. The deeper you dive in, the more you realize
that what you see is a graphical interface simplifying
manipulations. If your sampler allows you to work
with a scripting language, you get closer to the nuts
and bolts, and use words to tell those zeros and
ones what to do. The fluidity is becoming much more
apparent in the final flow of our captured “sound
shots”. Our static single sound can wriggle like a
salamander up a buttered hill if we want.
And that would be the ultimate goal; then we will
have transformed our fakes into fairly expressive instruments.