Hello, friends! In previous topics we talked about how to record live instruments and MIDI in your MIX. Today we won't talk about instruments or samples, but about the synthesizer: a digital instrument with its own unique sound. I would like to explain the use of the five most important synthesis modules (Oscillator, Filter, Amplifier, Envelope, and LFO) using the ES2 in Logic Pro 9 as an example.

Create a new software instrument track and set the input to the ES2. The synth will pop up automatically. 


An oscillator generates sound. Its task is to create a waveform, and the resulting sound differs depending on the shape of that waveform. The oscillator does this continuously; the rate at which it generates each cycle of the waveform is what we hear as pitch.

The most common oscillators are:

1. Sawtooth waves, also called saw waves, have a very strong, clear, buzzing sound. A sawtooth wave can be made by adding a series of sine waves at different frequencies and volume levels.

2. Square waves have a rich sound that's not quite as buzzy as a sawtooth wave, but not as pure as a sine. Chiptune and old video game tracks are made almost exclusively from square waves.

3. Pulse waves are a variation on the above: a pulse wave is like a square wave whose "on" portion can be narrower or wider than half the cycle, and it has the unique ability to have that width modulated (called "pulse width modulation").

4. Triangle waves sound like something between a sine wave and a square wave. Like square waves, they contain only the odd harmonics of the fundamental frequency. They differ from square waves because the volume of each added harmonic drops more quickly.

5. Sine waves look similar to a gentle wave in a bowl of water (like a horizontal "S"), moving up and down with no abrupt starts or stops; this produces a mild, soft tone.

6. Noise waves are irregular and they do not have a repeated pattern. Noise generators output all frequencies distributed randomly through the entire audible spectrum.
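To make these shapes concrete, here is a small illustrative sketch (plain Python, nothing ES2-specific) that computes one sample of each basic waveform from a phase value between 0 and 1, including the additive construction of a saw from sine harmonics mentioned above:

```python
import math

def sine(phase):
    """Pure sine: one smooth cycle as phase goes 0 -> 1."""
    return math.sin(2.0 * math.pi * phase)

def square(phase):
    """Square: +1 for the first half of the cycle, -1 for the second."""
    return 1.0 if phase < 0.5 else -1.0

def pulse(phase, width=0.25):
    """Pulse: a square wave with an adjustable 'on' width (duty cycle)."""
    return 1.0 if phase < width else -1.0

def sawtooth(phase):
    """Saw: ramps linearly from -1 up to +1 over each cycle."""
    return 2.0 * phase - 1.0

def triangle(phase):
    """Triangle: linear ramp up then down; odd harmonics only."""
    return 4.0 * abs(phase - 0.5) - 1.0

def additive_saw(phase, harmonics=200):
    """Approximate a saw by summing sine harmonics at falling volumes."""
    return -(2.0 / math.pi) * sum(
        math.sin(2.0 * math.pi * n * phase) / n
        for n in range(1, harmonics + 1)
    )
```

With enough harmonics, `additive_saw` converges to the same ramp that `sawtooth` computes directly, which is exactly the "series of sine waves at different frequencies and volume levels" described above.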

Do you remember this? I think it's a perfect example of synthesizer use in an old game ;) It was created by NIKITA in 1995.

If you're having problems with the Flash player, you can download the music there.

As you can see, the ES2 has three oscillators. In the default preset, oscillator 1 is set to a sawtooth wave, oscillators 2 and 3 are both set to square waves with the pulse width at about 2/3, and oscillator 3 is also detuned up 10 cents. The pulse width here is modulated by an LFO, which varies the width of the square wave over time. To the right of the wave dials we see a triangle that looks similar to an x/y pad, but it is simply a blend "knob" allowing you to mix the three oscillators.


A filter is a module that allows only certain frequencies of the signal it receives to pass through, while acting as a barrier to others. In this way a filter is used to screen out, or filter out, unwanted frequencies from the waveform so as to alter the timbre. On a synthesizer, the filter may be labeled VCF (Voltage Controlled Filter) or DCF (Digitally Controlled Filter).

The most common filters are:

1. Lowpass filter: Low frequencies are passed; high frequencies are attenuated.

2. Highpass filter: High frequencies are passed; low frequencies are attenuated.

3. Bandpass filter: Only frequencies within a frequency band are passed.

4. Band Reject filter: Only frequencies within a frequency band are attenuated.

5. Allpass filter: All frequencies in the spectrum are passed, but the phase of the output is modified.

After being filtered, a brilliant-sounding sawtooth wave can become a smooth, warm sound without sharp treble.
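As a rough illustration of what a lowpass filter does, here is a minimal one-pole smoother in plain Python (far simpler than the ES2's actual filters, but the idea is the same: slow changes pass through, fast wiggles are attenuated):

```python
import math

def one_pole_lowpass(samples, cutoff_hz, sample_rate=44100.0):
    """Very simple one-pole lowpass filter: each output sample moves a
    fraction of the way toward the input, so rapid changes (high
    frequencies) are smoothed out while slow changes pass through."""
    coeff = 1.0 - math.exp(-2.0 * math.pi * cutoff_hz / sample_rate)
    out, y = [], 0.0
    for x in samples:
        y += coeff * (x - y)   # move part of the way toward the input
        out.append(y)
    return out
```

Feeding it a steady signal leaves the level untouched, while a rapidly alternating (treble-heavy) signal comes out much quieter; a higher `cutoff_hz` lets more of the fast movement through.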

The ES2 has two filters, which can be used in parallel or in series; the default preset uses them in parallel. Filter 1 has Drive, Resonance (Res), and Cut parameters. With the Cut knob you select the frequency at which the filter begins to reduce the signal, the Res knob controls how much resonance is added at the cutoff frequency, and the Drive knob acts like a gain stage. The five buttons beside these knobs allow you to choose which type of filter you want; the default is a highpass filter. Filter 2 is largely the same, with a few differences.

The five buttons of Filter 1 are replaced with four slope buttons, which let you choose how steeply the filter rolls off past the cutoff. The FM knob modulates the Filter 2 cutoff with the Oscillator 1 frequency.


The amplifier is the module that controls the level of the signal before it is output to your sound card or a digital file. Its level is typically shaped by an envelope.

The amplifier section on the ES2 is the smallest. Its only true parameter is the Sine Level knob, which acts as a gain stage. The effects section to the right also contains Volume and Distortion controls which, although not technically part of the amp, can still produce the desired loudness.


The envelope, also known as ADSR, is what controls the way the oscillator ‘plays’ the notes. ADSR stands for:

Attack time – how fast the note hits or swells,

Decay time – how fast the note goes from the full attack level to the sustained level,

Sustain level – the level at which the note is held while the key is still pressed,

Release time – how fast the note fades away after the key is released.

The ES2 has two full envelopes (2 and 3) and a simpler one (envelope 1) that has only attack and decay. In the default preset, only envelope 2 is used.
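The four stages can be sketched as a simple function of time. This is an illustrative linear ADSR in plain Python (names like `gate_time` are my own, not ES2 parameters):

```python
def adsr_level(t, gate_time, attack, decay, sustain, release):
    """Envelope level (0..1) at time t, with the key released at gate_time."""
    def level_while_held(t):
        if t < attack:                        # swell up to full level
            return t / attack
        if t < attack + decay:                # fall toward the sustain level
            return 1.0 - (1.0 - sustain) * (t - attack) / decay
        return sustain                        # hold while the key is down
    if t < gate_time:
        return level_while_held(t)
    start = level_while_held(gate_time)       # level when the key was let go
    if t < gate_time + release:               # fade out over the release time
        return start * (1.0 - (t - gate_time) / release)
    return 0.0
```

For example, with attack 0.1 s, decay 0.2 s, sustain 0.5, and release 0.3 s, the level ramps to 1.0 at 0.1 s, settles at 0.5 while the key is held, and fades to 0 over 0.3 s after the key is released.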


The LFO, or low frequency oscillator, is called that because its frequency is below the human hearing range (under 20 Hz). It is not a sound source itself; it works when connected to the oscillators or the amplifier, modulating other aspects of the synthesizer to add a more player-like quality. It's great for sweeping effects similar to tremolo, vibrato, or wah-wah.

The ES2 has two LFOs. LFO 1 has an EG slider, which determines how fast the LFO fades in; a Rate slider, which determines the rate at which the LFO modulates; and a wave shape button, which determines the type of wave the LFO oscillates as. LFO 2 has only a Rate slider and a wave shape button.
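For example, vibrato is just an LFO slowly wiggling an oscillator's pitch. A tiny sketch (illustrative Python, not ES2 internals; the parameter names and values are my own):

```python
import math

def vibrato_freq(t, base_hz=440.0, lfo_rate_hz=5.0, depth_hz=4.0):
    """Instantaneous pitch of a note whose frequency is modulated by a
    sine LFO: it wobbles +/- depth_hz around the base pitch."""
    return base_hz + depth_hz * math.sin(2.0 * math.pi * lfo_rate_hz * t)
```

Raising `lfo_rate_hz` makes the wobble faster; raising `depth_hz` makes it wider. Routing the same LFO to the amplifier level instead would give tremolo, and to a filter cutoff, a wah-like sweep.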

I hope this information is helpful for you. The synthesizer is a powerful instrument with its own timbre, just like any live instrument, and it's a perfect opportunity to give your music more color. Experiment and find the sound that you need; the synthesizer has a lot of tools to help you reach this goal.

Good luck!

In previous topics we talked about how to record live instruments and MIDI in your MIX, about automation in the DAW, and about dynamic processors. Today I would like to demonstrate effective use of "mirror EQ" in a mixing context.

This is the project of my song in Logic Pro 9.

Mirror EQ is a technique we use when certain tracks in our project share similar frequencies, which makes it hard for them to sound clear and independent of each other.

In our MIX we have a lot of different tracks:

There are two solo instruments: cello 1 and trombone.

Piano and Clav play the rhythm of the song.

There are also lute, bass, and lead vocal, and

one cello plays the bass voice while the two others play middle voices.

So, we have tracks A and B "competing" for the same frequencies. Mirror EQ basically consists of boosting the best-sounding frequencies of track A and the best-sounding frequencies of track B. Then, in track A we attenuate the frequencies that we boosted in track B, and in track B we attenuate the frequencies that we boosted in track A. This generally makes each instrument sound thinner by itself, but clearer and fuller when they sound together.
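The bookkeeping can be expressed as a tiny sketch (a hypothetical Python helper with made-up gain values; the real EQ moves depend entirely on your ears):

```python
def mirror_eq(best_a, best_b, boost_db=3.0, cut_db=-3.0):
    """Given each track's best-sounding frequencies (in Hz), return
    per-track EQ moves: boost a track's own best bands and cut the
    other track's best bands, so the curves mirror each other."""
    eq_a = {f: boost_db for f in best_a}
    eq_a.update({f: cut_db for f in best_b})   # carve out room for track B
    eq_b = {f: boost_db for f in best_b}
    eq_b.update({f: cut_db for f in best_a})   # carve out room for track A
    return eq_a, eq_b
```

For example, `mirror_eq([300, 3000], [180, 1000])` boosts 300 Hz and 3 kHz on the first track while cutting them on the second, and vice versa.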


Let's talk about these two cello voices. They sounded kind of muddy and unclear. To make them sound clearer, I mirror EQ'd them.


So, to mirror EQ them, I used a visualizer to see which frequencies were sounding naturally in the "cello 2" track and which in "cello 3":

I noticed that the "favourite" frequencies in cello 2 were around 300 Hz and 3 kHz, and the "favourite" frequencies in cello 3 were around 180 Hz and 1 kHz. With that information, I used the EQ on each track to boost its best frequencies and attenuate the worst ones. Watch the slides and compare them ;)


Of course, the "before" sound is more natural on its own. But in the context of the main mix it is too heavy and creates a muddy region of the music. That is why the "after EQ" version works better there.


P.S. This MIX is the song for the lesson 5 "Songwriting" assignment on coursera.org ;)

Highly recommend this course!)

In the previous topic we talked about how to record MIDI in your MIX, using a flute as the example. We ended up with a dialog between two instruments, cello and flute, in the middle of the song.

When we talk with somebody in real life, we usually take turns rather than speaking all at once, because we should listen to our partner and keep silent while he speaks. Then we switch: you answer his question and he keeps silent. Only with this mutual respect will there be a good dialog. The same situation is common in music: when a new instrument begins to play, or one has a solo, the others should play more quietly.

We have a dialog in our song, but at this moment the trombone does not want to listen to its partners ;) It keeps playing so loudly that we cannot clearly hear what the other instruments want to say.

Let's try to correct this.

Press "A" on your keyboard and Logic will display track automation. Automation data is shown on a transparent gray shaded area. You can choose the parameter you want to view and edit in the Automation Parameter menu, which appears below the track names in the Arrange track list.

We need to select "Volume" and, at the moment when the cello and flute have their dialog, make a slight dip in the green line on the trombone track.

You can do this by setting four points on the track: the first and last fix the trombone's default volume, and the two middle points set the lower volume during the dialog, when it should not play so loudly.
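Under the hood, volume automation is just linear interpolation between breakpoints. An illustrative sketch (plain Python; the times and dB values are made up for the example):

```python
def automation_value(t, points):
    """points: list of (time_sec, value) pairs sorted by time.
    Returns the automated value at time t, holding the first/last value
    outside the breakpoints and interpolating linearly between them."""
    if t <= points[0][0]:
        return points[0][1]
    if t >= points[-1][0]:
        return points[-1][1]
    for (t0, v0), (t1, v1) in zip(points, points[1:]):
        if t0 <= t <= t1:
            return v0 + (v1 - v0) * (t - t0) / (t1 - t0)

# Four points: default volume outside the dialog, a -6 dB dip during it.
trombone_volume_db = [(28.0, 0.0), (30.0, -6.0), (50.0, -6.0), (52.0, 0.0)]
```

Before 28 s and after 52 s the trombone plays at its default level; between 30 s and 50 s it is held 6 dB down, with short ramps on either side.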

[widgetkit id=12] 

Alternatively, you can do it in real time during playback with a fader on the mixer board.

As you can see, at the start automation is off.

"Read" automation tells the program to play back all the automation that has been recorded on the track.

"Touch" automation allows making changes on the fly. In this mode, the volume is controlled by the fader during playback, and it automatically returns to the default volume value when you stop moving the fader.

That will not happen if you select "Latch" automation: the volume stays at the value of the last change until the song finishes.

[widgetkit id=13] 

Let's try to listen:


Ok, not bad. But we can hear that these two instruments are playing from a single point in space. I would like a dialog like one between two people sitting across from each other.

To do this we should select "pan" in the Automation Parameter menu.

I can do it by moving the "pan" control on the track. But I would like the cello and flute to move slightly while they play; it creates a sense of motion.

So, I need to set the points on the cello track above zero, and on the flute track below zero.

Let's listen:


And the final MIX:


I think this dialog now sounds like a dialog between two people. Music has lots of expressive tools and you should try to use them! The instruments should not only sing, they should speak too ;)


In previous topics we talked about how to record live instruments and MIDI in your MIX, and about automation in the DAW. Today I want to describe the concept behind dynamic processors: threshold, ratio, attack, and release.

Dynamic processors are post-production effects designed to manipulate the dynamic range of a piece of audio. That is, they affect how many decibels there are between the quietest and loudest levels. Dynamic Processors can either:

1) reduce dynamic range through compression

2) increase dynamic range through expansion

To compress (reduce) the dynamic range of a piece of audio, you can either increase the levels of the quiet parts to make them louder or decrease the levels of the loud parts to make them quieter. Compressors and limiters are in this category.

To expand the dynamic range of a piece of audio, you can either decrease the levels of the quiet parts to make them quieter or increase the levels of the loud parts to make them louder. Expanders and noise gates fall under this category.

The four main parameters of a dynamic processor are: threshold, ratio, attack, and release.

The threshold is the level (measured in decibels) above which compression begins: the gain of any signal louder than this point will be reduced by the compressor.

The ratio is the amount of gain reduction that the compressor will apply to the signal. A ratio of 1:1 will not affect the original signal, because no change is required. Common compression ratios include 4:1 and 2:1. A compressor will usually have a ratio of less than 10:1; a limiter is a compressor with a ratio larger than that.

The attack setting controls the amount of time the compressor takes to react when the signal exceeds the threshold.

The release setting controls the amount of time the compressor takes to react when the signal falls below the threshold after a compression event.
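Threshold and ratio together define a simple gain curve. Here is a minimal sketch of "hard-knee" compression in Python (attack and release would smooth how quickly this gain change is applied over time, which is omitted here; the default values are just examples):

```python
def compressed_level_db(input_db, threshold_db=-20.0, ratio=4.0):
    """Static compressor curve: below the threshold the signal is
    untouched; above it, every `ratio` dB of input above the threshold
    yields only 1 dB of output above the threshold."""
    if input_db <= threshold_db:
        return input_db
    return threshold_db + (input_db - threshold_db) / ratio
```

With a -20 dB threshold and a 4:1 ratio, an input peak at -12 dB (8 dB over the threshold) comes out at -18 dB (only 2 dB over), so the dynamic range above the threshold shrinks.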

After this theoretical part, I would like to show you the practical use of compressors, using the trombone in Logic as an example.

We can select any dynamic effect from the list in the Mixer menu. Experiment with the sound to make it sit better in the MIX.

[widgetkit id=14]

Before the compressor

After the compressor and other effects ;)

And the final MIX

In the previous topic we talked about how to record an acoustic instrument, using the cello as an example. But sometimes that can be difficult to do at home. For example, to record drums you need a lot of microphones, a sound card with many physical input channels, a computer fast enough to record without delays, and of course patient neighbors ;) But today we can record the sound of an instrument even if we don't own it, and software instruments help us reach this goal. The one thing you need in this case is a MIDI keyboard.

I will use a Korg kontrol49 to show you how to record MIDI. It's a powerful workstation with a lot of functions and presets for all the well-known DAWs. It has a 49-note, full-size, velocity-sensitive four-octave keyboard and 16 pads. Working with it is really comfortable, and the only thing you need to start is to plug it into your computer's USB port. It will be recognized automatically in your program; if that doesn't happen, install the latest driver from the official web site.


Let's start making music! :)

I have prepared a session of my song with a few instruments: Guitar, Bass, Trombone and Cello.



But I need to add a flute to this MIX, because I would like a dialog between cello and flute in the middle of the song :)

The flute will play this:

And the cello will answer:



So, I need to add one new MIDI track and select Software Instrument. After that, on the right part of your screen you will see a huge list of instruments; you can choose any of them. I've selected one from the orchestra group: flute. Now, if you press a key on your MIDI keyboard you will hear sound from your speakers. It's not necessary to turn them off, because unlike microphone recording, a MIDI keyboard sends only digital commands to the computer and doesn't record the sound in your room. You can sing during the take if you like :)

[widgetkit id=8] 

Select this track for recording, choose where the "start" will be, and begin recording. The start marker is the green line above the tracks; you can change its position and length by dragging with your mouse. Turn on the click in your session, and be sure the session is at the right tempo. Set advanced recording settings if you need them.

[widgetkit id=9] 

Now we have the flute track. But I'm not sure I got what I wanted: we can see that some notes are not in time. We need to correct their position on the grid of the MIDI editor. You can move notes in the editor with your mouse, but that can be tedious when there are a lot of them. Logic can try to correct it automatically; you just need to set the quantize parameters.

[widgetkit id=11] 

After that, press "Q" on your keyboard or select Quantize in the menu.
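Conceptually, quantizing just snaps each note's start time to the nearest grid line. A sketch (illustrative Python, not Logic's actual algorithm):

```python
def quantize(note_times, grid):
    """Snap each note onset (in beats) to the nearest multiple of `grid`,
    e.g. grid=0.25 for sixteenth notes when one beat is a quarter note."""
    return [round(t / grid) * grid for t in note_times]
```

For example, `quantize([0.05, 0.27, 0.49], 0.25)` pulls slightly early and late notes onto the sixteenth-note grid at `[0.0, 0.25, 0.5]`.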

[widgetkit id=10] 

Let's listen to what we have recorded.


Not bad ;) If we need to, we can correct the velocity, sustain, or other parameters of the MIDI track here. But it's not necessary now.

Let's listen to the mix.


Of course, we can't give up recording live instruments entirely, because MIDI has a narrower spectrum of emotion. But sometimes it's the only possible way to record the instrument you need. So, if you have a not-too-difficult part for one of the non-lead instruments in the MIX, MIDI is a good way to get it.



Copyright © Roman Shirokinskiy