Sunday, November 20, 2011

Wave Theory - Part 1

Apologies for the delay here. I started writing a number of things into this post, but it became a long, boring technical marathon. So I'm going to break these sorts of articles up and post some interstitial things as well. I originally wanted to look at Quantization Noise, but there were so many concepts involved that I decided to start with the "Basics".


Superposition

Science and nature detest corners. Curves, even constantly varying ones, are much easier to deal with. This is because curves add together nicely through a principle called "Superposition": at every point in time, the values of overlapping signals simply add. Long story short, curves can be added together and taken apart again without losing anything.

If you've ever looked at music on an oscilloscope, you'll have seen a random squiggle that never seems to stop changing. Whilst it can make for an interesting visual effect, it's not very helpful. However, thanks to Superposition, we can break that seemingly random signal down into its component inputs. The maths behind this is called the Fourier Transform (in practice, the Fast Fourier Transform, or FFT), but you might be more familiar with the term "Spectrum Analyser". (I strongly advise not looking too deeply into it unless you really like maths.)

If you run a signal through a Spectrum Analyser, the random squiggle is changed into a series of columns. Each column represents a frequency range; the height of each column denotes how much that particular frequency range is contributing to the original signal.
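
If you'd like to play along at home, here's a minimal sketch of that idea in Python. The 50 Hz tone, the 1000 Hz sample rate, and all the variable names are just illustrative choices of mine, and you'll need NumPy installed:

```python
# A minimal sketch of what a spectrum analyser does, using NumPy's FFT.
# The 50 Hz tone and 1000 Hz sample rate are illustrative choices.
import numpy as np

sample_rate = 1000                       # samples per second
t = np.arange(0, 1, 1 / sample_rate)     # one second of time points
signal = np.sin(2 * np.pi * 50 * t)      # a pure 50 Hz sine wave

# The FFT breaks the signal into its component frequencies...
spectrum = np.abs(np.fft.rfft(signal)) / len(signal)
# ...and this tells us which frequency each "column" corresponds to.
freqs = np.fft.rfftfreq(len(signal), d=1 / sample_rate)

# For a pure sine wave: one tall column at 50 Hz and almost nothing else.
print(f"Tallest column sits at {freqs[np.argmax(spectrum)]:.0f} Hz")
```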


Input signals (Left) and the resultant Spectrum Analysis (Right)
Thanks Wiki Commons
Let's have a look at some pictures, because they are easier to understand than words.
The top picture is your standard sine wave. Since there is only one frequency, the Spectrum Analysis shows exactly that: one tall column. All of the power in this signal is contained within that narrow frequency range.

The second set of images shows static: a totally random signal at a low level. If you look at the Spectrum Analysis, you'll see that the power is spread randomly across all frequencies. We'll get into the origins of static one of these days. Just not today.

The last set of images shows these two signals superimposed onto each other. At each point in time, the "heights" of the two signals are added together to make the bottom-left image. It's a bit hard to see, but you'll note that the curve is no longer smooth. Beyond that, though, it's nearly impossible to tell what went in just by looking at the combined signal.
When we look at the Spectrum Analysis, however, we can clearly see that most of the energy is still in the main frequency range, but there is now energy spread out across all the other frequencies.

This effect is normally called "Noise".
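
We can demonstrate this directly. Continuing the sketch from above (again, the amplitudes and names are illustrative, not from any standard), adding low-level static to the tone barely changes the squiggle, but the spectrum shows both parts clearly:

```python
import numpy as np

sample_rate = 1000
t = np.arange(0, 1, 1 / sample_rate)

tone = np.sin(2 * np.pi * 50 * t)          # the single-frequency signal
static = 0.1 * np.random.randn(len(t))     # low-level random static

combined = tone + static   # superposition: the "heights" add point by point

spectrum = np.abs(np.fft.rfft(combined)) / len(combined)
freqs = np.fft.rfftfreq(len(combined), d=1 / sample_rate)

# Most of the energy is still at 50 Hz; the rest is smeared across the board.
print(f"Peak at {freqs[np.argmax(spectrum)]:.0f} Hz, "
      f"noise floor roughly {np.median(spectrum):.4f}")
```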

Repeating Patterns
There is one other cool thing about superposition.

Basically, any repeating pattern can be built up with the right combination of sine waves. Take, for instance, a square wave:

Once again, Thanks Wiki!

If you have a look at the above image, you'll see three lines.
The red line is a "true" square wave. The green dashed line shows a Fourier approximation of the square wave using 5 component waves. The blue dashed line uses 15 component waves. These waves are superimposed onto each other, like this:


The left-hand column shows our four component waves and their relative powers. Just by looking at the left-hand side, we can see that the "Fundamental" frequency, which is the same as the frequency of the square wave, has the most power.

The second column shows the superposition, but without adding the waves together. The third column shows the resultant, superimposed wave. As we go down the list, it starts looking more and more like a square wave.

The right-hand column, again, is what you'd see if you put the signal into a Spectrum Analyser. There are a couple of points to note here:

  •  For a square wave with a fundamental frequency F, the frequencies of the component waves (f) are given by f = (2n+1)F, where n starts at 0 and goes to infinity. So the components sit at F, 3F, 5F, and so on.
  •  The power of each wave drops off significantly as n gets bigger; for an ideal square wave, the amplitude of each component is proportional to 1/(2n+1). To put it another way: as the frequency of the component wave goes up, the power of that component goes down.


It turns out that once you get past n=16 or so, the power in the higher frequencies is so low that it no longer matters whether you include them or not.
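
You can check this for yourself by building a square wave out of sine waves. Here's a rough sketch; the 4/π scaling factor comes from the standard Fourier series for a square wave, while the 3 Hz fundamental and the names are just my illustrative choices:

```python
import numpy as np

t = np.linspace(0, 1, 1000, endpoint=False)
F = 3   # fundamental frequency of our square wave, in Hz (illustrative)

def square_approx(n_components):
    """Build an approximate square wave from its first n odd harmonics."""
    wave = np.zeros_like(t)
    for n in range(n_components):
        k = 2 * n + 1                                # f = (2n+1)F
        wave += np.sin(2 * np.pi * k * F * t) / k    # amplitude ~ 1/(2n+1)
    return (4 / np.pi) * wave   # standard scaling so the ideal sum is +/-1

true_square = np.sign(np.sin(2 * np.pi * F * t))
for n_components in (1, 5, 15, 50):
    diff = square_approx(n_components) - true_square
    rms_error = np.sqrt(np.mean(diff ** 2))
    print(f"{n_components:3d} components: RMS error {rms_error:.3f}")
```

The error keeps shrinking as you add components, but each extra component buys you less and less.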



Next Time
It is very tempting to plough ahead here and talk about why I just wrecked your mind with superposition, but I won't.

Here's a hint, though: it has to do with square waves and noise, and why digital and analogue aren't all that different.

As always, please feel free to post questions in the comments section!

Tuesday, November 1, 2011

Pulse-Code Modulation - Music to our ears

Okay, so we've just spent a lot of time looking at how data moves around a network. I hope that you were all with me for the ride. If not, please feel free to comment on any of the posts and I'll answer your questions.

For the next couple of weeks I'd like to start looking at the data that you actually put into those packets.

One of the easiest places to start looking at digital signals is the humble Pulse-code Modulation, or PCM, method of encoding analogue information into a digital signal.

Analogue vs Digital
No, I'm not going to get into the "aesthetic" differences between analogue and digital, save to say that the only instrument that you can trust is your own ears. If it sounds better to you, then it sounds better to you.

I would like, however, to clarify something quickly. An "Analogue" signal is a proportional signal with no real limitation. The local air pressure around a microphone, for example, can be measured by a device and turned into an electrical signal that is proportional to the pressure. The higher the pressure, the higher the voltage. The voltage is an "Analogy" of the pressure.
This is what we refer to as an "Analogue" signal. Most simple electronic devices will process and run on analogue signals.
At some point, everything is an "Analogue" signal; the pressure changes that reach your eardrums and the light changes that reach your retinas are both "Analogue".

A "Digital" signal, however, is somehow encoded so that it is no longer proportionate to the original signal. Through some kind of electronic process the Analogue signal is broken down into symbols of some kind that are readable only by devices that use that format. Before we can interact with them again then they need to be converted into an Analogue signal again. This process is called "Encoding" (Analogue to Digital) and "Decoding" (Digital to Analogue). Combine "enCOder" and "DECoder" and you get CODEC... but we'll get to those later.

Pulse-Code Modulation
Anyone who's ever Google'd "Digital Audio" will have seen a picture similar to this one:
Let's assume that we are looking at the Encode (Analogue to Digital) side of things (although the process is exactly the same in reverse).
The red line is our input signal: a standard sine wave. This could be anything: an audio signal, the number of people that like or dislike the current Prime Minister... it doesn't matter. We have a signal that is changing as time goes on.

The analogue signal is continuous and unbroken.

Pulse-Code Modulation assigns a value (shown above as 0-15) to each band of amplitude. To convert the signal into a digital one, we record the value of the analogue signal at the start of each of the time divisions shown along the bottom axis. This process is called "Sampling" - you are taking a sample of the Analogue signal at each of the time divisions.
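
Here's a rough sketch of that sample-and-quantise step in Python. The 1 kHz tone, the 8 kHz sample rate, and the variable names are all illustrative, not taken from any particular standard:

```python
import numpy as np

sample_rate = 8000        # samples per second (illustrative)
bits = 4                  # a 4-bit system: 2**4 = 16 levels, numbered 0-15
levels = 2 ** bits

t = np.arange(0, 0.002, 1 / sample_rate)     # two milliseconds of time
analogue = np.sin(2 * np.pi * 1000 * t)      # a 1 kHz sine, range -1..+1

# Quantisation: map each sampled value onto the nearest of the 16 codes.
codes = np.round((analogue + 1) / 2 * (levels - 1)).astype(int)

# Decoding reverses the mapping, but the rounding error is gone for good.
decoded = codes / (levels - 1) * 2 - 1
print("codes:", codes)
print(f"worst quantisation error: {np.max(np.abs(decoded - analogue)):.3f}")
```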

You'll note in the image above that there is a visible difference between the continuous Analogue (red) signal and the Digital (grey) one. It looks like a lot, right? In fact, the small differences in images like the one above are one of the main arguments used by Analogue supporters. However, there is something missing from this picture...

Bit Rate
The picture above shows a 4-bit system, which gives you a pretty good feel for how coarse low bit depths are. (An actual digital phone line uses 8-bit samples, but the principle is identical.) A bit is a single binary "symbol". One bit gives two states; two bits give four, three bits give eight, and four bits give sixteen states - in general, n bits give 2^n states.
The number of bits used to store each sample is referred to as the Bit Depth. In basic terms, the higher the bit depth, the better the quality.

Even your most basic audio (CD-quality) has a bit depth of 16 bits, or about 65 thousand different states. The human ear isn't really able to detect that kind of resolution; it would be like trying to read the millimetre markings on a ruler from 10 metres away.

Still, there are higher bit depths; standard professional digital audio (AES3, also known as AES/EBU, which I will cover in a future article) runs at 20 bits (1,048,576 states) or at 24 bits (16,777,216 states). At this point, you're pretty much splitting hairs with a 2000-pound bomb...
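
To put some numbers on that, here's a quick loop. The ~6 dB-per-bit figure is the standard rule of thumb for ideal PCM, not something from the picture above:

```python
# Bit depth vs number of states, plus the usual ~6 dB-per-bit rule of thumb.
for bits in (4, 8, 16, 20, 24):
    states = 2 ** bits
    dynamic_range = 6.02 * bits   # approximate dynamic range of ideal PCM
    print(f"{bits:2d} bits: {states:>10,} states, ~{dynamic_range:.0f} dB")
```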

But there is another factor that affects the quality of the sound; the rate at which the audio is sampled.
As a general rule, you need to take samples at least twice as fast as the highest frequency you want to capture (this is known as the Nyquist rate). The label on a new-born baby reads 20Hz-20,000Hz, although by the time you've used an MP3 player and gone to a concert or two you will be lucky if you can hear above about 17,000Hz.
Thus, the main sampling rate used in digital audio is 44.1kHz (CD-quality). "Professionals" will use 48kHz, or even go as high as 96kHz. Once again; at that level you are recording detail that humans just can't perceive. It's like taking a photo in ultraviolet; it might look brilliant, but there is no way for us to see the result.
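
The arithmetic is simple enough to sketch (these are the standard rates mentioned above):

```python
# The highest frequency a sampled system can capture is half its sample
# rate (the Nyquist limit). These are the standard rates mentioned above.
for sample_rate in (44_100, 48_000, 96_000):
    print(f"{sample_rate:>6} Hz sampling captures up to {sample_rate // 2} Hz")
```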





I would like to continue this article, but in the interest of keeping things concise I will hold off for the time being. Next article I will look at a couple of the strange effects of PCM, and how we avoid them.
But for now, I must away. Until next time.