Thursday, August 29, 2013

Research Paper - Examining the Cost Impact of Technical Upgrades on Performing Arts Centres

For the past year or so I have been researching the impact of technical upgrades on performing arts centres as part of my Master's degree in management.

A few people have asked me if I could post the completed paper, so here it is.

Please feel free to pass the link around, and please feel free to reference it if you'd like to. However, if you want to reproduce it please let me know. I may be using this as the basis for a couple of articles, but that is in the future. For now, I rest (and/or go to IBC).


https://docs.google.com/file/d/0B99Ft5KabT7zSkVlZmRPUGVPbDg/edit?usp=sharing

Tuesday, August 21, 2012

Femto Photography.

https://www.youtube.com/watch?v=SoHeWgLvlXI


Honestly, there isn't much more that I can say about this; it should speak for itself.

Brilliant.

Sunday, August 12, 2012

Subnets

When I'm teaching people about networks, the one thing that tends to cause confusion is the concept of Subnets.

If you've ever manually entered an IP address, you'll be familiar with the following:

I've added the red box to point out the "subnet mask"; a bunch of 255's and 0's that magically appear whenever you type in an IP address.

But what does it mean?

Splitting Networks

I briefly touched on the subject of packet switching in my "Layers of the Internet" article. Basically, an Internet Protocol (IP) network is a collection of devices connected via some form of network.

But what if we wanted to have some computers on the same network, but we didn't want them to interact? For example, suppose the computers that control a company's payroll were on the same network as the company's email servers.



Obviously that's not a wise thing; anyone with a bit of networking knowledge and access to Google would be able to give themselves an unexpected pay rise.

Thankfully, internet engineers foresaw this problem and came up with "subnets". A Subnet is a simple way of splitting one physical network into two "virtual" networks.
Just like devices that are on separate physical networks, devices on different Subnets can't communicate with each other.

In other words, you can have the finance guys on one subnet and the rest of the company on the other subnet, and you'll never have to worry about people editing their own pay grade.

One Number; two addresses

The great thing about Subnets is that the information is contained within the devices' IP addresses themselves.

The IP address is made up of two parts; the "Subnet" address and the "Device" address.

In fact, an IP address is a lot like a Street address. If you look at an address like "31 George St" you can instantly find that building. "31" by itself means nothing, and "George St" is too vague to be of any use.

In our IP addresses the "Subnet" address is the street, and the "Device" address is the number.

But let's look at one of the IP addresses above.

192.168.100.5
That doesn't look much like "31 George St". Which part is the Street, and which is the number?

Subnet Masks

Now we are finally coming to the mystery behind the Subnet Mask.
A Subnet mask tells us which part of the address is the street and which is the number.

The network in the diagram above has a subnet mask of:
255.255.255.0
Quite quickly we can see something special here. Everywhere that there is a "255" is part of the Street Name, and everywhere that there is a "0" is the number.

Let's look at that address again:
192.168.100.5
If we use the above rule, the "Street" (Subnet) becomes 192.168.100 and the Device number is 5.

Looking at our network again, we can see that the Finance machines on subnet 192.168.100 are effectively on a separate network from the workstations on the 192.168.101 subnet.
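
If you'd like to see that street/number split in code, here's a minimal sketch using Python's standard ipaddress module (the addresses are just our example values):

import ipaddress

# Split an example address into its "street" (subnet) and "number" (device).
iface = ipaddress.ip_interface("192.168.100.5/255.255.255.0")
print(iface.network.network_address)  # 192.168.100.0 - the "street"
print(int(iface.ip) - int(iface.network.network_address))  # 5 - the "number"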

But what if...
Okay, for an exercise, let's take the same network as above, but let's change the subnet mask to:
255.255.0.0
As you'd imagine, all of the devices are now on the same subnet:
192.168.100.5
is on the same subnet as 
192.168.101.5

So we have to be careful when designing our networks to include the correct subnet masks in all of our addresses.
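
As a quick sanity check, here's a small Python sketch (again with the standard ipaddress module) that asks the "same subnet?" question for both masks:

import ipaddress

# Check whether two addresses land on the same subnet under a given mask.
def same_subnet(ip_a, ip_b, mask):
    net_a = ipaddress.ip_network(f"{ip_a}/{mask}", strict=False)
    net_b = ipaddress.ip_network(f"{ip_b}/{mask}", strict=False)
    return net_a == net_b

print(same_subnet("192.168.100.5", "192.168.101.5", "255.255.255.0"))  # False
print(same_subnet("192.168.100.5", "192.168.101.5", "255.255.0.0"))    # True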

But what does 255.255.255.0 mean?
I have never been asked this question, so I am going to pre-empt you all and give you the answer.

Remember that all computers work in binary, that is, 0's and 1's.
An IP address is made up of four groups of eight bits (eight 0's or 1's).

So, if we were to look at 192.168.100.5 in binary, it would look more like:
11000000.10101000.01100100.00000101
Now, that doesn't look like much to a normal person, but now let's have a look at the subnet mask 255.255.255.0:
11111111.11111111.11111111.00000000
It should be immediately obvious that the 1's show the subnet, and the 0's show the device number.

And so we come to a more general rule:
The Subnet Mask denotes the "Subnet" part of an IP address with a 1, and the "Device" part with a 0.

If we look at other subnet masks we can see this quite clearly:
255.255.0.0 = 11111111.11111111.00000000.00000000
We can also get into more exotic subnet masks:
255.248.0.0 = 11111111.11111000.00000000.00000000

It's very rare to find a subnet mask that isn't a simple run of 255's followed by 0's, for the simple reason that it is practically impossible to work out the subnet and device numbers of a "split" subnet mask (and modern routing assumes the 1's sit in one contiguous block).
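
If you'd like to play with the binary yourself, here's a tiny Python sketch that prints any dotted address as four groups of eight bits:

# Print a dotted-decimal address as four groups of eight bits.
def to_binary(dotted):
    return ".".join(f"{int(octet):08b}" for octet in dotted.split("."))

print(to_binary("192.168.100.5"))  # 11000000.10101000.01100100.00000101
print(to_binary("255.255.255.0"))  # 11111111.11111111.11111111.00000000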

Slash Fiction

Since writing 255.255.255.0 constantly is a bit of a pain, IT people have come up with a quicker way of notating the subnet mask of an IP address, known as CIDR (or "slash") notation.

Normally, for our Finance Server in the example above, we'd have to write:
IP: 192.168.100.5
Subnet Mask: 255.255.255.0
But let's look at the subnet mask of 255.255.255.0 in binary again:
11111111.11111111.11111111.00000000
We can see that this is simply twenty-four 1's in a row. So, we can write the full IP address of the server as:
192.168.100.5/24
The "/" at the end of the address denotes the subnet mask; in this case, twenty-four 1's (or, in decimal, 255.255.255.0).

If we had the subnet of 255.255.0.0, the Full IP address could be written:
192.168.100.5/16
as 255.255.0.0 only has sixteen 1's in a row to denote the subnet address.
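
Python's standard ipaddress module will convert between the two notations for you; a quick sketch:

import ipaddress

# Convert between dotted subnet masks and slash (CIDR) notation.
net = ipaddress.ip_network("192.168.100.0/255.255.255.0")
print(net.prefixlen)  # 24 - i.e. "/24"

net = ipaddress.ip_network("192.168.0.0/16")
print(net.netmask)    # 255.255.0.0 - sixteen 1's in a row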


I hope that this has explained the concept of the subnet mask. As always, please let me know if you think that this could be a better explanation!

(PS: I do know that there is also the whole subject of VLANs, which offer a better way to separate networks, but I'll leave that for another time!)

Wednesday, July 18, 2012

Why Square Waves Matter

A while ago I wrote a post about what Square waves are, how they are formed, and how they are really just made up of a bunch of increasingly higher-pitched sine waves.

I'm sure that was more than enough graphs and lines to scare anyone into submission. That's not such a bad thing.

But one thing that I didn't get into is why square waves are important to the modern sound engineer.

Distortion - nothing's perfect
Whether you want it or not, Distortion is at the heart of every piece of reproduced music. It doesn't matter if that reproduction is "live" (as in a microphone running through a mixing console at a rock concert) or if it was recorded 15 years ago and played back on a CD player.

There are a number of different reasons for this, some of which (such as the behaviour of electronic components like capacitors) will be left for later; there's simply too much to cover.

What I'd like to cover is the link between Square Waves and distortion, specifically in two areas: Gain and Quantisation.

Gain Management
If you've ever used a mixing console (or pretty much anything that shows you the audio levels in the standard green-orange-red format) you've probably kept on turning up the volume until you've got some pretty red lights blinking all over the place. If you ask any experienced engineer they'll tell you that this means that they are "saturating" the console's amplifiers, leading to "some nice compression" and/or distortion.

But what's actually happening here?

At the heart of every bit of audio equipment is a humble little circuit; the Operational Amplifier (Op-Amp). This simple circuit has a never-ending number of variations and uses, but essentially all we need to know is that it takes an input signal and some power, and outputs the same signal but with more volts.
Your stock-standard op-amp drawings. 
Okay, once again I can feel that this picture means nothing to you.  Never fear; let me explain.
The top picture is how most of us think of amplifiers: you put a signal (+ and -) into a circuit and you get some kind of output (shown here as the line out the right hand side of the Amp).

In fact, it's perfectly acceptable to only show the +ve side of the input.
Essentially what this image shows is what we mentioned above; you put a small signal into the amp, and you get a bigger signal out. What actually goes on inside the triangle is inconsequential.

However, the bottom picture gives us a little more information. Here we still see our inputs and outputs (U+, U- and U0). There's no difference there. But we now also see two new variables; Ucc and Uee. These represent the "power" that we are supplying to the amplifier.

In normal operation, the input signal is small enough so that the output (U0) is less than the total power that you are feeding to the op-amp (Ucc and Uee).

But if your input is too big, you can get to a situation where U0 needs to be bigger than Ucc. Since the op-amp has run out of power it stops amplifying and simply outputs the maximum power; i.e. Ucc.

Two examples of clipping; we're interested in the lower example


If you look at the image above, you'll see what I mean. The dashed "threshold" lines indicate Ucc and Uee; in other words, the maximum and minimum possible voltages of the output, U0. For the moment, we're only going to look at the lower example.

You'll notice that as soon as the output reaches the maximum (Ucc) or minimum (Uee), the waveform flattens out, no matter what the input signal looks like.

You'll also notice that this looks exactly like a square wave.
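
In fact, the whole clipping mechanism fits in a few lines of code. Here's a toy sketch (the gain and rail voltages are just assumed example values):

# A toy op-amp: multiply the input by the gain, but never swing past the rails.
def op_amp(u_in, gain=10.0, ucc=15.0, uee=-15.0):
    return max(uee, min(ucc, gain * u_in))

print(op_amp(1.0))  # 10.0 - normal amplification
print(op_amp(2.0))  # 15.0 - the output is clipped flat at Ucc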

If you have a look at the previous article on Wave Theory, you'll note that a Square Wave contains the original frequency as well as a number of harmonic frequencies. If we were to look at the above waveform on a spectrum analyser you would initially see a single peak (the input frequency). As the input increases beyond the Ucc threshold, you would start to see the single peak get shorter and more harmonic peaks growing up, as if out of nowhere.

As the clipped wave starts to look more like a square wave, you get more harmonics

These harmonics are the "distortion" that we are hearing. 
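
You can watch those harmonics appear for yourself. Here's a small Python sketch that uses numpy's FFT as a crude spectrum analyser (the 440Hz tone, levels and rails are all assumed example values):

import numpy as np

# Clip an over-driven sine wave and list the harmonic peaks that appear.
fs = 48_000                                  # sample rate, Hz
t = np.arange(fs) / fs                       # one second of signal
signal = 3.0 * np.sin(2 * np.pi * 440 * t)   # a 440 Hz input, too hot for the amp
clipped = np.clip(signal, -1.0, 1.0)         # the rails: Uee = -1 V, Ucc = +1 V

spectrum = np.abs(np.fft.rfft(clipped)) / len(t)
freqs = np.fft.rfftfreq(len(t), d=1 / fs)
print(freqs[spectrum > 0.01][:5])  # [ 440. 1320. 2200. 3080. 3960.] - odd harmonics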

Since no amplifier circuit is perfect, you'll always have a bit of this Harmonic Distortion cropping up along the way. It's one of the standard amplifier measurements: Total Harmonic Distortion (THD), which is expressed as a percentage. The percentage refers to the amount of power that is "lost" in the harmonic peaks. In short, a lower number is better - the number is telling you how much "false" information you're hearing due to the amplifier circuit.

Essentially, you can "fake" an amplifier's power rating by increasing the THD tolerance. An amplifier that can deliver 100W at 0.1% THD might be able to deliver 300W at 5% THD. That's because you're increasing the output of the amplifier, and also increasing the harmonics that are creeping into your original signal.
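
For the curious, here's a rough sketch of how such a THD figure could be estimated: compare the power in the harmonic peaks to the power in the fundamental (the test tone and clipping level are, again, assumed values):

import numpy as np

# Estimate THD: harmonic power relative to the fundamental.
fs, f0 = 48_000, 440
t = np.arange(fs) / fs
clipped = np.clip(1.5 * np.sin(2 * np.pi * f0 * t), -1.0, 1.0)

spectrum = np.abs(np.fft.rfft(clipped))      # 1 Hz per bin for a 1-second signal
fundamental = spectrum[f0]
harmonics = np.sqrt(np.sum(spectrum[2 * f0::f0] ** 2))
print(f"THD: {100 * harmonics / fundamental:.1f}%")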

Is that a bad thing?
Distortion isn't necessarily a bad thing. Rock 'n' Rollers have been using fuzz pedals and distortion generators for decades. As with all sound-related things, everyone has their own opinion. But you also need to make sure that you're getting the right gear for the job. A 300W 10% THD amplifier might be perfect for your car stereo, but you might want a 300W 0.1% THD amplifier for your Hi-Fi speakers in your living room.

Quantisation Noise
The other, very similar form of distortion that I'd like to discuss here is Quantisation Noise.

As we saw in my PCM post, digital audio effectively segments up an analogue signal into a bunch of "stepped" signals.
A 4-bit quantisation (grey) of a sine wave (red)

Hopefully, it won't take you much imagination to realise that the grey, "digital" signal will have a number of harmonic frequencies tied up within its square-wave-like structure.
If not, then scroll back up and have a look at the last image showing the build-up of harmonics in the Square Wave example again!

Of course, just as we explained in the PCM article, the above representation is a gross simplification of a digital signal; even a 16-bit signal has over 65,000 steps, making the harsh corners a little more manageable.

That being said, a digital recording will always have a certain amount of distortion, or "Quantisation Noise," inherent in the system. In low bit-rate recordings or transmissions (e.g. bad digital radio), this comes through as a watery garbling of the audio signal.
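
Here's a small sketch of that effect in numbers: quantise a sine wave to 4 bits and compare the signal power to the error power (the tone and bit depth are assumed example values):

import numpy as np

# Quantise a sine wave to 4 bits and measure the signal-to-noise ratio.
fs = 48_000
t = np.arange(fs) / fs
signal = np.sin(2 * np.pi * 440 * t)

bits = 4
levels = 2 ** bits                              # 16 possible values
quantised = np.round((signal + 1) / 2 * (levels - 1))
quantised = quantised / (levels - 1) * 2 - 1    # back to the -1..+1 range

noise = quantised - signal
snr_db = 10 * np.log10(np.mean(signal ** 2) / np.mean(noise ** 2))
print(round(snr_db, 1))  # roughly 6 dB per bit: about 25 dB at 4 bits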

Here is an example of an 8-bit recording, and then the noise generated as this is reduced in bit-depth to:
4 Bits (like the above image)


But does it matter?
I've just had a look at the specifications of a couple of 16-bit analogue-to-digital converters. These list the THD of the entire converter (including the op-amps and the rest of the circuitry) at about -100dB. That equates to a THD of about 0.0009%.

Whilst there are a lot of complaints about the "sound" of digital recordings (and yes, I do admit that there are differences between the two), the chances are that the THD of the amplifier is many, many times greater than the harmonic distortion that is caused by the "digital-ness" of the recording.


Sunday, November 20, 2011

Wave Theory - Part 1

Apologies for the delay here. I started writing a number of things into this post; however, it became a long, boring, technical marathon. So I'm going to break up these types of articles and post some interstitial things as well. I originally wanted to look at Quantisation Noise, but there were so many concepts involved that I decided to start with the "Basics".


Superposition

Science and nature detest corners. Curves, even constantly varying ones, are much easier to deal with. This is because curves add together nicely through a principle called "Superposition." Long story short, curves can be added together and taken apart without any difficulty. 

If you've ever looked at music on an oscilloscope, you'll see a random squiggle that never seems to stop changing. Whilst it can make for an interesting visual effect, it's not very helpful. However, due to Superposition, we can break that seemingly random signal down into its component inputs. This is usually done with the Fourier Transform, but you might be more familiar with the term "Spectrum Analyser." (I strongly advise not looking too deeply into it unless you really like maths.)

If you run a signal through a Spectrum Analyser, you change the random squiggle into a series of columns. Each column represents a frequency range; the height of each column denotes the amount that each particular frequency range is contributing to the original signal.


Input signals (Left) and the resultant Spectrum Analysis (Right)
Thanks Wiki Commons
Let's have a look at some pictures, because they are easier to understand than words.
The top picture is your standard sine-wave. Since there is only one frequency, the Spectrum Analysis shows us that information; one tall column. All of the power in this signal is contained within that narrow frequency range.

The second set of images shows static; a totally random signal at low level. If we look at the Spectrum Analysis, you'll see that the power is spread randomly across all frequencies. We'll get into the origins of static one of these days. Just not today.

The last set of images shows these two signals superimposed onto each other. At each point, the "height" of the two signals is added together to make the bottom left image. It's a bit hard to see in the image, but you'll note that the curve is no longer smooth; however, this is nearly impossible to tell just by looking at the combined signal.
When we look at the Spectrum Analysis, though, we can clearly see that most of the energy is still in that main frequency range, but there is now energy spread out across the other frequencies.

This effect is normally called "Noise".
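
Here's that last example as a minimal Python sketch: superimpose a sine wave and some static, then pull the spectrum apart again with a Fourier transform (the frequency and noise level are assumed values):

import numpy as np

# Superimpose a sine wave and static, then find the sine again in the spectrum.
rng = np.random.default_rng(0)
fs = 1_000
t = np.arange(fs) / fs
combined = np.sin(2 * np.pi * 50 * t) + 0.1 * rng.standard_normal(fs)

spectrum = np.abs(np.fft.rfft(combined)) / fs
print(np.argmax(spectrum[1:]) + 1)  # 50 - the sine still towers over the noise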

Repeating Patterns
There is one other cool thing about superposition.

Basically, any repeating pattern can be built up with the right combination of sine waves. Take, for instance, a square wave:

Once again, Thanks Wiki!

If you have a look at the above image, you'll see three lines.
The Red line is a "true" square wave. The Green-dashed line shows a Fourier Approximation of the square wave using 5 component waves. The blue-dashed line uses 15 component waves. These waves are superimposed onto each other, like this:


The left-hand images show our four component waves and their relative powers. Just by looking at the left-hand side, we can see that the "Fundamental" frequency, which is the same as the frequency of the square wave, has the most power.

The second column shows the superposition, but without adding the waves together. The third column shows the resultant, superimposed wave. As we go down the list, it starts looking more and more like a square wave.

The right-hand column, again, is what you'd see if you put the signal into a Spectrum analyser. There's a couple of points to note here:

  •  For a square wave of a fundamental frequency F, the frequency of the component waves (f) is as follows - f = (2n+1)F, where n starts at 0 and goes to infinity
  •  The power of each wave drops off significantly as n gets bigger (or, to put it another way, as the frequency of the component wave goes up, the power of that component goes down).


It turns out that once you get past n=16 or so, the power in the higher frequencies is so low that it no longer matters if you include them or not.
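
Here's a short Python sketch of that build-up, summing the components f = (2n+1)F described above (the 1Hz fundamental is just an assumed example):

import numpy as np

# Build a square wave out of its sine-wave components and measure the error.
F = 1.0                                        # fundamental frequency
t = np.linspace(0, 2, 2000, endpoint=False)
target = np.sign(np.sin(2 * np.pi * F * t))    # the "true" square wave

def square_approx(n_terms):
    wave = np.zeros_like(t)
    for n in range(n_terms):
        k = 2 * n + 1                          # f = (2n+1)F
        wave += np.sin(2 * np.pi * k * F * t) / k   # power drops as k grows
    return 4 / np.pi * wave

for terms in (1, 5, 15):
    rms_error = np.sqrt(np.mean((target - square_approx(terms)) ** 2))
    print(terms, round(rms_error, 2))          # the error shrinks with each term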



Next Time
It is very tempting to plough ahead here and talk about why I just wrecked your mind with superposition, but I won't.

Here's a hint though, it has to do with square waves and noise, and why digital and analogue aren't all that different.

As always, please feel free to post questions in the comments section!

Tuesday, November 1, 2011

Pulse-Code Modulation - Music to our ears

Okay, so we've just spent a lot of time looking at how data moves around a network. I hope that you were all with me for the ride. If not, then please feel free to comment on any of the posts and I'll answer your questions.

For the next couple of weeks I'd like to start looking at the data that you actually put into those packets.

One of the easiest places to start looking at digital signals is the humble Pulse-code Modulation, or PCM, method of encoding analogue information into a digital signal.

Analogue vs Digital
No, I'm not going to get into the "aesthetic" differences between analogue and digital, save to say that the only instrument that you can trust is your own ears. If it sounds better to you, then it sounds better to you.

I would like, however, to clarify something quickly. An "Analogue" signal is a proportionate signal with no real limitation. The local air pressure around a microphone, for example, can be measured by a device and turned into an electrical signal that is proportionate to the pressure. The higher the pressure, the higher the voltage. The voltage is an "Analogy" of the pressure.
This is what we refer to as an "Analogue" signal. Most simple electronic devices will process and run on analogue signals.
At some point, everything is an "Analogue" signal; the pressure changes that reach your ear drums or the light changes that reach your retinas are "Analogue".

A "Digital" signal, however, is somehow encoded so that it is no longer proportionate to the original signal. Through some kind of electronic process the Analogue signal is broken down into symbols of some kind that are readable only by devices that use that format. Before we can interact with them again then they need to be converted into an Analogue signal again. This process is called "Encoding" (Analogue to Digital) and "Decoding" (Digital to Analogue). Combine "enCOder" and "DECoder" and you get CODEC... but we'll get to those later.

Pulse-Code Modulation
Anyone who's ever Google'd "Digital Audio" will have seen a picture similar to this one:
Let's assume that we are looking at the Encode (Analogue to Digital) side of things (although the process is exactly the same in reverse).
The red line is our input signal; a standard sine-wave. This could be anything;  an audio signal, the number of people that like or dislike the current Prime Minister... it doesn't matter. We have a signal that is changing as time goes on.

The analogue signal is continuous and unbroken.

Pulse-Code Modulation sets a value (shown above as 0-15) for each equivalent amplitude. To convert the signal into a digital one, we record the value of the analogue signal at the start of each of the time divisions shown along the bottom axis. This process is called "Sampling" - you are taking a sample of the Analogue signal at each of the time divisions.
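
Here's what that sampling step looks like as a minimal Python sketch; a 4-bit system like the one pictured, with an assumed 8kHz sample rate and 1kHz input tone:

import numpy as np

# Sample one millisecond of a sine wave and record the nearest of the
# 16 levels (0-15) for each sample.
sample_rate = 8_000                       # samples per second
t = np.arange(8) / sample_rate            # eight sample instants
analogue = np.sin(2 * np.pi * 1000 * t)   # a 1 kHz input signal

levels = 16                               # 4 bits -> 2**4 = 16 states
codes = np.round((analogue + 1) / 2 * (levels - 1)).astype(int)
print(codes)                              # [ 8 13 15 13  8  2  0  2]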

You'll note that on the image above you can see a difference between the continuous Analogue (Red) signal and the Digital (Grey) one. It looks like a lot, right? In fact, the small differences in images like the above are one of the main arguments used by Analogue supporters. However, there is something missing from this picture...

Bit Rate
The picture above gives you a pretty good look at a very low-resolution system; a 4-bit one. A bit is a single binary "symbol". One bit gives two states; two bits give four, three bits give eight and four bits give sixteen states.
The number of "bits" that a digital signal contains is referred to as the Bit Depth. In basic terms, the higher the bit depth, the better the quality.

Even your most basic audio (CD-quality) has a bit depth of 16 bits, or about 65 thousand different states. The human ear isn't really able to detect that kind of resolution; it would be like trying to look at the millimetre markings on a ruler that was 10 metres away.

Still, there are higher bit depths; standard digital audio (AES3, also known as AES/EBU, which I will cover in a future article) runs at 20 bits (1,048,576 states) or at 24 bits (16,777,216 states). At this point, you're pretty much splitting hairs with a 2000-pound bomb...
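
Those state counts are just powers of two; a quick Python check:

# Each extra bit doubles the number of states.
for bits in (4, 16, 20, 24):
    print(bits, "bits ->", 2 ** bits, "states")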

But there is another factor that affects the quality of the sound; the rate at which the audio is sampled.
As a general rule, you want to take samples at twice the highest frequency you want to hear (this is known as the Nyquist rate). The label on a new-born baby reads 20Hz-20,000Hz, although by the time you've used an MP3 player and gone to a concert or two you will be lucky if you can hear above about 17,000Hz.
Thus, the main sampling rate used in digital audio is 44.1kHz (CD-quality). "Professionals" will use 48kHz, or even go as high as 96kHz. Once again; at that level you are recording detail that humans just can't perceive. It's like taking a photo in ultraviolet; it might look brilliant, but there is no way for us to see the result.
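
And the arithmetic behind the sampling rates is just as simple:

# The Nyquist rule of thumb: sample at (at least) twice the highest frequency.
highest_audible = 20_000        # Hz - the textbook limit of human hearing
print(2 * highest_audible)      # 40000 - so 44.1 kHz comfortably covers it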





I would like to continue this article, but in the interest of keeping things concise I will hold off for the time being. Next article I will look at a couple of the strange effects of PCM, and how we avoid them.
But for now, I must away. Until next time.