THE DATA CENTRE JARGON BUSTER

So you might have noticed that technology is a pretty big deal nowadays and that won’t be changing for a very long time. But with this rise and rise of digital data, we had to find somewhere to put it all – as well as a place to process, manage, store and transmit it to other computers whenever and wherever we wanted. So the data centre was born.

Thirty years ago they didn’t exist, but now everybody’s using data centres and they are a common part of the UK’s urban landscape. Just as you’d choose a bank you trust to store your money safely, it’s equally important to house your critical information somewhere you can trust – so it will pay to understand the technical jargon.

If you’re sitting there wondering what a data centre actually is, or whether you really need one, our list of common jargon terms will hopefully make the whole ‘data centre thing’ a lot clearer and easier to understand, so you can make the right choice.

The act of placing computers, routers and servers within an off-site location, or data centre, is called colocation. These buildings provide a specialist place and way of managing the critical power and cooling infrastructure required to support computer operations safely and reliably.

One of the key reasons to use a data centre is that you don’t have the capacity yourself. The amount of data a connection or system can handle in a given time is called the bandwidth. So how much bandwidth does a data centre need? That depends on the level of network traffic it carries, as well as other factors such as computing power and file sizes, which continue to grow at exponential rates.
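To put some rough numbers on that, here’s a back-of-the-envelope sketch in Python. The figures – a 500 GB backup pushed over 1 Gbps and 100 Mbps links – are purely illustrative assumptions, not typical data centre requirements:

```python
# Rough bandwidth back-of-the-envelope: how long a transfer takes at a given link speed.
# The figures used below (a 500 GB backup, 1 Gbps and 100 Mbps links) are illustrative only.

def transfer_time_hours(data_gb: float, link_mbps: float) -> float:
    """Estimate transfer time in hours, ignoring protocol overhead and congestion."""
    data_megabits = data_gb * 8 * 1000   # gigabytes -> megabits (1 GB = 8,000 Mb)
    return data_megabits / link_mbps / 3600

print(f"{transfer_time_hours(500, 1000):.1f} hours")  # ~1.1 hours on a 1 Gbps link
print(f"{transfer_time_hours(500, 100):.1f} hours")   # ~11.1 hours on a 100 Mbps link
```

The point isn’t the exact numbers – it’s that as file sizes and traffic grow, the bandwidth you need grows with them.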

With a large data centre potentially using as much electricity as a small town, it is important that processes are in place to ensure data is not compromised should a power outage occur. All data centres have a Main Switch Board (MSB) which monitors the electricity that enters via the power grid. If a drop in voltage is detected, and thus the possibility of the data being compromised, the MSB will automatically move the load to the battery backup and eventually to generators.

If a power outage does occur, the Uninterruptible Power Supply (UPS) will provide power to the data centre straight away. The UPS is powered by battery- or flywheel-based systems and will provide power until a secondary power source, such as a generator, comes online.
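If it helps to see the sequence written down, here’s a much-simplified Python sketch of that grid-to-UPS-to-generator handover. The voltages, thresholds and names are illustrative assumptions, not how any particular switchboard actually behaves:

```python
# Simplified sketch of the failover sequence described above: mains -> UPS -> generator.
# Voltages, thresholds and names are illustrative assumptions, not a real MSB's behaviour.

NOMINAL_VOLTAGE = 230          # typical UK mains voltage (illustrative)
LOW_VOLTAGE_THRESHOLD = 0.9    # treat anything below 90% of nominal as a fault (assumed)

def select_power_source(grid_voltage: float, generator_online: bool) -> str:
    """Choose which source feeds the critical load, mimicking the MSB's decision."""
    if grid_voltage >= NOMINAL_VOLTAGE * LOW_VOLTAGE_THRESHOLD:
        return "grid"        # mains is healthy, use it directly
    if generator_online:
        return "generator"   # longer outage: the generator has started and taken the load
    return "ups"             # bridge the gap on batteries/flywheel until the generator starts

print(select_power_source(231.0, generator_online=False))  # grid
print(select_power_source(0.0, generator_online=False))    # ups
print(select_power_source(0.0, generator_online=True))     # generator
```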

With a room or multiple rooms full of hard-working servers, it’s understandable that a data centre will get hot sometimes. To combat this, many use a system that incorporates hot aisles and cold aisles. This layout keeps the hot and cold air flows separate to minimise energy costs – the cooler fronts of the racks face the air con output ducts, while the hot backs face the intake ducts.

Data can be stored in a cloud as well as in a data centre, but the two differ slightly. A data centre works from a fixed location, with computers, routers and servers working together to store the data, whereas a cloud uses virtualisation – a layer of software that sits between a computer’s hardware and its operating system. Virtualisation is the underlying technology that lets the cloud mask physical computing resources and create “virtual” environments, so the user experiences what they would if they were interacting with actual hardware.

There are also three different types of cloud that can be used. A private cloud is used by only one organisation – it’s their own private library of backed-up data. Private clouds are easier to customise than other clouds; however, the setup costs are much higher than those of a public cloud.

A public cloud is similar to a public library – any organisation can use it to store their data. This is also known as a multi-tenant environment. One benefit of a public cloud is that the services can be scaled and adapted to your needs.

Private and public clouds can work together to form a hybrid cloud. This occurs when an application runs on the private cloud until extra resources are needed. The application then connects to the public cloud to help share out the load.
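As a toy illustration of that “bursting” idea, here’s a small Python sketch – the capacity figure and function names are made up for the example, not drawn from any real cloud platform:

```python
# Toy sketch of hybrid-cloud "bursting": run on the private cloud until it's full,
# then send the overflow to the public cloud. Capacity and names are illustrative only.

PRIVATE_CAPACITY = 100   # e.g. requests per second the private cloud can absorb (assumed)

def route_requests(total_requests: int) -> dict:
    """Split incoming load between the private and public clouds."""
    private_share = min(total_requests, PRIVATE_CAPACITY)
    public_share = total_requests - private_share
    return {"private": private_share, "public": public_share}

print(route_requests(80))    # {'private': 80, 'public': 0}    - private cloud copes alone
print(route_requests(250))   # {'private': 100, 'public': 150} - overflow bursts to public
```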

You must beware the single point of failure! This is a single component that could bring the entire system down if it stops functioning properly. If high availability is important, these vulnerable points must be addressed with a specific solution, such as redundancy.

Redundancy is the duplication of critical system elements – from component and system level up to infrastructure – with the intention of increasing the reliability of the system. This usually takes the form of a backup or fail-safe and helps to ensure high availability and fault-tolerance.

N+1 (or greater) redundancy is a method put in place to ensure that data centres keep operating even during component or power failures. It means that for the N pieces of equipment needed to carry the load, at least one extra backup unit is installed – so if four UPS units are required, five are fitted.
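Here’s that same arithmetic as a tiny Python sketch – the unit counts are just illustrative:

```python
# Worked example of N+1 (and N+2, and so on) redundancy: for every N units needed to
# carry the load, keep extra spares installed. The counts below are illustrative only.

def units_to_install(units_required: int, spares: int = 1) -> int:
    """N + spares: total units installed so the load survives 'spares' failures."""
    return units_required + spares

print(units_to_install(4))             # 5 UPS units installed when 4 are needed (N+1)
print(units_to_install(4, spares=2))   # 6 units installed for N+2 redundancy
```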

Technology is a complex issue and these explanations may give you a clearer vision of what a data centre offers, but they can’t give you the full picture. For even more comprehensive advice and guidance about how a data centre can take some of the strain of your data requirements – call The Change Organisation on 01227 779000.
