
Hyperscale Datacenters
02/02/18 • 10 min
Today on the show - hyperscale datacenters. After this episode, you'll know what they are, what makes them special and why they're important for the cloud.
#Episode transcript:#
##Prologue##
As use of computers grew rapidly in the 1990s, so did the need for servers and datacenters. Back in the day, network connections were slow and expensive. Therefore, datacenters had to be built close to the companies and users they served. Usually that meant building the datacenter into the office building’s basement.
There was this Nordic company and their business model heavily relied on using a lot of servers. So naturally, they also had to have quite a massive basement. This essentially meant the basement was business-critical for them. If the computers were to be harmed, the company would lose their reputation, business, everything. The office was in an area with low natural disaster risks. For example, there had been no recorded earthquakes in modern history.
However, the basement of this company's office was flooded a few years ago. This wasn’t just an inconvenience for the office workers. The flooding was a serious threat to the future of the company, as the server room was completely flooded. As everyone knows, computers and water don't mix well. The situation seemed dire: the company could lose all their data, and their business could go under. At this darkest of hours, the friendly neighborhood sysadmin jumped in and saved the day by swimming to the servers and rescuing them.
In the end it affected their business, but they avoided a catastrophe. So how could this situation have been avoided? That's what we're discussing in today's episode: Hyperscale Datacenters.
##Introduction##
Hi, and welcome to Cloud Gossip. I'm Annie and I am a cloud marketing expert and a startup coach. Hey, my name is Teemu. I'm a cloud developer, DevOps trainer and an international speaker. And I'm Karl and I'm a cloud & security consultant for enterprise customers, and I also moonlight as an international speaker. Today on the show - hyperscale datacenters. After this episode, you'll know what they are, what makes them special and why they're important for the cloud. This podcast is part of a 4-part series, which you can find on Apple Podcasts, Android podcast apps or on our website CloudGossip.net.
##History of datacenters##
Hi, this is Karl again. So, what is the cloud? The cloud - as we know it - is a network of modern, hyperscale datacenters. These hyperscale datacenters of today are different from the datacenters we've had previously. Let's look at the history of datacenters leading up to the cloud. Before modern hyperscale datacenters, we used a single server at a time.
The first datacenters actually had only a single server, which filled the whole room. As technology advanced, server sizes came down and we started to have datacenters: multiple servers connected to each other.
The idea was that pretty much every company with computing needs would build their own datacenter. A datacenter is a purpose-built space to host multiple servers and take care of all their needs, such as electricity, heating, ventilation, air conditioning and network connectivity.
As all these companies were building their own datacenters, they also had to maintain physical security themselves. This meant installing locks, keycard readers and any other security measures their customers required. The physical location had to be carefully picked, and deals with energy providers had to be made.
When companies were running their own datacenters, it was a big deal that they were responsible for building, installing, updating and end-of-lifing all the servers in their use. End-of-lifing means that when a physical server is so old that it's no longer feasible to replace broken parts, and cheaper to buy a new server instead, the old server is disposed of in a secure way.
The hard drives are securely wiped, so that there's no way somebody could recover data from them. After that, they are physically destroyed.
When servers eventually had hardware failures, they would be out of use. This is called an outage. Preparing for outages involves stocking spare parts for the servers. The datacenter owner had to purchase enough spare parts for their own use, or make sure they had access to them when needed.
These tasks of running a datacenter required a lot of personnel. Once up and running, a typical datacenter could have one administrator per two dozen servers. A typical midsize company could easily have 1000 servers in their datacenter. This meant having over 40 people on payroll just to keep the lights on and servers running.
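The staffing math above can be sketched as a quick calculation. This is a minimal illustration, using the rough ratio quoted in this episode rather than any industry-standard figure:

```python
import math

# Illustrative ratio from the episode: roughly one administrator
# per two dozen servers in a traditional datacenter.
SERVERS_PER_ADMIN = 24

def admins_needed(server_count: int) -> int:
    """Estimate how many administrators a datacenter needs."""
    return math.ceil(server_count / SERVERS_PER_ADMIN)

# A typical midsize company with 1000 servers:
print(admins_needed(1000))  # -> 42, i.e. over 40 people on payroll
```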
##Problems with traditional datacenters##
Hi, it’s Annie again. Running their own datacenter caused companies a lot of headaches. A major problem was outages. When an outage ...
Previous Episode

Containers?!
Today on the show: why containers? Where do they come from, and which problems do they solve?
#Episode transcript:#
##Prologue##
Hi, Karl here. Let me tell you a story from a couple of years back.
Imagine a team of quite stressed out developers. This team at Nokia Research Center had been preparing for a Demoday, to showcase their new applications to an excited audience. Luckily the team had already finished building their application -- or so they thought.
During the evening before the Demoday, they started to prepare the application to be showcased in the demo. This meant moving the application into a server that was located on the second floor of the office. Yet, the size of the application was huge, so the file transfer took all night.
In the morning, half an hour before the demo, the project manager asked for a small change to the application: could the developers change the color of one of the buttons from blue to green? This wasn't a hard task: the developer was able to make the change in a minute, and he could show the result on his computer to the project manager.
But how could they make the change apply to the server? They had no other solution than to grab a USB stick and start running...
These types of problems could be solved with a technology called -- containers :)
##Introduction##
Hi, and welcome to Cloud Gossip. I'm Annie and I am a cloud marketing expert and a startup coach. Hey, my name is Teemu. I'm a cloud developer, DevOps trainer and an international speaker. And I'm Karl and I'm a cloud & security consultant for enterprise customers, and I also moonlight as an international speaker.
Today on the show: why containers? Where do they come from, and which problems do they solve? And by the way, no worries if you didn't understand all of the terms used in the beginning - that is why this podcast exists. Glad to have you with us! This podcast is part of a 4-part series, which you can find on Apple Podcasts, Android podcast apps or on our website CloudGossip.net.
##Terminology##
Okay, so in the intro we highlighted some problems of software development. Now we will do a rundown of the terminology, and the history leading to containers. Things in real life are more complicated and have more layers to them. But here we have tried to simplify and find the best definitions and examples to get you started and help you grasp the basics.
Let's talk about the application development process, which is essentially how applications are built and made available to users. The process starts with developers building applications on their own computers. Finally, when applications are finished, they are moved to the servers.
We call this deploying to production, which is a fancy name for releasing an application. The biggest difference between the development and production phases is that in the latter, the application runs continuously on the server to serve a lot of people -- not just the developer.
So, what are servers? They are expensive computers that are specially made to serve thousands of users at the same time and are never meant to be powered off, whereas personal computers are made for individual use and are normally turned off after use.
As an example, a regular computer might store your holiday pictures, your favorite games or you might browse Facebook with it.
Servers are the infrastructure that all internet services run on top of, like a house is built on a foundation. Servers typically house software that thousands of users can use at the same time. For example, Facebook itself, or any of Google's sites are housed on servers.
Hey, did you know?! Previously, we had servers so big, that they filled entire rooms. They would also cost a lot of money, in the realm of hundreds of thousands of euros.
Servers have evolved over the years to be smaller, and nowadays you can fit one under your desk. This is very much the same process as what happened with mobile phones: evolving from old and clunky phones into the small smartphones we currently use.
Let's switch gears and talk about operating systems. On both regular computers and servers, we have an operating system, otherwise known as the OS. The OS is a collection of software that sits between the computer's hardware and its applications, making them all work together.
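As a quick illustration of how an application meets its operating system, code can ask which OS it is currently running on. A minimal Python sketch (the `"Darwin"`/`"Windows"`/`"Linux"` strings are Python's standard `platform.system()` identifiers):

```python
import platform

# platform.system() reports the OS the code is running on:
# "Darwin" for macOS, "Windows" for Windows, "Linux" for Linux.
def describe_host() -> str:
    """Return a friendly name for the current operating system."""
    names = {"Darwin": "macOS", "Windows": "Windows", "Linux": "Linux"}
    return names.get(platform.system(), platform.system())

print(f"This application is running on {describe_host()}")
```

The same code can land on very different hosts, which is exactly where the mismatch problems described next come from.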
For example, a developer's computer might run the macOS operating system, and the server might run the Windows Server operating system. If an application has been built on top of one operating system and is then placed on a server with a different operating system, things can get a bit messy.
Why does this happen, you might ask? Well, if the application has been built and used on one system, it might not function properly in the new environment – the same way if an athlete trains in a high-altitude environment, ...
Next Episode

What is the cloud?
Today on the show: Infrastructure as a Service, Platform as a Service, Software as a Service. What are these cloud service models and how do they compare? Nice of you to join us!
##Prologue##
The famous cloud advocate David Chappell has identified the three most important events of the post-dotcom-boom IT world. The first event was the IPO of Salesforce Dot Com in 2004. It proved that Software-as-a-Service is a serious business model.
The second event was the launch of Amazon Web Services in 2006, which was the first public cloud platform.
And the third event was the release of the original Apple iPhone in 2007. It started the mobile-first era with the phones eventually becoming tiny computers in our pockets.
The common enabler behind all three of these important events was the cloud. So, the cloud in all its forms has been instrumental in major developments in the IT world.
So, it's important to understand what exactly the cloud is and what its different variations are.
##SaaS, PaaS or IaaS##
We will start the episode by defining the terms and needed concepts, and then we will move on to how they have shaped the world. Software-as-a-Service, or SaaS, is a business model where software companies sell their products for a monthly subscription instead of a one-time purchase. Examples of Software-as-a-Service cloud services are Salesforce Dot Com, Google Gmail, Dropbox and Microsoft Office 365.
But why is it called Software as a Service? It means that we don't install the software ourselves. We don't have to worry about the servers, and we don't have to worry about updating the software. We just use the service. We can add our own files and account details, and even change the background color to our liking. The amount of customization we can do, or administration we have to do, is limited. The Software-as-a-Service cloud model is about getting ready-made software that we can start using right away. If we want to change how the software behaves, we are limited to what the cloud service provider allows us to customize.
So how does the Platform-as-a-Service model differ from Software-as-a-Service? In the Platform-as-a-Service cloud model, the cloud service provider gives us a set of so-called "building blocks", and we can build virtually any software with those building blocks. For example, a Platform-as-a-Service cloud provider can let us host websites on their platform. We just have to write the code that puts the building blocks together. This is the key difference: in the Platform-as-a-Service model we have to build the software ourselves, whereas in the Software-as-a-Service model we just use the existing application as it is.
In the Infrastructure-as-a-Service model we have even more control. Here the cloud provider takes care of the datacenter: the physical venue, servers, network capacity, electricity, heating, ventilation and cooling. The Infrastructure-as-a-Service cloud provider takes care of the physical hosting for us. We just get remote access to the virtual machines and storage. We can do essentially anything with the servers: we can install any operating system or any software on them. We can build our own software on top of them.
As a developer, I like Platform-as-a-Service, because that makes me most productive and I don’t have to worry about the virtual machines as in Infrastructure-as-a-Service. I only have to take care of the coding.
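One way to summarize the three models above is by who manages each layer. Here is a minimal sketch; the four-layer breakdown is our own simplification for illustration, not an official taxonomy:

```python
# Who manages each layer under the three cloud service models?
# "provider" = the cloud provider, "you" = the customer.
# This four-layer breakdown is a simplification for illustration.
MODELS = {
    "IaaS": {"datacenter": "provider", "virtual machines": "you",
             "operating system": "you", "application": "you"},
    "PaaS": {"datacenter": "provider", "virtual machines": "provider",
             "operating system": "provider", "application": "you"},
    "SaaS": {"datacenter": "provider", "virtual machines": "provider",
             "operating system": "provider", "application": "provider"},
}

def customer_managed(model: str) -> list[str]:
    """List the layers the customer is responsible for in a given model."""
    return [layer for layer, who in MODELS[model].items() if who == "you"]

print(customer_managed("PaaS"))  # -> ['application']
print(customer_managed("SaaS"))  # -> []
```

Reading down the table, each model hands one more layer to the provider, which is exactly why SaaS requires the least administration and IaaS gives the most control.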
##Challenges##
So why did people start moving to Software-as-a-Service in growing numbers after 2004 and Salesforce? One of the reasons arises from comparing the on-premises and cloud worlds -- the speed of change. Previously, if a company wanted to use email, they had to install the email systems in their own datacenter. Even if they outsourced the datacenter to a hosting provider, the steps would still be numerous: plugging in a new server, installing the operating system and finally installing and configuring the email software.
This clearly takes quite a bit of time. And that's not all! Getting the software in place is not enough. With any software that we are responsible for comes the need to update. We would need security updates at least once a month. If there is a new version of the email software, an upgrade or "migration" would have to be made, for example every 5 years. All of these tasks take a lot of time and expertise.
In the cloud world, the cloud provider does the difficult work of installing and validating updates and keeping the services running. With that, the IT organization has more time on their hands, so they can actually start thinking about how to use these tools better. Regardless of who is responsible for the software maintenance, it's really all about change management. When you perform any number of updates on existing software, you essentially change things.
And as long as there are changes, there are always...