Categories
Cloud datacentre Engineer

In 2014 the Market for Cloud Equipment Will Double its 2009 Tally

Cloud Equipment Market Will Grow From $110Bn in 2009 to $217Bn in 2014.

According to a Cisco sponsored report by Forrester Research Inc, 2009 saw a significant uplift in sales of equipment into the cloud services sector despite the global recession. The figures show significantly greater growth in sales of equipment that supports next generation managed services than in traditional Customer Premises Equipment.

2009 market growth

Their forecast for this market is that sales will grow from $110Bn in 2009 to $217Bn in 2014, a CAGR of roughly 15%. It is all very exciting, I guess – unless, that is, you are stuck selling on-premises equipment, in which case you probably need to start thinking about career alternatives.
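
As a sanity check, the implied compound annual growth rate is easy to verify. A minimal Python sketch (the figures are Forrester's, the arithmetic is mine):

start, end, years = 110e9, 217e9, 5   # $110Bn in 2009 to $217Bn in 2014
cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")    # ~14.6%, i.e. roughly 15%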

This information came from the Cisco managed services seminar at the Tower of London last week. What struck me was the huge number of elements that make up the big cloud services picture. I counted 62 different technology areas that Cisco claim make up the whole market, including Computing as a Service, Platform as a Service, Infrastructure as a Service and Software as a Service. The range is mind-boggling.

This isn’t something that an ISP can undertake on a broad scale, at least not during the initial development stages of this market. You have to cherry-pick your offerings.

Forrester have segmented the market into Unified Comms, Metro Ethernet, Security, Managed VPN (MPLS, I assume) and Data Centre. This may help. Timico plays in all of these market segments to a greater or lesser degree, which is somewhat reassuring.

In my mind you have to ignore the buzzwords and get on with satisfying what your customers need. In many cases customers will already have a good idea of what that is, but there will be many more looking for guidance.

The case for virtualisation, which is a big part of the infrastructure play when it comes to talking about managed services and the cloud, is very strong.

I looked at one specific example of a company that had 217 machines/servers occupying 9 racks. On average each server has 500GB of storage (an assumption on my part, but a reasonable one) but a storage utilisation of only 30 – 40%. That’s a usage of only 43TB out of a total available of 108TB (plenty of rounding here).
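
The arithmetic behind those figures, as a minimal Python sketch (the 500GB per server and 40% utilisation are my assumptions, as above):

servers = 217
storage_per_server_tb = 0.5   # 500GB each - my assumption
utilisation = 0.40            # top end of the 30-40% range

total_tb = servers * storage_per_server_tb
used_tb = total_tb * utilisation
print(f"Provisioned: {total_tb:.0f}TB, actually used: {used_tb:.0f}TB")
# Provisioned: 108TB, actually used: 43TB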

If this server estate could be distilled onto a robust Storage Area Network that would represent a huge potential cost saving, just taking disk space into consideration. Further space is saved because these systems typically recognise which operating systems are being used by the Virtual Machines and do not replicate multiple instances of such software.

What’s more, aggregated processing power means better individual VM performance. In other words the processor capacity available to any single machine is far greater than it previously had access to on a standalone server, which inevitably results in performance efficiencies. The bandwidth story is the same: an individual standalone server is likely to be served by a maximum of 1Gbps, whereas a VM will probably get 10Gbps.

The example I looked at will result in 217 VMs on a single 8U blade centre with a capacity of 32 servers, though we won’t need all 32 for this specific customer.

As Cisco has suggested, the market is undergoing a big change right now – one that requires significant investment in infrastructure. I suspect that many familiar names will fail to make it through. It will be interesting to see who emerges into the clear skies beyond the cloud 🙂

Charts are courtesy of Cisco with Data from Forrester Research Inc.

Categories
Business Cloud datacentre

Timico to spend £7m on datacentre, NOC and virtualisation

I am quite excited to be able to announce that we have begun building a new 18,000 sq ft, three storey facility at our Newark corporate HQ. This will house a datacentre with up to 150 4kW racks on the ground floor.

The first floor is designed as a Network Operations Centre and will provide us with a great 24×7 monitoring facility, screens galore and mirrored glass – the works. The initial build is costing £5m but we are planning a further £2m spend over the next three years, mainly on increasing the capacity of our virtualisation platform and Storage Area Network.

The Newark site already has diverse fibre connections but we will be adding a further link to Manchester to increase our route options out of the UK.

This facility will allow us to offer customers we host in London Docklands an alternative DR option in the Midlands. The bigger play though will be virtualisation and the private cloud. We have been offering bespoke virtualisation services for three years or so, but this will represent a big step up. Look out for announcements on this later in the year.

The header photo (click to see all of it) is of me and Construction Manager Gary Davies of Lindum Construction doing the ceremonials for the “groundbreaking”. The pic below is me actually doing some digging – I went out and bought a new spade especially 🙂

Categories
Business datacentre

Building for Growth

Surveying the ground prior to starting on the new Timico datacentre build

I’m looking forward to another year of building growth in the business. Watch this space for news, but to give you a clue, the header photo is of the plot of land behind our current offices. The bloke in the yellow coat is a surveyor. There will be a webcam involved, together with a few giant boys’ toys.

It’s currently minus one degree out there so I am glad I have a nice warm air-conditioned office to sit in 🙂 Click the photo to get a bigger view of the plot.

Categories
Cloud datacentre Engineer peering

Notes from London Internet Exchange (LINX), including Telecity and Datacentre Market Growth

I usually attend the quarterly meetings of the London Internet Exchange (LINX). At the risk of boring readers, you do find some fascinating facts at these get-togethers.

LINX has 383 members with 56 new applications in 2010. That’s huge growth. Members come from 50 countries – so despite having London in its name LINX is very much international in its orientation.

LINX has 304 10Gig ports and carries over 776Gbps of peak traffic – roughly the same amount of traffic as around 160,000 Standard Definition video streams or 40,000 High Definition ones. Traffic is up 22% in the last three months!
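
Those equivalences imply per-stream bitrates of roughly 5Mbps for SD and 20Mbps for HD. A quick sketch of the sums:

peak_gbps = 776
sd_streams, hd_streams = 160_000, 40_000
print(f"SD: {peak_gbps * 1000 / sd_streams:.1f} Mbps per stream")   # ~4.9 Mbps
print(f"HD: {peak_gbps * 1000 / hd_streams:.1f} Mbps per stream")   # ~19.4 Mbps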

LINX members can reach around 78% of all websites in the world through their London connections. Interestingly, LINX traffic has historically been fairly smooth, whereas an individual ISP will see spikes based on high profile events such as the Olympics and the Football World Cup. Now even LINX is starting to see the effect of these events. The Chilean mine rescue is one example: people watched it on TV at home and then carried on following it online once they had arrived at the office.

At LINX71, datacentre operator Telecity have just told us that they are selling out colocation space as fast as they can build it. They currently have around 23MW in the UK with a further 21MW in build.

Mind-bogglingly, they say that Google has as much datacentre space in Liege in Belgium as Telecity has in the entire UK.

More interesting facts as they surface – you read them first on trefor.net.

Categories
broadband datacentre Engineer

Next Generation Broadband: The Digital Village Pump

Google satellite image of Ashby de la Launde in Lincolnshire

The story of Next Generation Broadband Access into the Final Third has to be all about the Digital Village Pump. The phrase has a certain flow to it but this is not about water. This DVP is about bytes.

The concept is that you run a fibre into a village and it terminates into a secure “datacentre” owned and run by the local community.  In the picture below the DVP is tucked away nicely at the back of a building in the centre of the village.

Digital Village Pump set in a modern day utilitarian “datacentre”

The DVP is air cooled with minimal ongoing maintenance and running costs.

How you get the fibre into the village in the first place is going to be different for each community.

There is very often an existing fibre run in an area – serving a school, for example. It is not untypical for such runs to have multiple strands of fibre, most of which are unused; they just need identifying. Alternatively, the feed into the village may be wireless.

How that community then distributes the connectivity is up to them. It isn’t necessarily feasible to expect people with no experience of data networks to do this themselves, but the idea is that they engage a management company to look after it on their behalf.

Categories
Business datacentre

2,200 properties in the Newark area lose electric power – communications services OK

2,200 properties in the Newark area have lost electric power due to a substation failure. I’m told it will take a couple of hours before “normal service is restored”.

That’s cool. I can hear the reassuring sound of the backup generator humming away. Comms are still up, but the microwave oven in the kitchen, which is not a key service and therefore not supported on the jenny, has a half-cooked meal in it. Customer services are still functioning.

Categories
datacentre End User internet social networking

@tref on Twitter…Two Years, Ten Weeks, Two Days and Counting

I joined Twitter 802 days ago on 17th May 2008. Since then, as @tref on Twitter, I have sent 2,623 tweets – an average of just over three a day. Not too bad for anyone who thinks I spend too long on the site.

In June, according to Twitter COO Dick Costolo, Twitter had 190 million users, growing by 300 thousand a day. These users were generating 65 million tweets a day – enough for Twitter to be building its own brand new datacentre to handle all the traffic.

Categories
datacentre Engineer

Shock Horror – High Performance Laptop Costs $100m

Did you know that a state of the art supercomputer costs $100m? The price never comes down with time – the speed just goes up. Today’s leading edge box – actually it’s a datacentre full of racks, not a single box – has over 1 Petaflop of processing power.

Such is the progress of technology that in three years’ time this will not even be in the top 500 supercomputers. At that point the maintenance costs also start to ramp up, so your average supercomputer owner just bins it and buys another one. It’s what I’d do 🙂

What is also interesting is that today’s supercomputer processing power flows down to the laptop of 12 years hence. So in 2021 you will get one hell of a bang for your buck. Whether you will need that much power to send emails and operate Word 2021 (or whatever it will be called then) is another matter.
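
That 12-year flow-down is consistent with Moore's-law-style doubling. A rough sketch, assuming performance doubles every 18 months (my assumption for illustration, not a precise figure):

doubling_period_years = 1.5   # assumed performance doubling period
years = 12
factor = 2 ** (years / doubling_period_years)
print(f"Improvement over {years} years: ~{factor:.0f}x")   # ~256x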

I guess the power will be usable for improved 3D HD graphics for gaming and TV but I’m not sure what other apps will need it.  Whole brain simulations on a laptop perhaps.  Build it and they will come…  It does point to a huge continued growth in network bandwidth usage.

Of course the laptop won’t cost $100m. I just put that in for effect! My guess is that all laptops will come free with subscription to network services.

Categories
datacentre Engineer

GigE replaces old ATM infrastructure at Timico Docklands datacentre

I’ve been rolling my sleeves up at our Docklands datacentres today. Having decommissioned all our old 155Mbps STM1 pipes and replaced them with 622Mbps STM4s, we are now gearing up to replace the 622s with resilient Gigabit Ethernet connectivity to the BT21CN network.

The picture below shows part of the rack containing our first ever 155Mbps connection. For those interested, this was an STM4 partitioned into 4 STM1s.

For those not interested, the real point is that this complete rack, originally pretty much dedicated to hosting our central pipe connectivity to the BT ADSL network, can now be replaced by a single port in a 3U chassis. You can get around 13 of these switches in a rack, each with potentially up to 15 GigE connections. In theory that’s up to 195 connections instead of just 4, with around 313 times the bandwidth of the old STM4.
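
The sums behind that claim, as a quick sketch (comparing against the single 622Mbps STM4, as above):

switches_per_rack, ports_per_switch = 13, 15
gige_ports = switches_per_rack * ports_per_switch   # 195 connections
stm4_mbps = 622                                     # the old STM4
ratio = gige_ports * 1000 / stm4_mbps
print(f"{gige_ports} connections, ~{ratio:.0f}x the bandwidth")   # ~314x (313x if you round down)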

That’s progress folks.

STM4 Mux

Old STM4 chassis. Couldn’t get the whole rack in view. This is only half of it.

3U chassis supporting up to 15 GigE connections

What replaced it!

Categories
Business Cloud datacentre

Salesforce.com Cloud Workshop: More from the CIO Council Meeting

In considering moving some of their business operations to the cloud the CIOs round the table at last week’s Salesforce.com cloud computing workshop voiced some interesting issues that they had had to get to grips with.

Firstly, in running with a cloud based service a business is effectively entrusting key corporate data to a third party and relinquishing control over it.

This means that you have to be sure of the integrity of the cloud. Salesforce.com operates three global datacentres, in North America, Europe and Asia. These are linked with multiple OC48 fibre connections and replicate with each other on an ongoing basis. Of course this doesn’t preclude a domino effect type disaster.

A prudent business will also store its own data elsewhere. Coincidentally, Timico’s own cloud storage service backs up to two secure and geographically diverse locations, so customers then have their data stored in three spots – our two and their own local storage. It would be over the top for us to shift data to the Far East 🙂

The concern voiced at the workshop was not so much the safety of the data but its retrievability in the event that a customer wanted to take its business elsewhere. So when looking at a cloud service the portability of your bytes is important. Whilst simply retrieving stored data is straightforward (bandwidth permitting), retrieving the business logic built into the service may not be, so careful planning is likely to be required.

Different cloud services almost certainly offer different applications and features, and it will be a while before these harmonise into a single set of features in the way that PBXs and CRM packages have done over the years. At the moment, though, you are unlikely to be able to move to a like-for-like service. Choose your partner well at the outset.

Another comment from the floor related to the fact that although ease of scalability is part of the sales pitch from a cloud vendor, typically this means they let you scale up easily but are not so accommodating when you want to scale down. It is understandable that service providers want to maximise their take, but I tend to agree that people should be able to reduce their commitment as well as grow it. That should also act as a stimulus for keeping the quality of a service up.

Our VoIP service typically allows customers to do this with one month’s notice, so it can be done.

Categories
Business datacentre internet ofcom

Video Streaming Regulation: Is Ofcom Going after YouTube?

This may be something that has been going on for some time in the background, but Ofcom today launched its consultation into the regulation of video on-demand (VOD) services.

Following the Audio Visual Media Services Directive, the Government is to regulate VOD services which are ‘TV-like’. The consultation is looking at whether the Advertising Standards Authority (ASA) should regulate advertising in VOD services and is proposing that VOD services be regulated by the Association for Television on Demand (ATVOD).

The regulation will consist of a range of minimum content standards and new VOD rules delivered through a co-regulatory framework, and Ofcom will be given primary responsibility for ensuring the effective operation of that framework. VOD regulation has to be in place by December 19 and Ofcom is seeking views by October 26.

I did wonder whether this meant that Ofcom would be trying to regulate the likes of YouTube. The consultation document does tell us that whether a service is in scope for regulation is defined by a range of criteria, including: whether the principal purpose of a service is to provide “television-like” programmes, on an on-demand basis, to members of the public; whether such a service falls under UK jurisdiction for the purposes of regulation; and whether the service is under a person’s “editorial responsibility”.

I suspect that YouTube falls outside UK jurisdiction, but to my mind this might not be the case if a specific video were stored on servers based in the UK. I don’t know where specific bits of the YouTube cloud are, but it isn’t beyond the realms of possibility that some of it could one day be in UK datacentres. Looks like another potentially messy situation to me.

PS I note that my post titles are getting more and more tabloid-like and sensationalist. I rely on my friends to tell me when it is getting out of control 🙂

Categories
Business datacentre voip

Discussing VoIP Strategy and Solutions

We were discussing VoIP strategy today. Timico supplies a mix of hosted VoIP and on-premises equipment based on what is best for the specific customer need. In looking at PBXs it occurred to me that there should be a standard platform that will run anyone’s PBX software, just as there is in the PC world.

Then I realised that this is where the world has been for some time now and that platform is actually the PC. With the advent of SIP trunks replacing the need for analogue or ISDN line cards all you need is a PC running a PBX software application plugged into your network somewhere (or at one of our datacentres).

I’m sorry if this is stating the blooming obvious to most of you, but the fact is it has crept up on us to the point that most PBXs are now really just PCs, and the vendors are trying to exit the hardware game. No longer do you need specialised modules to handle the conversion of IP traffic for outmoded devices and services.

The vision that came with SIP when I first started working with the protocol almost ten years ago has finally come to fruition. You can now buy an off the shelf piece of hardware (ie the PC), run a wide variety of PBXs on it – take your pick, the choice is yours – and choose from hundreds of different handset types at all sorts of price points and feature sets.
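
To illustrate how little protocol machinery is involved these days, here is a minimal Python sketch that sends a SIP OPTIONS request – the SIP equivalent of a ping – to a PBX over UDP. The host and addresses are hypothetical placeholders:

import socket, uuid

pbx_host, pbx_port = "pbx.example.com", 5060   # hypothetical PBX
local_ip = "192.0.2.10"                        # hypothetical local address

call_id = uuid.uuid4().hex
request = (
    f"OPTIONS sip:{pbx_host} SIP/2.0\r\n"
    # rport asks the far end to reply to whatever source port we actually used
    f"Via: SIP/2.0/UDP {local_ip}:5060;rport;branch=z9hG4bK{call_id[:8]}\r\n"
    f"Max-Forwards: 70\r\n"
    f"From: <sip:test@{local_ip}>;tag={call_id[8:16]}\r\n"
    f"To: <sip:{pbx_host}>\r\n"
    f"Call-ID: {call_id}\r\n"
    f"CSeq: 1 OPTIONS\r\n"
    f"Content-Length: 0\r\n\r\n"
)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(2.0)
sock.sendto(request.encode(), (pbx_host, pbx_port))
print(sock.recv(4096).decode(errors="replace"))   # a live PBX replies "SIP/2.0 200 OK"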

The problem is that at the moment this choice still introduces a level of complexity to the game that will take some time to go away. It still doesn’t make sense for a service provider such as Timico to offer a huge range of PBXs and handsets to our business customers.

When they go wrong, and this they are certainly going to do, you out there running your businesses and concentrating on what you do best need us to come and fix the problem, or at least to send a replacement PDQ so that you can get on with life. Nobody can do this if they have a hundred telephone handsets in their product range.

I’ll keep you posted on my exploration of different handsets and solutions. It is where I started in this game and is a fun part of the job.

Categories
datacentre Engineer peering

Interxion talk on green datacentres at LINX66

Some interesting talks at today’s LINX66 sessions at Goodenough College in London. Lex Coors, VP at international datacentre operator Interxion, discussed the green datacentre. One of the slides that caught my attention related to best practice in how end users can keep their power consumption to a minimum.

Most of these are pretty obvious but worth reproducing here with the percentages being the potential efficiency gain:

eliminate comatose servers 10 – 25%
virtualise 25 – 30%
upgrade older equipment 10 – 20%
reduce demand for older equipment 10 – 20%
introduce greener more efficient servers 10 – 20%

If you add that lot up you apparently get more than 100%, although in practice successive savings compound on an ever smaller base rather than simply adding. Either way it gives people a feel for where their efficiency, and therefore cost, savings can be made.
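
A quick sketch of that compounding point, taking the midpoint of each range above (my simplification – the real gains overlap and depend on the estate):

savings = [0.175, 0.275, 0.15, 0.15, 0.15]   # midpoints of the five ranges
remaining = 1.0
for s in savings:
    remaining *= 1 - s    # each saving applies only to what is left
print(f"Naive sum: {sum(savings):.0%}, compounded saving: {1 - remaining:.0%}")
# Naive sum: 90%, compounded saving: 63%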

The original source was McKinsey and the Uptime Institute (2008).

Categories
datacentre Engineer

It’s all about wiring

Following my post on our fibre installation earlier in June, the build of our new datacentre module in Newark continues.

Datacentres, whilst giving the appearance of being high tech,  are all about wiring and plumbing.  So I’m getting in the cable monkeys and plumbers.

A couple of photos below give you a feel for part of the process: underfloor power connections to each rack space, and a coil of fibre that might look innocuous but will carry the lifeblood of the datacentre, ie the data itself.

It makes me think of the pony express, or the old stage post mail system and how things have changed. I’m getting romantic in my old age.

cabling

fibre

Of course it will be tidied up a bit before we open for business.

Categories
Business datacentre internet

Powergate initial tranche is 95% sold

Following on from yesterday’s post re Telecity’s new capacity plans in Europe, the company told me today that the first tranche at Powergate, its new West London datacentre, is 95% sold. That’s 95% of 4.5MW, according to Telecity, and in less than a year!

With a total of 10MW potentially available there is still some way to go, but I wouldn’t mind betting that they are already looking for a site for their next UK build.

Categories
datacentre Engineer internet peering

LINX 65 and Telecity

The first day of LINX65 produced the usual interesting mix of talks, today including IPv6 and VoIP QoS.

The sponsor’s talk at the end was given by Rob Coupland, COO of datacentre operator, Telecity. In Europe Telecity operates in London, Paris, Amsterdam, Stockholm, Milan and Frankfurt. A good footprint to have.

What was interesting was the statistic he floated that the company is doubling its datacentre power capacity over the next couple of years.

I counted 26.5MW in total! They plan to sell this over the next 3 – 4 years. This is a big bet that they appear to be confident of placing based on the uptake that they are already seeing. One of the big drivers they are (unsurprisingly) seeing is content provision.

I’m not making any comment re the effect on global warming here, seeing as we at Timico are also in the business. I guess at the scale we are talking about, though, cooling efficiencies will make a huge difference.

Categories
Business datacentre security

Security Tightened at London Datacentres for G20 Summit

Security is already pretty tight at our London datacentres. This coming week will see it stepped up further as the G20 Summit takes place in town. I’m not going to go into any details, but at least BT are less likely to have any 21CN line cards stolen next week.

I’ve also had a number of meetings rescheduled from next week due to “security concerns”.

Categories
Business datacentre events

Terrific Tina Turner at the O2

Tina Turner was great – amazing, in fact, considering she is 69 years old (allegedly). What’s that got to do with a technology blog? Only that I went along to a concert at the O2 Arena last night and was absolutely bowled over by the quality of what I saw.

The quality of the show, the quality of the venue – wonderful acoustics – and the quality of the hospitality on offer. My thanks to our hosts Telecity, and specifically sales manager Sharon Newling, for looking after us in their suite.

Telecity is one of Timico’s high quality datacentre partners – we have a number of suites and cages at both Harbour Exchange and Sovereign House in London’s Docklands.

Just to round off the story, I was pleased to take along with me Barry Skillett of Paypoint and Terence Long of RTP Solutions, both Timico customers. What’s more, the O2 Arena is run by AEG, also a customer.

tinateam

From left to right: Barry Skillett, me, Sharon Newling, Terence Long. I am obviously enjoying myself and obviously in need of a haircut!

Below – Tina herself on stage.

tina

Categories
datacentre End User internet

Gmail Down for the Morning Yesterday

Google themselves use Gmail, so someone certainly noticed that the service was down.

Gmail was down yesterday, as you may have noticed. You certainly might have if you were one of their 113m-strong userbase, although I imagine most of those are consumers, and because the outage happened during business hours it may not have had that significant an impact on them.

The service fell over because one of Google’s European datacentres failed, which in turn had a knock-on effect on some of their other datacentres. I have recently been visiting datacentres with a view to planning our next phase of expansion. Datacentres are rated in Tiers from 1 to 4, with Tier 4 being the most secure and reliable, and therefore the most expensive.

In a Tier 4 datacentre you will find the ultimate in security mechanisms – biometric security, weighing machines etc – together with the highest levels of resilience to power and connectivity failure. I was interested to learn recently, though, that there is a sensible limit to how much it is worth spending on a datacentre, as modelling has shown that even Tier 4s are vulnerable to catastrophic chain reaction failures.
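
For a feel for what those Tiers mean in practice, the Uptime Institute's commonly quoted availability targets translate into allowed downtime per year roughly as follows (the percentages are the industry's usual figures, not from this post):

tiers = {"Tier 1": 99.671, "Tier 2": 99.741, "Tier 3": 99.982, "Tier 4": 99.995}
for tier, availability in tiers.items():
    downtime_hours = (1 - availability / 100) * 8760   # hours in a year
    print(f"{tier}: {availability}% -> ~{downtime_hours:.1f} hours downtime/year")
# Tier 1: ~28.8h, Tier 2: ~22.7h, Tier 3: ~1.6h, Tier 4: ~0.4h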

I don’t know what Tier of datacentre Google operates, but they do employ someone specifically to manage the reliability of their site. It just goes to show that where software and computers are concerned there is no such thing as 100% reliability.

In this case, if you are totally reliant on a single email system it seems that there will always be a potential reliability issue. What you can do is have a totally separate mail system running on a separate platform. I use both timico.co.uk from an Exchange server and trefor.net from our ISP platform.

Although I don’t ever recall the ISP mail platform letting me down, the Microsoft product has certainly occasionally given me cause to resort to the backup. With a backup you can always call someone and ask them to resend to the other mail address, and also use it yourself to send.

Most people have a personal email address, but you might not want to give that out to a business acquaintance, and in any event this type of email typically has file size, storage and download restrictions. I’m sure others will have views on this subject but that’s my five pence worth.

Categories
datacentre Engineer H/W hosting

Containerised Storage

In the process of checking out our datacentre expansion options I have been meeting a number of vendors. Today I met Verari Systems, who manufacture high density blade based storage solutions and sell datacentres in a container. Yes, that’s the same type of container you see hauled around on the back of trucks world-wide.

The beauty of containerised datacentres is the time to market: four months from ordering you can be up and running with new capacity. You just need to supply the power and a secure place to put the container.

What impressed me was the quoted 11 Petabytes of storage that Verari can achieve in a 100kW container designed to hold between 10 and 15 racks. For the mathematically challenged/lazy amongst us, that is in round terms the equivalent of eleven thousand 1 Terabyte PC hard drives.

Keeping the maths simple, a rack can hold 42 servers (PCs), so ten racks would hold the equivalent of 420 servers. The Verari solution therefore offers 26 times the storage density of a PC. I have been buying servers with 3 Terabytes of resilient storage – Verari still offers around 8 times the density.
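
The sums behind those density claims, as a quick sketch (the 1TB-per-PC drive is the post's own round number):

container_pb = 11
servers = 10 * 42              # ten racks of 42 servers
pc_tb, resilient_tb = 1, 3     # plain PC vs our 3TB resilient servers

print(f"vs 1TB PCs: {container_pb * 1000 / (servers * pc_tb):.1f}x denser")       # ~26.2x
print(f"vs 3TB servers: {container_pb * 1000 / (servers * resilient_tb):.1f}x")   # ~8.7x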

Categories
Business datacentre internet

Building Datacentres: The Costs are Rising

Datacentres are quite a hot topic in the Internet Service Provider world, and their costs are rising, largely due to the increasing costs of power and cooling.

In the UK the major datacentres have typically been located in London’s Docklands. This is because Docklands is where most of the world’s major network providers connect. The cost of connectivity has traditionally been far too high to locate critical network infrastructure outside the capital.

I am sitting in the LINX meeting in London writing this post, listening to Bob Harris, Technical Services Director of Telehouse, one of the major datacentre players in Europe. Timico is already located in Telehouse North and East. Well, the news is that they are building a Telehouse West (not particularly new news).

What is interesting are the financials associated with this project.

  1. £165m over 5 years (£80m over the first 2 years for the first two floors)
  2. 5 floors with 985 sq metres per floor
  3. 425 racks per floor at 4kW per rack = 2,125 racks in total
  4. Business plan to fill the facility over 3 – 5 years

That works out at roughly £78,000 per rack, or just under £20,000 per kW. In terms of contribution to operating costs, that is capital depreciation of around £258 per rack per month over 25 years – which is incidentally a long time in this game; 10 years might be considered more normal, and the period has been arbitrarily chosen by me for illustration. Remember this is before anyone starts charging for operating costs.
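
The back-of-envelope sums, for anyone who wants to check my working:

build_cost = 165e6
racks, kw_per_rack = 2125, 4

print(f"Per rack: £{build_cost / racks:,.0f}")                  # £77,647 - roughly £78,000
print(f"Per kW: £{build_cost / (racks * kw_per_rack):,.0f}")    # £19,412 - just under £20,000
print(f"Depreciation per rack per month over 25 years: "
      f"£{build_cost / racks / (25 * 12):,.0f}")                # ~£259 a month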

I think the costs of this project point towards a trend of building datacentres outside London. Communications costs have plummeted, and service providers and businesses are going to start hosting all but their most critical, perhaps latency sensitive, infrastructure outside the M25.

You can follow the progress of the Telehouse West building on their webcam here. I’ve pasted a picture of a typical backup generator that is used by datacentres to give you a feel for where the costs are incurred.

generator

PS I don’t think there is room for a Telehouse South, in case anyone was wondering.

Categories
Business datacentre security

It’s all about Security, Security, Security

I enjoy this business so much because of the wonderful diversity it provides in terms of issues, problems and successes. The latest is that the firewall at our corporate headquarters has been the subject of a number of attacks by some unfriendly person.

These attempts to break into corporate networks happen millions of times daily around the world, which is why businesses need to be on top of their security strategy. What interested me here was that this was the same attack coming from a number of different places around the world.

The sources were in China, the USA, Poland, Australia and a couple of other countries whose names escape me. The same common username and password combinations were tried each time from each source (lesson here – never use “admin” and “password”).

Of course the same individual or organisation is almost certainly behind all of them. That person will have systematically hacked into a certain type of server whose operating system and security patches have not been kept up to date – likely a company server hosted at a datacentre somewhere.

Our course of action, if an attack persists, is to look up the owner of the IP address from which it is coming and ring the business up to let them know they have a problem. In the case of the Chinese source we send them an email – only because they will almost certainly be in bed 🙂 Usually this sorts the problem out, and indeed the recent spate of attempted break-ins has abated. No doubt there will be more.
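
Looking up the owner of an offending address is straightforward. A minimal Python sketch that shells out to the standard whois command and picks out the ownership lines (the IP shown is a documentation placeholder, not a real attacker):

import subprocess

attacker_ip = "203.0.113.42"   # placeholder from the TEST-NET-3 documentation range
result = subprocess.run(["whois", attacker_ip], capture_output=True, text=True)

# Print just the lines that identify the network owner and abuse contact
for line in result.stdout.splitlines():
    if line.lower().startswith(("orgname", "org-name", "netname", "abuse")):
        print(line)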

We know what to do in these cases, but it is a lot to ask of a business that is not an ISP or doesn’t have a highly skilled IT department, which is why it very often makes sense to outsource your security management.

Categories
Business datacentre

Looking in on Microsoft’s Internet Strategy

In spending over $2Bn on network infrastructure, Microsoft is showing just how seriously it is taking the internet business – and opening windows into its internet strategy.

I happened to be reading the New York Times today – as you do. The specific article revealed that Microsoft’s share price had dropped 5%, simply because Microsoft CEO Steve Ballmer mentioned that he thought technology stocks were overvalued – oops.

The main intent of the article was to look at Microsoft’s internet strategy. Its attempt to buy Yahoo has been high profile. However, what is slowly emerging is its other plans in the general area of “internet”.

Microsoft is moving into the Software as a Service (SaaS) game, which I’m certain means online versions of the type of application that a business buys today and sticks on a server in the corner of the office – Microsoft Exchange and SharePoint, for example. It likely means much more, though. Another interview on the web, by Om Malik with Debra Chrapaty, Microsoft VP of Global Foundation Services (!!??), revealed some of the extent of the Microsoft investment in this area.

Two years ago Microsoft was said to be spending $2Bn on its network infrastructure. Some of today’s facts are absolutely astounding:

  • The company is adding 10,000 servers a month to its network.
  • New datacentres being planned or under construction are equivalent to over 15 US football fields of datacentre space (sounds a lot, but it is probably the same as five rounders pitches 🙂).
  • It plans to cut 30% to 40% off datacentre power costs company-wide over the next two years (not buying its electricity from my UK supplier then – mine has just jumped UP about 150%).
  • The current network backbone runs at about 100 gigabits per second, but Microsoft soon plans to bump it to 500 gigabits. For comparison, BT21CN connectivity being offered to ISPs is based on 1 gigabit rising to 10 gigabits, although I’m sure BT’s backbone must be faster than that.
  • It is building out its own Content Delivery Network – 99 nodes on a 100 gigabit per second backbone.
  • For Microsoft, total data grows ten times every three years, and will in the near future approach hundreds of petabytes.
  • Its datacentre in Quincy, Washington opened in April 2007 and when complete will consume 48 megawatts of energy; Microsoft can tap up to 72MW of hydro-electric power there.
  • In San Antonio, Texas, two further datacentres are planned to open in September 2008, covering 447,000 square feet on 44 acres.

These facts and figures are just beyond comprehension for us mere mortals and are an indication of how serious the internet business is becoming.
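
To put that “ten times every three years” growth in perspective, a small sketch projecting it forward (the 10PB starting point is an illustrative assumption, not a Microsoft figure):

data_pb = 10.0           # assumed starting point in petabytes
growth_per_3_years = 10
for year in (0, 3, 6, 9):
    print(f"Year {year}: {data_pb * growth_per_3_years ** (year / 3):,.0f} PB")
# 10, 100, 1,000 then 10,000 PB - hundreds of petabytes arrive within a few years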

By the way, did you know that Microsoft owns Expedia, the travel site? I didn’t.
