A data centre’s a data centre, isn’t it? Power, aircon, security and diverse connections in and out of the building? Pick a location and a Tier level – 1, 2, 3 or 4 – to determine the resiliency and cost. Throw in some green credentials and your uncle’s name is Bob!
Well, you could put it like that, but you wouldn’t be looking at the whole picture. Before building our Newark data centre we at Timico did a fair bit of market research. We looked at the needs of our customer base and at what was available in terms of infrastructure technology that fitted the bill. The latter ranged from getting all the electromechanical bits right to ensuring that the cloud story was absolutely up there. When talking cloud we are talking the connectivity, storage, processing and virtualisation infrastructure.
There was one piece of the story that we found compelling. This was how to make sure that if something went wrong the company and its customers were armed with everything they needed to get the problem fixed as soon as humanly possible. Or even before the problem occurred.
Doing this required a monitoring capability that was unrivalled in the industry. The company decided that it needed a Network Operations Centre (NOC) at the heart of its business, one that would have full 24 x 7 visibility and control over its estate.
We visited a number of NOCs at companies at the top of the data centre league but it was clear that these guys were just playing at the edges. Checking to see if main power feeds were up, aircon working, that kind of thing.
As a supplier of not only the data centre infrastructure but also the network and the online platform (ok, cloud), we already had a capability that exceeded that of suppliers of pure colocation by a country mile.
We set about integrating this monitoring capability with the resources of our new data centre. In fact we went a lot further. The NOC now uses a set of leading edge tools that sets the benchmark for data centre services.
Our customers can get access to the same screens that are viewable by front line DC staff. What’s more, ticketing and trouble management tools such as ServiceNow can be integrated with the customer’s own instances of these tools.
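To make the ticketing point concrete, here is a minimal sketch of how a monitoring event could be mirrored into a customer’s own ServiceNow instance via the standard ServiceNow Table API. The instance name, credentials and field values are hypothetical placeholders for illustration, not our actual integration.

```python
import requests

# Hypothetical customer ServiceNow instance and service-account credentials.
INSTANCE = "customer-example"
URL = f"https://{INSTANCE}.service-now.com/api/now/table/incident"
AUTH = ("noc.integration", "example-password")

def raise_incident(short_description: str, description: str, urgency: str = "2") -> str:
    """Create an incident in the customer's ServiceNow instance and return its sys_id."""
    payload = {
        "short_description": short_description,
        "description": description,
        "urgency": urgency,      # 1 = high, 2 = medium, 3 = low
        "category": "network",
    }
    resp = requests.post(
        URL,
        auth=AUTH,
        headers={"Content-Type": "application/json", "Accept": "application/json"},
        json=payload,
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["result"]["sys_id"]

if __name__ == "__main__":
    sys_id = raise_incident(
        "Rack A12 feed-B PDU alarm",
        "Monitoring detected loss of the B power feed to rack A12 at Newark.",
    )
    print(f"Incident created: {sys_id}")
```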
There are huge benefits to having a NOC integrated with both DC and network. Traditionally a NOC, and any monitoring function if there is one, sits remote from both the network and the DC. Access is often via VPN through a firewall to the MPLS core, and if this access point fails, visibility of the network (and data centre) is lost. At Timico the monitoring takes place inside the MPLS core, which is fed by dual diverse 10Gbps connections, and access to the monitoring can take place from anywhere within the MPLS network.
In the traditional model, if the access point to the network goes down for whatever reason, the monitoring goes down with it. Timico monitoring remains accessible regardless of the online status of any particular site.
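The practical difference is that probes run from nodes sitting inside the core rather than through a single VPN access point. A rough sketch of the kind of reachability sweep such a node might run (hostnames and ports are made up for illustration):

```python
import socket
from concurrent.futures import ThreadPoolExecutor

# Hypothetical targets monitored from nodes inside the MPLS core. Because the
# same sweep runs from several such nodes, losing any single access circuit
# does not take the monitoring itself offline.
TARGETS = [
    ("cust-a-fw.example.net", 443),
    ("cust-b-router.example.net", 22),
    ("newark-pdu-12.example.net", 443),
]

def tcp_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port completes within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def sweep() -> dict:
    """Probe all targets in parallel and return a reachability map."""
    with ThreadPoolExecutor(max_workers=len(TARGETS)) as pool:
        results = list(pool.map(lambda t: tcp_reachable(*t), TARGETS))
    return {f"{host}:{port}": up for (host, port), up in zip(TARGETS, results)}

if __name__ == "__main__":
    for target, up in sweep().items():
        print(f"{target:35} {'UP' if up else 'DOWN'}")
```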
There is more. The NOC is located onsite at the DC. If your colocated kit has a problem one of our support team is on hand to help – seconds away. If yours is a virtualisation play then our highly qualified engineering team is there, on the spot and on the phone. Wherever you need the help.
There is even more. The majority of DC networks are built using traditional Enterprise network technologies. In particular, BGP is used to distribute routing information and provide link failure detection between the various DCs and between the DCs and their ISP.
While BGP offers a scalable way to distribute large amounts of routing information, this comes at the expense of convergence times. Even when BGP is tuned for better convergence, a failure will disrupt IP connectivity for much longer than modern applications can tolerate.
At Timico all network connectivity is carried over a carrier grade MPLS network utilising MPLS Traffic Engineering, which re-routes traffic within tens of milliseconds. This is vital for today’s applications, whether they are VoIP, Unified Comms, SQL database transactions and so on.
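To put rough numbers on the difference, here is a back-of-the-envelope sketch. The timer values are common defaults and typical published targets rather than measurements from our network, and the VoIP threshold is an illustrative figure only.

```python
# Back-of-the-envelope comparison of worst-case failure detection / repair times.
scenarios = {
    # Default BGP keepalive/hold timers on many platforms: 60 s / 180 s.
    "BGP, default hold timer": 180.0,
    # Aggressively tuned BGP timers, e.g. 1 s keepalive / 3 s hold.
    "BGP, tuned hold timer": 3.0,
    # BFD session with a 100 ms interval and a detection multiplier of 3.
    "BFD-assisted detection": 0.3,
    # MPLS TE Fast Reroute onto a pre-signalled backup path (~50 ms target).
    "MPLS TE Fast Reroute": 0.05,
}

VOIP_TOLERANCE = 0.15  # ~150 ms: roughly where a call starts to suffer noticeably

for name, seconds in scenarios.items():
    verdict = "ok for VoIP" if seconds <= VOIP_TOLERANCE else "call drops / app timeouts"
    print(f"{name:26} ~{seconds * 1000:>9.0f} ms -> {verdict}")
```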
In addition, at Timico all DC core routers hold the full internet routing table, which ensures all traffic travels along the optimal path over our 10Gbps core network. Companies that don’t carry the full internet routing table on their first hop routers can only offer active/standby failover or crude load sharing at best.
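As a toy illustration of why that matters, the sketch below contrasts forwarding with a full table against a default route only. The prefixes and upstream names are made up; with per-prefix routes each destination takes its own best exit, while a default route sends everything to one upstream.

```python
import ipaddress

# Toy forwarding tables. With a full table each destination prefix carries its
# own best next hop; with only a default route everything leaves via a single
# upstream and the second link sits idle until failover.
FULL_TABLE = {
    "203.0.113.0/24": "transit-A",
    "198.51.100.0/24": "transit-B",
    "192.0.2.0/24": "transit-A",
}
DEFAULT_ONLY = {"0.0.0.0/0": "transit-A"}

def lookup(table: dict, destination: str) -> str:
    """Longest-prefix match of a destination address against a routing table."""
    dest = ipaddress.ip_address(destination)
    best = None
    for prefix, next_hop in table.items():
        net = ipaddress.ip_network(prefix)
        if dest in net and (best is None or net.prefixlen > best[0]):
            best = (net.prefixlen, next_hop)
    if best is None:
        raise LookupError(f"no route to {destination}")
    return best[1]

for dst in ("203.0.113.10", "198.51.100.20"):
    print(f"{dst}: full table -> {lookup(FULL_TABLE, dst)}, "
          f"default only -> {lookup(DEFAULT_ONLY, dst)}")
```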
We haven’t finished here. We use an innovative user authentication system that confirms a user’s credentials and location before authorising them to perform certain tasks.
Traditionally, limiting access to a device based on the connection source address has been achieved using an access control list configured on the device itself. This doesn’t scale well and the resulting policy is applied to all user accounts.
By associating the permitted locations against an individual user account we are able to enhance security through finer control. We also gain greater agility when deploying new management systems by centrally managing this attribute.
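The idea is roughly as follows: the permitted source networks are an attribute of the user account held centrally, not an ACL baked into each device. A minimal sketch, with made-up accounts, roles and address ranges:

```python
import ipaddress

# Hypothetical central user store: each account carries its own permitted
# source networks, so policy is per user rather than a per-device ACL.
USER_STORE = {
    "alice": {
        "permitted_from": ["10.20.0.0/16"],     # NOC management LAN
        "roles": {"core-router-admin"},
    },
    "bob.customer": {
        "permitted_from": ["198.51.100.0/24"],  # customer's own office range
        "roles": {"view-own-racks"},
    },
}

def authorise(username: str, source_ip: str, requested_role: str) -> bool:
    """Allow the action only if the account exists, holds the role, and the
    connection comes from one of the networks permitted for that account.
    (Credential verification is assumed to have happened already.)"""
    account = USER_STORE.get(username)
    if account is None or requested_role not in account["roles"]:
        return False
    src = ipaddress.ip_address(source_ip)
    return any(src in ipaddress.ip_network(net) for net in account["permitted_from"])

if __name__ == "__main__":
    print(authorise("alice", "10.20.5.9", "core-router-admin"))            # True
    print(authorise("alice", "203.0.113.7", "core-router-admin"))          # False: wrong location
    print(authorise("bob.customer", "198.51.100.40", "core-router-admin")) # False: no such role
```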
So that’s what you need to look for when choosing a data centre.
3 replies on “What to look for when choosing a data centre”
Tier I, II, III & IV
Have you got a contingency site (for DC and/or NOC)?
i.e. Fire/flood/airplane landing in wrong place…
A makeshift NOC would be fairly easy to pop up but the DC is harder 😉
You’ll probably tell me it’s not required – then in a few years there’ll be a secondary DC and you’ll tell me how no one should use a supplier without dual DCs 🙂
Yep – we have data centre suites in Docklands, a couple of data centres in Fareham and also the ability to back up and store at the Timico HQ