It’s All About Storage – datadomain #VMware

datadomain storage

Nice bit of lab kit we have just installed downstairs. This is a datadomain DD630 backup storage device. You can see that it has 12 x 1TB hard drives – quite a chunk of storage compared to your laptop or desktop PC. The box above it is running test servers using VMware (click on the header photo to see it all).

This is all lab work being done in preparation for the new data centre when it opens in January. The twelve 1TB drives result in around 9TB of useful space once RAID, hot spare and storage of the OS are taken into consideration.

The beauty of this, though, is that we are likely to be able to store far more than 9TB of real data once it has been “deduped” – that is, with duplicate data, such as identical copies of operating systems, stored only once. In our trials we are backing up some VMs and are seeing 2TB of data compressed and deduplicated down to only 140GB on the datadomain. We won’t necessarily get the same savings when the system scales up, but it is easy to see why it is an attractive piece of kit.
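Deduplication of this sort boils down to splitting data into chunks, hashing each chunk, and storing any given chunk only once. The datadomain’s real engine uses variable-length segmentation and its own internals, so the fixed-block Python sketch below (all names invented for illustration) only shows the basic idea:

```python
import hashlib

BLOCK_SIZE = 4096  # fixed-size blocks for illustration; real appliances use variable-length segments

def dedupe(data: bytes, store: dict) -> list:
    """Split data into blocks, keep only unseen blocks in `store`,
    and return the list of block hashes needed to rebuild the data."""
    recipe = []
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        if digest not in store:      # duplicate blocks cost no extra space
            store[digest] = block
        recipe.append(digest)
    return recipe

def restore(recipe: list, store: dict) -> bytes:
    """Rebuild the original data from its list of block hashes."""
    return b"".join(store[d] for d in recipe)

# Two toy "VM images" sharing an identical OS portion:
store = {}
os_part = bytes(BLOCK_SIZE * 100)            # 100 identical blocks
vm1 = os_part + b"app-data-one" * 400
vm2 = os_part + b"app-data-two" * 400

r1, r2 = dedupe(vm1, store), dedupe(vm2, store)
assert restore(r1, store) == vm1 and restore(r2, store) == vm2

logical = len(vm1) + len(vm2)
stored = sum(len(b) for b in store.values())
print(f"{logical} logical bytes stored as {stored} "
      f"({(1 - stored / logical) * 100:.1f}% reduction)")
```

Because the two toy images share their “OS” blocks, the store ends up holding only a small fraction of the logical bytes – the same effect, writ large, behind the 2TB-to-140GB figure.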

One of the nice features is that if you lose your primary VM server, the system allows you to boot from this backup whilst it rebuilds the original server in the background. This can save a couple of hours of work – very valuable in a problem situation.

As we start building out the virtualisation platform I’ll do some more update posts. The inset photo is the same kit with the front cover on.

Published by Trefor Davies

Liver of life, father of four, CTO of trefor.net, writer, poet, philosopherontap.com

Join the Conversation

7 Comments

  1. Good point Umar – it’s the sort of thing that would sit nicely next to the pile of bricks – perhaps I should apply for an arts council grant 🙂

  2. I’ve just done a second day’s backup to our nice new data domain – and the stats today are pretty impressive!

    The backups we are doing are full system backups – not incremental. No special software was used; it’s a backup of the same VMs that were backed up last week to test it. 666.7GB of data (pre-compression) was written to the datadomain during the backup and, impressively, only 14.1GB was written to disk – a 97.9% reduction.

    In summary, having now done two lots of full system backups of about 20 Windows servers, some running SQL, some running BizTalk, and a handful of Linux servers too – 3701GB of backups exist but are taking up only 173.1GB of disk space – a 95.3% reduction. Very impressive.

    I am currently researching an open source alternative, and will put an article on my own blog with my findings.

  3. 600GB down to 14GB – wow, that’s impressive!

    What I would suggest is that you also test a ‘restore’ to prove that your backups are in fact valid/intact!

    Sorry if this is *sucking eggs* – but it’s not unknown for people to come unstuck after thinking their backups were fine 🙂

  4. I’m testing the device purely for its storage reduction technologies at the moment. We will soon be using a backup solution that uses vSphere’s changed block tracking system to implement incremental backups, and this software also has the capability to verify the backup automatically (by booting up a VM from the backup and scripting checks).
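For the curious: changed block tracking works by having the hypervisor record which disk blocks a VM has written since the last backup, so an incremental backup copies only those blocks rather than the whole disk. vSphere exposes this through its backup APIs rather than as something you script directly; the toy Python model below, with all class and method names invented, just illustrates the principle:

```python
class ChangedBlockTracker:
    """Toy model of changed-block tracking (CBT): record which blocks a VM
    has written since the last backup, so an incremental backup copies
    only those blocks instead of the whole virtual disk."""

    def __init__(self, n_blocks: int):
        self.disk = [b""] * n_blocks
        self.changed = set()            # block indices written since last backup

    def write(self, index: int, data: bytes):
        self.disk[index] = data
        self.changed.add(index)

    def incremental_backup(self) -> dict:
        """Return only the changed blocks and reset the tracking set."""
        delta = {i: self.disk[i] for i in self.changed}
        self.changed.clear()
        return delta

vm = ChangedBlockTracker(n_blocks=1000)
for i in range(1000):
    vm.write(i, b"base")
full = vm.incremental_backup()          # first backup sees all 1000 blocks
vm.write(7, b"new")                     # the VM then modifies a single block
delta = vm.incremental_backup()         # next backup copies just that block
print(len(full), len(delta))            # 1000 1
```

Combined with deduplication on the target, this is why day-two backups of mostly unchanged servers shrink so dramatically.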
