Delivering Christmas

Recently I’ve been doing some work with a number of our partners at one of our major accounts, helping them meet their business-critical deadlines around this time of year. Now it’s all up and running, I thought I’d share what we did…

St. Nicholas Manufacturing and Logistics is a global toy maker and logistics company headquartered in Lapland (OK, so you know where this is heading!). These guys are a very long established business, but they are always keen to innovate, so this year we brought some of our leading partners together to help them meet all of their varied business needs. For most of the year, St. Nicks are busy manufacturing toys… but at this time of year the business scales up massively. They receive global orders through multiple communication channels, ranging from email, online ordering and individual meetings to their traditional method, the handwritten note, and they take orders right up until Christmas Eve. Each toy is manufactured individually and can be literally any toy on the planet, before being delivered in a massive logistics project, with every product delivered around the globe on a single day: Christmas Day.

This all presents a huge set of challenges, as it does for any global business, especially one with a huge peak in compute requirements. St. Nicks certainly have that challenge: the scale of their requirements for the six-week period at the end of the year is huge, yet once December 26th rolls around, business returns to normal levels. Traditionally, of course, these guys would have invested huge amounts in an infrastructure that would have had to scale to meet these once-a-year needs, while the infrastructure idled for the rest of the year… still powered, cooled and maintained, costing St. Nicks serious money throughout the year.
Alongside the need for a flexible infrastructure, St. Nicks also have some large data security challenges. They keep information on billions of customers; that information is critical until production is complete, and it then needs archiving away while continuing to remain hugely secure. They also retain some massively sensitive information in what they term their “naughty or nice” datastore, and it is critical this datastore is only ever accessed by authorised personnel. Their final challenges are around their massive network of External Lifestyle Verification Engineers (E.L.V.E.s). The Elves spend around six weeks visiting customers around the globe, taking orders on their mobile devices, so the security of those devices is highly important to the business. As you can see, their requirements are wide ranging, but fortunately for them, this year we’ve been able to pull together some of our industry-leading partners to deliver the ideal infrastructure.

Infrastructure to deliver Christmas

Hybrid all the way was the answer to the scalable infrastructure challenge. We needed a scalable local infrastructure to support requirements for most of the year, and we then utilised a range of hybrid technologies to massively scale that infrastructure for the hectic six-week period leading up to December 25th. This was easily done, with a mix of Windows Server, System Center and Azure delivering our compute, and NetApp clustered Data ONTAP, NetApp private cloud storage and SoftLayer datacentres delivering our storage infrastructure. That allowed us to retain control of our data and its location, while being able to integrate it with cloud compute as and when necessary.

The use of System Center and Hyper-V on premise also allowed us to migrate our key production and delivery systems into Azure seamlessly, using the massive compute provided by a global hyperscale environment to process our manufacturing, orders and logistics planning during the critical six-week year-end window. We housed a large part of our internal compute in a SoftLayer datacentre, using SoftLayer’s unique ability to provide hardware quickly and easily in an Infrastructure as a Service model; their high-bandwidth connections into hyperscale platforms allowed us to easily integrate cloud compute when we needed it. Our clustered Data ONTAP NetApp storage has delivered both scale-up and scale-out, with the ability to move data completely non-disruptively onto both higher-powered controllers and super-fast flash disks for extremely quick data analysis and reporting, while integration with both our NetApp private storage housed at SoftLayer and NetApp Cloud ONTAP ensured we could easily move data into our public compute facilities. This provided us with extreme flexibility and Christmas agility, which is critical to this unique business.

Protecting our Christmas Data

The protection of this data is of course critical at St. Nicks. It’s key that the personal data they receive meets strict data governance rules, and we must also ensure only the correct staff access the relevant details. Varonis, our data governance expert partners, had the answer: their DatAdvantage solution ensures that we are fully aware of who has access to what data, and we can quickly identify any data that erroneously ends up stored in areas where it shouldn’t, allowing us to quickly address any data storage policy breaches, as well as verifying that all users have correct and appropriate permissions.

Protecting this data is of course a key aspect of any data infrastructure strategy. Our use of Catalogic DPX allows us to protect key systems quickly and effectively, but critically, especially during the height of St. Nicks’ business, the ability to recover any data almost instantly back into the production environment is key to ensuring no significant downtime occurs due to data loss. Catalogic ECX also allows us to fully catalogue our data environment, so we know where any piece of data is at a given time, regardless of its location: on-premise, hybrid or cloud based.

When the rush is over, we go through a significant archiving project. Our partners at Waterford Software help us move data into our long-term cloud archive, fronted by our Panzura archiving controller, allowing us to keep the most local data in our local cache while archiving data long term in very low-cost and highly resilient cloud object stores. This management ensures our local infrastructure is kept lean and not overrun with aged and stale data. In the New Year we will introduce Actifio into the infrastructure, which will allow us to efficiently take a copy of the production data set that can be presented to multiple other parts of the business for test, dev, QA and DR, seamlessly.

Mobile Security

The last element of this year’s project was to enhance the protection of the many thousands of mobile devices in use by the E.L.V.E.s. It was important to us that these devices were secure and that we could protect the data on them. For this we introduced two partners, Druva and WinMagic, to manage the range of devices in use, from tablets to smartphones (iOS, Windows and Android). Druva allows us to deliver both backup and data leakage controls to the mobile devices: their Druva inSync product ensures that data on each of the mobile devices is protected and seamlessly synced back to our datacentre datastores, so we protect this critical information. The inSync Data Leak Prevention tools also ensure that data only ever travels where it needs to, which is critical when you look at the sensitivity of the data the E.L.V.E.s are handling. Finally, of course, it’s important we protect the data on these devices, as inevitably with such a large mobile workforce, devices are occasionally lost or end up in places they shouldn’t, so we deploy device encryption across this large estate. Across such a wide range and large number of devices, encryption management can be a real headache, so this year we’ve introduced WinMagic, giving us a single management platform to control all of the different encryption elements in play across the range of mobile operating systems St. Nicks are utilising.

Christmas Rush Begins

As you can see, it’s been a busy few weeks, but all of the partners we’ve worked with this year have delivered real value to St. Nicks’ operations, allowing us to deliver a flexible, resilient, scalable infrastructure in which we are managing and securing the huge amounts of data the enterprise generates; we’ve also been able to add security of both data and devices for the thousands of mobile users St. Nicks employs. Fingers crossed, as the big day arrives for our friends up in Lapland, the new efficiencies and capabilities we’ve added this year will allow St. Nicks to carry out their great work even more effectively than ever before, and when you all wake up on Christmas Day, you’ll see the fruit of our labours around the bottom of your tree!

Merry Christmas

OK then, hopefully you’ve all figured out that of course we are talking about Santa’s workshop, and sadly I’ve not been involved in delivering his infrastructure this year, because of course none of us know how that job is done… and even if I was involved, it would be very remiss of me to share it all with the world! Of course, if Santa did indeed use IT in the huge logistics operation he carries out, then the challenges above would certainly reflect parts of his business, and you’d think cloud compute, agility, data management and security would be right up there on the list, as they have been with many customers I’ve worked with this year. All the partners and technology I’ve mentioned have played a part in projects I’ve worked on this year, all provide leading solutions to complex business problems, and Santa wouldn’t go wrong deploying any of them! Well, I hope you’ve enjoyed this little Christmas blog, and for those who’ve read the blog this year, thank you; hopefully I’ll be able to welcome you back next year with some more tech-based articles. In the meantime, have a great Christmas, and I wish you all well for 2015.



Dead Busy! and a week in tech

It’s been a little while since I’ve managed to get a blog post out; it’s been a hectic time recently, and keeping the elves in order as they make toys for Christmas is a tough job!

OK, so more realistically, there’s been a lot of work going on with a range of customers, and lots of it has been really interesting: lots of developments in how we deploy hybrid infrastructure, and some excellent first exposure to NetApp’s latest version of their industry-leading Data ONTAP operating system. More on that in a more detailed blog post soon.

While all this work has been going on, the tech world has been very busy, with all kinds of announcements and new technology releases, so I thought it would be apt to round up a few of the latest releases that have caught my eye. I know many of you often don’t get the chance to scour the industry, as you have plenty on your plate, so hopefully this little round-up can help… and if people like it, I’ll try to make it a weekly feature.

Now, I’m going to be picking out things that have caught my attention and giving a bit of a view on how I see them affecting the world. I’m not pretending I’ll catch everything, but I hope you find some of the things doing the tech rounds at the minute interesting…

NetApping everywhere

Anyone who’s followed any of my social media stuff will know I’m a big fan of NetApp and their technology. What I’ve really enjoyed about working with these guys over the last eight years or so is their view of the world and how they look to innovate their storage platforms to meet ever-changing needs. Now, over the last couple of years there has been a suggestion that the chaps from Sunnyvale have not been innovating so much… but heck, have they stepped up to the innovation plate over the last couple of months.

Some of this stuff needs a post of its own, as some of it is really important for anyone looking at how to take advantage of cloud infrastructure.

What can a storage vendor be doing that’s so important to using cloud platforms? Richi Jennings at NetApp wrote an interesting post a little while ago about the importance of realising it’s not businesses that move to the cloud, it’s data… and of course it is (read Rich’s article here).

So if you are going to embrace cloud services in your business, it’s important that the technology industry fully supports that. NetApp have definitely done so and ensured cloud and hybrid cloud are a big part of what they are about (don’t worry if you don’t use NetApp – other storage technologies are available), and they’ve been belting out the tech to support this…

  • Cloud ONTAP – a couple of weeks ago NetApp announced the release of Cloud ONTAP as a service on Amazon AWS – yep, you can drop onto the Amazon store and in minutes have yourself a NetApp storage infrastructure available in the cloud. Right now, the use of Cloud ONTAP is probably most likely to pay off in your dev and test environments, but the fact that you can present storage that seamlessly integrates with your on-premise NetApp deployment, and that NetApp’s cloud manager software lets you manage all of your NetApp-based storage from one single platform, is pretty impressive. I’d also say that if you are going to integrate cloud storage into your environment, it’s imperative that it’s easy. If you tie this kind of technology in with things like ExpressRoute (for Azure) and Direct Connect (for Amazon), which give dedicated bandwidth into public hyperscale platforms, then moving business data short term or long term into public cloud storage platforms becomes realistic and achievable.
  • SteelStore acquisition – a few weeks back NetApp also announced the purchase of SteelStore from Riverbed. This is not particularly unique, but it does expand the NetApp cloud integration portfolio. For those who don’t know, SteelStore is an on-premise appliance that provides a gateway to back-end cloud-based object stores; the appliance interfaces with your enterprise backup solution to provide easily accessible cloud storage in which you can house your backup and archive data. Why would you do this? Cost and simplicity, really – the idea of an unlimited storage pool available for your backups and archive, at around 3p per GB, is a pretty cheap way of backing up and archiving your data.
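To put that 3p-per-GB figure in context, here’s a quick back-of-an-envelope sketch. The 10 TB archive size is a made-up example, and real pricing will vary with egress and request charges, so treat this as illustration only:

```python
# Rough monthly cost sketch for cloud backup/archive storage.
# Assumptions (illustrative only): 3p per GB per month, a 10 TB archive.
PRICE_PER_GB = 0.03  # pounds per GB per month, the headline figure above


def monthly_cost(archive_tb: float, price_per_gb: float = PRICE_PER_GB) -> float:
    """Return the monthly storage cost in pounds for an archive of the given size."""
    archive_gb = archive_tb * 1024  # convert TB to GB
    return archive_gb * price_per_gb


if __name__ == "__main__":
    # A 10 TB backup/archive set at 3p/GB:
    print(f"10 TB at 3p/GB comes to roughly £{monthly_cost(10):.2f} per month")
```

Not nothing, but compare that with buying, powering and maintaining dedicated backup hardware that sits mostly idle, and the appeal is obvious.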

The last bit of NetApp news was the announcement this week of NetApp adding support to the growing VMware EVO: RAIL ecosystem. What’s EVO: RAIL? It is VMware’s own hyper-converged platform: built on vSphere, it is an appliance built by VMware partners to deliver an out-of-the-box virtual platform which, according to the blurb, is ready for use within 15 minutes. NetApp have announced that they plan to release a version built on clustered Data ONTAP, giving NetApp enterprise storage above and beyond the standard EVO: RAIL storage capability built on VSAN. So if you want easy hyper-converged deployment with enterprise-class storage, this could be the beast for you… look out for it in the New Year.

Anyway enough NetApp – what else has been going on in the tech world?

IBM and Docker


One of the things I’m casting an interested eye over is the use of application containers as a way of delivering applications. A container gives you a self-contained runtime environment within which you place your application, and this container is portable between platforms.

Until recently, Docker has been the only real game in town, and this has been underlined by a couple of huge announcements that take it from the Linux dev world to the wider marketplace.

Firstly, Microsoft now support Docker containers in Azure, and in the not-too-distant future they will support Docker containers in the next version of Windows Server. Hot on the heels of this was the announcement that IBM are going to support Docker in their cloud platforms as well (have a read of the press release).

What does all this mean?

Well potentially…and I stress the word, potentially, this could shake up how we see platforms and infrastructures of the future built.

The main way today that we share our hardware platforms to deliver logically separate applications is virtualisation: we pop a hypervisor on a box, then install multiple copies of operating systems inside virtual machines. This of course means we have to manage and maintain all of those OSes and applications.

Let’s take a look at how containerisation could completely revolutionise this. If, to logically separate our apps, rather than installing lots of OSes we install just one OS and then separate our applications into containers, that’s one operating system we need to maintain and patch, while keeping our applications in their logically separate containers. That’s a huge overhead reduction.

It’s also potentially a useful step towards the software-defined future of our datacentres. If we have our apps in completely portable containers, then the ability to move between on premise, hybrid cloud, public cloud etc… becomes really easy…
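As a minimal sketch of what that portability looks like in practice (the application name, paths and base image here are all made up for illustration), a Dockerfile describes the container’s contents once, and the resulting image then runs unchanged on a laptop, an on-premise server or a cloud host:

```dockerfile
# Hypothetical example: package one app into a portable container image.
FROM ubuntu:14.04              # shared base image; the host's kernel is reused
COPY ./myapp /opt/myapp        # drop the application files into the image
EXPOSE 8080                    # the port our illustrative app listens on
CMD ["/opt/myapp/run.sh"]      # what to launch when the container starts
```

Build it once with `docker build -t myapp .` and the same image runs wherever Docker does, which is exactly what makes the Microsoft and IBM announcements above interesting.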

A tech to keep an eye on…

Azure RemoteApp

Last up in the news round-up is the release of RemoteApp on Azure. Any of you who have ever built a remote desktop infrastructure (terminal server, for you old-school folk!) will realise that doing this at any scale takes a bit of effort and potentially quite a bit of compute to make it work.

Well, RemoteApp as a service takes care of that: you spin up your RemoteApp Azure platform, drop your apps into it, and hey presto, there it is, up and running.

I just think that’s a great use of a cloud service. To give you an idea of how much of a simplification that is: I’m currently rolling out a 400-user RemoteApp deployment for a customer, where we have gone through a test environment and some proof of concept, and have now designed the full-on infrastructure, which is based on 8 servers and quite a bit of compute resource, as well as 3 weeks of professional services (PS) to get it built.

With RemoteApp as a service, it’s spin up the service, get the app installed on it, test it, and if that works, click purchase and away we go (OK, a bit more work… but not much more) and we have a build.

Delivered quickly, it scales at will, massively reduces the PS needed to deploy, and will be updated etc. in the background.

Of course it doesn’t work for everyone, and for a couple of reasons it actually doesn’t work for this client; however, as a powerful example of Software as a Service when the environment is right, I think it’s a really good one.

Anyway… that’s a quick round-up of some of the stuff that’s been catching my attention in the last few weeks. It’s actually taken me a week to write this, with workload etc., so I’ve already got a bunch of other topics to update you with… hopefully I’ll get those out before Christmas.

Hope you enjoyed the post…