Back to tenuous pop lyric links it is… the first one of 2015… So I hear you ask… who was waiting? Let me explain. Pre-Christmas we did an infrastructure upgrade for one of our customers. These guys are probably looking to exploit technology and technology trends as much as anyone I get to deal with; they are keen on looking at how technology can continue to transform what they do for their customers, as well as how it can keep them ahead of their competition and underpin continued growth and success.

A bit of a warning before you delve in… this is a pretty solution-specific post for me, talking a lot about NetApp clustered Data ONTAP. However, please don’t get lost in that; it just happens to be what the solution is based on. The important thing is how we reached the decisions we did… so hopefully you’ll still find it interesting even if you use other technology and have no interest in changing… you’ve been warned!

Their current infrastructure, which we deployed for them three or so years ago, has served the business well up until this point, with virtualisation providing the ability to scale infrastructure as needed and their NetApp storage allowing them to efficiently store and protect data, using both local snapshots and mirrored replication to multiple locations. Earlier last year the discussions began about what the upcoming technology refresh would have to do for them, and there were a few things high on the list:
- Flexibility – any infrastructure would need to be able to bring in new technologies without disruption, be that new applications, hypervisors or storage types
- DR and Continuity – a big consideration for the business
- Scalability – the business is continuing to grow rapidly, so they need the ability to scale the infrastructure out at all levels as needed
- Availability – non-disruptive operations – service availability is hugely important
- Cloud – how is cloud going to change IT use in the future?
Some of the decisions were pretty straightforward to make… at a hypervisor level we would migrate to Server 2012 R2 (from 2008 R2, giving better scale and performance), while keeping the flexibility to deploy vSphere into the environment. Either of those hypervisor choices would provide the flexibility we need, along with strong continuity options and increasingly commonplace, simplified cloud integration.

The big decision for us was around storage – not really about the need to change vendor (although there was a discussion with EMC), but more about what kind of storage infrastructure we wanted. As a NetApp customer, this really boiled down to two options… did they stay with the more traditional NetApp 7-Mode environment, or did they look at making the shift to clustered Data ONTAP (cDOT)?

In today’s NetApp world, of course, the answer is increasingly that you deploy clustered ONTAP. This is the future for NetApp; there are no plans to further develop 7-Mode, and all the future good stuff is going into cDOT. However, in early 2014 the issue in the mid-enterprise space, where you would be using the NetApp 2500 range of controllers, was that the 8.2 versions of cDOT required dedicated disks for their default setup, which was hugely limiting. This would be addressed in 8.3, which removes the need for dedicated disks for certain tasks, making cDOT much more realistic in the mid-enterprise space. The question was: did we move on with 8.2, or were they prepared to wait until 8.3 shipped, even though that was still some 6-8 months away?
This comes down to something I’ve blogged about before… knowing what you want. Now, the guys at this customer are right on top of that, both technically and, maybe more importantly, in being in touch with the business needs. That allowed us to prioritise what was actually important.

Flexibility – that was top of the list. It was critical that the new infrastructure allowed them to make infrastructure changes completely non-disruptively – installing increased processing power, bringing in new disk technologies, integrating different data management processes – so the business could react quickly to changes in its requirements without disruption. cDOT is all about flexibility: as a scale-out technology, the ability to add compute and different storage that we can seamlessly utilise with absolutely no disruption is something we just couldn’t do with the traditional 7-Mode approach.

Integrate new technology – a significant issue for these guys was the need to utilise new technology quickly. One of their big challenges is reporting; they receive a lot of data, and providing reports for their customers quickly is a big challenge. Increasingly this is going to need flash disk, but what they didn’t want was flash sitting in a silo somewhere (ruling out a separate all-flash array). Again, this is right up cDOT’s street: we can integrate all-flash controllers simply into the environment, then move report data on the fly and present it to separate reporting compute without skipping a beat. Hugely attractive, and again something we can’t do with traditional 7-Mode deployments.

These were the two main drivers for the technology refresh we were to embark on. Bear in mind, though, that a version of cDOT that would work for these guys commercially and technically was still some 6-8 months away, which gave us a dilemma: technically the answer was cDOT, but could the business wait all that time?
It comes back to that point again: know exactly what you want from your tech and understand how important it is. This was not a tech refresh just because they should or could; this was a refresh that was going to drive the business. Knowing that allowed them to make the decision to wait – in fact, not only to wait, but to purchase at the time (as commercially that made sense) and then hold off deployment until the relevant technology shipped. I really liked that thinking, as it was all about getting what was right for the business both technically and commercially, and being prepared to take a long-term view rather than rushing in with a short-term solution that may have restricted their future growth.

The implementation has proved the value of that thinking. NetApp clustered ONTAP 8.3 went in like a dream. The flexibility we now have by abstracting the entire storage infrastructure from the hardware, allowing us to move it around the physical infrastructure completely without disruption, ticks all of the boxes, and the ability to take advantage of new protocols (SMB 3.0 for Hyper-V) gives us even greater long-term flexibility. It also means that we can upgrade controllers, add different storage tiers or add more compute totally non-disruptively.

Next up with these guys is how we use cloud as part of their future growth and continuity planning, as well as integrating some central desktops into the equation; this will mean the flexibility of the underlying infrastructure is going to be pushed quickly. NetApp is really strong here for them, with great integration with AWS and Azure as well as more niche global cloud providers, but that’s another story.

Was it worth the wait? Absolutely… the lesson of this, though, was not about patience, but about completely understanding what you need from your technology and aligning that to what your organisation needs to achieve its overall goals…
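To give a flavour of what “moving data around without disruption” looks like in practice, here’s a rough sketch of the kind of cDOT CLI session involved in relocating a volume to a flash tier while it stays online. The SVM, volume and aggregate names (vs1, vol_reports, aggr_flash01) are purely illustrative; check the ONTAP command reference for your version before relying on exact syntax.

```
::> volume move start -vserver vs1 -volume vol_reports -destination-aggregate aggr_flash01

::> volume move show -vserver vs1 -volume vol_reports
```

The move runs in the background while clients keep reading and writing; the final cutover is transparent, which is exactly the non-disruptive behaviour that made cDOT so attractive here.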
If you want to find out more about NetApp clustered Data ONTAP click here, or if you want to ask more techie questions, drop a comment or contact me on the social network links on the page… Hopefully this post can help you when it comes to long-term decision-making… do what’s right for the long term rather than rushing in just because you can… Thanks for reading…