Back in August, I was pleased to be involved in Storage Field Day 20. For those of you not familiar, these events bring together a group of experienced technical people from across the IT industry with key storage vendors, who share details of their latest innovations and gather feedback from the group. Vendors don’t come to receive a pat on the back or to be told how well they are doing; they want honest feedback on whether what they are doing is going to “hit the mark” for enterprises of all sizes across the globe, and that is exactly the environment Stephen Foskett and his team have built.
A number of my co-attendees over the three days of the event have written detailed pieces on what the vendors covered, so rather than repeating their work, I thought it may be interesting to look at the overall messages, which I felt offered an indication of the general direction of the storage industry. (Please note that I could only attend two of the three days, so I have not seen the presentations from either VAST Data or Pure Storage.)
Building the Cloud-Like Experience in Your Datacentre
It should not come as a surprise that perhaps the main topic was how to deliver a more cloud-like experience in our storage environments. What does that mean? If we look at what cloud allows us to do, it is scale, portability and automation of deployment: all things we are now used to in the public cloud, yet all much more difficult to achieve on-premises.
Nebulon are new to the market and have a very innovative datacentre-based but cloud-like approach. Nebulon does not build a server; instead, it has developed a storage card that your server vendor supplies. This card takes the server’s internal storage and turns the server into a pod that is presented to a storage network. The storage network is the innovative part of the approach, described as cloud-defined storage: Nebulon provides a SaaS-based central management service that pushes storage definitions to the management cards installed in servers across an enterprise’s datacentres.
These storage definitions can be anything and can be changed on the cards as and when needed. For example, if you want a vSphere cluster running across six servers today, you deploy that profile; tomorrow you want a Kubernetes cluster, so you push out that new profile and have a brand-new storage infrastructure sitting under your compute stack. OK, there are probably not many use cases for that drastic a change, but you can see where that flexibility has real attraction: service providers, dev and test environments, shared DR facilities, or anyone wanting a more flexible approach without some of the costs often associated with it.
Supplying the cards through established server vendors is smart, and it positions Nebulon as an interesting vendor with a fresh approach to storage that is worth watching.
Insight is crucial
The development of deeper insight into how storage, and the data it holds, is used was very much to the fore. Qumulo and Cisco both championed the importance of improved data insights. Qumulo, who provide scalable file systems for unstructured data, shared the development of their built-in analytics, which will give them enhanced capabilities in two key areas. The first is performance: their analytics allow users to delve easily into metrics to identify misbehaving actors and quickly address them.
Perhaps more important was their ability to natively look for behaviour that could indicate potential data leaks and other security threats. This is becoming an interesting area of development in the storage industry, as major vendors have listened to customer feedback asking for this kind of analytics to be platform-native rather than reliant on external third-party tools. Cisco echoed much of this in discussing their own management developments, which also included interesting plans to provide better automation for deployment across their entire stack.
You want innovation? Do it yourself!
Lastly, I wanted to give Intel a special mention. Their presentation focussed on the development of Optane and its potential impact on the technology market, from storage to in-memory databases. But what caught my attention was not just their enthusiasm for its ability to support innovation, but their commitment to proving it by developing DAOS.
DAOS is an open-source project that provides a standalone file system designed to exploit the capabilities of Intel’s memory technology. With some staggering performance metrics in testing, I thought it was a great example of a technology company backing its own innovation, and of how technology companies should behave in driving innovation to the market. Intel is already reaping the rewards, with several partners now taking the technology and innovating with it themselves.
The strategic direction
I mentioned at the beginning how vendors use Storage Field Day to gain feedback on whether their technology is solving relevant problems, so it should be no surprise that their messages, overall, reflected what we hear from enterprises every day: provide a more cloud-like experience in our datacentres, give us more insight into how our systems operate, enhance our security and, of course, bring us innovation. This was underlined by each vendor.
While SFD20 presented only a small subset of the data industry, it is comforting to know that, at the very least, our messages are being heard and appropriate solutions are being delivered.
If you want to find out for yourself, you can check out the presentations from the event on the SFD20 website, techfieldday.com/event/sfd20/, where you will also find an extensive list of articles from my co-attendees offering much more depth on the technology on show than I’ve shared here.