The State of the data nation – Howard Marks – Ep70

A couple of times a year I like to do a show that reviews the current state of the data market: a chance to check in on the challenges facing both the makers and consumers of data technology, how those challenges are being addressed, the technology changes and trends that decision makers should consider, and what the future holds for the industry.

I always think these shows are useful for those tasked with making strategic decisions and designing data platforms for their businesses. When I speak with people on these topics, I know it's always useful to understand current market thinking and the general direction the technology vendors are taking.

Earlier this year I spoke with industry analyst Chris Evans as we looked ahead at what 2018 had in store (you can find that episode here). For this half-year review, I was very fortunate to get some time with renowned industry analyst Howard Marks.

Howard is founder and Chief Scientist at DeepStorage.net as well as co-host of the excellent Greybeards on Storage podcast. With over 30 years' experience as a consultant and writer on the storage industry, he is very well placed to comment on the current state of the market and its direction.

In episode 70 of Tech Interviews, I chat with Howard about a range of topics. We discuss the current rate of change in the industry and ask whether it's the rate or the amount of change that concerns us.

We look at the impact cloud is having and how much of a driver of change it is, and Howard shares some thoughts on Software as a Service (SaaS) and its impact on traditional roles.

We examine changing roles in more detail, including why storage admins need to be in charge of "data paranoia". We also ask whether simplification is a good thing, and why cloud simplicity doesn't sit well with organisational complexity.

We end the show by looking ahead: what Howard would like to see the storage industry tackle, why a focus on data management will be key, and the impact storage class memory is going to have on both producers and consumers of technology.

Howard shares some fantastic insights and left me with a lot of food for thought. I am sure he will do the same for you.

To find out more about what Howard does, visit DeepStorage.net and follow Howard on Twitter @DeepStorageNet. And if you deal with data and want to understand the data technology market, get the Greybeards podcast on your listening playlist; you'll find it here.

Thanks for listening.

Don't forget, Mrs Techstringy and I are taking on the Prudential Ride London event for the Marie Curie charity in the UK, to help support their work in delivering end of life care. If you can help and support us, it would be much appreciated; you can find our story here.


Managing all of the clouds – Lauren Malhoit – Ep67

As the move to cloud continues, we are starting to see a new development: organisations no longer rely on a single cloud provider to deliver their key services, with many now opting for multiple providers, from their own data centre to the hyperscale big boys. Multi-cloud environments are becoming the norm.

This multi-cloud approach makes perfect sense. The whole point of adopting cloud is the flexibility to consume your data, infrastructure, applications and services from the best provider at any given time, which would be very difficult to do with only a single provider.

However, multi-cloud comes with a challenge, one rather well summed up at a recent event by the phrase "clouds are the new silos". Our cloud providers all differ in the way they build and operate their infrastructure, and although we may not notice or care when we take services from a single provider, it can quickly become a problem when we start to employ multiple vendors.

How to avoid cloud silos is becoming something of a technology "holy grail", engaging many of the world's biggest tech vendors. This is only good news: as we move into a world where we want the freedom and flexibility to choose whichever "cloud" is the best fit at any given time, we will only be able to do so if we overcome the challenge of managing and operating across these multiple environments.

Taking on this challenge is the subject of this week’s podcast with my guest Lauren Malhoit of Juniper Networks and co-host of the excellent Tech Village Podcast.

Lauren recently sent me a document entitled "The Five Step Multi Cloud Migration Framework". It caught my attention because it discusses the multi-cloud challenge and offers some thoughts on how to address it, and those ideas form the basis of this week's show.

We open the discussion by trying to define what multi-cloud is, and why we shouldn't assume that all businesses are already rushing headlong into self-driving, self-healing, multi-cloud worlds. We chat about how a strategy is more likely to help a business start along this road than to manage something it already has.

We explore how multi-cloud doesn't just mean Azure and AWS, but can equally apply to multiples of your own data centres and infrastructure.

Lauren shares her view on the importance of automation, especially when we look at the need for consistency, and how this is not just about consistent infrastructure but also compliance, security and manageability.
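
To make that idea a little more concrete, here is a minimal Python sketch of the pattern Lauren describes. Everything in it is a hypothetical illustration rather than Juniper tooling: a security rule is defined once in a provider-neutral form, then rendered for each cloud, so every environment enforces the same policy.

    from dataclasses import dataclass

    @dataclass
    class FirewallRule:
        """A provider-neutral security rule, defined exactly once."""
        name: str
        port: int
        cidr: str

    def to_aws(rule: FirewallRule) -> dict:
        # Shape loosely modelled on an AWS security group ingress entry.
        return {"IpProtocol": "tcp", "FromPort": rule.port,
                "ToPort": rule.port, "CidrIp": rule.cidr}

    def to_azure(rule: FirewallRule) -> dict:
        # Shape loosely modelled on an Azure NSG security rule.
        return {"name": rule.name, "protocol": "Tcp", "access": "Allow",
                "destinationPortRange": str(rule.port),
                "sourceAddressPrefix": rule.cidr}

    https_only = FirewallRule(name="allow-https", port=443, cidr="10.0.0.0/8")
    print(to_aws(https_only))    # the same rule...
    print(to_azure(https_only))  # ...rendered consistently for a second cloud

Because the rule lives in one place, a compliance check or a change happens once rather than once per provider, which is exactly the consistency point Lauren makes.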

We also ask the question, why bother? Do we really need a multi-cloud infrastructure? Does it really open up new ways for our organisation to operate?

We wrap up by looking at the importance of being multi-vendor, multi-platform and open, and how that openness cannot come at the cost of complexity.

Finally, we discuss some use cases for multi-cloud, as well as the people challenge in our businesses and why a multi-cloud world shouldn't be seen as a threat, but as an opportunity for career growth and development.

I hope you enjoy what I thought was a fascinating conversation about an increasingly pressing challenge.

To find out more about the work Juniper are doing in this space, look out for forthcoming announcements at Juniper.net and check out some of the information published in their GitHub repos.

To find out more about the work Lauren is doing, you can follow her on Twitter @malhoit or read her blog over at adaptingit.com.

Also check out the fantastic Tech Village Podcast if you are interested in career development and the tech world of others in the IT community.

Juniper also have some great resources for learning about designing a multi-cloud environment: check out the original white paper that inspired this podcast, The Five Step Multi Cloud Migration Framework, and you'll also find some great info in the post Get Your Data Center Ready for Multicloud.

Until next time – thanks for listening

NetApp Winning Awards, Whatever Next?

In the last couple of weeks I've seen NetApp pick up a couple of industry awards, with the all-flash A200 earning the prestigious Storage Review Editor's Choice as well as CRN UK's Storage Vendor of the Year 2017. This, alongside commercial successes (How NetApp continue to defy the performance of the storage market), is part of a big turnaround in their fortunes over the last 3 years or so. But why? What is NetApp doing to garner such praise?

A bit of disclosure: as a Director at a long-term NetApp partner, Gardner Systems, and a member of the NetApp A-Team advocacy programme, I could be biased. But having worked with NetApp for over 10 years, I still see them meeting our customers' needs better than any other vendor, which in itself suggests NetApp are doing something right.

So what is it they're doing? In this post, I share some thoughts on what I believe are the key parts of this recent success.

Clear Strategy

If we wind the clock back four years, NetApp's reputation was not at its best. Tech industry analysts painted a bleak picture: the storage industry was changing, with public cloud storage and innovative start-ups offering to do more than the "legacy" platforms, and in many cases they could. NetApp were a dinosaur on the verge of extinction.

Enter the Data Fabric, first announced at NetApp's technical conference, Insight, in 2014. Data Fabric was the beginning of NetApp's move from a company focused on storing data to a company focused on the data itself. This was significant, as it coincided with a shift in how organisations viewed data, moving away from just storing it towards managing, securing, analysing and gaining value from it.

NetApp's vision for the Data Fabric closely aligned with the aims of these more data-focused organisations. It also changed the way NetApp thought about their own portfolio: less worried about speeds, feeds and flashing lights, and more about building a strategy focused on data in the way their customers were.

It is this data-driven approach that, in my opinion, has been fundamental in this change in NetApp’s fortunes.

Embrace the Cloud

A huge shift, and something that has taken both customers and industry analysts by surprise, is the way NetApp have embraced the cloud. This is not a cursory nod, but cloud as a fundamental part of the Data Fabric strategy, and it goes way beyond "cloudifying" existing technology.

ONTAP Cloud delivers the same data services and storage efficiencies in the public cloud as you get with its on-prem cousin, providing a unique ability to maintain data policies and procedures across your on-prem and cloud estates.

But NetApp has gone beyond this, delivering native cloud services that don't require any traditional NetApp technologies. Cloud Sync allows the easy movement of data from on-prem NFS datastores into the AWS cloud, while Cloud Control provides a backup service for Office 365 (and now Salesforce), bringing crucial data protection functionality that many SaaS vendors do not provide.
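
As an aside, it is worth appreciating the kind of plumbing a service like Cloud Sync takes off your hands. The minimal Python sketch below (the mount point and bucket name are invented, and this is not Cloud Sync's own API) walks a locally mounted NFS export and copies each file into an AWS S3 bucket with the boto3 SDK; a real service adds the scheduling, incremental transfers and error handling this leaves out.

    import os
    import boto3  # AWS SDK for Python; credentials assumed already configured

    def copy_nfs_to_s3(local_root: str, bucket: str) -> None:
        """Walk a locally mounted NFS export and upload every file to S3."""
        s3 = boto3.client("s3")
        for dirpath, _dirnames, filenames in os.walk(local_root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                # Use the path relative to the mount as the object key.
                key = os.path.relpath(path, local_root).replace(os.sep, "/")
                s3.upload_file(path, bucket, key)  # one PUT per file

    copy_nfs_to_s3("/mnt/nfs_export", "my-archive-bucket")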

If that wasn't enough, there is the recently announced relationship with Microsoft, with NetApp now powering the Azure NFS service. Yes, that's right: if you take the NFS service from the Azure marketplace, it is delivered fully in the background by NetApp.

For a storage vendor, this cloud investment is unexpected, but a clear cloud strategy is also appealing to those making business technology decisions.

Getting the basics right

With these developments, it's clear NetApp have a strategy and are expanding their portfolio into areas other storage vendors do not consider. But there is also no escaping that their main revenue continues to come from ONTAP and FAS (NetApp's hardware platform).

If I'm buying a hardware platform, what do I want from it? It should be robust, deliver strong performance, and be a good investment that evolves with my business. If NetApp's commercial success is anything to go by, they are delivering exactly that.

The all-flash NetApp platforms (such as the award-winning A200 mentioned earlier) are meeting this need: robust, enterprise-level platforms that allow organisations to build an always-on storage infrastructure that scales seamlessly with new business demands. Six-year flash drive warranties and the ability to refresh your controllers after three years also give excellent investment protection.

It is not just the hardware, however; these platforms are driven by software. NetApp's ONTAP operating system is like any other modern software platform, with regular code drops (every six months) delivering new features and improved performance to existing hardware via a non-disruptive software upgrade. This gives businesses the ability to "sweat" their hardware investment over an extended period, which in today's investment-sensitive market is hugely appealing.

Have an interesting portfolio

For a long time NetApp was the FAS and ONTAP company, and while those remain central to their plans, the portfolio is expanding quickly. We've discussed the cloud-focused services; there is also SolidFire with its unique scale and QoS capabilities, StorageGRID, a compelling object storage platform, and AltaVault, a gateway for moving backup and archive data into object storage on-prem or in the cloud.

Add to this the newly announced HCI platform and you can see how NetApp can play a significant part in your next-generation data centre plans.

For me, the awards I mentioned at the beginning of this article are not down to one particular solution or innovation; it's the Data Fabric. That strategy allows NetApp, its partners and its customers to have a conversation that is focused on data rather than technology, and having a vendor who understands that is clearly resonating with customers, analysts and industry influencers alike.

NetApp's continued evolution is fascinating to watch, and they have more to come, no doubt with more awards to follow. Whatever next!

VMworld – It’s a Wrap – Ep42

VMware, along with Microsoft, is perhaps the most influential enterprise software company in the industry. VMware and their virtualisation technology have revolutionised the way we deliver IT infrastructure to businesses of all types.

It is not just traditional virtualisation they have made commonplace; the way they have driven the industry to accept that our IT infrastructure can be software-defined has made it more straightforward for us to adopt many modern technology platforms, such as cloud.

Today, however, the infrastructure revolution they helped create presents challenges of its own, as the broad adoption of cloud and new ways of managing and deploying our infrastructure has led to the question: how do VMware remain relevant in a post-virtualisation world?

The answer, of course, is found by understanding how VMware see those challenges and what their strategic plans are for their own future development. There is no better way of doing that than spending time at their annual technical conference, VMworld.

In last week's show (Was it good for you? – VMworld 2017 – Ep41) we discussed with four attendees what they learnt, what VMware shared and what they thought of the strategic messages they heard during the keynotes.

This week, we wrap up our VMworld coverage and our look at the modern VMware with two more insightful discussions.

Firstly, I'm joined by Joel Kaufman (@TheJoelk on Twitter) of NetApp. Joel has had a long relationship with VMware in his time at NetApp and has seen how they have evolved to meet the needs of their business customers and their ever-changing challenges.

We discuss that evolution as well as how NetApp has had to deal with the same challenges, looking at how a “traditional” storage vendor must evolve to continue to remain relevant in a cloud-driven, software-defined world.

To wrap up, I wanted a VMware view of their event, so I'm joined by a returning guest to the show and the voice of the VMware Virtually Speaking Podcast, Pete Flecha.

We discuss the key messages from the event, VMware's place in the world, what VMware on AWS brings, and how VMware are getting their "mojo back": embracing new ways of working with tools such as Kubernetes, delivering deeper security, tying together multiple platforms with their NSX technology, and giving us the ability to "Software Define All Of The Things".

Pete gives an enthusiastic insight into how VMware view their own show and how they are going to remain extremely relevant in enterprise IT for a long time to come.

If you want to hear more from Pete, you can find him on Twitter @vPedroArrow and keep up with all the latest VMware news via Pete's excellent podcast at www.vspeakingpodcast.com.

That completes our wrap-up of VMworld 2017.

If you enjoyed the show, why not leave us a review? And if you want to be sure you catch our future shows, why not subscribe? Tech Interviews can be found in all of the usual homes of podcasts.

Thanks for listening.

Viva Las VVOLs

In this episode I'm joined by Pete Flecha, Senior Technical Marketing Architect at VMware, as we discuss VVOLs, VMware's new approach to delivering storage to virtual infrastructures. VVOLs look to address many of the problems traditional SAN-based storage presents to virtual infrastructures. Pete provides an intro to the problems VVOLs look to address, how they go about it, and what we can expect from the recent vSphere 6.5 release, which brings us VVOLs 2.0.

Although I'm not a VVOL expert, I find what VMware are looking to do here really interesting, as they tackle one of the key issues IT leaders constantly look to address: how to reduce the complexity of their environments so they can react more quickly to new demands from the business.

VVOLs allow the complexity of the underlying storage infrastructure to be hidden from virtualisation administrators, giving those managing and deploying applications, servers and services a uniform experience, so they can focus on quickly deploying their infrastructure resources.
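
That uniformity is easy to see from the administrator's side. In the small Python sketch below (the hostname and credentials are placeholders), the pyVmomi SDK lists every datastore through one common inventory call; a VVOL datastore simply shows up alongside its VMFS and NFS neighbours and is consumed in the same way.

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()  # lab convenience only
    si = SmartConnect(host="vcenter.example.com", user="administrator",
                      pwd="secret", sslContext=ctx)
    content = si.RetrieveContent()

    # One view over all datastores; a VVOL datastore is just another entry.
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.Datastore], True)
    for ds in view.view:
        print(ds.name, ds.summary.type)  # e.g. VMFS, NFS or VVOL
    Disconnect(si)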

As we all strive to ensure our IT infrastructures meet the ever-changing needs and demands of our organisations, anything that simplifies, automates and ensures consistency across our environments is, in my opinion, a good thing.

It certainly seems that VVOLs are a strong step in that direction.

In this episode Pete provides a brilliant taster of what VVOLs are designed to do and the challenges they meet. I hope you enjoy it.

If you want more VVOL detail, Pete is the host of VMware's fantastic vSpeaking podcast, and last week they had an episode dedicated to VVOLs; you can pick that up here.

vSpeaking Podcast ep:32 VVOLs 2.0

You can find all the other episodes of the vSpeaking podcast here

You can keep up with Pete and the excellent work he's doing at VMware by following him on Twitter @vPedroArrow.

And of course, if you have enjoyed this episode of the podcast please subscribe for more episodes wherever you get your podcasts. You won’t want to miss next week, as I discuss data privacy with global privacy expert Sheila Fitzpatrick.
