VMworld – It’s a Wrap – Ep42

VMware, along with Microsoft, is perhaps the most influential enterprise software company in the industry. VMware and their virtualisation technology have revolutionised the way we deliver IT infrastructure to businesses of all types.

It is not just traditional virtualisation they have made commonplace; by driving the industry to accept that our IT infrastructure can be software-defined, they have made it far more straightforward for us to adopt many of the modern technology platforms, such as cloud.

Today, however, the infrastructure revolution they helped create presents challenges to them, as the broad adoption of cloud and new ways of managing and deploying our infrastructure has led to the question “how do VMware remain relevant in a post-virtualisation world?”

The answer is, of course, found by understanding how VMware see those challenges and what their strategic plans are for their own future development. There is no better way of doing that than spending time at their annual technical conference, VMworld.

In last week’s show (Was it good for you? – VMworld 2017 – Ep41) we discussed with four attendees their views on what they learnt, what VMware shared and what they thought of the strategic messages they heard during the keynotes.

This week, we wrap up our VMworld coverage, and our look at the modern VMware, with two more insightful discussions.

Firstly, I’m joined by Joel Kaufman (@TheJoelk on Twitter) of NetApp. Joel has had a long relationship with VMware in his time at NetApp and has seen how they have evolved to meet the needs of their business customers and their ever-changing challenges.

We discuss that evolution, as well as how NetApp has had to deal with the same challenges, looking at how a “traditional” storage vendor must evolve to remain relevant in a cloud-driven, software-defined world.

 

To wrap up, I wanted a VMware view of their event, so I’m joined by a returning guest to the show and the voice of the VMware Virtually Speaking Podcast, Pete Flecha.

We discuss the key messages from the event, VMware’s place in the world, what VMware on AWS brings, and how VMware are getting their “mojo back” by embracing new ways of working with tools such as Kubernetes, delivering deeper security, tying together multiple platforms with their NSX technology and giving us the ability to “Software Define All Of The Things”.

Pete gives an enthusiastic insight into how VMware view their own show and how they are going to continue to be extremely relevant in enterprise IT for a long time to come.

If you want to hear more from Pete you can find him on Twitter @vPedroArrow, and you can keep up with all the latest VMware news with Pete’s excellent podcast at www.vspeakingpodcast.com.

That completes our wrap-up of VMworld 2017.

If you enjoyed the show, why not leave us a review? And if you want to ensure you catch our future shows, why not subscribe? Tech Interviews can be found in all of the usual homes of podcasts.

Thanks for listening.


Viva Las VVols

In this episode I’m joined by Pete Flecha, Senior Technical Marketing Architect at VMware, as we discuss VVols, VMware’s new approach to delivering storage to virtual infrastructures. VVols look to address many of the problems traditional SAN-based storage presents to virtual infrastructures. Pete provides an intro to the problems VVols look to address, how they go about it and what we can expect from the recent vSphere 6.5 release that brings us VVols 2.0.

Although I’m not a VVols expert, I find what VMware are looking to do here really interesting, as they look to tackle one of the key issues that IT leaders constantly look to address: how to reduce the complexity of their environments so they can react more quickly to new demands from their business.

VVols allow the complexity of any underlying storage infrastructure to be hidden from the virtualisation administrators, giving those managing and deploying applications, servers and services a uniformity of experience, so they can focus on quickly deploying their infrastructure resources.

As we all strive to ensure our IT infrastructures meet the ever-changing needs and demands of our organisations, anything that simplifies, automates and ensures consistency across our environments is, in my opinion, a good thing.

It certainly seems that VVols are a strong step in that direction.

In this episode Pete provides a brilliant taster of what VVols are designed to do and the challenges they address. I hope you enjoy it.


If you want more VVols detail, Pete is the host of VMware’s fantastic vSpeaking podcast, and last week they had an episode dedicated to VVols – you can pick that up here.

vSpeaking Podcast ep:32 VVOLs 2.0

You can find all the other episodes of the vSpeaking podcast here

You can keep up with Pete and the excellent work he’s doing at VMware by following him on Twitter @vPedroArrow.

And of course, if you have enjoyed this episode of the podcast please subscribe for more episodes wherever you get your podcasts. You won’t want to miss next week, as I discuss data privacy with global privacy expert Sheila Fitzpatrick.


Bringing containers to the masses

There’s no doubt that one of the hottest topics in the IT industry right now is containers; the world of Docker and its ilk is fascinating developers, IT industry watchers and technology strategists the world over.

The containers world is still, on the whole, restricted to (another IT buzzword warning) the world of DevOps, where developers and coders see containers as a great way to quickly develop, deploy and refresh their applications.

However, this is not the point of this blog. Full disclosure: I’m no containers expert, and if you want to know what containers are, there are plenty of resources out there that can give you all the background you need.

Why write about “containers to the masses”, then?

As I mentioned, containers right now, certainly from the infrastructure side of the house, are still a bit of a mystery, locked away in Linux or a cloud host somewhere, not something we can easily get a handle on in our Windows or vCenter worlds. The idea of these strange self-contained environments running in a way we understand and can manage seems impossible.

And there’s the crux of this post: for many of us, the idea of enterprise-wide containers is a long way off. And that’s a problem. In the modern IT world, it’s critical that those who administer infrastructure and business technology are not seen as blockers to delivering agile IT in our increasingly DevOps world; if we are, then we are not serving our organisations or our careers well.

How do we square that circle? How do we deal with the problem of delivering agile development platforms for our developers in a world of traditional infrastructure?

A couple of weeks ago, I attended one of the excellent Tech User Group events in Manchester (if you’ve never checked out one of their IT community events then you should – have a look at the website) and among the great topics on the agenda we had speakers from both VMware and Microsoft.

Now, I think it’s fair to say that if we were to poll the major enterprise infrastructure providers, Microsoft and VMware would feature strongly, and it is those platforms that infrastructure guys know and love. However, they are also the things that seem a long way removed from the modern DevOps world – well, that is, until now.

At the event, I saw a couple of presentations that shifted my view on the deployment of containers in the enterprise. Cormac Hogan from VMware and Marcus Robinson from Microsoft both covered how these software giants were looking at the container space.

The approach overall is pretty similar, but importantly both are taking something that maybe we don’t quite understand and seamlessly dropping it into an environment we do.

Both are focussing on delivering support for Docker, essentially by publishing the Docker APIs so that devs can use all of their Docker skills to deploy containers into these infrastructure environments without knowing, or to a degree caring, what the infrastructure looks like.

That works both ways: the infrastructure admins see the container resources as they see any other resource, but again, without needing to understand or care what they are.
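To make that idea concrete, here is a minimal sketch (my own, not something shown at the event) using the Docker SDK for Python. The image and port choices are purely illustrative; the point is that the same few lines of developer code work unchanged whether DOCKER_HOST points at a local Linux engine, a Windows Server 2016 host or a vSphere Integrated Containers endpoint.

```python
# Minimal sketch using the Docker SDK for Python (docker-py).
# The same code runs unchanged regardless of what sits behind the
# Docker API endpoint - the developer never sees the infrastructure.
import docker

# Reads DOCKER_HOST / DOCKER_TLS_VERIFY / DOCKER_CERT_PATH from the environment
client = docker.from_env()

# Pull and run a container exactly as you would against a laptop engine
container = client.containers.run("nginx:alpine", detach=True,
                                  ports={"80/tcp": 8080})
print(container.name, container.status)
```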

Let’s take a little look at the two implementations:

Microsoft

Firstly, there has been support for containers in Azure for quite a while, so this is nothing new, but what Microsoft are doing is bringing that native container support on-prem in Windows Server 2016. This is done with two slightly different container delivery methods:

Windows Server Containers – provide application isolation through process and namespace isolation technology. A Windows Server container shares a kernel with the container host and all containers running on the host.

Hyper-V Containers – expand on the isolation provided by Windows Server Containers by running each container in a highly optimized virtual machine. In this configuration, the kernel of the container host is not shared with the Hyper-V Containers.
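As a rough illustration (my own sketch, not taken from Marcus’s session), the choice between the two models is little more than an isolation setting at deployment time. Assuming the Docker SDK for Python and a hypothetical Windows Server 2016 container host, with illustrative image names:

```python
# Sketch only: deploying to a Windows Server 2016 container host.
# Host and image names are illustrative assumptions.
import docker

client = docker.from_env()  # assumes DOCKER_HOST points at the Windows host

# Windows Server Container - shares the host kernel (process isolation)
client.containers.run("microsoft/nanoserver", "cmd /c echo hello",
                      isolation="process", detach=True)

# Hyper-V Container - same image, but wrapped in a lightweight utility VM
client.containers.run("microsoft/nanoserver", "cmd /c echo hello",
                      isolation="hyperv", detach=True)
```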

Check out this video for more details on Server 2016 container deployment:

https://channel9.msdn.com/Blogs/windowsserver/Containers-in-Windows-Server-2016/player

VMware

As with Microsoft, there are two distinct routes to deliver containers into the VMware-driven enterprise.

vSphere Integrated Containers – provides a Docker-compatible interface for developers while allowing IT operations to continue to use existing VMware infrastructure, processes and management tools. And it offers enterprise-class networking, storage, resource management and security capabilities based on vSphere.

Photon OS™ – a minimal Linux container host, optimized to run on VMware platforms, compatible with container runtimes such as Docker and container scheduling frameworks such as Kubernetes.

Check this video from VMworld 2016 for a short intro to vSphere Integrated Containers:

And a brief intro to Photon OS can be watched here:

In my mind, it is the management of these that is key to their adoption. From the dev side, both will be deployable using the Docker APIs and the Docker client, so a methodology developers already understand; to the enterprise admin, it’s a Windows Server or a VMware environment that they understand and can manage.
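On the VMware side, for example, a vSphere Integrated Containers virtual container host simply presents a Docker-compatible API endpoint for that client to talk to. A rough sketch of what that might look like from the Docker SDK for Python, with an entirely hypothetical VCH address and certificate paths, is below.

```python
# Sketch only: pointing the Docker SDK for Python at a vSphere Integrated
# Containers virtual container host (VCH). The address and certificate
# paths are hypothetical placeholders.
import docker

tls_config = docker.tls.TLSConfig(
    client_cert=("/certs/cert.pem", "/certs/key.pem"),
    ca_cert="/certs/ca.pem",
)

# The VCH exposes a Docker-compatible API endpoint
client = docker.DockerClient(base_url="tcp://vch01.example.com:2376",
                             tls=tls_config)

# From here it's just Docker, exactly as the developer already knows it
container = client.containers.run("nginx:alpine", detach=True)
print(container.short_id)
```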

Certainly, in the enterprise the idea of deploying Docker containers has been hampered by the need for Linux container farms, and when you are in an environment that “doesn’t do Linux” that’s a problem. Bringing the likes of Docker seamlessly into your traditional enterprise infrastructure systems, like Windows Server and vSphere, so that they can be managed within your traditional IT frameworks, is massive.

Like I said, I’m no containers expert, and not a developer; however, I have spent 20+ years working in infrastructure environments, and the more Marcus and Cormac spoke, the brighter my light-bulb moment became. If you can take these flexible development environments out of the dark corners and place them in an environment that enterprise IT can manage and understand, you are opening the world of Docker and containers to a whole new audience.

Watch out, masses… here come containers!

To find out more from the excellent presenters on the day, you can follow both Marcus and Cormac on Twitter:

Marcus Robinson @techdiction

Cormac Hogan @CormacJHogan

For a bit more information have a look at some of these resources.

For an introduction to containers from Microsoft read Mark Russinovich’s BLOG

Read more on Windows Containers here

For info from Docker on their Microsoft relationship check here

For an introduction to the latest on VMware containers check here

vSphere Containers on Github

Read here for an introduction to Photon OS

Gold medals for data

Last week saw the end of a wonderful summer of sport from Rio, where the Olympics and Paralympics gave us sport at its best: people achieving lifetime goals, setting new records and inspiring a new generation of athletes.

I’m sure many of you enjoyed the games as much as I did, but why bring it up here? Well, for someone who writes a BLOG it’s almost a contractual obligation in an Olympic year to write something that has a tenuous Olympic link. So here’s my entry!

One part of the Team GB squad that really stood out in Rio were the Olympic cyclists, winning more gold medals than all of the other countries combined (6 of the 10 available) – a phenomenal achievement.

This led to one question being continually asked: “What’s the secret?” In one BBC interview Sir Chris Hoy was asked that question, and his answer fascinated me. During his career the biggest impact on British cycling was not equipment, facilities, training or superhuman cyclists. It was data. Yes, data – not just collecting data, but, more importantly, the ability to extract valuable insight from it.

We hear it all the time

“those who will be the biggest successes in the future are those that get the most value from their data”

and what a brilliant example the cyclists were. We see this constantly in sport, where the smallest advantage matters, but it’s not just sport; increasingly this is the case in business, as organisations see data as the key to giving them a competitive edge.

We all love these kinds of stories about how technology can provide true advantage, but it’s always great to see it in action.

A couple of weeks ago I was on a call with the technical lead of one of our customers. He and his company see the benefit of technology investment and how it delivers business advantage. I’ve been lucky enough to work with them over the last four years or so and have watched the company grow around 300% in that time. On this call we were talking with one of his key technology vendors, explaining to them how their technology had been an instrumental part of that success.

During the call I realised this was my opportunity for that tenuous-Olympic-link BLOG post: here, as with the cyclists, getting the best from data was delivering real bottom-line success to the business.

The business is a smart energy company doing very innovative stuff in the commercial and private energy sectors. They’re in a very competitive industry dominated by some big companies, but these guys are bucking that trend and are a great example of how a company that is agile and knows how to exploit its advantage can succeed.

In their industry data is king. They pick up tonnes of data every day – from customers, from devices, from sensors – and manipulating this data and extracting valuable information from it is key to their success.

Until about a year ago they were running their database and reporting engines (SQL based) on a NetApp storage array running 7-Mode. That had worked, but a year ago we migrated their infrastructure to clustered Data ONTAP to provide increased flexibility, mobility of data and more granular separation of workloads.

However, the smartest thing they did as part of the migration was to deploy flash pools into their environment. Why was this so earth-shattering?

A big part of the value of their SQL infrastructure is reporting. This allows them to provide better services to their customers and suppliers, giving them an advantage over their competitors.

However, many of those reports took hours to run; in fact, the process was to request a report and it would be ready the next day.

The introduction of flash pools into the environment (flash pools are flash-based acceleration technology available in NetApp ONTAP arrays) had a dramatic effect, taking these overnight reports and delivering them in 30–60 minutes.

This significant reduction in report running times meant more reports could be run – more reports producing different data that could be used to present new and improved services to customers.

Last year the technical lead attended NetApp Insight in Berlin. One of the big areas of discussion that caught his interest was the development of All Flash FAS (AFF), NetApp’s all-flash variants of their ONTAP-driven FAS controllers.

They immediately saw the value in this high-performance, low-latency technology, so earlier this year we arranged for an AFF proof of concept to be integrated into the environment. During this POC the team moved a number of SQL workloads to the flash-based storage, and it’s no understatement to say this transformed their data analysis capabilities: those 30–60 minute reports were now running in 2–3 minutes.

[Image: AFF performance on SQL] An example of the kind of performance you can get from AFF (this is an AFF8080 cluster running ONTAP 8.3.1 – new platforms and ONTAP 9 have increased this performance further)

But this was not just about speed; it truly opened up brand-new capabilities and business opportunities. Now the organisation could provide their customers and suppliers with information that previously was impossible to deliver, and providing quick access to that data was allowing them to make decisions on their energy usage that gave true value.

They knew the proof of concept had gone well when, on taking it out, the business began asking questions: why is everything so slow? Why can’t we run those reports anymore? And that was the business case. The deployment of NetApp flash was not just about doing stuff quickly, or using flash because that’s what everyone says you should; it was because flash was delivering results – real business advantage.

As Chris Hoy discussed at the Olympics, it was not just about getting the data because they could; it was about getting the most out of it, and in a sport where often tenths of a second sit between you and a gold medal, any advantage is critical.

A competitive business environment is no different, so an investment in technology that gives you the slightest edge makes perfect sense.

Today, All Flash FAS is integrated into their new datacentre running the latest iterations of ONTAP, providing a low-latency, high-performance infrastructure and ensuring that they can continue to drive value from their most critical business asset: their data.

A great use of technology to drive advantage – in fact, gold medals for data usage all round.


Hope that wasn’t too tenuous an Olympic link, and if you have any questions then, of course, @techstringy or LinkedIn are great ways to get me.

If you’re interested in Flash you may also find this useful “Is Flash For Me?” from my company website.

 

Data Fabric – What is it good for?


Anyone who’s seen my social content recently will know I’m a big fan of the concept of data fabric. Now, the thing with I.T. is that we love to get excited about phrases like this and assume everyone else will “get” what we’re talking about… well, imagine my surprise the other day when I was talking to a business colleague and he asked me…

Data Fabric, what is it good for?

That’s a great question, isn’t it (even if the immediate answer is to start channelling a bit of Edwin Starr)… what on earth is it? And I guess he probably isn’t the only person asking.

I started by making it clear that when I’ve been talking about data fabric, my discussions have been around the strategic conversations I’ve been having with our customers and storage vendor NetApp. That’s probably not a surprise to those who know me – I’ve had a long association with them, so no shock there. Maybe more surprising is that, even if I did want to look elsewhere, no one else is really having this discussion, and really, they should be…

So when my colleague then went on to ask two other questions, it got me thinking that the answers would maybe make a good BLOG post…

What were these two questions? Well, first up:

What problem does it solve?

To be clear, data fabric is not a product or a bit of technology – it’s a strategy. I read a great article recently from the founder of Coho Data, another storage vendor, who talked about how often the data storage debate gets lost in technology and completely loses sight of the primary point for any business looking at data storage: they have data storage challenges and they want someone to solve a problem for them, not to go on about different feeds and speeds and flashing lights…

So let’s see if we can answer that with a focus on business problems and without getting lost in technology (and there is plenty of innovative tech behind the NetApp data fabric story). Hopefully you’ll find it interesting and see why it’s maybe more important than ever that those making storage decisions think way beyond the silo of specific technologies and look much wider.

Today, most of the customer conversations I have pretty much always include two key topics: using the cloud and managing data, be that its security, availability or flexibility of access. The problem with those two things is that they don’t necessarily complement each other very well.

Many of the data strategies I see implemented include data silos: flash over here for one project; archive storage elsewhere, because I don’t want it on flash; some stuff over here in the cloud, because I want access to that all over the place, or need it as part of my DR. But now I have a whole host of tools managing these things, and the data in one silo can’t move to another. The problem is that as this becomes more complex, the more difficult it becomes to manage and control; mistakes happen, exposing our data and our businesses to risk.

Even if that doesn’t happen, you end up backed into a corner, with all your data sat separately and no ability to easily move it between your silos.

That’s what a data fabric strategy fixes. It addresses all of these data silos by allowing you to put your storage where you want it, while managing it with a single toolset and enabling seamless movement between your storage types.

So this leads to the second question –

How does NetApp help me solve that problem then?

This, for me, is where NetApp have been smart. How so?

First, let’s look at how other smart technology ecosystems deliver their data – let’s look at Apple, for instance.

If you buy into the Apple ecosystem, with your iTunes account, your Mac, your iPad, your phone etc., as a user you don’t even think about how you get access to your content from one device to another – it’s just there. Apple have created a data fabric.

But if we look at our enterprise IT, are we doing the same? In many cases, no, we are not…

NetApp have been smart, looked at this model and asked: can we do something similar?

Many of you know NetApp as a storage vendor, supplying physical storage arrays. However, what NetApp actually do is write software, and their biggest software solution is their ONTAP operating system. It is this operating system, which many people don’t necessarily see, that is the core of the data fabric.

How? In the end, NetApp’s storage capability is delivered entirely via the ONTAP operating system, and because ONTAP is fundamentally a piece of software, like any piece of software it can be installed on any capable platform.

So what? Well, just imagine if the storage operating system you have sat on your storage array could be moved around – maybe dropped into a virtual machine, or sat in front of a big lump of public cloud storage.

Once you’ve done that, you really have an opportunity to break down storage silos and provide real flexibility of choice over where you put your data.

[Image: the NetApp data fabric, with Data ONTAP at its core, spanning multiple platforms]

If you look at the image above, you can see Data ONTAP right at the core of the data fabric. The OS can be installed onto, or over the top of, any of those multiple platforms, and once it’s there you have all of the same features and functionality regardless of what sits behind it.

We can sit our ONTAP OS on top of a disk array full of SAS or SATA, or maybe on some all-flash infrastructure. But maybe we don’t have NetApp arrays – no problem, let’s sit it in front of a third-party disk array. Or maybe we want it out in our branch office as a VM, or in the cloud, sat in front of some AWS storage.

That gives us one operating system on a range of devices, one set of tools to manage it and the same capability across each of those platforms, which ultimately gives us the ability to easily move data around our fabric, across different storage types. Want to move our SAS data onto all-flash? No problem – drop flash into our fabric and over it goes. Want to move data into the cloud? No problem – let’s mirror it across. What about when we want it back? No problem – we mirror it back.

It’s that operational flexibility that addresses the issue we discussed in the answer to the first question: failing to look at the big picture and potentially putting our data into silos that cannot be moved to other platforms. Does that matter? Well, in some cases maybe not, but in many it does. If you are thinking strategically about your business technology, then you need to consider whether the decisions you are making are going to give you the flexibility to respond to changing business needs and take advantage of future technology changes.

I appreciate that we have talked a lot about NetApp here, but at the minute I’ve not really seen this joined-up thinking at this scale elsewhere. However, if your technology partners are offering this kind of fabric, that’s great – explore it. All I ever aim to do with posts like this is to get those reading them thinking about strategic considerations they may not have thought about before; hopefully this post has done that.

Hopefully some food for thought…