Analysing the data availability market – part one – Justin Warren & Jeff Leeds – Ep29

Now honestly, this episode is not sponsored by British Airways, nor is it in any way taking advantage of the situation that affected thousands of BA customers over the weekend; the timing is purely coincidental.

However, those incidents have made this episode quite timely, as they again highlight just how crucial technology is to our day-to-day activities, both as individuals and as businesses.

As technology continues to be integral to pretty much everything we do, the recent events at BA and the disruption caused by WannaCrypt are examples of what happens when our technology is unavailable: huge disruption, reputational damage and financial impact, as well as the stress it brings to the lives of both those trying to deal with the outage and those on the receiving end of it.

Last week I spoke with Veeam’s Rick Vanover (Remaining relevant in a changing world – Rick Vanover – Ep28) about how they were working to change the focus of their customers from backup and recovery to availability, ensuring that systems and applications were protected and available, not just the data they contained.

As part of my time at the recent VeeamON event, I also took the opportunity to chat with the wider IT community who attended: not just those charged with delivering availability and data protection, but also those who look at the industry through a broader lens, trying to understand not just how vendors view availability, but also the general data market trends and whether businesses and end users were shifting their attitudes in reaction to those trends.

So, over the next couple of weeks, I’ll share a collection of those chats to give you a wider view of the availability market: how analysts see it, and how building a stack of technologies can play a big part in ensuring that your data is available, secure and compliant.

First up, I speak with Justin Warren and Jeff Leeds.

Justin is a well-known industry analyst and consultant, as well as the host of the fantastic Eigencast podcast (if you don’t already listen, you should try it out). Justin is often outspoken, but always provides fascinating insight into the wider industry; here he shares some thoughts on how the industry is maturing, how vendors and technology are changing, and how organisations are changing, or perhaps not changing, to meet new availability needs.

You can follow Justin on twitter @jpwarren and do check out the fantastic Eigencast podcast.

Jeff Leeds was part of a big NetApp presence at the event, and I was intrigued as to why a storage vendor famed for its own robust suite of data protection and availability technologies should be such a supporter of a potential “competitor”.

However, Jeff shared how partnerships and complementary technologies are critical in building an appropriate data strategy, helping us all ensure our businesses remain on.

You can follow Jeff on twitter at @HMBcentral and find out more about NetApp’s own solutions over at www.netapp.com

I hope you enjoyed the slightly different format. Next week we’ll dig more into this subject, as I speak with Andrew Smith from IDC and technology vendors Pivot3 and DataGravity.

To catch it, please subscribe in all the normal podcast homes. Thanks for listening.

Subscribe on Android

http://feeds.soundcloud.com/users/soundcloud:users:176077351/sounds.rss

Tech Me For the Weekend – 26th May 2017

I was busy living it up in New Orleans at VeeamON last week, so I didn’t get a chance to give you the weekly round-up of top tech titbits. But with a UK holiday weekend coming up, I didn’t want to leave you without some top content to enjoy, so here goes: enjoy the long weekend with this top tech.

Podcasts

The On-Premise IT roundtable (and yes On-Premise on purpose!)

I’m new to this show from the team behind Tech Field Day, so I’m a little behind on episodes, but this first one was a cracker: Is DevOps a Disaster? A roundtable discussion picking through the DevOps minefield and seeing if there is really anything to this DevOps thing.

Is DevOps a Disaster? The On-Premise IT Roundtable 2

Virtually Speaking Podcast and Interview with Michael Dell

Always a big fan of this show as a great way to keep up with VMware tech, but there’s a bit of a departure this week, as Pete and John are on the road at Dell EMC World. They catch up with two key leaders from the business, Chad Sakac and none other than Michael Dell himself, talking about the business and where it’s heading. Fascinating stuff.

Virtually Speaking Podcast

The Geek Whisperers

I always like to give this show a plug when there’s a new episode, and this week they are joined by Emily Hendershot and Renee Woods to discuss the art of keynote presentations: a chat about the dos and don’ts if you fancy adding keynote speaker to your CV. As always, a great listen.

The Geek Whisperers

Tech Interview

Don’t forget my own little gem of a show, now with added theme tune! This week we come from VeeamON as I catch up with Rick Vanover to discuss the future of Veeam and news from last week’s conference, including some of our favourite announcements and what we can expect to hear from them in the future. Give it a listen and let me know if you like the theme tune!

Remaining relevant in a changing world – Rick Vanover – Ep28

Articles

More of a reader than a listener? Worry not, these articles should keep you going.

Office 365 adoption pack

I don’t normally go for product announcement stuff, but I made a bit of an exception this week with this Microsoft blog post on their new Power BI dashboard for Office 365. Not so much for the dashboard itself, but because data visualisation is a really interesting area and a topic I’m keen to understand better, and I thought this was an excellent example of the power of data visualisation.

Microsoft Office BLOG

Multi Cloud v Stacking

I thought this post from NetApp raised an interesting debating point. As many of us look at how we can take advantage of cloud services, this article raises a good question about whether you should consider a multi-cloud strategy. There’s obviously a NetApp slant, but it’s a very good question and well worth a read.

NetApp Article

VeeamON Wrap-Ups

As I mentioned earlier, I was away in New Orleans last week at the VeeamON conference. Lots of great announcements from the Veeam team and an awful lot to catch up on.

If you want a comprehensive list of the announcements, then look no further than Michael Cade and his daily wrap-up posts from the event; they should give you all the Veeam goodness you could want.

VeeamON2017 Shakedown Part 1

VeeamON 2017 Shakedown Part 2

VeeamON it’s a wrap

Of course, I couldn’t forget myself, could I? My own take on the VeeamON event and where Veeam are heading as a company is right here:

VeeamON It’s a Wrap

Well, hopefully all of that will keep you busy this long weekend. Enjoy it, whatever you are doing.


VeeamON It’s a Wrap

Last week, as you may have spotted, I attended Veeam’s technical conference VeeamON. I blogged a couple of pieces while I was there (Veeaming On and On and On and Veeam On It – Day Two at Veeam ON), but I thought it was time to give a bit of an overall take on the event.

Day Three

Day three’s main focus was Veeam’s relationship with Microsoft, especially the Microsoft cloud platforms. That focus is important in two ways. Firstly, as Veeam look to move the conversation to one of wider availability, rather than just protection, support for the big public cloud players is going to be key.

Secondly, it’s refreshing to see a vendor putting this kind of focus on the Microsoft cloud. Too many vendors focus only on AWS, and although there is no problem with that, it does ignore the number of organisations, especially those big Microsoft shops, who have Azure as a key part of their data fabric strategy, and that’s before we even begin to look at those who have Office 365 as part of their software stack.

What was announced?

Veeam Disaster Recovery in Microsoft Azure combines Direct Restore with the new Veeam PN (Veeam Powered Network), providing the ability not just to recover your VMs but, importantly, to automate one of the trickiest parts of building cloud environments: networking. When building a DR solution, the amount of automation you build into it can make a big difference to the success or otherwise of your recovery strategy.
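To give a feel for the networking work that this kind of automation takes off your hands, here’s a minimal sketch using the Azure Python SDK (azure-identity and azure-mgmt-network). To be clear, this is not Veeam PN or Veeam’s own tooling; the resource group, region and address ranges are purely illustrative of the plumbing a recovered workload needs somewhere to land in:

# Illustrative only: the sort of manual network plumbing a DR run into Azure
# would otherwise need, which is exactly what recovery automation aims to remove.
# Resource names, region and address ranges are made up for the example.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

subscription_id = "<your-subscription-id>"
network_client = NetworkManagementClient(DefaultAzureCredential(), subscription_id)

# Create a virtual network and subnet for the recovered workloads to land in.
poller = network_client.virtual_networks.begin_create_or_update(
    "dr-resource-group",
    "dr-vnet",
    {
        "location": "westeurope",
        "address_space": {"address_prefixes": ["10.50.0.0/16"]},
        "subnets": [{"name": "dr-subnet", "address_prefix": "10.50.1.0/24"}],
    },
)
vnet = poller.result()
print("Recovery network ready:", vnet.name)

Multiply that by security groups, public IPs and the VPN connectivity back to on-premises, and it’s easy to see why automating the network piece is where much of the value of a DR workflow sits.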

We also heard of extensions to Veeam Backup for Office 365, with support for multi-tenancy, allowing organisations that have multiple Office 365 deployments to protect those workloads with a single Veeam backup platform.

This is additionally useful for those who deliver backup as a service using Veeam; the ability to use a single installation to back up multiple Office 365 customers is going to make your service much more efficient.

We also heard about the addition of native support for object storage in Availability Suite V10, including Azure Blob storage. In my opinion, the use of object storage for long-term archive and retention is going to become the norm relatively quickly, so native support as part of your availability solution, removing the need for third-party gateways, is a real plus.

What did I think?

Events like this, for me, are about trying to get a handle on business direction; it’s important that technology companies have a direction that recognises the changing needs of businesses, both now and in the future.

I made the investment of attending with one question in mind: “how are Veeam going to continue to be relevant in a changing world?”

As our relationship with, and requirements from, our data, applications and technology change, the idea that our data only lives on-prem in virtual machines is unrealistic. So, as a company whose traditional strength is the protection of those types of workloads, how do you react as your customers move away from that traditional strength, and how do you meet these changing needs?

Our digital lives

Today, technology is of course a constant in both our work and personal lives, be that Facebook or our own internal business apps. We increasingly rely on them, and our tolerance for their unavailability is pretty low. In a world where it’s easy for us and our customers to move on to the next supplier, if our systems are unavailable it’s not just our lack of patience that’s a problem; it also presents a real risk of significant business impact.

In that context, Veeam have recognised that data protection alone is not the answer and that availability has to be the focus. Of course, they are absolutely right. Our businesses are hugely reliant on data, but it’s not just its existence that matters; it’s our ability to access it, use it and get value from it, and we can only do that if our data and its supporting applications and services are available.

Veeam’s data fabric

I’ve written about data fabric before, normally in the context of NetApp, so it was interesting to see Veeam using the same language. But again, they are right: we can no longer design a data infrastructure that includes silos. The places where our data lives cannot be disjointed from our wider infrastructure; it needs to be flexible and mobile, so that our data and supporting services are where we need them, when we need them.

Veeam’s focus on easily moving data around, from our physical servers, to virtual, to cloud, was clear, supported by announcements like native public object storage support, Office 365 backup and protection using both Azure and AWS. The ability to make both our data and services quickly available in all of those areas, as well as to move between them, is quite compelling.

Broadening the conversation

This strategic shift from Veeam is not just technically useful. Whether you are Veeam, a Veeam partner, using Veeam or considering it, it encourages you to take a wider view of your data protection strategy; it stops us focussing on “backing up stuff” and pushes us to do the thing we really need to do: focus on the availability of our systems.

I think we still see many people focussing just on data protection, and although that is still important, it does sometimes mean we are blinkered, not considering the wider services needed to support our data and allow our businesses to be quickly operational again in the event of a service interruption.

Staying relevant?

Personally, I think Veeam’s messaging was exactly where I’d hoped it would be: recognising the changing world, and talking about problems I recognise and that I see our customers experiencing and looking to deal with.

It’s also good to see them not only embracing trends such as cloud and object storage, but also recognising gaps; adding agents to allow more comprehensive physical server protection, for example, is important as Veeam aim to deliver services to larger enterprises.

Of course, the trick with all of this is not the messaging; it will be in the execution.

For now, Veeam are still delivering a product that their customers love and that “just works”, and if they can do the same in all of these wider areas, then Veeam will be relevant for a long time to come.

Keep on Veeaming ON!

If you want some more thoughts from Veeam, why not catch up on my Tech Interviews podcast, where I spoke with Veeam’s Director of Technical Product Marketing & Evangelism, Rick Vanover. We discussed future strategy, some of the announcements from the event and what more we can expect from Veeam in the future. Give it a listen.

Remaining relevant in a changing world – Rick Vanover – Ep28

One of the biggest challenges we face in technology is constant change. Change is not bad of course, but it presents challenges, from upgrading operating systems and applications, to integrating the latest technology advancements, to responding to new business problems and opportunities.

But it is not only those implementing and managing technology who are affected.

Technology vendors are equally affected; the IT industry is full of stories of companies who had great technologies but were then blindsided by a shift in the needs of their customer base, or a technology trend that they failed to anticipate.

It was with this in mind that I visited Veeam’s VeeamON conference.

Veeam are a technology success story: a vendor who arrived into the already established data protection market and shifted how people looked at it. They recognised the impact virtualisation was having on how organisations of all types were deploying their infrastructures, and how traditional protection technologies were failing to evolve to meet these new needs.

Veeam changed this, and that is reflected in their tremendous success over the last nine years; today they are a $600M+ company with hundreds of thousands of customers. But the world is now changing for them too. As we start to move more workloads to the cloud, as we want more value from our data, as security starts to impact every technology design decision, and of course as we all live ever more digitally focussed lives, our needs from our systems are changing hugely.

How are Veeam going to react to that? What are they going to do to continue the success they’ve had and to remain relevant in the new world that much of their market is shifting into?

For this week’s podcast, I look at that very question and discuss Veeam’s future with Rick Vanover, Director of Technical Product Marketing & Evangelism. Rick is a well-known industry face and voice, and we had an excellent conversation looking at Veeam’s future aims.

We discuss their repositioning as an availability company and look at how Veeam are developing a range of solutions to give them an availability platform: one that will allow their customers to build a strategy to protect their critical data assets across a range of different data repositories, and to move their data seamlessly between them.

We also take a look at some of the big announcements from the show and pick out our top new features.

In my opinion, Veeam’s strategic vision is a good one. The ability to provide organisations with the data protection they need regardless of data location, and the ability to move data between those locations, is important; but, as ever, remaining relevant will be dictated by their ability to execute that vision.

Hope you enjoy the show.

To find out more about Veeam, you can of course check out their website www.veeam.com and engage with them on twitter @veeam, and if you want to catch up with Rick, he can also be found on twitter @RickVanover.

Over the next couple of weeks we will be looking more at availability and protection, as we talk with the wider technology community as well as industry analysts on how they see the evolving data market.

To catch those shows, subscribe in all the normal ways.

Oh and I hope you like the new theme tune!

Thanks for listening.

Subscribe on Android

http://feeds.soundcloud.com/users/soundcloud:users:176077351/sounds.rss

Veeam On It – Day Two at Veeam ON

Day two of Veeam ON is in the can, and it was a big day for their core product, Veeam Availability Suite, with the announcement of Version 10 delivering some key new functionality. There were also some smart additions to the wider Veeam platform family, but more on those at a later date.

Let’s start with Availability Suite V10, still very much at the core of what Veeam are delivering:

Physical Servers and NAS

While Veeam introduced the ability to back up physical servers with their free endpoint protection tool, V10 sees that capability more tightly integrated into the suite. This, together with the addition of agents for both Windows and Linux, strengthens their capabilities in the wider enterprise, allowing Veeam to truly move beyond just virtual machine workloads.

NAS support is also a very welcome addition, allowing direct interaction with data housed on those enterprise storage repositories holding terabytes of unstructured data. Previously, in the Veeam world, the only way to protect that data was if it resided on a Windows file server, and for many of us that’s just not the case.

Although these are great additions, I don’t think I’m being overly harsh in suggesting they are “table stakes”: fleshing out the suite to capture as many potential data sources as possible and really bringing it in line with most of the enterprise data protection market.

But the announcements did more than just fill gaps, recognising critical business challenges and embracing key developments in how we store our data more effectively.

Continuous Data Protection

Some workloads in a business are a real challenge to protect; their availability is so critical to the business that they have the most stringent recovery point and recovery time objectives, tolerating close to zero outage and data loss.

Often this is dealt with by the application design itself, taking advantage of clustering and multiple copies of data across the business (think SQL Server Always On and Exchange DAGs, for example). But what if your application doesn’t allow that? How do you protect that equally critical asset?

CDP is the answer. Currently limited to virtual machines hosted within a VMware environment (because it exploits specific VMware technologies), CDP provides continuous protection of that key workload. In the event of a critical failure, not only can Veeam now make that workload quickly available again, but data loss will be only a matter of seconds, allowing us to meet the most stringent of service levels for those critical applications.

Object and Archives

My personal favourite announcement is the addition of native object storage support in V10. Object storage is becoming the de facto standard for storing very large datasets that need long-term retention, and it is the basis of storage for public hyperscale providers such as Microsoft and Amazon.

The addition of native support, alongside new backup archiving capabilities, really starts to introduce the possibility of a backup fabric: from on-prem production, to backup repository, off to cloud for cheap and deep long-term retention.

Delivering that without the need for large and expensive third-party cloud gateway appliances is a real plus.

The critical inclusion of S3 support also means that if you are already deploying any of the leading object storage platforms in your current infrastructure, as long as they support S3 (and those leaders do), you can hook your Veeam data protection strategy straight in.
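To make that S3 point concrete, here’s a minimal sketch using the AWS boto3 SDK. This isn’t Veeam’s own configuration, and the endpoint URL, bucket name and credentials are invented for the example; the point is simply that the same client code talks to AWS S3 or to an on-prem S3-compatible object store just by changing the endpoint:

# Illustrative only: the same S3 API calls work against AWS or any
# S3-compatible on-prem object store just by changing the endpoint.
# The endpoint, bucket and credentials below are made up for the example.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://objectstore.internal.example.com",  # omit for AWS itself
    aws_access_key_id="BACKUP_ACCESS_KEY",
    aws_secret_access_key="BACKUP_SECRET_KEY",
)

# Confirm the archive bucket is reachable before pointing a backup job at it.
print("Available buckets:", [b["Name"] for b in s3.list_buckets()["Buckets"]])

# Write and read back a small test object to validate the repository target.
s3.put_object(Bucket="backup-archive", Key="healthcheck.txt", Body=b"ok")
print(s3.get_object(Bucket="backup-archive", Key="healthcheck.txt")["Body"].read())

That compatibility is what makes “cheap and deep” long-term retention practical without a bespoke integration for every storage platform.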

Veeam have certainly fleshed out version 10 nicely, adding some missing functionality, but also dealing with some tricky availability challenges, while embracing some of those emerging storage technologies.

And that’s just the Availability Suite, more to come on some of the wider announcements – but now, time for day 3…


Veeaming On and On and On

I always knew this tech community lark (blogging, podcasting, engaging with the community) would get me into trouble, and finally it did! I got invited out to New Orleans to attend Veeam’s technical conference VeeamON, so admittedly it’s hardly a punishment!

As nice as being invited to the beautiful city of New Orleans is, the actual attraction for attending is Veeam themselves.

Many of our customers run Veeam solutions in their datacentres and have only good words to say about their backup and recovery tools, and that is reflected in the company’s impressive growth over the last nine years: from a niche virtual machine backup and recovery tool to one of the standard tools you expect to see in a data protection strategy.

However, there are challenges. The world, as we know, is evolving; we are becoming more data-centric and increasingly focussed on treating our data like the organisational asset it is, making sure it is protected, available, secure and private.

That’s not the only challenge. As cloud solutions become ever more pervasive, organisations are consuming more of their technology via service offerings: Office 365, AWS, Azure, Salesforce, etc., the list goes on.

That’s a problem

If you are a traditional backup and recovery company, famed for your ability to protect virtual machines normally residing on-prem, what happens when your market begins to change and move away from the things that have brought you success? When your traditional customers see data as an asset, want to move it around between all sorts of data stores, want to extract valuable information from it, and want to move more and more infrastructure out to service providers and store data in other locations, what place is there in that world for the traditional backup and recovery company?

It’s that question that encouraged me to pop over the Atlantic to VeeamON 2017 to see if and how Veeam are going to answer it.

Over the next couple of days, then, I’ll provide some immediate thoughts on what I hear about their future direction.

Day one, as with any vendor conference, was littered with announcements. I won’t regurgitate them here; you can see the full range of announcements on Veeam’s website. Rather than focus on those, I wanted to share a more general view on what Veeam shared throughout the keynote, as well as in some of the other sessions on show.

All about availability

There’s certainly been a shift in the way Veeam position themselves; the focus on availability is foremost in the conversations I’ve had today, as well as of course in that press release. As subtle as this change from focussing on “backup and recovery” is, I do believe it’s a fundamental shift. A couple of weeks back I did a podcast episode with Veeam’s Michael Cade where we discussed availability as part of digital transformation (you can find the podcast here for the details): as we become ever more reliant on our digital lives, either as consumers or of course as businesses and organisations, our tolerance of system failure is very low indeed.

In a world where we need to be able to respond quickly to challenges, customers and changing markets, an inability to have our key data available is crippling, and this subtle shift does allow us to change the conversation: to start focussing beyond protecting data and look at protecting the entire system that sits around it.

There were two other phrases from the main stage today that really stuck with me and provide, I think, a good indication that Veeam are, strategically at least, focussed in the right area.

During the panel discussion on stage, HPE’s Patrick Osborne said he saw Veeam as “an enabler for data movement”. What he meant by that was that in a world of many potential homes for our data, it’s vitally important we design a strategy that allows us to move it seamlessly to the locations where we need it, when we need it.

Many of you know my feelings on the importance of building this kind of fabric strategy, where, regardless of location (on-prem, whitebox, as a virtual storage appliance, or sat near to or in the cloud), we need to be able to move our data while maintaining full control over it. It’s interesting that this is something Veeam are enabling for vendors like HPE.

My favourite quote of the day came from Mario Angers of the University of British Columbia, at the end of a discussion about one of the day’s other key themes: the focus on business outcomes, understanding the business problem you are up against and solving that problem. Or, as Mario perfectly summed it up, “fix my problem and I’ll buy it!” Maybe something for us all to remember, whether we are selling technology to a customer or we are a buyer of technology selling the solution into our organisation: understand the problem and solve it!

A final thought for this first day comes from the final session of the keynote, a panel session with Veeam’s leadership.

What struck me was the background of some of the more recent talent to be acquired by the company: Peter McKay from a senior role at VMware, Paul Mattes from a senior role at Microsoft, and Danny Allan from a senior security role. Acquiring staff can be a challenge, but acquiring staff from companies that many aspire to be a part of raises a question: what is the opportunity they see at Veeam that would make them take that leap?

Maybe over the next couple of days that will become clear, but for now it does make you think there are some exciting times ahead for Veeam, and today it sounds like the strategic view is right. Of course, the proof, as always with companies moving into a new period in their history, will be in the execution.

More from VeeamON tomorrow.

Thin Clients, are they still a thing? – Tony Main – Ep27

The IT world is a funny place. We love a bit of innovation: sometimes it’s new and revolutionary, sometimes it’s something whose time has come, and sometimes things just keep coming back and remaining relevant, but perhaps relevant in new and different ways.

One such technology is the focus of this week’s podcast as I am joined by Tony Main of Italian thin client vendor Praim.

I’ll be honest, I was a little sceptical of how interesting talking about thin client tech would be, hence the title. Let’s face it, we like our devices; we become attached to them, be it a smartphone, a Mac, or, like me, my Surface Pro. When that device does everything I need, and I have full freedom to connect to what I want from where I want with my flexible and portable device, what on earth would I want with a device that doesn’t do anything!

The more I thought about it, the more fascinated I was to speak with Tony to find out if, why and how thin client devices still have a place in a modern IT infrastructure.

We recorded this episode a couple of weeks ago, but coincidentally, thanks to the global news that is the WannaCrypt ransomware outbreak, this episode became quite timely as we touch on how thin client tech can be a key component in a data security plan.

Do thin clients have a place in modern IT? During this episode we explore that very topic.

But we start out by looking at the history of thin client devices, from relatively dumb, low-powered endpoints to the modern, high-powered, flexible devices we see today.

We look at how the modern technology shift to data mobility, ensuring our data doesn’t reside on any one device, plays very much to the strength of thin client.

Tony shares how the maturity in VDI technology is also changing how organisations view their desktop and application deployment methods making thin client a more attractive proposition.

We discuss how key data trends, such as analytics and security, sit well with a thin client model, and Tony shares some use cases showing how people are adopting thin clients. This includes how a move to mobility of experience, rather than of device, is also making thin client an interesting option.

We wrap up discussing how Praim are looking at the secure desktop market, taking what they know from thin clients and helping organisations repurpose desktops to extend their life, provide a better experience and deliver a more secure and manageable solution.

Are thin clients still a thing? Have a listen to the show and then why not share your thoughts with me on twitter @techstringy, or leave a comment with the show notes.

Why not go and find out a bit more about Praim on their website here at https://www.praim.com

You can also follow Praim on twitter @PraimSrl

If you enjoyed this episode of Tech Interviews, then why not subscribe on iTunes, SoundCloud or wherever else you get your podcasts:

Subscribe on Android

http://feeds.soundcloud.com/users/soundcloud:users:176077351/sounds.rss