Don’t get caught out by the unexpected – Steve Lambert – Ep 75

In the last couple of weeks the world has shown that the only predictable thing for many of us who deliver technology is the unpredictability of what we have to deal with. From the massive data breach at British Airways to the catastrophic impact of hurricanes on both the western and eastern sides of the globe, these are incidents we should be prepared for. The question is, are we?

If your organisation was impacted by something like a hurricane, causing flooding and power outages, how would you react? If you’d suffered a data breach, what would you do? Who would you turn to? What’s the plan?

Planning for these incidents is a crucial part of modern business practice. In some cases it is mandated; in others we appreciate the value of planning and develop continuity and incident response plans. However, some of us don’t have one, or if we do, we are not sure where it is, or whether it works!

So, what if you don’t have a plan, or are not sure whether your plan has value? Then this episode of the podcast is for you, as we look at business continuity planning with my guest, continuity planning consultant Steve Lambert of Biscon Planning.

Steve has many years’ experience in the industry, both at Biscon and previously in local government emergency planning. In this episode Steve shares his business planning experience to outline some of the steps you should be taking to ensure that, in the event of an “incident”, you have a plan to overcome it and don’t get caught out.

I chat with Steve on a range of topics: why we need a plan at all, and how continuity planning goes beyond IT. We discuss the types of incidents you need to plan for and compare the differences between operational and enterprise risks.

We look at the evolving incident landscape and how data breach is now a key part of continuity planning. Steve then takes us through some of the steps you need to consider when building a plan, from understanding risk appetite to impact assessment. We also look at the importance of testing plans, and crucially, not only your own plans but those of your suppliers: if they have a critical failure, do you know how it impacts you?

We wrap up by looking at some practical steps, including how Biscon can help you with a free review and ways you can highlight the importance of planning across your business.

The importance of incident planning should not be underestimated, and Steve provides some great tips on how to build and test your plans.

To find out more about Biscon and their services you can visit them at https://www.biscon.co.uk/ and follow them on Twitter @bisconplanning.

If you’d like to test how you would respond to an incident, you may like to follow this scenario shared recently on the BBC’s website.

Until next time – thanks for listening


Logging and learning your public cloud – Colin Fernandes – Ep 74

In the last of our series looking at the shift to public cloud, we discuss getting the best from your cloud and the value of understanding the behaviour of your cloud infrastructure.

Initially, the move to cloud was seen as a way of delivering lower-cost infrastructure or test and dev environments. However, this is beginning to change; today more than ever the move is driven by agility, flexibility and reduced time to delivery, with a focus on outcomes rather than cost and technology. This shift is positive: technology investments should always be about the outcome and a broader end goal, not technology adoption for technology’s sake.

When attempting to achieve these outcomes it’s important that our platforms are performing and delivering in the way we need them to. The ability to log, analyse and gain useful insight into the performance of our estate is therefore a crucial part of making sure our public cloud investment is successful.
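To give a flavour of what that kind of log analysis looks like in practice, here is a minimal, purely illustrative sketch that computes an error rate per service from structured log lines. The log format and field names are invented for the example; they are not Sumo Logic’s actual schema or API.

```python
from collections import Counter

def error_rate_by_service(log_lines):
    """Count ERROR entries per service from 'service level message' lines."""
    totals, errors = Counter(), Counter()
    for line in log_lines:
        service, level, _ = line.split(" ", 2)
        totals[service] += 1
        if level == "ERROR":
            errors[service] += 1
    return {svc: errors[svc] / totals[svc] for svc in totals}

logs = [
    "checkout INFO order accepted",
    "checkout ERROR payment gateway timeout",
    "search INFO query served",
    "search INFO query served",
]
print(error_rate_by_service(logs))  # {'checkout': 0.5, 'search': 0.0}
```

Even a toy like this shows the principle: once logs are centralised and parsed, a spike in one service’s error rate becomes an insight you can act on rather than noise in a file.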

On this show I’m joined by Sumo Logic’s Colin Fernandes as we look at public cloud, the value of what it delivers and how an understanding of its performance is crucial not only to achieving desired outcomes, but to doing so while still meeting those ever-critical security and governance requirements.

Colin is a self-proclaimed IT veteran with 32 years’ experience in the industry, starting out at ICL and arriving at Sumo Logic via the likes of IBM and VMware. That longevity puts Colin in a great position to comment on what he sees in today’s market and how cloud has disrupted, and is still disrupting, our use of technology.

We start by looking at the similarities Colin sees between today’s shift to cloud and those early days with VMware. We also discuss how organisations are starting to look at cloud as a way to drive new applications and innovation, and how this is as much about a cultural shift as it is about technology.

We chat about big shifts in focus, with the adoption of serverless and modern design architectures such as containers and the increasingly pervasive ability to utilise machine learning and analytics. We also explore the problems that come with cloud, particularly those “day one” problems of monitoring, security and compliance and why it’s critical that security be part of the cloud delivery cycle and not an afterthought.

We finish up talking about Sumo Logic and what they bring to the market and how their ability to analyse and use data from their customers can provide them with the valuable insight needed to achieve value from their cloud investment.

This is a great time to find out more about Sumo Logic, as this week (starting 12th September 2018) they hold their annual user conference, Illuminate. You can track the event via their live keynote stream on www.sumologic.com, where you can also find more info about what they do.

If you want to follow up with Colin, you can find him on LinkedIn as well as via email at cfernandez@sumologic.com.

I really enjoyed this chat; Colin’s experience in the market means he provides valuable insight into public cloud and how to get real value from it.

Next time we are looking at the world of incident management, how to plan for it and how to ensure a technology disaster or data breach doesn’t catch you out.

Until then, thanks for listening.

Managing multiple clouds – Joe Kinsella – Ep 73

This show was recorded before the announcement on August 27th, 2018 of CloudHealth Technologies’ acquisition by VMware.

This is the third in our series looking at the move to public cloud, the challenges involved and some of the tips and technologies that can help you overcome them. In this episode we look at perhaps the biggest challenge facing most organisations moving to public cloud: the issue of multi-cloud.

A few weeks ago I published a post about multi-cloud becoming the technology industry’s holy grail (Tech and the holy multi cloud grail), as vendors look at ways to extract the complexity from multi-cloud environments and allow us to build solutions that encompass our infrastructure, be it on-prem, in a co-lo or with a public hyperscale provider. The benefits of multi-cloud deployments are many, and they will be a major part of our future use of cloud.

On this week’s show we look at those issues surrounding multi-cloud, and particularly how to manage it, maintain cost efficiency, and govern and ensure the security of our cloud-based assets. To discuss this I’m joined by Joe Kinsella, CTO and founder of CloudHealth Tech, a company that has built a platform to pull together information from numerous environments, consolidate it into one place and allow you to make informed, proactive decisions to ensure you use your technology in the best way you can.

During the episode we explore some wide-ranging topics. We look at why complexity is an issue, and how multi-cloud was initially “stumbled upon” but is now becoming a chosen strategy. We ask why we don’t expect cloud to be complex when much of what we do in our datacentres is very complicated. Joe also confesses that 3-4 years ago he was predicting the death of the on-prem DC, and explains why he has re-evaluated that view as hybrid becomes the deployment reality.

We also discuss the traits of a successful multi-cloud deployment, and why a cloud-first strategy isn’t about putting everything in the cloud, but more about asking: can we use cloud? Should we use cloud?

We wrap up discussing the CloudHealth Tech platform, what it does and how it helps to manage a multi-cloud environment by pulling together clouds, on-prem and automation platforms, connecting all the information to provide the business insights needed for proactive decision making. Finally, we look at the maturity of cloud management and how it needs to move beyond cost control and embrace security and governance as the evolution of multi-cloud management.

Joe gives some great insight and CloudHealth Technologies deliver a very powerful platform, so powerful that VMware saw fit to acquire them.

To find out more about CloudHealth Tech you can visit their website www.cloudhealthtech.com

Follow them on Twitter @cloudhealthtech

You can find out more from Joe on Twitter @joekinsella, his CloudHealth Tech blog www.cloudhealthtech.com/blog and finally his own blog hightechinthehub.com.

Enjoy and thanks for listening.

Assessing the risk in public cloud – Darron Gibbard – Ep 72

As the desire to integrate public cloud into our organisations’ IT continues to grow, maintaining control and security of our key assets is a challenge, but one we must overcome if we are going to make cloud a fundamental part of our future IT infrastructure.

The importance of security and reducing our vulnerabilities is not, of course, unique to public cloud; it’s a key part of any organisation’s IT and data strategy. However, the move to public cloud does introduce some different challenges, with many of our services and data now sitting well outside the protective walls of our datacentre. This means that if our risks and vulnerabilities go unidentified and unmanaged, we can be exposed to major and wide-reaching security breaches.

This week’s Tech Interviews is the second in our series looking at what organisations need to consider as they make the move to public cloud. In this episode we focus on risk: how to assess it, how to gain visibility into our systems regardless of location, and how to mitigate the risks that our modern infrastructure may come across.

To help discuss the topic of risk management in the cloud, I’m joined by Darron Gibbard. Darron is the Managing Director for EMEA North and Chief Technology Security Officer for Qualys. With 25 years’ experience in the enterprise security, risk and compliance industry, he is well placed to discuss the challenges of public cloud.

In this episode we look at the vulnerabilities that a move to cloud can create as our data and services are no longer the preserve of the data centre. We discuss whether the cloud is as high a risk as we may be led to believe, and why a lack of visibility into risks and threats is more of a problem than any inherent risk in a cloud platform.

Darron shares some insight into building a risk-based approach to using cloud and how to assess risk, and why understanding the impact of a vulnerability is just as useful, if not more so, than working out the likelihood of a cloud-based “event”.
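To make that idea concrete, here is a minimal sketch of impact-weighted risk scoring. The 1-5 scales, the weighting and the example risks are all illustrative assumptions of mine, not a Qualys methodology; the point is simply that weighting impact above likelihood changes how you prioritise.

```python
# Illustrative only: scales, weights and risk names are invented,
# not drawn from any vendor's risk framework.
def risk_score(impact, likelihood, impact_weight=2.0):
    """Score a risk on 1-5 scales; impact counts double by default."""
    if not (1 <= impact <= 5 and 1 <= likelihood <= 5):
        raise ValueError("impact and likelihood must be on a 1-5 scale")
    return impact * impact_weight + likelihood

risks = {  # name: (impact, likelihood)
    "unpatched public VM": (5, 3),
    "misconfigured bucket": (4, 4),
    "stale IAM key": (3, 2),
}
ranked = sorted(risks, key=lambda r: risk_score(*risks[r]), reverse=True)
print(ranked)  # ['unpatched public VM', 'misconfigured bucket', 'stale IAM key']
```

Note how the high-impact, moderate-likelihood item outranks the more likely but less damaging one, which is exactly the behaviour an impact-led approach is after.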

We wrap up with a discussion around Qualys’s five principles of security and their approach to transparent orchestration, ensuring that all this additional information we can gather is used effectively.

The challenges around vulnerability and risk management when we move to public cloud shouldn’t be ignored, but it was refreshing to hear Darron present a balanced view, arguing that the cloud is no riskier than any enterprise environment when managed correctly.

Qualys are an interesting company with a great portfolio of tools, including a number that are free to use and can help companies of all sizes reduce their risk exposure both on-prem and in the cloud. To find out more about Qualys, visit www.qualys.com.

You can also contact Darron by email dgibbard@qualys.com or connect with him on LinkedIn.

Thanks for listening.

For the first show in this series, check out – Optimising the public cloud – Andrew Hillier – Ep 71

Optimising the public cloud – Andrew Hillier – Ep 71


The move to public cloud is nothing new; many companies have moved, or attempted to move, key workloads into the big hyperscale providers (AWS, Azure, Google and IBM), but for some it has been a mixed success.

Some things, of course, move easily, especially if your initial forays into cloud are via software-as-a-service (SaaS) platforms such as Microsoft Office 365 and Salesforce. But if you’ve looked to move more customised or traditional workloads, you’ll face a whole new set of challenges.

We have probably all heard of cloud projects (or maybe even had projects) that have not gone to plan. This can be for a range of reasons: cost, technical difficulties, performance; there is a long list of reasons cloud projects don’t go the way that’s expected. But at the heart of many of those projects is the presumption that cloud is both cheap and easy. It comes as quite the shock when we discover it isn’t!

However, things may be about to change, as a new wave of technology companies is emerging to address the highly complex world of public cloud platforms. These companies are looking to abstract some of the complexity away from the enterprise solutions architect and provide tools that assist with decision making and design, using a mixture of analytics, intelligence and human interaction to address the complexity of moving to the cloud.

This week is the first in a few shows where we look at the complexity of using public cloud and chat with some of the technology companies who are trying to address these challenges, taking fresh approaches to the problem and aiming to make the cloud experience better, both technically and commercially.

In this first show I’m joined by Andrew Hillier, co-founder and CTO at Densify. Densify have taken a fascinating approach to the problem, built on Andrew’s long and strong analytics background.

Densify uses a robust analytics platform to build a full understanding of the workloads that have moved to the cloud, develops a performance profile, then automatically modifies those applications to take full advantage of the cloud platform they are running on, ensuring they are optimised for the right services and the right commercial cost models.
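To illustrate the kind of decision such a platform automates, here is a deliberately simplified rightsizing sketch: given a workload’s peak vCPU usage, pick the cheapest instance that covers it with some headroom. The instance names, prices and headroom factor are invented for the example and bear no relation to Densify’s actual catalogue or algorithms.

```python
# Hypothetical instance catalogue: (name, vcpus, hourly_usd). Invented values.
CATALOGUE = [
    ("small", 2, 0.05),
    ("medium", 4, 0.10),
    ("large", 8, 0.20),
]

def recommend(peak_vcpu_used, headroom=1.2):
    """Return the cheapest instance covering peak usage plus 20% headroom."""
    needed = peak_vcpu_used * headroom
    for name, vcpus, price in sorted(CATALOGUE, key=lambda i: i[2]):
        if vcpus >= needed:
            return name
    return None  # nothing in this family fits the workload

print(recommend(3.0))  # peak of 3 vCPUs * 1.2 headroom -> "medium"
```

The real problem is far harder, spanning memory, storage, network, reserved-instance commitments and shifting price lists across providers, which is precisely why analytics platforms in this space exist.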

One unique aspect of their platform is the Densify advisor, which takes this analytics model and pairs it with a human being who works closely with the customer, taking them through what the analytics platform has discovered and ensuring they understand any optimisation approach and its impact.

If that sounds interesting then dive in as we discuss a wide range of topics, including why public cloud is complicated, why it should never be about the money alone, the limitations of first-generation approaches to optimisation, and how one of the biggest reasons cloud projects fail is that people buy the wrong cloud stuff!

Andrew provides some valuable insights and shares what is a pretty smart approach to the problem.

If you want to understand more about Densify you can visit densify.com

Find them on Twitter @densify

Or on Instagram densify_cloud

Thanks for listening