This week Chris and Martin talk to Eran Brown, EMEA CTO at Infinidat, about the challenges of ransomware and how to expedite recovery. Ransomware is a big challenge for IT organisations as hackers attempt to extort money from businesses of all kinds, from large enterprises to healthcare organisations. However, is the traditional route of “restore from backup” actually practical, or do IT organisations need a more comprehensive set of data protection and recovery techniques?
Eran sets the scene by describing the impacts of ransomware attacks on businesses. Companies with customer-facing websites risk lost revenue and customers who will go elsewhere if their services are down. It’s interesting to note that neither Chris nor Martin has seen a full “restore from backup” scenario in an IT organisation. In many respects, ransomware is a subset of a range of data loss scenarios. The aim for any organisation is to bring systems back into operation as fast as possible, and that means using a range of techniques.
Infinidat provides a set of data protection capabilities that includes snapshots, data replication and InfiniGuard, a purpose-built data protection platform. InfiniGuard uses the temporal locality capabilities of InfiniBox to ensure that data recovery from disk is achieved at high speed, the point at which performance matters most.
Eran explains how the Infinidat portfolio can be used to reduce recovery times and bring data back within service level objectives. This includes proactive monitoring, which will be key in ensuring IT organisations identify ransomware attacks and work quickly to resolve them.
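The snapshot-inflation signal discussed in the episode can be sketched in a few lines of Python. This is purely illustrative – the function, thresholds and data below are hypothetical, not part of any Infinidat API – but it shows the core idea: a snapshot whose delta size spikes while its compression ratio drops is a ransomware candidate, because encrypted data is both new and incompressible.

```python
from statistics import mean, stdev

def ransomware_suspects(snapshots, growth_sigma=3.0, min_ratio_drop=0.2):
    """Flag snapshots whose delta size spikes while compression worsens.

    snapshots: list of (name, delta_bytes, compression_ratio) tuples,
    oldest first. compression_ratio is logical/physical size, so a
    lower value means the data compresses worse (e.g. it was encrypted).
    Thresholds are illustrative defaults, not tuned values.
    """
    if len(snapshots) < 4:
        return []  # not enough history to establish a baseline
    deltas = [s[1] for s in snapshots]
    ratios = [s[2] for s in snapshots]
    # Baseline from all but the most recent snapshot.
    baseline_delta = mean(deltas[:-1])
    spread = stdev(deltas[:-1]) or 1.0
    baseline_ratio = mean(ratios[:-1])
    suspects = []
    for name, delta, ratio in snapshots:
        delta_spike = delta > baseline_delta + growth_sigma * spread
        ratio_drop = ratio < baseline_ratio * (1 - min_ratio_drop)
        if delta_spike and ratio_drop:
            suspects.append(name)
    return suspects

history = [
    ("mon", 10_000, 2.1), ("tue", 12_000, 2.0), ("wed", 9_000, 2.2),
    ("thu", 11_000, 2.1),
    ("fri", 95_000, 1.05),  # large, incompressible delta: suspicious
]
print(ransomware_suspects(history))  # → ['fri']
```

A real implementation would work on per-dataset telemetry over months of retention, as described in the episode, rather than a handful of points, and would alert an operator instead of returning a list.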
You can find more about InfiniGuard at the Infinidat website, including the white paper Eran references in the discussion.
Elapsed Time: 00:28:03
- 00:00:00 – Intros
- 00:01:25 – Ransomware 101
- 00:03:05 – Backup systems can be a weak link
- 00:03:45 – What is the financial impact of ransomware?
- 00:07:00 – Ransomware is a subset of other data loss scenarios
- 00:08:30 – Snapshots are a great initial recovery solution
- 00:10:35 – Recovery uses a mix of solutions
- 00:11:30 – InfiniGuard – a recovery-focused data protection solution
- 00:14:00 – We should be focusing on the performance of recovery not backup
- 00:15:00 – InfiniGuard uses InfiniBox and temporal locality to accelerate restores
- 00:18:00 – InfiniGuard pre-fetches data into cache to speed up performance
- 00:20:00 – Customers can use primary and secondary platform snapshots
- 00:21:30 – Increased data volumes in snapshots can indicate ransomware
- 00:27:05 – Wrap Up
Chris: Hi. This is Chris Evans, recording another Storage Unpacked podcast, and, as usual, I’m here with Martin. How are you doing, Martin?
Martin: Hey, Chris. How you doing?
Chris: Yeah, I’m not too bad. Thanks. This week, we have a returning guest, Eran Brown from Infinidat. Hi, Eran.
Eran: Hi, guys. Thanks for having me back again.
Chris: Really looking forward to it. What is this now? Number three, I think, you’ve been on?
Eran: I think so. Yes.
Chris: Yeah. Excellent. So you’re an expert. You know exactly how we do it. No problems there.
Eran: I hope so.
Chris: Good. Okay. We wanted today to spend some time talking about ransomware and processes for actually solving or at least avoiding ransomware, and, in particular because we’re going to talk about your products today and explore what Infinidat is doing to help customers solve that problem, we’re going to talk a bit more in detail about what we’re seeing out there and how your products and solutions are fixing that problem.
Chris: So why don’t we start with the subject of ransomware in the first place and just try and level-set everybody about where we are and how ransomware’s developed. Eran, do you want to pick that one up?
Eran: Sure, thank you. So unless you’ve been stuck under a rock for the last couple of years, you already know what ransomware is and how attackers will try to get you to pay them for unlocking your data. What has really changed in the last couple of years is the type of attacks that we’re seeing, whether it’s state-sponsored attacks to make money for unnamed states – we won’t try to implicate anybody today – coordinated, wide-scale attacks, multi-vector attacks, et cetera. But attackers have also moved to much more sophisticated approaches: silently encrypting the data very slowly in the background, encrypting only subsets of the data so they remain undetected, or attacking service providers instead of end users so that they have a wider impact and get more people to pay them to unlock their data.
Eran: So multiple things are happening in that arena, and the whole cyber security arena as a whole is very interesting right now, but for enterprises that rely more and more on their data to drive their business, that is definitely an interesting threat to keep a close watch on.
Chris: Absolutely. Martin, from your perspective as an actual user, what’s your view? Have you looked at it and thought there is an increasing level of sophistication that you need to be even more aware of?
Martin: Yeah, definitely. The threat level is increasing with the number of attacks we’re seeing and the amount of things appearing out in the wild. We’re also expecting a big increase in January as things like Windows 7 and Server 2008 drop out of support and people stop paying for their security patches from Microsoft. So there’s always things. It seems to be a bit of a perfect storm for ransomware at the moment.
Chris: Are people seeing this? Eran just mentioned that attackers are targeting the backup system, destroying or damaging the backups so that when the primary data is corrupted, the backup data isn’t there to do the restore.
Martin: Yeah, backup systems are often a weak link. By their very nature, they’re a good target: obviously, you’ve got all the backup data there, but backup systems also often have access to every other system in your environment. So if they’re not well secured, they’re a great thing to attack.
Chris: Absolutely. So let’s go back and talk about the impact again, because I think we’ve only touched on it very briefly there, Eran. Ultimately, as we said, the world now expects you to be selling and delivering your products 24 hours a day, and if you’re not doing that, that’s a massive issue for businesses.
Eran: I couldn’t agree more, and there’s one particular study that I think really demonstrates that with one figure. Google ran a study to understand how quickly users abandon websites that take a long time to load, and they, of course, monitored many websites across the web and through Google Chrome. Especially for mobile users, who are the vast majority of users, it only takes three seconds for more than half of your users to abandon the website.
Eran: That, I think, signifies more than anything the instant-gratification, “I want to have everything immediately” consumer society we live in, and that sets a new bar for businesses in how they plan their infrastructure and the amount of money they lose when that infrastructure fails to support them.
Chris: Yeah. I remember looking at something a few years ago, in conjunction with IBM. They were looking at how they could speed up the data they were collecting from websites, and at a huge level of sophistication in the way people click through a website, the path they follow, or their reaction to a web page not loading fast enough, or whatever it happened to be. So, without a doubt, we have incredibly sophisticated users using any application, whether that’s a web page or a mobile app. Very, very sophisticated.
Eran: I think you just gave one of the best examples of what digital transformation really is. It’s if everything’s happening digitally, everything can be tracked and measured and analyzed for constant improvement. But then again, we come back to that increased level of reliance on the data and the infrastructure that runs that data, and that has become a business imperative to be able to recover from all sorts of failures.
Eran: Last time I was here, we spoke about how you handle things as large as a site-level failure, and kind of geo-clustering. But logical failures are no different.
Martin: Your website abandonment figure is rather interesting, because as I watch the way my daughter, who’s 18, surfs and does things, she will abandon a website, but she does go back. So sometimes I wonder if we overrate the losses, although I think, generally, it’s quite a good point. But I do think that people do go back. They say, “Okay, Amazon’s not loading today,” so they go and look at something else, but they come back again. So I think we have to be very careful when we’re quoting some of these statistics.
Chris: I would agree with that, but I would add a little caveat to it, and I think that is if you’re Amazon or if you’re Google, people have a certain tolerance to put up with that. If you’re a website where you’re looking for … Let’s think. I don’t know. You’re searching for a holiday or you’re looking for some other piece of information. If you don’t find it on that website and you know that you could look on the second or third option on the Google search, you’ll go back to that.
Martin: That’s fair enough.
Chris: I agree with you, but I think some people are a special case, and other ones, probably more generically. I think that sort of goes back to Eran’s point.
Martin: Oh, yeah, I would only add that many websites today, many businesses, pay for AdWords or other forms of advertising to get people to the website. So it’s not just the revenue of the business that you’ve lost. You’ve paid for the impression, you got the click, so you’ve already made that payment, but then you don’t get the revenue behind it. So it kind of changes the whole economics of advertising to get people in, plus there’s the reputation component of it.
Eran: Yeah, that’s a good point. Yeah.
Chris: Fair enough. Okay, let’s move on and talk about ransomware. I think we could talk about ransomware in general, but also I think ransomware is part of, I guess, a bigger set of problems that a company might have, in terms of their data being corrupted. You might have the malicious corruption of data through ransomware, for example, but you also might have data corruption for other reasons, and there might be other reasons why you might want to restore some or all of your environment in that scenario, especially if we’re talking about centralized storage here.
Chris: Now, in terms of a toolkit, I think everybody immediately assumes that we’ll recover from ransomware by restoring from backups, and that seems to be a rather one-dimensional idea. I just don’t think it’s as simple as that. What do you think, Eran?
Eran: I have to agree. I think we have to assume different types of failures and build proper recovery mechanisms for each of them. So physical failures have their own redundancies. Logical failures have separate redundancies. Within the world of logical failures, you have an IT admin who is accidentally doing the wrong thing at the wrong time and who may have been wise enough to take a snapshot before he started, just in case something goes sideways, so you can quickly recover from that.
Eran: Then you have ransomware that was kind of lying dormant for six weeks to six months and created a lot of changes in your data. So it really requires a different set of tools, whether they’re quick recoveries from a small change or a small time gap, whether they are larger recoveries – for example, ransomware that’s been dormant for a couple of months.
Eran: It goes back to a couple of questions like, “How quickly do I need to recover? What’s the cost of not being available to my customers? How much data will I have to move around as part of that recovery?” Statistically speaking, the longer time period you recover from, of course, the more data you have to move around.
Eran: This is why snapshots are so efficient: they act as a very efficient first line of defense, allowing you to recover from short periods of time – let’s say within a period of weeks – without moving any data. That usually tackles a lot of your recovery [inaudible 00:08:44], but when you’re talking about ransomware, yeah, this will usually be something that you take out of the backup scenario.
Chris: I find it really interesting when we look at that, though, because I’m trying to think if I’ve ever really had to spend time recovering an entire environment. I don’t know about you, Martin, but whenever I’ve done anything, I’ve recovered individual servers, one or two servers, but I’ve never had to go back to my entire server farm or VM farm and say, “Let’s recover 100 servers or 200 servers.”
Chris: It seems to me that when you look at the time it takes for people to recover from ransomware, reliance on backup means that they do take forever to get back.
Martin: Yeah, I’ve never seen it done. I’ve never seen … Well, I’ve never been involved in a site recovery which involved recovering hundreds of servers. Worst case, we recovered a couple of databases, but I’ve never seen something quite as catastrophic. I really hope I never do, to be honest, because I’m not quite sure how you go about it, certainly, where you’ve got consistency and trying to work out how you’d recover 200 servers into a known state so everything’s consistent with each other.
Chris: There’s the key bit. If you’re recovering stuff, actually, there’s such a huge number of interdependencies, it’s not going to be that straightforward. Now, as an example, in the larger environments that I’ve worked in, we wouldn’t have relied on any sort of backup or restore from the backup system. We would have relied on having replication, and if we didn’t have replication, we would have at least snapshots. So we would have relied on centralized storage to do that, simply because of the SLA we were trying to deliver. I think that’s what you’re saying, Eran, isn’t it – that, by having that mixture of other technologies, you can actually address the SLAs of getting the business back up in different ways?
Eran: Exactly. Different failure scenarios, different recovery mechanisms, and to Martin’s point about never having to restore an entire farm, I agree with you. I think that’s usually more of a physical failure scenario, like your data center was flooded or lost power and you moved over to the DR site. That’s not a logical corruption scenario, at least not commonly.
Eran: What is common, I think, from the logical perspective is … Again, the ransomware example is a perfect one. It was dormant, it was lying around for weeks or months, and then the aggregated change over those six months can be quite substantial to move around using traditional backup solutions that are not snapshots.
Chris: Yep, agreed. So let’s go on and talk about your products, Eran, what Infinidat has got and what you’re doing. Now, we talked about some of this the last time you were on the podcast. Do you want to go back and sort of just remind us, and then we can see where you’ve moved on since then?
Eran: So, usually, when I’ve come on the podcast, most of our discussions have been around our primary tier one storage offering, and here we’re focusing on a separate product, which is called InfiniGuard. InfiniGuard looks at a very specific problem. As you send your backups to your backup target, you try to minimize the cost of backups. You use the cheapest, highest-density media possible, and then you add deduplication and compression and all other types of things, like synthetic full backups, to accelerate your backup cycle.
Eran: However, that creates the obvious problem on the recovery side, because if my backups are deduplicated, it means my data is spread around my backup target in a very fragmented way, essentially. Now, we use the cheapest media, which is spinning drives, to store all of our backups, and these drives were never designed to perform highly random I/O. You need some software above them to provide that layer of efficiency that can drive that performance, because while backup streams are highly sequential, recovery streams are actually not, because I’m recovering some of my backups from last week, some from five days ago, and some from four, three and two days ago.
Eran: So I have to recall and rehydrate all of these backups into one coherent and consistent copy, and that can trigger a lot of very random I/O on the back end. That’s what slows down recoveries. Talk to anybody who’s run large-scale backup scenarios, and they will tell you that their backup speed can be as much as ten times their recovery speed.
Eran: I always joke that nobody ever called their IT department and said, “Hey, nothing broke today. Thanks, guys. Keep up the good work.” IT’s always measured at times of crisis. So nobody cares if all the backups are successful if it takes you too long to recover.
Chris: So the interesting thing there … let me add a little flavor to that. Obviously, when we first started to get the deduplicating appliances – and I didn’t really use too many of them in my career – as you said, the ingestion of data was a lot more efficient than the recovery of data from those platforms. The challenge is that deduplication, by its very nature, causes the data to be more randomized across the whole content of what you have.
Chris: Obviously, recovering a single image, shall we say, which could have scatterings of components from various different backup times and so on, means your recovery is going to be random. Now, what’s your opinion, Martin? I never saw all these products in use, but I do know of another IBM example, where they ended up putting all-flash underneath their product, simply because that was the only way they could get the performance on restore.
Martin: Yeah. So I’ve never really used them in [inaudible 00:13:46], but I can see why. Actually, I think that’s an interesting point. People always talk about the backup environment, but if you talk to anybody who’s worked in backup for long enough, we should be talking about the recovery environment. Too often, what we focus on is how quickly you can back the data up, whereas it should really be how quickly you can recover that data. Because the nature of the workloads is very different – a recovery workload can be very random – we should be focusing more on that, and I would like to see the backup vendors, for instance, pay more attention to the recovery options.
Chris: Yeah. So let’s go into a bit more detail, Eran. As I think about this, the first thing that springs to mind is that I know your primary products use a data reference locality technique for deciding where to place data as it’s being stored so that, when it’s accessed again, you effectively convert what would have been a random request into a more sequential one. That was one of the ways you managed to improve the performance of a disk-based system. Is this using similar technology?
Eran: Exactly. So at the backend of every InfiniGuard, our secondary storage, lies an InfiniBox. It’s the same underlying architecture, in that temporal locality – understanding that backup streams that came in together are very likely to be recovered together – enables things like predictive caching that accelerate the recovery speed a lot more than just a bunch of disks behind some [inaudible 00:15:08] RAID. That really sets the gap between them.
Eran: Going back to what Martin said: one of the most commonly untested scenarios during PoCs, for example, is to run a synthetic full backup over 14 or 20 generations and then test the recovery speed. People usually try to cram these PoCs into a week, so they can only manage a recovery from two, three or five generations. They limit the amount of fragmentation of their backup, and, as a result, they have a very different experience during the PoC as opposed to the real-life deployment. That leads to a lot of frustration.
Eran: I highly recommend customers really extend those PoCs, take a noncritical environment – because this is not your backup solution yet – and back it up for a whole month, incremental, synthetic full backups. Then see what your recovery looks like, and that will tell you the difference between a simple data layout on low-cost drives and an intelligent data layout that does account for things like temporal locality.
Chris: I think that’s a classic example of needing to think through the requirements of a PoC, Martin, isn’t it? I mean, can you imagine the number of people, if you remember on the [inaudible 00:16:20] storage area, that would run a test against an all-flash system, and they wouldn’t saturate the entire drives and push the drives into doing garbage collection? So you’d never see that cliff edge where you drop off at the end. It’s that classic thing of not really thinking through what the PoC needs to test.
Martin: Yeah, I think, if I was being cynical, I think some vendors won’t necessarily encourage you to do that. They’ll want the PoC to be kept as short as possible, because they know if you keep it for a certain number of months, you may not see the performance you saw initially. So yeah, but the time constraints often on PoC aren’t just driven by the customer – also the vendor, who may want to prevent you from hitting some of those cliff edge scenarios.
Chris: Absolutely. Within the product then, Eran, in the InfiniGuard platform, how are you dealing with that randomness factor – the fact that the backup could be incremental over a number of different days and weeks and months, rather than full backups being continually taken? I can sort of see that if you backed up an environment as a full backup, bringing it back would be relatively easy, but the randomness of incremental backups that might span weeks and months seems a lot more complex.
Eran: So there are differences between backup streams that came in together, and you can use temporal locality to place them close together. You can combine that with a lot of predictive caching to try and provide that throughput from DRAM, which is what we always try to do with our primary storage and our secondary storage. We try to pre-populate things into DRAM predictively to help customers get higher throughput.
Eran: Then there are times where you kind of make a huge seek across the install base, where you have to understand, for example, that this was yesterday’s backup and this was the backup from last week, and you’re moving between them. That is a harder thing to do and requires a lot more understanding of your own data layout to be able to quickly jump between them. However, there’s a whole part in our architecture that spreads the data evenly across all the drives.
Eran: So you kind of keep the system evenly balanced at all points, and then you have the benefit of getting more data from more drives concurrently and then kind of drive the throughput using concurrency and not just the data layout component.
Eran: But if I’m looking at a holistic approach, at the end of the day, the easiest way to recover is to not have to move data at all. That’s our snapshot discussion. But when you talk to customers and you ask them, “Why don’t you keep more snapshots?”, it’s usually, “Well, when we’re paying for an all-flash array, the cost of the snapshots ends up being a lot if we keep them for more than a week or two.”
Eran: That goes back to what we always say: instead of providing performance using all-flash, which is kind of throwing expensive media at the problem and just saying, “Put more hardware in,” like putting a bigger engine in a car, we do the same thing with innovation – with software and architecture, essentially. As a result, our customers usually keep a longer snapshot retention time, leading to a lot more recovery scenarios that happen from snapshots. So there are multiple secondary, tertiary effects here from choosing a different technology that enables better protection.
Chris: So how about the combination of snapshots, replication and InfiniGuard? Can I use, for example, snapshots and replication together? In traditional environments, you see people taking snapshots or clones on the remote box – it’s seen as a potentially safe way of securing a copy of data, as long as you can actually get that copy back to the primary side. Can you do that sort of scenario with snapshots?
Eran: Yes. In fact, when we started developing our replication engine six or seven years ago, one of the key requirements we had was that there wouldn’t be an artificial separation between replication for disaster recovery purposes and replication for archival or backup recovery purposes. The use of snapshots actually allows you to be very efficient that way.
Eran: So, for example, maybe you decide to archive all of your snapshots off-site, because you want them to be not just an accessible copy, but also accessible if your main site is down. Some customers might keep a month’s worth of snapshots on their production environment and six months’ worth of snapshots on their secondary system. But if they need to recover from a snapshot that is not within the production retention time, it is still only an incremental copy back from the DR side.
Eran: So yeah, we do see customers relying more and more on snapshots, and one of the benefits is that you can actually see ransomware causing inflation of your snapshots. Because what does ransomware do? It will encrypt your data, which will reduce your data reduction rates, and it will create a higher rate of change. So the bottom line is that your snapshots will inflate, and if you keep a long enough snapshot retention, you can very easily identify ransomware.
Eran: In fact, we have an ongoing research project to use our cloud AIOps to proactively alert our customers: “Hey, that data set seems suspicious, because compression’s going down and the data change rate is increasing. You might want to look at that as a ransomware candidate.”
Chris: I think the angle of using that sort of data on shared storage is a really positive approach. The reason I say that is I compare it to the way we always talked about software-defined storage and how it was going to take over, because people would buy commodity hardware, put the software on top and away we go. But those solutions never get the benefit of the shared knowledge from the platform that gets fed into online services that can do this level of analysis. With enough data, absolutely you can come up with some really interesting pieces, and the snapshot one for ransomware has to be one of the really interesting ones.
Eran: I agree, and we think that consolidating more and more data sets in one single pane of glass could be a key factor in identifying ransomware.
Martin: Yeah. I’ve always wondered why we’ve not seen this happen earlier, because snapshots are a really good way of identifying where you’ve got a rapid rate of change in an environment, and if you’re not used to seeing that, you should be asking why.
Martin: We do it at times. We have looked at snapshots where we’ve seen things grow very, very quickly, or sometimes it’s because somebody’s done something stupid, like deleted a load of files, which they shouldn’t have done, so the snapshot’s picked it up as a change. But it just seems really obvious.
Chris: I do agree with you, and I would say that when I was doing physical backup management on physical servers, which was a long time ago – I won’t say how long ago – but we would look at rate of change of backup, and we would look at rate of change in general across the entire infrastructure to try and get an idea. So we were already looking at that, just simply in terms of capacity management. So it seemed like a logical extension to look at the snapshots.
Chris: But I think the difference now is you have so much more data to use, because you can keep that data for a long, long time, and you can also share knowledge from other platforms. So, for example – and I don’t know whether you do this, Eran – if you see many boxes in a large environment starting to see ransomware … Let’s say a customer has got five or ten boxes. You could proactively start doing something on the boxes that haven’t seen it yet to make sure you’re starting to protect the snapshots. Now, I don’t know if that’s something you do, Eran, but it seems to me that you’ve got that sort of capability.
Eran: So you raise an interesting point. So yeah, we do see interesting results as we coalesce data from multiple data sources. However, I don’t know of any customer that will allow their vendor to remotely do things on their system without going through them.
Eran: So what we’re actively researching now is can we identify the trend, proactively push an alert to the customer that says, “Hey, something suspicious. That dataset is experiencing these ransomware-like behaviors. Go and check it out. By the way, we highly recommend you start taking snapshots on that data set so that later you can go back and recover at least some of these, and if you already have snapshots of that dataset, please do not delete them, because you may need them in an hour to recover your data.”
Chris: But you could lock that, couldn’t you, for the customer in advance and say, “We’ve proactively locked them”? I mean, you could give the customer the choice, of course. If you’ve got customers who are keeping six months’ worth of snapshots, it’s not going to be an imposition to keep an extra few hours’ worth in order for them to go and do that investigation, and if it’s triggered on their request, then they’ve got that configured so that when that happens, it’s triggered. It wouldn’t be an imposition or a cost for them.
Eran: Oh, it’s not the cost. It’s not the cost. It’s their security people who have to sign off on something having inside access from the outside world – just like we discussed offline about the example of the Texas attack that tried to hit the MSP to have a wider impact. If I’m a security person, I don’t know how well that vendor’s infrastructure is protected, so I’m not that excited about letting them in from the outside through my firewall into my critical infrastructure. Even getting read-only access – having all of the data sent to the cloud and analyzed – is sometimes challenging, which is why I doubt we’ll see what you’re talking about.
Eran: But, because the data gets analyzed in real time, we can reach out, and we very often do reach out to customers, whether it’s by email or phone, depending on the level of urgency and time of day, to let them know, “Hey, something’s happening, let’s work together.” That’s part of our whole proactive support model, or support extension of the AI ops component, whatever you want to call it.
Martin: What I would say is, as we’re seeing more gathering of metrics and telemetry on [inaudible 00:25:23], which are going back to vendors, vendors should be able to pick this up and almost produce a weather report across their whole estate and say, “You know what? We’ve been seeing these attacks, or we’re seeing something going on.” So if you suddenly saw an attack going across, say, all of a customer’s estate or a bunch of customers, you could actually use this as a weather report to say, “We’re seeing something. Please check, or reach out to your security vendors.”
Chris: Yeah, I understand what you mean. There’d have to be some very good anonymity within that. You wouldn’t want to in any way expose any customer confidences. But I agree. There’s many ways you could cut it, but, Eran, you’ve got the data. It’s just what you can persuade customers to let you do.
Eran: Yep. Agreed.
Chris: So it sounds like we’re talking about having multiple different approaches here. It might be snapshots on your primary data. It might be snapshots with replication so you can get those into another box in another location. Then we’re talking about InfiniGuard plus the ability to put this data into a dashboard that you can then look at. It sounds to me like you really do need to have … I wouldn’t say holistic, but you need to have a comprehensive approach, Eran.
Eran: The only tool I would consider adding to the comprehensive list you just gave is immutable snapshots – the ability to prevent snapshots from being mutated or in any way manipulated by ransomware or malicious code. I think those together create the beginning of a very good protection strategy.
Chris: Great. Okay. So if we’ve interested people in finding out a bit more, other than infinidat.com, where would you push them to? What’s the best website for them to go to, and where can they find more information?
Eran: So InfiniGuard is very well documented on infinidat.com. You can reach out to me, Eran Brown. I’m mostly on LinkedIn. I admit that I don’t like the other platforms as much, and we’re doing more and more around the recovery side of things. You can find a white paper we recently wrote about the holistic approach to protecting from ransomware on our website, if you want to follow up on that reading.
Chris: Okay, brilliant. I’ll put all of those items in the show notes. I think it’s been an interesting discussion. I know we talked about product, but there’s some bits and pieces here that we’ve picked up on, I think, that are going to be really interesting to people to think about, especially around the PoC side of things and how long you’ve run that for and so on. So thanks for the conversation. Really enjoyed it. Thanks, Martin, and look forward to catching up with you soon.
Eran: Thank you both very much.
Martin: Cheers, Chris.
Copyright (c) 2016-2019 Storage Unpacked. No reproduction or re-use without permission. Podcast episode #3e6a.