Data Management, Analytics & AI

  • Pure Accelerate: Focus on Cyber-resilience

    Photo: Charlie Giancarlo (by CB)

    As I wrote in a previous blog, in-person events are coming back! Pure is holding its user conference in Las Vegas this week. My colleague Scott Sinclair, who is also attending, covers some of the announcements in a recent blog. For my part, I will focus on the cyber-resilience announcements at the event, in particular, the ransomware recovery SLA.

    Pure’s CEO Charlie Giancarlo kicked off the event with a keynote offering some interesting metrics to support Pure’s claims on power and space efficiency, reliability, labor requirements, and TCO, among others.

    According to Charlie, Pure essentially differentiates itself in the market in four areas: direct-to-flash management (which is key at scale), a cloud operating model (run like the cloud, run in the cloud, build for the cloud, and power the cloud), an evergreen program to minimize obsolescence, and a coherent and consistent portfolio of platforms that rely on common technologies and software. I think a fifth should be added to the list: cyber-resilience!

    Ransomware Recovery SLA Program

    What It Is

    On the cyber-resilience front, Pure announced the Evergreen//One Ransomware Recovery SLA program, sold as an add-on subscription. Existing and new customers can now purchase a service guarantee for a clean storage environment, with bundled technical and professional services, to recover from an attack.

    Many things can happen when ransomware hits: systems are essentially taken out of production, can be seized by law enforcement, and/or can be used to run forensics, for example. It could be weeks before you regain access to your own systems for production. At the end of the day, it’s about recovering as quickly and cleanly as possible in order to resume business operations. Of course, this assumes that your data is properly protected in the first place.

    A customer can initiate a recovery via Pure Technical Services at any time. When a customer calls with a request following an incident, Pure immediately starts working with them on a recovery strategy and plan, which includes shipping a clean array within 24 hours (in North America) with a professional services engineer onsite to help. The idea is to have you fully recovered and ready to resume production within 48 hours on this “loaner” array: transfer your immutable snapshots onto the loaner and you are back in business. You have 180 days to return the array.

    To qualify, and to maximize your chances of recovery, end users must turn SafeMode on for all volumes and set retention to 14 days. This is a must-have best practice, in my opinion, whether you subscribe or not. The management software, Pure1, has a great set of capabilities for data protection assessment and anomaly detection. It can give end users an assessment of their whole fleet of arrays and benchmark them against best practices, such as checking whether SafeMode or snapshots are turned on. The protection can be very granular, down to the volume level. In addition, the software can perform anomaly detection, looking for signals like abnormal deduplication ratios: when data is encrypted, it becomes less unique and therefore less dedupable, so a sharp drop from the “normal” deduplication ratio is a key indicator. Pure hinted that it will add more signals in the future, such as latency and file name changes.
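
    To make the deduplication signal concrete, here is a minimal, hypothetical sketch of how such a check might work. This is not Pure1’s actual implementation; the function name, baseline window, and z-score threshold are my own assumptions for illustration.

    ```python
    # Hypothetical illustration only -- not Pure1's implementation.
    # Flags a sharp drop in an array's deduplication ratio, since freshly
    # encrypted data is high-entropy and deduplicates poorly.
    from statistics import mean, stdev

    def dedupe_drop_alert(history: list[float], current: float,
                          z_threshold: float = 3.0) -> bool:
        """Return True if `current` falls suspiciously below the baseline."""
        if len(history) < 2:
            return False  # not enough data to establish a baseline
        baseline = mean(history)
        spread = stdev(history) or 1e-9  # guard against a zero-variance baseline
        z_score = (baseline - current) / spread
        return z_score > z_threshold  # only a drop (not a rise) is suspicious

    # Example: a steady ~3:1 ratio suddenly collapsing toward 1.1:1
    recent_ratios = [3.2, 3.1, 3.0, 3.1, 3.2, 3.0, 3.1]
    print(dedupe_drop_alert(recent_ratios, 1.1))  # True -> raise an alert
    ```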

    Why This Matters

    To be clear, this is not a “marketing” guarantee (“we’ll pay you X if you can’t recover data”…followed by many exclusions and requirements). This is a practical, customer-focused, and outcome-driven service. If an array has questionable data, it will not go back into production. Even if you have protected your environment, you will need to recover the latest good copy of data (which can take a long time if you don’t use high-performance snapshots) onto a “clean” system. All the while, everyone is in full crisis mode, adding tremendous stress to the teams and processes. This is not only differentiated, it is smart and focused on what matters: resuming business ASAP.

    Christophe Bertrand (left) and Andy Stone (right) – photo by Scott Sinclair

    Panel: Building a Data-resilient Infrastructure

    I also had the pleasure of participating in a breakout session on building a data-resilient infrastructure with Andy Stone, Pure’s Field CTO and a cyber-resilience expert. I shared some of the findings of our state of ransomware preparedness research and discussed “hot” topics such as budgeting and funding for ransomware preparedness, the reality of recovery service levels, best practices, and cyber insurance.

    The level of interest in the topic was clearly very high, and many attendees shared their concerns and challenges. Andy reminded the group that no one can do it alone: it’s teamwork, and no vendor can solve the whole problem on its own. More importantly, we discussed how it’s not just the data that needs protection, but also the infrastructure, the “Tier 0” and first line of defense. The ransomware SLA program also came up and triggered many questions and a lot of interest.

    I have the strongest suspicion Andy’s schedule will be booked solid for the next few weeks with client visits and calls.

    A Big Surprise

    Look who came to say hi on stage at the end of the keynote!

    Shaquille O’Neal and Charlie Giancarlo (photo by me)

  • The Strategic and Evolving Role of Data Governance

    Research Objectives

    • Determine the amount and value of data for a typical organization, and how this impacts data management activities like availability, usability, and security.
    • Connect the dots between the important elements of data governance like classification, placement, and compliance as ecosystems evolve and become more distributed.
    • Help overwhelmed IT organizations find the right combination of process and technology to solve their unique data governance challenges.
    • Identify data governance process and technology gaps that need to be addressed in vendor solutions.


  • Research Objectives

    • Gauge current and planned adoption of integrated solutions.
    • Identify business and technology drivers for partner solutions.
    • Highlight the role of senior IT leadership in the buying cycle.
    • Determine objections and critical success factors for integrated solutions.
    • Assess the role of current vendor relationships in the buying process.


  • Data Protection Issues for Salesforce Persist

    Mission-critical applications and their associated data must meet stringent data protection SLAs to support business processes, mitigate risk, and place organizations in a favorable position should data loss occur, particularly due to ransomware. However, a disconnect and many misconceptions persist around protecting SaaS workloads’ data to ensure recoverability, and the impact hits SaaS deployments across the board, including Salesforce, as detailed in this brief. IT professionals, workload owners, and business leaders must closely inspect their current data protection apparatus and processes for Salesforce workloads in order to put effective and efficient data protection solutions in place.


  • The SaaS Backup Disconnect: Data Loss Is Real!

    The SaaS backup disconnect persists and is causing data loss. One-third of IT professionals do nothing to protect their SaaS-resident application data because they believe it is the vendor’s responsibility. The problem is that such misunderstandings can lead to data loss, with a majority of organizations reporting lost SaaS-resident data in the last year. There are many ways to lose SaaS data, whether through external events such as cyber-attacks or via internal events. Organizations using SaaS applications should consider deploying third-party solutions that meet core requirements to properly protect their data and ensure recoverability of these mission-critical workloads.

  • Data Protection for SaaS

    It’s more important than ever that business-critical data is available, but there is still a problematic misunderstanding about the responsibility for protecting SaaS data.

    See fresh research into this market dynamic with the infographic, Data Protection for SaaS.

  • Data Initiatives Spending Trends for 2023

    The complexity of gathering, maintaining, and interpreting huge volumes of data continues to plague organizations. It’s challenging to clean, integrate, and maintain data with the goal of gaining rapid insight to help the business. But that’s not slowing down organizations in their prioritization of data initiatives. They recognize the value and game-changing potential of harnessing the power of data. It starts by properly defining objectives and desired outcomes and ends with data driving decision-making and action to fuel innovation.


  • Data Protection for SaaS

    Research Objectives

    Organizations are increasingly reliant on SaaS for many of their mission-critical applications and workflows. This means that a significant amount of business-critical data associated with these applications is now also cloud-resident. As a result, it is more important than ever that this data is available or at least recoverable. However, there is (still) a problematic misunderstanding about the responsibility for protecting SaaS data. While maintaining application uptime is the responsibility of individual SaaS providers, the onus for the availability and protection of data typically falls on IT organizations. This data protection gap exposes organizations to potential data loss, compliance and governance violations, and general operational risks.

    In order to gain further insight into these trends, Enterprise Strategy Group surveyed 398 IT professionals at organizations in North America (US and Canada) personally familiar with and/or responsible for SaaS data protection technology decisions, specifically around those data protection and production technologies that may leverage cloud services as part of the solution.

    This study sought to answer the following questions:

    • What steps, if any, do organizations take to protect the data associated with the SaaS applications they currently use?
    • Have organizations experienced any data losses or corruption with any of the SaaS applications they use over the past 12 months?
    • What are the most common causes of data loss or corruption for SaaS-based applications?
    • What benefits have organizations realized as a result of using a solution to protect SaaS applications?
    • What are the biggest challenges organizations have experienced with the data protection solution(s) they use for SaaS applications?
    • What are the most important characteristics or considerations of a data protection solution, whether third-party or internally developed, for SaaS applications?
    • How do organizations characterize the mission criticality of the major SaaS applications they currently use?
    • What are the recovery time objectives (i.e., downtime tolerance) for the SaaS applications and workloads organizations protect today?
    • What are the recovery point objectives (i.e., transaction or data loss tolerance) for the SaaS applications and workloads organizations protect today?
    • Over the next 12-24 months, what level of IT priority do organizations expect to give to protecting SaaS applications, customizations, and associated data?
    • How do organizations typically fund the data protection solutions used to protect their SaaS-based applications?

    Survey participants represented a wide range of industries including manufacturing, technology, financial services, and retail/wholesale. For more details, please see the Research Methodology and Respondent Demographics sections of this report.

  • Ransomware Data Recovery Needs Work

    Most organizations are not doing a very good job of protecting all their mission-critical data and applications. And, after suffering a ransomware attack, these victimized companies further report difficulties in recovering clean and recent data, which might itself have been compromised. Businesses have several options to protect their data and applications from attack but are slow to adopt perhaps the most viable and practical solution: air-gapped data protection infrastructure.


  • Ransomware: The Gift That Keeps on Taking

    Ransomware attacks are frequent, disruptive, and costly, but paying a ransom to the perpetrators as a quick fix is a bad idea. Ransom payments don’t guarantee the return of all the stolen data or prevent further attacks, and even the data that’s returned may have been encrypted or compromised. That’s why ransomware attacks must be prevented before they happen. And if they do occur, a foolproof data backup and recovery process must be in place to avoid suffering the consequences of paying a ransom and rewarding bad behavior.


  • State of the Ransomware Preparedness Market

    Findings from a TechTarget Enterprise Strategy Group survey gauging the state of the ransomware preparedness market conclude that much work lies ahead for many organizations as they holistically address and resolve ransomware’s ongoing threat to disrupt IT and business operations. Though most organizations are at a relatively low level of ransomware preparedness maturity, a notable gap exists in attack prevention and data recovery between the companies most prepared and the industry average.
