Insight

  • In today’s data-driven world, the power of artificial intelligence (AI) and advanced analytics is undeniable. They have the potential to revolutionize industries, drive innovation, and unlock valuable insights. However, behind every successful AI and analytics initiative lies a crucial foundation: excellent data management capabilities.

    In a recent research survey from TechTarget’s Enterprise Strategy Group, Data Platforms: The Path to Achieving Data-driven Empowerment, 31% of organizations ranked data management, including databases, at the top when asked about the most important areas of their data platform. In the same survey, 50% of participants identified faster business decision-making as the leading driver and goal for their modern data platform strategies, with 19% focused on creating competitive advantage. Database performance is critical to reaching the data-driven outcomes organizations desire.

    Oracle has introduced its latest addition to the Exadata database machine family, the Exadata X10M, which is powered by AMD’s EPYC server processors. This release marks a significant milestone for Oracle and its database performance capabilities. The Exadata platform is known for its co-engineered hardware and software, specifically designed to support enterprise data management. It offers a complete stack system with optimized processors, memory, and storage, as well as system software for efficient data storage, indexing, and movement. With the Exadata X10M, Oracle continues its commitment to delivering high-performance database servers by leveraging AMD’s EPYC CPUs, incorporating up to 96 multithreaded cores, DDR5 memory, and RDMA over Converged Ethernet (RoCE) for low-latency, high-bandwidth connectivity.

    Oracle’s optimization efforts extend beyond hardware to its software stack, ensuring linear scalability and maximum performance across multiple cores. The Exadata X10M surpasses its predecessor, the X9M, in OLTP performance, analytics, and database consolidation. Oracle’s decades-long expertise in both enterprise database software and hardware solutions enables the company to provide a tailored data management platform that combines performance, value, and cost-effectiveness.

    Customers can deploy Exadata on-premises, in the cloud, or as a hybrid solution, benefiting from Oracle’s collaboration with public cloud providers and high-speed interconnectivity options. Overall, Oracle’s focus on innovation and customer-centric solutions positions Exadata X10M as a compelling choice for organizations seeking high performance in their data management and analytics initiatives.

  • Pure Accelerate: Focus on Cyber-resilience

    Photo: Charlie Giancarlo (by CB)

    As I wrote in a previous blog, in-person events are coming back! Pure is holding its user conference in Las Vegas this week. My colleague Scott Sinclair, who is also attending, covers some of the announcements in a recent blog. For my part, I will focus on the cyber-resilience announcements at the event, in particular, the ransomware recovery SLA.

    Pure’s CEO Charlie Giancarlo kicked off the event with a keynote that provided some interesting metrics to support Pure’s power and space efficiency, reliability, labor requirements, and TCO advantages, among others.

    According to Charlie, Pure essentially differentiates itself in the market in four areas: direct-to-flash management (which is key at scale), a cloud operating model (run like the cloud, run in the cloud, build for the cloud, and power the cloud), an evergreen program to minimize obsolescence, and a coherent and consistent portfolio of platforms that rely on common technologies and software. I think a fifth should be added to the list: cyber-resilience!

    Ransomware Recovery SLA Program

    What it is:

    On the cyber-resilience front, Pure announced the Evergreen//One Ransomware Recovery SLA program, sold as an add-on subscription. Existing and new customers can now purchase a service guarantee for a clean storage environment, with bundled technical and professional services, to recover from an attack.

    Many things can happen when ransomware hits: systems are essentially taken out of production, can be seized by law enforcement, and/or can be used to run forensics, for example. So it could be weeks before you regain access to your own systems for production. At the end of the day, it’s about being able to recover as quickly and cleanly as possible in order to resume business operations. Of course, this assumes that your data is properly protected in the first place.

    A customer can initiate a recovery via Pure Technical Services at any time. When a customer calls with their request following an incident, Pure immediately starts working with them on a recovery strategy and plan, which includes Pure shipping a clean array within 24 hours (for North America) with a professional services engineer onsite to help. The idea is to have you fully recovered and ready to resume production within 48 hours on this “loaner” array. Transfer your immutable snapshots onto the loaner and you are back in business. You have 180 days to return the array.

    To maximize your chances and to qualify, end users must turn SafeMode on for all volumes and set retention to 14 days. This is a must-have best practice, in my opinion, regardless of whether you subscribe or not. The management software, Pure1, has a great set of capabilities for data protection assessment and anomaly detection. It can assess an end user’s whole fleet of arrays and benchmark them against best practices, such as checking whether SafeMode or snapshots are turned on. The protection can be very granular, down to the volume level. In addition, the software can perform anomaly detection by looking for signals like abnormal deduplication ratios: when data is encrypted, it becomes less unique and therefore less “dedupable,” so a sharp drop from the “normal” deduplication ratio is a key indicator. Pure hinted that it will add more signals in the future, such as latency and file name changes.
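
    The anomaly detection idea is straightforward to illustrate. Below is a minimal sketch in Python of flagging a sharp drop in a volume’s deduplication ratio against its recent baseline; it is purely illustrative, assumes hypothetical telemetry, and is not Pure1’s actual implementation.

    ```python
    # Illustrative sketch only: flag a volume whose deduplication ratio falls far
    # below its recent baseline, a possible sign that its data is being encrypted.
    from statistics import mean, stdev

    def dedupe_ratio_anomaly(ratios, window=30, threshold=3.0):
        """Return True if the latest ratio is a sharp drop versus the trailing baseline."""
        if len(ratios) <= window:
            return False                      # not enough history to judge
        baseline = ratios[-window - 1:-1]     # trailing window, excluding the latest point
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma == 0:
            return ratios[-1] < mu            # flat history: any drop is suspect
        drop_score = (mu - ratios[-1]) / sigma
        return drop_score > threshold

    # A steady ~3x ratio collapsing to 1.1x gets flagged.
    history = [3.0, 3.1, 2.9, 3.0, 3.05, 2.95] * 6 + [1.1]
    print(dedupe_ratio_anomaly(history))      # True
    ```

    A z-score-style test like this is only one way to define a “sharp drop”; a real system would tune the window and threshold and weigh multiple signals together.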

    Why This Matters

    To be clear, this is not a “marketing” guarantee (“we’ll pay you X if you can’t recover data”…followed by many exclusions and requirements). This is a practical, customer-focused, and outcome-driven service. If an array has questionable data, it will not go back into production. If you have protected your environment, you will need to recover the latest good copy of data (which can take a long time if you don’t use high-performance snapshots) onto a “clean” system. All the while, everyone is in full crisis mode, which adds tremendous stress to teams and processes. This is not only differentiated, it is smart and focused on what matters: resuming business ASAP.

    Christophe Bertrand (left) and Andy Stone (right) – photo by Scott Sinclair

    Panel: Building a Data-resilient Infrastructure

    I also had the pleasure of participating in a breakout session on building a data-resilient infrastructure with Andy Stone, Pure’s Field CTO and a cyber-resilience expert. I shared some of the findings of our state of ransomware preparedness research and discussed “hot” topics such as budgeting and funding for ransomware preparedness, the reality of recovery service levels, best practices, cyber insurance, etc.

    The level of interest in the topic was clearly very high, and many attendees shared their concerns and challenges. Andy reminded the group that no one can do it alone: it’s teamwork, and no vendor can solve the whole problem on its own. More importantly, we discussed how it’s not just the data that needs protection; it’s also the infrastructure, the “Tier 0” and first line of defense. The ransomware SLA program was also mentioned and triggered many questions and a lot of interest.

    I have the strongest suspicion Andy’s schedule will be booked solid for the next few weeks with client visits and calls.

    A Big Surprise

    Look who came to say Hi on stage at the end of the keynote!

    Shaquille O’Neal and Charlie Giancarlo (photo by me)

  • Research Objectives

    Customer experience is the sum of a customer’s digital interactions with a company throughout the customer lifecycle, from early online research of a product or service to active use and repeat business such as subscriptions. Most customer experience programs include the measurement of customer satisfaction and sentiment analysis. These processes aggregate and analyze customers’ perceptions and feelings resulting from interactions with a brand’s products and services, most often through short surveys collected throughout an engagement. Customer loyalty and retention are the desired results from the thoughtful execution and continuous improvement of CX.
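
    As a simple illustration of how such survey responses roll up into the satisfaction metrics most CX programs track, here is a minimal Python sketch; the specific metrics (CSAT and Net Promoter Score) and their scales are common industry conventions assumed for the example, not findings from this research.

    ```python
    # Hypothetical example: aggregate short-survey responses into two common
    # customer experience metrics, CSAT (1-5 scale) and NPS (0-10 scale).
    def csat(scores, satisfied_at=4):
        """Percent of respondents rating 4 or 5 on a 1-5 satisfaction scale."""
        return 100 * sum(s >= satisfied_at for s in scores) / len(scores)

    def nps(scores):
        """Percent promoters (9-10) minus percent detractors (0-6) on a 0-10 scale."""
        promoters = sum(s >= 9 for s in scores)
        detractors = sum(s <= 6 for s in scores)
        return 100 * (promoters - detractors) / len(scores)

    print(csat([5, 4, 3, 5, 2]))   # 60.0
    print(nps([10, 9, 8, 6, 3]))   # 0.0
    ```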

    In order to gain insights into the technologies that power customer experience, TechTarget’s Enterprise Strategy Group surveyed 400 IT and business professionals at organizations in North America (US and Canada) with knowledge of and participation in their organization’s customer experience initiatives.


  • Modern IT Service Management

    IT service management (ITSM) is going modern, and the approach is paying big dividends. Businesses modernizing their ITSM systems report myriad business and operational benefits, so despite the added complexity that often comes with modernization, organizations continue to move ahead with major upgrades.

    Learn more about these trends with the infographic, Modern IT Service Management.

  • Research Objectives

    As organizations continue to adopt multiple public cloud providers, maintain multiple data centers, and scale edge and colocation environments, IT decision makers must consider a wealth of locations to deploy new workloads and migrate existing workloads. Where an application is deployed depends on numerous factors, including the type of application, the needs of the application, the needs of the business, and the priorities of the organization.

    To gain insight into the strategy, process, personas, and considerations involved in multi-cloud application deployment and migration decisions, Enterprise Strategy Group surveyed 350 IT professionals in North America (US and Canada) responsible for evaluating, purchasing, and managing applications for their organization.

    This study sought to answer the following questions:

    • How do organizations distribute IT budgets across application deployment locations, including on-premises, infrastructure-as-a-service (IaaS), software-as-a-service (SaaS), platform-as-a-service (PaaS), edge, and colocation?
    • Among users of public cloud services, how are IT budgets distributed between primary and secondary providers?
    • How do organizations expect their spending on application deployment locations to change in the next 24 months?
    • Do organizations have preferred cloud vendors they default to for application deployments, or do they choose providers based primarily on the application or cost?
    • What role do internal groups play in determining deployment plans and locations for new and existing applications?
    • What types of applications drive the use of one public cloud infrastructure provider over another?
    • What application attributes or requirements for new applications most influence the choice of provider?
    • What percentage of existing applications are strong, potential, or not candidates to move to public cloud services over the next five years? Which applications are not candidates, and why not?
    • What are organizations’ strategies for existing applications in terms of modernization and migration?
    • When and how are cloud cost optimization tools used in the application deployment decision process?
    • Which factors influence decisions when evaluating the cost of cloud application deployments?
    • What is the adoption status of distributed applications in today’s IT environments?
    • How many inter-cloud application integrations do organizations currently manage?
    • What challenges do organizations encounter when monitoring, measuring, and ensuring SLA adherence for applications that rely on inter-cloud integrations?
    • What application types are unsuitable for use as distributed applications or for inter-cloud integration?
    • Why do organizations use more than one public cloud infrastructure provider? What applications or application requirements lead to the use of secondary providers?
    • What KPIs are used to measure the value and effectiveness of application deployment locations?

    Survey participants represented a wide range of industries including manufacturing, technology, financial services, and retail/wholesale. For more details, please see the Research Methodology and Respondent Demographics sections of this report.


  • Managing the Endpoint Vulnerability Gap

    Requirements from widespread work-from-anywhere policies have escalated the need for endpoint management and security convergence. IT and security teams require new mechanisms capable of providing common visibility, assessment, mitigation of software and configuration vulnerabilities, threat prevention, and support for threat investigation and response activities.

    Learn more about these trends with the infographic, Managing the Endpoint Vulnerability Gap.

  • The Cloud Data Security Imperative

    Digital transformation initiatives and remote work have further accelerated the migration of data assets to cloud stores. However, organizations are finding that sensitive data is now distributed across multiple public clouds. The use of disparate controls has led to a lack of consistent visibility and control, putting cloud-resident data at risk of compromise and loss. TechTarget’s Enterprise Strategy Group recently surveyed IT, cybersecurity, and DevOps professionals in order to gain insights into these trends.

    Learn more about these trends with the infographic, The Cloud Data Security Imperative.

  • Cloud Entitlements and Posture Management Trends

    Organizations are moving applications to the cloud and embracing digital transformation strategies to speed development cycles and better serve employees, partners, and customers. However, the subsequent faster release cycles and broad internet exposure increase the number of potential security incidents caused by misconfigurations, so security teams are looking for efficient ways to drive actions that reduce those risks.

    Learn more about these trends with the infographic, Cloud Entitlements and Posture Management Trends.

  • Megatrends in the technology industry—highlighted by the need to address increased complexity vis-à-vis platform convergence and vendor consolidation while investing in digital transformation initiatives—set the stage for integrated partner solutions. While the demand for these solutions is strong, there can be challenges at every stage of the buyer’s journey.

    Learn more about these trends with the infographic, The Buyer’s Journey to Integrated Solutions from Strategic Partners.

  • Managing the Endpoint Vulnerability Gap

    Research Objectives

    Requirements from widespread work-from-anywhere policies have escalated the need for endpoint management and security convergence. IT and security teams need broad management, prevention, detection, and response capabilities that span endpoint devices and operating environments often outside of their control. This is driving many to seek convergence between management and security capabilities to simplify implementation, ongoing management, and risk mitigation.

    IT and security teams require new mechanisms capable of providing common visibility, assessment, mitigation of software and configuration vulnerabilities, threat prevention, and support for threat investigation and response activities. These management and security activities are deeply intertwined, requiring integrated workflows between IT and security teams.

    In order to gain further insights into these trends, TechTarget’s Enterprise Strategy Group surveyed 381 IT and cybersecurity decision makers involved with endpoint management and security technologies and processes at midmarket (100 to 999 employees) and enterprise (1,000 or more employees) organizations in North America (US and Canada).

    This study sought to answer the following questions:

    • Approximately what percentage of employees work remotely, in either a remote or home office?
    • On average, approximately how many endpoint devices does each employee in an organization interact with daily?
    • How do organizations characterize the state of endpoint security and management in terms of level of difficulty?
    • Approximately what percentage of organizations’ endpoints are actively monitored?
    • Approximately what percentage of total endpoints do organizations consider to be unmanaged or have only a limited ability to manage/secure?
    • Have organizations experienced some type of cyber-attack in which the attack itself started through an exploit of an unknown, unmanaged, or poorly managed endpoint?
    • How many different tools and technologies do organizations use for endpoint management and security?
    • Have organizations consolidated the teams or individuals responsible for endpoint management and endpoint security?
    • What has driven or is driving the consolidation of endpoint management and security? What are the biggest impediments for greater consolidation of endpoint management and security?
    • Do organizations use desktop or application virtualization? What percentage of total PCs/client access devices has been virtualized via desktop or application virtualization solutions, and how is this expected to change over the next three years?
    • What specific types of employees are the initial and/or primary users of desktop or application virtualization environments?
    • What actions do organizations believe would most improve their endpoint management and security?

    Survey participants represented a wide range of industries including manufacturing, technology, financial services, and retail/wholesale. For more details, please see the Research Methodology and Respondent Demographics sections of this report.

  • Our personal and professional lives are reliant on technology. But the sensitive data we share and store online is more vulnerable to cyberthreats than ever before. Encrypting data, from credit card numbers and medical records to private messages and intellectual property, is essential to safeguard our information from prying eyes and unauthorized access. Without encryption, we risk exposing our most valuable assets to malicious actors who seek to exploit our online vulnerabilities.

    Read my blog to learn more about the coming encryption revolution.

  • Since returning from RSA Conference 2023, I’ve collected my thoughts from the massive sensory input that comes from this four-day, 625-vendor, 700-speaker cybersecurity conference. Upwards of 45,000 people attended this year’s RSA Conference—a massive increase over last year’s 26,000 attendees.

    Read my blog for my thoughts on RSAC 2023.