Elastic Inc. isn't just about logs anymore, and it has a new leader with plans to expand its clout.
Abhishek Singh, appointed Elastic's general manager for observability in October, was hired away from competitor Datadog, where he served as vice president of product for a little over a year. Previously, he also worked as general manager for Amazon's X-Ray serverless observability offering from 2018 to 2020.
Singh's task now is to help Elastic break out of the niche it captured with its Elastic Stack in log analytics and raise its profile in the broader realm of observability, an evolution of IT monitoring that focuses on pinpointing the most critical information about the state of computing systems. Observability has entered the enterprise mainstream over the last five years amid explosive data growth generated by cloud-native applications. Similarly, the Elastic Stack, originally a commercial version of open source Elasticsearch, Logstash and Kibana (ELK), has expanded beyond its log management roots to support AI-driven search analytics across multiple data types.
The challenge for Singh is that Elastic's most recent expansion into AI-driven search and observability remains less well known than the offerings of competitors, including his most recent employer. Elastic launched its Elasticsearch Relevance Engine and semantic search support in June 2023; support for zero-instrumentation profiling and an Elastic AI Assistant for Observability in September; and a serverless infrastructure option in November. Gartner's 2023 Magic Quadrant report for application performance monitoring (APM) and observability, published in July, put Elastic in the "Visionaries" category, while naming Datadog, Dynatrace, New Relic, Splunk and Honeycomb market leaders.
Singh sat down with TechTarget Editorial this month to discuss how he'll tackle Elastic's competitive challenges in his new role.
Editor's note: The following was edited for length and clarity.
You've been on both sides with Elastic vs. AWS. What's your view on that now?
Abhishek Singh: With OpenSearch vs. Elasticsearch, the differentiation now is the core vector search capabilities; Elastic's just far ahead. The Elasticsearch Relevance Engine has specific algorithms to find better relevance, and our RAG [retrieval-augmented generation] implementation connects with third-party LLM [large language model] providers. Elastic also provides … a fully packaged observability [product]. If you look at the Gartner [Magic Quadrant], our product vision is in alignment with some of the leaders. The reason why we're [among] the visionaries and not the leaders is execution. That's something that I intend to fix.
What's your agenda for fixing that?
Singh: There's a little bit of historical baggage. People think of ELK [Elasticsearch, Logstash and Kibana] and they think of logging. But it has infrastructure monitoring, distributed tracing, profiling and real user monitoring. The ML [machine learning] and AI capabilities in the platform are beyond what I've seen any other vendor offer, including AWS. The No. 1 goal I have is just visibility and awareness that … it's no longer your grandpa and grandma's ELK stack.
Singh: We launched our observability AI assistant [in September]. Users can ask it to explain what signals in the UI mean and get a better understanding of them and ask if there's impact on other signals. For example, if you have a spike in your logs that results in an outage, you can ask if that impacted revenue. Because people store revenue data within Elastic, depending on the business context, we know how to tie that data back together.
Singh: There are two pieces. One is that we have our new relevance engine, which gets us better insights, along with the ML classifiers we built. Two is the RAG model, which allows us to connect with an external third-party LLM. We can take the context, run the vector search, send the vector embeddings -- and only the vector embeddings -- to the LLMs, use those capabilities and then tie all of that back together for users. Private data stays private, but it allows you to use LLMs in context.
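The retrieve-then-generate pattern Singh describes can be sketched in a few lines. This is a toy, self-contained illustration: a hash-based embedder and an in-memory index stand in for a real embedding model and for Elasticsearch's vector search, and `ask_llm` is a stub for the call to an external LLM provider. None of these names are Elastic APIs.

```python
import math

# Toy embedder: hashes words into a fixed-size vector. Illustrative only;
# a real deployment would use a learned embedding model.
def embed(text, dims=16):
    vec = [0.0] * dims
    for word in text.lower().split():
        vec[hash(word) % dims] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a, b):
    return sum(x * y for x, y in zip(a, b))

# Private documents stay local; only what retrieval selects leaves the estate.
DOCS = [
    "checkout service error rate spiked at 14:02",
    "revenue per order dropped during the 14:00 window",
    "deploy of cart service completed at 13:55",
]
INDEX = [(doc, embed(doc)) for doc in DOCS]

def retrieve(query, k=2):
    """Rank documents by cosine similarity to the query vector."""
    q = embed(query)
    ranked = sorted(INDEX, key=lambda item: cosine(q, item[1]), reverse=True)
    return [doc for doc, _ in ranked[:k]]

def ask_llm(prompt):
    # Stub for a call to an external, third-party LLM provider.
    return f"[LLM answer grounded in: {prompt}]"

def rag_answer(question):
    context = retrieve(question)
    prompt = f"Context: {' | '.join(context)}\nQuestion: {question}"
    return ask_llm(prompt)
```

The key property is the division of labor: retrieval and ranking happen inside the customer's environment, and only the selected context is forwarded to the external model.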
Some industry watchers are already talking about RAG as a temporary answer to improving the relevance of data for generative AI, but other RAG vendors also emphasize it as a data access control feature -- is that tied to RAG specifically, or could it outlive the RAG technology?
Singh: There are different viewpoints here, but two stand out. One is that you're going to use a third-party LLM and provide it with specific business context. That's the RAG approach. And the second view is that people are going to tune micro-LLMs or mini-LLMs specific to their business. I think we're still far away from the micro- and mini-LLMs today. When we get to a world where we can run those smaller LLMs in a way that makes financial business sense, you will see innovations from Elastic around that as well. Our relevance engine runs some very specific algorithms to find better relevance.
Is Elastic positioning itself as a vector database?
Singh: Elastic is not positioning itself as a vector database. We're positioning ourselves as a search analytics company. Search-powered observability is better than observability. Search-powered security is better than just plain security. The core of the business is about taking data and finding relevant bits of information in search.
Because of GenAI, there's growing interest in search. Constructing prompts is effectively a search technique. We plan to take our expertise in search and tie that with some of these external [models], and eventually, I suspect that you will be able to run models within an Elasticsearch context.
Observability has a growing data management problem. Some vendors are taking the data lakehouse or data lake approach, while others are taking the data pipeline approach or doing edge processing. What are Elastic's plans?
Singh: Because Elastic started as a platform for unstructured data, there are core capabilities baked in around data management. You can see which data streams are consuming resources and who's using what data streams, which a lot of other platforms don't have. We have IAM [identity and access management] at the field level, and we're going to extend it to be role-based. We can do things like cross-cluster search, so users can create clusters based on business units and then tie them all together. As we think about observability and our business model, we don't need to sell more tracing, profiling, logs or infrastructure monitoring. We'll focus on business outcomes versus trying to sell you more logs, metrics or traces.
What does that mean, 'focusing on business outcomes?' Focusing on applications above the data layer?
Singh: What that means is collecting data from logs, metrics, traces, profiling, synthetics or real user monitoring and turning that into a persona-focused view -- looking at the services that are relevant to SREs, for example. Our goal is to collate all the data in Elasticsearch and then allow users to get value from it without saying, 'Hey, you got logs, but to get a service map, you need to get APM.' We can create metrics from logs, we can create logs from traces and we can create traces from logs. That will allow us to remove the stovepipes that have formed in the industry.
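Deriving one signal from another, as Singh describes, can be illustrated with a small sketch that turns raw log lines into a per-service error-rate metric. The log format and field names here are assumptions for the example, not an Elastic schema.

```python
from collections import defaultdict

# Assumed log format for this sketch: "<service> <level> <message>".
LOG_LINES = [
    "checkout ERROR timeout calling payments",
    "checkout INFO order placed",
    "search INFO query served",
    "checkout ERROR timeout calling payments",
]

def error_rate_by_service(lines):
    """Aggregate log lines into an error-rate metric per service."""
    totals = defaultdict(int)
    errors = defaultdict(int)
    for line in lines:
        service, level, _ = line.split(" ", 2)
        totals[service] += 1
        if level == "ERROR":
            errors[service] += 1
    return {svc: errors[svc] / totals[svc] for svc in totals}

print(error_rate_by_service(LOG_LINES))
# → {'checkout': 0.6666666666666666, 'search': 0.0}
```

The same aggregation-over-raw-events idea underlies generating metrics from traces or reconstructing service maps from logs: the richer signal is computed from the data already collected, rather than sold as a separate product.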
What about data storage costs, with the growing volumes of data that people collect?
Singh: One [answer] is, we've donated our Elastic common schema to OpenTelemetry [OTel]. As all the data is being collected, you need to have a schema; otherwise it's garbage in, garbage out. Second, today, the OTel collector operates in two modes: the standalone satellite mode, where you deploy per host, or the service mode, where you deploy it [centrally].
The service mode is painful, according to every customer that I've heard from. Every time you need to make changes, you need to do a deployment. It causes issues and drops in data. Elastic's had this thing named Fleet and Elastic Agent, and we're looking to bring some of those capabilities to OTel and make OTel sort of the de facto way to work with Elastic observability, and hopefully help the broader community in the process. Data leaving the customer's estate is where the cost is incurred. If we can run the OTel collector as a service within their estate and filter data, that fits the notion of a data pipeline.
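Filtering in a centrally run collector, as Singh suggests, could look something like the sketch below, which uses the OpenTelemetry Collector's filter processor to drop low-severity log records before they leave the customer's estate. The endpoint and the `filter/severity` name are placeholders for this illustration, not Elastic-provided configuration.

```yaml
receivers:
  otlp:
    protocols:
      grpc:

processors:
  # Drop DEBUG/INFO log records inside the customer's estate so that
  # only WARN and above incur egress cost.
  filter/severity:
    logs:
      log_record:
        - 'severity_number < SEVERITY_NUMBER_WARN'

exporters:
  otlp/backend:
    endpoint: "collector.example.internal:4317"  # placeholder endpoint

service:
  pipelines:
    logs:
      receivers: [otlp]
      processors: [filter/severity]
      exporters: [otlp/backend]
```

The point of the design is economic as much as technical: the filtering happens before the per-byte egress cost is paid, which is what makes the gateway-style pipeline attractive despite its deployment friction.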
Beth Pariseau, senior news writer at TechTarget, is an award-winning veteran of IT journalism. She can be reached at [email protected] or on Twitter @PariseauTT.