K2View takes aim at DataOps with new funding
K2View's CEO and co-founder talks about how his company's micro-database fabric approach aims to improve data management by consolidating each user's data in one place.
Organizations typically store user data in many different places, often making it a challenge to get a complete view of all the data.
Among the myriad approaches for consolidating data is ingesting data into a data warehouse or data lake to bring different sources together. Startup K2View, based in Dallas and Tel Aviv, Israel, takes a different approach with its fabric platform that aims to unify all sources of data for a given user or entity. It's an approach that uses what the company calls micro-databases, in which each database includes all the data from different sources for the specific user.
On Aug. 11, K2View revealed that it raised $28 million to continue to build out and advance its technologies, which fit into a growing segment of the market commonly referred to as DataOps (Data Operations). In this Q&A, Achi Rotem, CEO and co-founder of K2View, discusses his views on DataOps and the challenges of data management at scale.
Why are you now raising money amid the disruption of the COVID-19 pandemic?
Achi Rotem: We didn't feel like we should take anyone's money before. We wanted to be absolutely sure there is a market and that we had referenceable customers who could say their business depends on the technology.
To be honest, I think COVID helped us. Companies understand today better than ever how important it is to be able to move fast and change things quickly. With the architecture companies have today, where the data exists in hundreds of different applications, that's almost impossible to do.
I think now, they're coming to us and telling us that they want to provide a better experience to their customers. So I think, for us, this pandemic will help us long term. We have seen some delays on deals because customers have been a bit tighter on spending, but we did not have any cancellations.
How do you define DataOps?
Rotem: You need to be able to make data operational. That might sound simple, but it's not because today data is locked in specific technology and in a specific structure.
To make data operational and solve operational use cases, that's what DataOps is all about for us. With DataOps there is a huge opportunity to make all the data a company has operational for business use cases.
What is the K2View "micro-database" approach to DataOps all about?
Rotem: If you put all the data you have in a single technology -- a single database, data warehouse or data lake -- you will not be able to do what we do. We create what we call the micro-database, which is essentially a data lake for every customer you have.
So, if our customer has more than 100 million customers of their own, we have more than 100 million micro-databases. It allows you to have all the data associated with a business entity -- we call it digital entities -- in real time. Data for each entity is stored in a separate micro-database for that single customer.
We keep the context of the data together so we can go to one place and have all the data and only the data of that one single customer. So, when I want to ask a question about a customer, I can put the entire micro-database in system memory.
It's a SQL database like any other RDBMS [Relational Database Management System] with tables and indexes and the things you expect from a regular database. But it's a database that can be created in RAM and stores the data of only one digital entity.
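To make the idea concrete, here is a minimal sketch of a per-entity, in-memory SQL database using Python's built-in sqlite3 module. This is an illustration of the concept, not K2View's implementation; the table schema and source-system names are invented for the example.

```python
import sqlite3


def build_micro_database(entity_rows):
    """Build an in-memory SQL database holding the data of one entity.

    entity_rows maps a source-system name to the rows ingested from it.
    The schema is hypothetical, for illustration only.
    """
    db = sqlite3.connect(":memory:")  # the micro-database lives entirely in RAM
    db.execute("CREATE TABLE events (source TEXT, event_type TEXT, payload TEXT)")
    db.execute("CREATE INDEX idx_source ON events (source)")
    for source, rows in entity_rows.items():
        db.executemany(
            "INSERT INTO events VALUES (?, ?, ?)",
            [(source, r["type"], r["payload"]) for r in rows],
        )
    return db


# One micro-database per customer: all of that customer's data, and only theirs.
db = build_micro_database({
    "billing": [{"type": "invoice", "payload": "..."}],
    "crm": [{"type": "ticket", "payload": "..."}],
})
count = db.execute("SELECT COUNT(*) FROM events").fetchone()[0]
```

Because each database holds one entity's data, a question about that customer never has to scan or join across other customers' records.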
How does K2View ingest and connect to data in the DataOps pipeline?
Rotem: When you configure our technology, every object can be virtualized; it can be a copy of the data, or it can be a hybrid of both.
For one of our customers in the U.S., we are sitting on top of 609 systems and we're getting 5 billion updates per day in real time from them. We need every update from those 609 systems to be inserted, in real time, into the micro-database of each one of their customers. The 609 systems are in different data centers, and some of them are on different clouds. We could not just use an ELT [Extract, Load, Transform] process to copy all the data; that would never work.
We had to completely rethink data synchronization and ETL. Everything we've done is about looking at data from the point of view of the business entity.
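A toy sketch of that entity-centric synchronization: updates arrive from many source systems, and each one is routed by entity ID into that entity's own in-memory store, rather than being bulk-copied into one shared repository. The class, schema, and IDs below are invented for illustration and are not K2View's API.

```python
import sqlite3


class EntityRouter:
    """Route updates from many source systems into one store per entity."""

    def __init__(self):
        self._stores = {}  # entity ID -> in-memory micro-database

    def _store_for(self, entity_id):
        # Create the entity's micro-database lazily on first update.
        if entity_id not in self._stores:
            db = sqlite3.connect(":memory:")
            db.execute("CREATE TABLE updates (source TEXT, payload TEXT)")
            self._stores[entity_id] = db
        return self._stores[entity_id]

    def apply_update(self, source_system, entity_id, payload):
        # Whichever source system emitted the update, it lands in the
        # micro-database of the entity it belongs to.
        self._store_for(entity_id).execute(
            "INSERT INTO updates VALUES (?, ?)", (source_system, payload)
        )

    def updates_for(self, entity_id):
        return self._store_for(entity_id).execute(
            "SELECT source, payload FROM updates"
        ).fetchall()


router = EntityRouter()
router.apply_update("billing", "cust-42", "invoice paid")
router.apply_update("crm", "cust-42", "ticket closed")
router.apply_update("crm", "cust-7", "new ticket")
rows = router.updates_for("cust-42")
```

The key design point is the routing key: synchronization is organized around the business entity, not around the source system or a central table.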
Editor's note: This interview has been edited for clarity and conciseness.