
Drastically simplify your IoT data pipeline

The IoT data pipeline is a simple concept. First, data is gathered from multiple sensors, then all of it is transformed into a common format and stored in data lake storage. Once the data sits in an accessible place, analysts can use it to answer business questions. It seems simple, but it is not always easy to execute: on average, fulfilling a pipeline request takes weeks to months, resulting in lost business and increased IT costs.
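
To make those steps concrete, here is a minimal sketch in Python (using pandas with the pyarrow Parquet engine): two hypothetical sensor payloads are mapped onto a common schema and landed as Parquet files in a directory that stands in for data lake storage. The field names, paths and values are illustrative assumptions, not part of any specific product.

```python
# Minimal sketch of the basic pipeline: gather, normalize, store in the lake.
# Sensor payloads, field names and the "datalake/" path are hypothetical.
import json
import pandas as pd

RAW_READINGS = [
    '{"sensor": "temp-01", "ts": "2023-05-01T10:00:00Z", "celsius": 21.4}',
    '{"device_id": "vib-07", "time": 1682935200, "rms_mm_s": 0.82}',
]

def to_common_format(raw: str) -> dict:
    """Map heterogeneous sensor payloads onto one common schema."""
    r = json.loads(raw)
    if "celsius" in r:
        return {"sensor_id": r["sensor"], "ts": pd.Timestamp(r["ts"]),
                "metric": "temperature_c", "value": r["celsius"]}
    return {"sensor_id": r["device_id"],
            "ts": pd.Timestamp(r["time"], unit="s", tz="UTC"),
            "metric": "vibration_rms_mm_s", "value": r["rms_mm_s"]}

df = pd.DataFrame([to_common_format(r) for r in RAW_READINGS])

# Land the normalized readings in the "lake" as Parquet, partitioned by date.
df["date"] = df["ts"].dt.date.astype(str)
df.to_parquet("datalake/iot_readings", partition_cols=["date"])
```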

The most common pattern in IoT data pipelines is for enterprises to store all of their data in data lakes. However, poor data accessibility and the growing complexity of these pipeline architectures quickly turn data lakes into data swamps.

Organizations often react to data swamps by copying their data from data lakes into data warehouses using fragile, slow ETL jobs, doubling storage costs to maintain the duplicate copies. Then, data engineers must create cubes or BI extracts so analysts can work with the data at interactive speeds. In this scenario, enterprises don’t have full control of their data: analysts don’t know where their data is located, and dozens of data engineers are kept busy just trying to keep things running. What can we do to make things better?

Use a data lake engine to leverage the value of your IoT data

Data lake engines are software solutions or cloud services that give analytical workloads and users direct SQL access across a wide range of data sources, especially data lake storage, through a unified set of APIs and data models. Surprising as it may sound, data lake engines eliminate the need for a data warehouse on top of data lake storage, because they let users’ BI and data science tools talk directly to the data stored in the lake. Data engineers and architects can use data lake engines to present BI analysts and data scientists with a self-service semantic layer, which those users can browse to find datasets, write their own queries against them, and use the results in their end-user applications. This approach is a whole lot simpler and faster, because enterprises can eliminate complex ETL code as well as slow and costly data replication.
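
As a rough illustration of that direct-access idea, the sketch below uses the open source DuckDB engine as a stand-in for a data lake engine: a plain SQL query runs straight against the Parquet files in lake storage, with no warehouse load, cube or extract in between. The path and column names carry over from the earlier sketch and are assumptions, not any particular vendor's API.

```python
# Query the lake directly with SQL, using DuckDB as a stand-in engine.
import duckdb

con = duckdb.connect()

# A BI tool or notebook issues plain SQL against the Parquet files in
# lake storage; no warehouse load, cube or extract is involved.
hourly = con.execute("""
    SELECT sensor_id,
           date_trunc('hour', ts) AS hour,
           avg(value)             AS avg_value
    FROM read_parquet('datalake/iot_readings/*/*.parquet')
    GROUP BY sensor_id, hour
    ORDER BY sensor_id, hour
""").fetchdf()

print(hourly.head())
```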

Key elements for simplifying your IoT pipeline

The basic pipeline scenario works well in small IoT environments where only hundreds of rows of measurements are captured daily, but in large industrial environments, where hundreds of gigabytes of data are generated every hour, the story is different. In such large-scale scenarios, we need to worry about how many copies of data there are, who has access to them, what managed services are required to maintain them and so on. Additionally, the traditional methods of accessing, curating, securing and accelerating data break down at scale.

Fortunately, there is an alternative: a governed, self-service environment where users can find and interact directly with their data, speeding time to insight and raising overall analytical productivity. Data lake engines let users run standard SQL queries to find the data they want to work with, without waiting for IT to point them in the right direction. They also provide security measures that give enterprises full control over what is happening to the data and who can access it, increasing trust in that data.
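
A small, hypothetical example of that self-service discovery, again with DuckDB standing in for the engine: curated datasets are exposed as views over the raw files, and an analyst can list and query them with ordinary SQL rather than filing a ticket. All names are illustrative.

```python
# Self-service discovery sketch: expose a curated view, then find and query it.
import duckdb

con = duckdb.connect()

# The engine (or a data engineer) exposes curated datasets as views
# over the raw Parquet files in the lake.
con.execute("""
    CREATE VIEW iot_readings AS
    SELECT * FROM read_parquet('datalake/iot_readings/*/*.parquet')
""")

# Standard SQL metadata queries let analysts see what is available...
print(con.execute("SELECT table_name FROM information_schema.tables").fetchall())

# ...and query it immediately.
print(con.execute("SELECT count(*) FROM iot_readings").fetchone())
```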

IoT data pipelines have to be fast; otherwise, by the time the data is delivered to the analyst, it risks being obsolete. Obsolescence is not the only concern: slow pipelines also drive up computing costs and raise the prospect of data scientists leaving because they have no data to analyze. Data lake engines provide efficient ways to accelerate queries, reducing time to insights.
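
One common acceleration technique is to materialize pre-aggregated summaries of hot data so that interactive dashboards hit a compact table instead of rescanning raw files on every refresh. Some data lake engines automate this kind of materialization; the sketch below does it by hand with DuckDB purely for illustration, and the table and path names are assumptions.

```python
# Acceleration sketch: pre-aggregate raw readings into a small summary table.
import duckdb

con = duckdb.connect("lake_cache.duckdb")

con.execute("""
    CREATE OR REPLACE TABLE daily_sensor_summary AS
    SELECT sensor_id,
           CAST(ts AS DATE) AS day,
           avg(value)       AS avg_value,
           max(value)       AS max_value,
           count(*)         AS readings
    FROM read_parquet('datalake/iot_readings/*/*.parquet')
    GROUP BY sensor_id, day
""")

# Interactive queries now scan the compact summary, not the raw lake data.
print(con.execute(
    "SELECT * FROM daily_sensor_summary ORDER BY day DESC LIMIT 10"
).fetchdf())
```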

IoT data comes in many different shapes and formats. The efficiency of a data pipeline therefore suffers from the number of transformations the data must go through before it reaches a uniform, analyzable format. Take data curation, blending or enrichment as an example: a typical ETL job extracts fields from one source, pulls the fields to be blended from another source over separate database connections, and then copies the resulting dataset into a third repository. Data lake engines reduce this complexity by giving users a self-service environment where they can integrate and curate multiple data sources without moving data out of its original data lake storage location.
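
The sketch below illustrates that blend-in-place idea: a single DuckDB query joins the Parquet readings in the lake with a hypothetical CSV asset registry and returns an enriched result set on the fly, instead of copying both sources into a third repository. The file names and columns are illustrative assumptions.

```python
# Curation/blending sketch: join lake data with a reference file in one query.
import duckdb

con = duckdb.connect()

enriched = con.execute("""
    SELECT r.sensor_id,
           a.site,
           a.asset_type,
           date_trunc('day', r.ts) AS day,
           avg(r.value)            AS avg_value
    FROM read_parquet('datalake/iot_readings/*/*.parquet') AS r
    JOIN read_csv_auto('asset_registry.csv')               AS a
      ON r.sensor_id = a.sensor_id
    GROUP BY r.sensor_id, a.site, a.asset_type, day
""").fetchdf()

print(enriched.head())
```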

IoT data pipelines have been around for a while; as they age, they tend to be rigid, slow, and hard to maintain. Making use of them normally entails significant amounts of money and time. Simplifying these pipelines can drastically improve the productivity of data engineers and data analysts, making it easier for them to focus on gaining value from IoT data. By leveraging data lake engines, enterprises can embark on a more productive path where IoT data pipelines are reliable and fast, and IoT data is always accessible to analysts and data scientists. Data lake engines make it possible for data-driven decisions to be made on the spot, increasing business value, enhancing operations, and improving the quality of the products and services delivered by the enterprise.
