Top 7 data preparation challenges and how to overcome them

Data preparation is a crucial but complex part of analytics applications. Don't let seven common challenges send your data prep processes off track.

The rise of self-service BI tools enabled people outside of IT to analyze data and create data visualizations and dashboards on their own. That was terrific when the data was ready for analysis, but it turned out that most of the effort in creating BI applications involved data preparation. It still does -- and numerous challenges complicate the data preparation process.

Increasingly, those challenges are faced by business analysts, data scientists, data engineers and other non-IT users. That's because software vendors have also developed self-service data preparation tools. Those tools enable BI users and data science teams to perform the required data preparation tasks for analytics and data visualization projects. But they don't eliminate data prep's inherent complexities.

Why is effective data preparation important?

In the modern enterprise, an explosion of data is available to analyze and act upon to improve business operations. Effective data management creates quality data for use in analytics applications and machine learning algorithms that can drive real-time decisions. Organizations gather raw data from various sources, both internal and external. That data can come in different formats and contain errors, typos and other data quality issues. Some of it might be irrelevant to the work at hand.

As a result, the data must be curated to achieve the levels of cleanliness, consistency, completeness, currency and context needed for the planned analytics uses. That makes proper data preparation crucial. Without clean data, BI and analytics initiatives are unlikely to produce the desired outcomes.

Data preparation has to be done within reasonable limits. As Winston Churchill said, "Perfection is the enemy of progress." The goal is to make the data fit for its intended purpose without getting stuck in analysis paralysis or endlessly striving to create perfect data. But it can't be neglected or left to chance.

Top 7 data preparation challenges

To succeed, it's important to understand the challenges that data preparation presents and how to overcome them. Many data preparation challenges could be bundled together under the data quality label, but it's useful to differentiate them into more specific issues to help identify, fix and manage the problems. With that in mind, here are seven challenges to be prepared for.

1. Inadequate or nonexistent data profiling

Data analysts and business users should never be surprised by the state of the data when doing analytics -- or, worse, have their decisions affected by faulty data that they were unaware of. Data profiling, one of the core steps in the data preparation process, should prevent that from happening. But there are different reasons why it might not do so, including the following scenarios:

  • The people who gather and prepare the data assume it's valid because it was already being used in reports or spreadsheets. As a result, they don't fully profile the data. However, unknown to them, things like SQL queries, views, custom code or macros are manipulating the data, which masks underlying problems in the data set.
  • Someone who collects a large volume of data only profiles a sample data set because of the time it would take to profile the full one. However, data anomalies might not be picked up in the sample.
  • Custom-coded SQL queries or spreadsheet functions used to profile data aren't comprehensive enough to find all the anomalies or other problems in the data.

How to overcome this challenge

Solid data profiling needs to be the starting point in the data preparation process. Data preparation tools can help with that: They include comprehensive data profiling functionality to examine the completeness, cleanliness and consistency of data sets in source systems and then in target systems as part of data curation. Done well, data profiling provides the information needed to identify and address many of the data issues listed in the subsequent challenges.
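For teams doing this work in code rather than a dedicated tool, a few pandas calls cover the basic completeness, consistency and cleanliness checks. The following is a minimal sketch in Python; the customers.csv file and the state column are hypothetical placeholders for whatever source extract is being examined.

```python
# Minimal data profiling sketch using pandas.
import pandas as pd

df = pd.read_csv("customers.csv")  # hypothetical source extract

# Completeness: missing values per column.
print(df.isna().sum())

# Consistency: data types and distinct-value counts.
print(df.dtypes)
print(df.nunique())

# Cleanliness: summary statistics surface outliers and odd ranges.
print(df.describe(include="all"))

# Spot-check a categorical field for unexpected codes or typos.
print(df["state"].value_counts(dropna=False))
```

Running the same checks against the curated target data set after preparation helps confirm that the cleansing and transformation steps worked as intended.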

2. Missing or incomplete data

A common data quality issue is fields or attributes with missing values, such as nulls or blanks, zeros that represent a missing value rather than the number 0, or an entire field missing in a delimited file. The data preparation questions raised by these missing values are whether they indicate an error in the data and, if so, how that error should be handled. Can a valid value be substituted in? If not, should the record -- or row -- with the error be deleted, or kept but flagged to show there's an error?

If they aren't addressed, missing values and other forms of incomplete data may adversely affect business decisions driven by analytics applications that use the data. They can also cause data load processes that aren't designed to handle such occurrences to fail. That often results in a scramble to figure out what went wrong and undermines confidence in the data preparation process itself. Errors resulting from missing or incomplete data can also create compliance issues with data governance policies.

How to overcome this challenge

First, you need to do data profiling to identify data that's missing or incomplete. Then, determine what should be done based on the planned use case for the data, and implement the agreed-upon error-handling processes, a task that can also be done with a data preparation tool.
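As a rough illustration of those options, the sketch below uses pandas. The orders.csv file and its columns are hypothetical, and the substitution rule shown -- filling a missing unit price with the median for that product -- is just one example of a business-approved default.

```python
import pandas as pd

df = pd.read_csv("orders.csv")  # hypothetical source extract

# In this feed, a zero unit price really means "unknown," not free.
df["unit_price"] = df["unit_price"].mask(df["unit_price"] == 0)

# Option 1: substitute a valid value, here the median price per product.
df["unit_price"] = df.groupby("product_id")["unit_price"].transform(
    lambda s: s.fillna(s.median())
)

# Option 2: delete rows missing a value that can't be substituted.
df = df.dropna(subset=["order_date"])

# Option 3: keep the rows but flag the error for downstream review.
df["missing_customer_id"] = df["customer_id"].isna()
```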

3. Invalid data values

Invalid values are another common data quality issue. They include misspellings, other typos, duplicate entries and outliers, such as wrong dates or numbers that aren't reasonable given the data's context. These errors can occur even in modern enterprise applications with data validation features and then end up in curated data sets.

If the number of invalid values in a data set is small, they may not have a significant impact on analytics applications. But more frequent errors may result in faulty analysis of the data.

How to overcome this challenge

The tasks to find and fix invalid data are similar to the ones for handling missing values: Profile the data, determine what to do when errors are encountered and then implement functions to address them. In addition, data profiling should be done on an ongoing basis to identify new errors. This is a data preparation challenge where perfection is not likely to be attained -- some errors inevitably slip through. But the intent should be to do whatever it takes to keep them from adversely affecting analytics-driven decisions.
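In code, that profile-then-fix pattern might look like the following pandas sketch. The column names, the plausible quantity range and the valid status set are hypothetical examples of the kinds of rules a team would define for its own data.

```python
import pandas as pd

df = pd.read_csv("orders.csv")  # hypothetical source extract

# Duplicate entries: drop exact repeats of the business key.
df = df.drop_duplicates(subset=["order_id"])

# Wrong dates: coerce unparseable values to NaT so they can be handled.
df["order_date"] = pd.to_datetime(df["order_date"], errors="coerce")

# Outliers: flag quantities outside a plausible range for review.
df["quantity_suspect"] = ~df["quantity"].between(1, 1_000)

# Typos in coded fields: flag anything outside the valid set.
valid_status = {"new", "shipped", "returned"}
df["status_invalid"] = ~df["status"].str.lower().isin(valid_status)
```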

4. Name and address standardization

One more data quality issue that complicates data preparation is inconsistency in the names and addresses of people, businesses and places. This type of inconsistency involves legitimate variations of that data, not misspellings or missing values. But if not caught when preparing the data, such inconsistencies can prevent BI and analytics users from getting a complete view of customers, suppliers and other entities.

Name and address inconsistencies include the following:

  • A shortened first name or nickname versus a person's full name, such as Fred in one data field and Frederick in another.
  • Middle initial vs. middle name.
  • Differences in prefixes and suffixes, such as Ms. vs. Ms, Mr. vs. Mister or Ph.D. vs. PhD.
  • Spelled-out vs. abbreviated place data, such as Boulevard vs. Blvd, suite vs. ste or New York vs. NY.
  • Acronyms vs. full business names, such as BMW and Bayerische Motoren Werke for the German automaker.
  • Businesses alternately being listed with and without Inc., Co., Corp., LLC and other business suffixes at the end of their name.

How to overcome this challenge

The source data schemas must be examined to determine what name and address fields are included, and then the data must be profiled to identify the scope of the inconsistencies. Once you've done that, there are three effective ways to standardize the data:

  1. Create customized standardization processes using a data preparation tool's string-handling functionality (a minimal sketch of this approach follows the list).
  2. Use a data prep tool's prebuilt name and address standardization features.
  3. Use a tool from a software vendor that specializes in name and address standardization, ideally one that integrates with your data preparation tool.
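For the first option, a custom standardization pass built on string functions might look like the following pandas sketch. The abbreviation table and suffix pattern are deliberately small, hypothetical examples; real standardization rules tend to be far more extensive.

```python
import pandas as pd

df = pd.read_csv("customers.csv")  # hypothetical source extract

# Normalize case and whitespace before comparing names.
df["last_name"] = df["last_name"].str.strip().str.title()

# Expand common address abbreviations with a lookup table.
abbreviations = {r"\bBlvd\b": "Boulevard", r"\bSte\b": "Suite", r"\bRd\b": "Road"}
df["address"] = df["address"].replace(abbreviations, regex=True)

# Strip business suffixes so "Acme Inc." and "Acme" match.
df["company_key"] = (
    df["company"]
    .str.replace(r"\b(Inc|Co|Corp|LLC)\.?$", "", regex=True, case=False)
    .str.strip()
)
```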

5. Inconsistent data across enterprise systems

Inconsistent data is also often encountered when multiple data sources are needed for analytics. In this instance, the data might be correct within each source system, but the inconsistency becomes a problem when data from different sources is combined. It's a pervasive challenge for the people who do data preparation, especially in large enterprises.

How to overcome this challenge

When the data inconsistency is the result of an attribute such as an ID field having different data types or values in different systems, data conversions or cross-reference mapping can be used for a relatively easy fix. However, when it occurs because business rules or data definitions are different across the source systems, analysis must be done to determine data transformations that can be implemented while preparing the data.
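A common concrete case is joining customer records from two systems whose IDs and codes don't line up. The sketch below shows the kind of conversion and cross-reference mapping involved; the file names, column names and region codes are hypothetical.

```python
import pandas as pd

crm = pd.read_csv("crm_customers.csv")         # hypothetical CRM extract
billing = pd.read_csv("billing_accounts.csv")  # hypothetical billing extract

# The customer ID is numeric in one system and zero-padded text in the
# other, so convert both to a common format before joining.
crm["customer_id"] = crm["customer_id"].astype(str).str.zfill(8)
billing["customer_id"] = billing["account_ref"].astype(str).str.zfill(8)

# Cross-reference mapping for codes that differ between systems.
region_map = {"NE": "Northeast", "NW": "Northwest", "SE": "Southeast"}
billing["region"] = billing["region_code"].map(region_map)

combined = crm.merge(billing, on="customer_id", how="left")
```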

6. Data enrichment

One of the key steps in creating the business context needed for analytics is enriching data. Data enrichment measures include the following:

  • Calculating business metrics and KPIs.
  • Filtering data based on business rules applicable to planned analytics.
  • Augmenting data with additional internal or external sources.
  • Deriving additional data in an existing data set.

But enriching data isn't an easy task. Deciding what needs to be done in a data set is often complicated, and the required enrichment work can be time-consuming.

How to overcome this challenge

Data enrichment should start with a strong understanding of the business needs and goals for analytics applications. That makes it easier to identify the business metrics, KPIs, augmented data and other enrichments required to meet those needs and then to define things like filters, business rules and calculations to generate the enriched data.
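The sketch below illustrates each of those enrichment measures in pandas: a business-rule filter, a calculated metric, a derived field and an augmentation from a second source. The files, columns and the cancelled-order rule are hypothetical stand-ins for whatever the business requirements call for.

```python
import pandas as pd

orders = pd.read_csv("orders.csv")  # hypothetical curated data set

# Filter on a business rule relevant to the planned analytics.
active = orders[orders["status"] != "cancelled"].copy()

# Calculate a business metric: revenue per order line.
active["revenue"] = active["quantity"] * active["unit_price"]

# Derive additional data from an existing field.
active["order_month"] = pd.to_datetime(active["order_date"]).dt.to_period("M")

# Augment with another source, e.g., a regional targets table.
targets = pd.read_csv("regional_targets.csv")  # hypothetical reference data
enriched = active.merge(targets, on="region", how="left")

# Roll up into a simple KPI: revenue vs. target by region and month.
kpi = enriched.groupby(["region", "order_month"])[["revenue", "target"]].sum()
```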

7. Maintenance and expansion of data prep processes

Although data scientists and other analysts perform many ad hoc tasks, the more impactful data preparation work they do inevitably becomes a recurring process that then expands in scope as the resulting analytics becomes more and more valuable. But organizations often encounter problems with that, especially if they're using custom-coded data preparation methods.

For example, if there's no documentation of a data preparation process, its data lineage and where the data is used, what happens in the process and why are typically known only to the person who created it. That dependency requires such individuals to spend increasingly more time on these processes and makes it hard to sustain the data preparation work when they leave the organization.

In addition, when changes or enhancements to a data preparation process are needed, bolting on new code makes the process more precarious and difficult to maintain.

How to overcome this challenge

Data preparation tools can help you avoid these traps and achieve long-term, sustained success in preparing data. They provide productivity and maintenance benefits, such as prebuilt connectors to data sources, collaboration capabilities, data lineage, where-used tracking and automated documentation, often with graphical workflows.
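Even for teams that stay with custom code rather than a data prep tool, packaging the steps as a single documented function, as in the hypothetical sketch below, makes the process easier to rerun, hand off and extend than a collection of ad hoc scripts.

```python
import pandas as pd

def prepare_orders(path: str) -> pd.DataFrame:
    """Load, clean and enrich the orders extract (hypothetical example).

    Keeping every step in one documented, versioned function means the
    process isn't known only to the person who wrote it.
    """
    df = pd.read_csv(path)
    df = df.drop_duplicates(subset=["order_id"])
    df["order_date"] = pd.to_datetime(df["order_date"], errors="coerce")
    df["revenue"] = df["quantity"] * df["unit_price"]
    return df

# Each scheduled run or future enhancement goes through this one entry point.
orders = prepare_orders("orders.csv")
```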

Final thoughts on data preparation and its challenges

To succeed at data preparation, it's imperative that you first understand what data is needed for an analytics application and the associated business context. Once the relevant data has been gathered from source systems, the key steps in preparing it include the following:

  • Data profiling to identify data quality and consistency issues.
  • Data cleansing to resolve those issues.
  • Data transformation and enrichment to apply the necessary business context for the analytics.

As you go through those steps, do what's appropriate and possible in a reasonable way, especially in cleansing the data. Keep in mind that perfection often isn't attainable or may not be worth the cost to achieve -- and it can be the enemy of progress on data preparation.

Editor's Note: This article was originally published in 2022. TechTarget editors revised it in December 2024 to improve the reader experience.

Rick Sherman, who died in January 2023, was founder and managing partner of Athena Solutions, a BI, data warehousing and data management consulting firm. He had more than 40 years of professional experience in those fields.
