Evolving data integration strategies target new analytics needs
Article 1 of 3
Integration work helps make analytics data more accessible
In many organizations, collecting data isn't a challenge -- but analyzing that "jumbled heap" in their systems is, according to Rick Sherman, managing partner of consultancy Athena IT Solutions. All too often, business executives have access to "plenty of data, but not nearly enough information," Sherman wrote in an April 2017 blog post.
That's driving companies to adopt data integration strategies aimed at making data more accessible for BI and analytics uses. For example, Ebates runs Spark-based extract, transform and load (ETL) jobs to pull operational data on its cash-back shopping rewards program into a Hadoop data lake. It then combines different data sets for analysis in a data warehouse that's also part of the Hadoop system, with AtScale's data platform layered on top to streamline data access and delivery.
"I don't think a data analyst's time is well-spent filtering and preparing data," said Mark Stange-Tregear, the San Francisco company's vice president of analytics. "It's better to do filtered data sets upfront so analysts can get insights very, very quickly."
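The approach described above, running ETL jobs that pull raw operational records and then publishing pre-filtered data sets so analysts don't spend time on preparation, can be sketched in plain Python. Ebates' actual pipeline runs on Spark and Hadoop; the record fields, filter rule and reward calculation below are hypothetical, chosen only to illustrate the extract-transform-load pattern.

```python
# Hypothetical extract-transform-load (ETL) sketch of the pattern described
# above: pull raw operational records, filter and reshape them upfront, and
# load an analyst-ready data set. Field names and rules are illustrative;
# a production pipeline would run equivalent logic as Spark jobs.

def extract(source):
    """Pull raw operational records from a source system (here, a list)."""
    return list(source)

def transform(records):
    """Pre-filter and reshape records so analysts get clean data upfront."""
    cleaned = []
    for r in records:
        if r.get("status") != "completed":   # drop in-flight transactions
            continue
        cleaned.append({
            "member_id": r["member_id"],
            "cash_back": round(r["purchase_amount"] * r["reward_rate"], 2),
        })
    return cleaned

def load(records, warehouse):
    """Append transformed records to an analytics table (a dict of lists)."""
    warehouse.setdefault("rewards", []).extend(records)
    return warehouse

# Example run with mock operational data
raw = [
    {"member_id": 1, "purchase_amount": 100.0, "reward_rate": 0.05,
     "status": "completed"},
    {"member_id": 2, "purchase_amount": 40.0, "reward_rate": 0.02,
     "status": "pending"},
]
warehouse = load(transform(extract(raw)), {})
print(warehouse["rewards"])  # only the completed transaction survives
```

The point of doing the filtering in the transform step, rather than leaving it to analysts, is exactly what Stange-Tregear describes: the curated table is ready for queries the moment it lands.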
Atlanta-based Fidelity Bank is building out a Teradata-based data warehouse to integrate customer account data from various business systems for expanded reporting and analysis. Data analysts and business managers "are clamoring for a lot more analytics data" than they can get from the separate systems, said Eric Martz, a data architect and ETL developer at the bank.
The bank also plans to feed new self-service BI and predictive analytics applications from the data warehouse, Martz said. It's populating the warehouse with the help of software from WhereScape that automates development of the scripts used to load data into the environment.
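Automating load-script development, as Fidelity Bank is doing, generally means generating repetitive loader code from a declarative table definition rather than hand-writing it. The sketch below is not WhereScape's actual output; it is a minimal, hypothetical illustration of template-driven script generation, with invented table and column names.

```python
# Hypothetical sketch of template-driven load-script generation, the general
# technique behind tools that automate warehouse loading. Table and column
# names are invented; real tools emit far richer, vendor-specific scripts.

def generate_load_script(table, columns, staging_prefix="stg_"):
    """Build an INSERT ... SELECT script that moves rows from a staging
    table into the target warehouse table."""
    col_list = ", ".join(columns)
    return (
        f"INSERT INTO {table} ({col_list})\n"
        f"SELECT {col_list}\n"
        f"FROM {staging_prefix}{table};"
    )

script = generate_load_script(
    "customer_accounts", ["account_id", "balance", "opened_on"]
)
print(script)
```

Generating scripts this way keeps naming and load logic consistent across dozens of tables, which is the productivity gain such automation tools are bought for.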
For organizations taking similar steps, this handbook offers advice on implementing data integration strategies to support customer analytics and other BI and analytics applications.