Qlik adds trust score to aid data prep for AI development
By measuring dimensions such as diversity and timeliness, the vendor's new tool helps users understand if their data is properly prepared to inform advanced applications.
With trustworthy, high-quality data key to the successful development and deployment of AI tools, Qlik made a new AI Trust Score generally available.
Launched Tuesday and now part of the Qlik Talend Cloud data integration platform, the tool is designed to enable users to better understand whether their data is properly prepared to form the foundation of an AI model or application.
Qlik's AI Trust Score builds on the capabilities of the Trust Score that Talend released in 2020.
Talend's Trust Score, which Qlik inherited when it acquired Talend in May 2023, assesses the reliability of an organization's data by looking at traditional data quality dimensions such as completeness, discoverability and usage. The Qlik AI Trust Score adds dimensions that relate to AI and machine learning, including data diversity to avoid bias in training and timeliness to ensure the data's relevance and accuracy.
In general, tools such as trust scores for AI are valuable because they help organizations easily identify potentially significant problems, such as bias and inaccuracy, before data is fed into models, according to Mike Leone, an analyst at Enterprise Strategy Group, now part of Omdia.
"By creating a common language and metrics around data readiness, it bridges the confidence gap between technical and business teams, ultimately accelerating responsible AI adoption while reducing risks," he said.
While SAS is one competitor that provides similar capabilities, tools that continuously monitor data's preparedness are not widely available, Leone continued. As a result, the AI Trust Score is a significant addition for Qlik users.
"It directly addresses the fundamental blind spot many enterprises face when it comes to trusting AI, which is not knowing if their data is trustworthy or the best fit for an AI model," Leone said.
Based in King of Prussia, Pa., Qlik is a longtime analytics vendor that in recent years has added data integration and AI development capabilities. The new AI Trust Score was first unveiled in July 2024 when Qlik Talend Cloud was made generally available.
Building trust
AI has been the dominant trend in data management and analytics since OpenAI's November 2022 launch of ChatGPT marked a significant improvement in generative AI (GenAI) technology.
With GenAI tools capable of making workers better informed and more efficient, many enterprises have boosted their investments in AI development. Meanwhile, given that data provides AI tools with their intelligence, data management and analytics vendors have responded by creating environments that make it easy for customers to combine their proprietary data with AI models to build AI tools that understand their unique characteristics.
For those AI tools to be of use, however, the data used to inform them needs to be high-quality and relevant. If it isn't AI-ready, the likelihood of AI hallucinations increases.
Qlik's AI Trust Score helps ensure that only AI-ready data is used to inform AI tools by grading an enterprise's data across a series of AI-specific dimensions and providing users with a single score that shows whether the data in question is trustworthy. In addition, when data has problems, the score shows where the breakdowns are so they can be addressed during development rather than after the data has been used to inform AI tools and is already causing bias, model drift or incorrect outputs.
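The dimensions named in the article suggest a simple mental model: each dimension yields a normalized score, and the overall trust score is a roll-up that also flags the dimensions dragging the number down. The Python sketch below illustrates only that idea; Qlik does not publish its scoring formula, and the weights, the flag threshold and the weighted average used here are assumptions for demonstration.

```python
# Illustrative sketch only: Qlik does not publish the AI Trust Score formula.
# The dimension names come from the article; the weights, the 0.7 flag
# threshold and the weighted average below are assumptions.

from dataclasses import dataclass


@dataclass
class DimensionScore:
    name: str     # e.g. "completeness", "diversity", "timeliness"
    score: float  # normalized 0.0 - 1.0
    weight: float # assumed relative importance


def aggregate_trust_score(dimensions, flag_below=0.7):
    """Roll per-dimension scores into one number and flag the weak spots."""
    total_weight = sum(d.weight for d in dimensions)
    overall = sum(d.score * d.weight for d in dimensions) / total_weight
    weak = [d.name for d in dimensions if d.score < flag_below]
    return round(overall, 2), weak


# Example: data that is complete but lacks diversity and freshness.
dims = [
    DimensionScore("completeness",    0.95, 0.20),
    DimensionScore("discoverability", 0.85, 0.15),
    DimensionScore("usage",           0.80, 0.15),
    DimensionScore("diversity",       0.55, 0.25),  # potential training bias
    DimensionScore("timeliness",      0.60, 0.25),  # stale records
]

score, weak_dimensions = aggregate_trust_score(dims)
print(score, weak_dimensions)  # roughly 0.72, ['diversity', 'timeliness']
```

The single number plus per-dimension flags mirrors the workflow the article describes: a team sees one headline score, then drills into the flagged dimensions before the data ever reaches a model.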
Customers expressing difficulty understanding whether their data was properly prepared for AI was the primary motivator for developing the AI Trust Score, according to Drew Clarke, Qlik's executive vice president of products and technology.
"As they began deploying GenAI in real-world workflows, they realized they had no reliable way to verify whether the underlying data was truly AI-ready," he said. "Combined with high failure rates and growing governance pressure, it became clear that teams needed more than traditional data quality. They needed a new kind of signal built specifically for AI risk and readiness."
Beyond assessing AI-specific dimensions, Qlik is providing Qlik Trust Score historicization as part of the AI Trust Score's launch so that users can monitor data quality trends over time. The feature is significant because it has the potential to help customers understand why an AI tool might be deteriorating, according to Leone.
"The Qlik Trust Score historicization can address the 'black box' component of AI performance degradation, " he said. "Customers have a way to understand why a model suddenly fails, by being able to trace issues directly to specific data quality shifts over time."
In addition, the vendor unveiled plans to add an AI-native data stewardship environment within Qlik Talend Cloud aimed at helping users detect and resolve issues earlier in the data lifecycle than the development stage.
The data stewardship environment is scheduled for general availability in the fall and includes automated rules and platform-wide governance capabilities aimed at making data quality remediation more effective.
Looking ahead
Beyond the data stewardship environment, Qlik plans to augment its AI Trust Score by assessing new dimensions such as security and large language model (LLM) readiness over the next six months, according to Clarke. In addition, adding capabilities that better support data engineering for agentic AI is a prominent part of the vendor's roadmap, he added.
"In the second half of 2025, we're focused on closing the gap between trusted data and responsible AI execution," Clarke said.
Qlik's focus on adding more capabilities to its AI Trust Score is wise, according to Leone. In particular, the vendor could extend the feature to incorporate more of the AI lifecycle, including MLOps and LLMOps, he said.
In addition, Qlik should do more to demonstrate the AI Trust Score's real-world value by showing how customers are using it to improve their AI tools, Leone continued.
"There's a gap in the market in this area," he said. "With the idea that AI requires not just initially high-quality data but consistently reliable data throughout its operational lifecycle, continuous monitoring of data quality is essential."
Eric Avidon is a senior news writer for Informa TechTarget and a journalist with more than 25 years of experience. He covers analytics and data management.