Data Quality: The Foundation for AI Growth

AI implementation often stalls due to poor data quality. Snowflake’s Martin Frederik emphasizes that a robust data strategy is crucial; AI is only as good as the data it uses. Successful AI projects require clear business alignment, addressing data challenges from the start, and viewing AI as an enabler, not the end goal. Key factors include accessible, governed, and centralized data platforms and breaking down data silos. The future lies in AI agents capable of reasoning across diverse data, empowering users and freeing data scientists for strategic tasks.

As businesses aggressively pursue AI implementation, a critical bottleneck is emerging: data quality. Many ambitious AI initiatives are stalling, failing to progress beyond the proof-of-concept phase due to insufficient or poorly managed data.

CNBC recently spoke with Martin Frederik, regional leader for the Netherlands, Belgium, and Luxembourg at data cloud company Snowflake, to understand how companies can translate these experiments into tangible revenue-generating assets.

“There’s no viable AI strategy without a robust data strategy,” Frederik emphasizes. “AI applications, agents, and models are only as potent as the data that fuels them. Without a unified and well-governed data infrastructure, even the most sophisticated models will encounter inherent limitations and, ultimately, underperform.”

Improving Data Quality: A Prerequisite for AI Project Success

The scenario is common: a promising AI proof-of-concept generates initial excitement but fails to evolve into a deployable solution that delivers business value. Frederik points out that this often stems from businesses prioritizing the technology itself over the underlying business objectives it should be serving. This perspective is echoed by industry analysts, who note that a significant percentage of AI projects falter due to a lack of clear business alignment and a failure to address the data challenges from the project’s inception.

[Photo: Martin Frederik, regional leader for the Netherlands, Belgium, and Luxembourg at Snowflake.]

“AI should be viewed as the enabler, not the end goal. It’s the vehicle to achieving your core business ambitions,” Frederik explains.

When AI projects stagnate, the causes often include misalignment with business requirements, internal communication breakdowns, and, most importantly, disorganized or unreliable data. While failure rates for AI projects can be discouragingly high, Frederik suggests viewing these setbacks as part of the maturation process: an iterative journey toward AI proficiency. This aligns with wider industry thinking; companies increasingly recognize that AI success emerges from ongoing improvement and flexible adjustment as issues surface.

Recent Snowflake research suggests the financial benefits of getting the data foundation right are considerable: a substantial proportion of surveyed companies report a positive return on their AI investments. Frederik stresses that this result is largely predicated on establishing and maintaining a “secure, governed, and centralized platform” from the outset.
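A governed platform implies, at minimum, automated quality gates that keep bad records out of AI pipelines. The sketch below is an illustrative assumption of what such a gate might look like, not Snowflake's actual implementation; the field names and rules are invented for the example.

```python
# Minimal sketch of a data-quality gate that might run before records
# reach an AI pipeline. Field names ("customer_id", "amount") and rules
# are illustrative assumptions, not any vendor's real implementation.

def validate_records(records, required_fields=("customer_id", "amount")):
    """Split records into clean rows and rejects, recording a reason for each reject."""
    clean, rejects = [], []
    for row in records:
        missing = [f for f in required_fields if row.get(f) is None]
        if missing:
            rejects.append((row, f"missing fields: {missing}"))
        elif row["amount"] < 0:
            rejects.append((row, "negative amount"))
        else:
            clean.append(row)
    return clean, rejects

if __name__ == "__main__":
    sample = [
        {"customer_id": "c1", "amount": 42.0},
        {"customer_id": None, "amount": 10.0},   # fails completeness check
        {"customer_id": "c3", "amount": -5.0},   # fails sanity check
    ]
    clean, rejects = validate_records(sample)
    print(len(clean), len(rejects))  # 1 clean row, 2 rejected
```

In practice these checks would run continuously inside the platform, so every downstream model and agent consumes the same validated data rather than each team cleaning its own copy.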

Beyond Technology: The Human Element

Even with cutting-edge technology, an AI strategy is likely to fail if the company’s culture isn’t prepared. One of the primary obstacles is ensuring broad access to data, extending it beyond a small group of data scientists. Scaling AI requires a strong foundation within a company’s “people, processes, and technology.”

This approach necessitates the breaking down of departmental silos and enabling accessibility to high-quality data and AI tools company-wide.

“With the appropriate data governance framework, AI transitions from a siloed tool to a shared resource,” Frederik observes. The impact of a single source of truth is substantial: instead of debating the validity of differing data points, teams can drive more efficient and data-informed decision-making.

Toward Self-Reasoning AI

The latest advancement is the development of AI agents capable of processing and reasoning across multiple data types, regardless of structure or origin: structured data in a database, or unstructured information in videos and emails. Since the vast majority of a company’s data is unstructured, this is a significant milestone. The promise of generative AI lies in its capacity to unlock this “dark data,” delivering business intelligence where traditional systems could not reach.

New tools are empowering users, regardless of their technical expertise, to ask complex questions in natural language and receive answers directly from the data.
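The usual pattern behind such tools is translating the natural-language question into a database query. The toy sketch below substitutes a hard-coded template for the language-model translation step; the table, data, and recognized question are invented for illustration.

```python
# Toy illustration of "ask a question in natural language, get an answer
# from the data." Real products use a language model to generate SQL;
# here a hard-coded template stands in for that step. The sales table
# and its contents are invented for the example.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, revenue REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("EMEA", 120.0), ("EMEA", 80.0), ("APAC", 50.0)])

def answer(question):
    # Stand-in for model-generated SQL: one recognized question shape.
    if "total revenue" in question.lower() and "emea" in question.lower():
        sql = "SELECT SUM(revenue) FROM sales WHERE region = 'EMEA'"
        return conn.execute(sql).fetchone()[0]
    raise ValueError("question not understood by this toy example")

print(answer("What is the total revenue in EMEA?"))  # 200.0
```

The point of the pattern is that the business user never sees the SQL: the question goes in, the governed data answers.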

Frederik identifies this as a step toward “goal-directed autonomy.” AI has historically functioned as a helpful, but directed, assistant. “Previously, you had to ask a question to receive an answer, or ask for code to get a snippet,” he adds.

The next generation of AI operates differently. Users can assign an agent a complex objective, and the agent will independently determine the necessary steps, including code generation and information retrieval from other applications, to deliver a comprehensive solution. This enhanced automation will free up data scientists from “tedious data cleaning” and “repetitive model tuning.”
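The shift described above, from answering one prompt at a time to pursuing an objective, can be caricatured as a plan-and-execute loop. Everything in this sketch (the step registry, the goal string, the step order) is an invented assumption meant only to show the shape of the idea.

```python
# Schematic sketch of "goal-directed autonomy": given an objective, an
# agent selects and runs the needed steps itself, instead of returning
# a single answer. The steps and goal below are illustrative assumptions.

STEPS = {
    "fetch":  lambda state: {**state, "rows": [3, 1, 2]},             # e.g. pull data from another app
    "clean":  lambda state: {**state, "rows": sorted(state["rows"])}, # e.g. tidy it up
    "report": lambda state: {**state, "summary": sum(state["rows"])}, # e.g. summarize for the user
}

def run_agent(goal):
    """Plan the steps a goal requires, then execute them in order."""
    plan = ["fetch", "clean", "report"] if goal == "revenue summary" else []
    state = {}
    for step in plan:
        state = STEPS[step](state)
    return state

result = run_agent("revenue summary")
print(result["summary"])  # 6
```

A real agent would generate the plan with a model and the steps might call external tools, but the structure, decompose the goal, execute, carry state forward, is the same.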

The net result is a strategic shift of intellectual resources: talented team members move “from practitioner to strategist,” allowing them to concentrate on high-impact activities and generate greater value for the business. That can only be beneficial for enterprises competing in an ever-shifting economy.

Original article, Author: Samuel Thompson. If you wish to reprint this article, please indicate the source: https://aicnbc.com/9811.html
