Published on October 8, 2024
The debate over the role of AI governance in business success is not just about compliance or ethical concerns—it's a question of whether companies can realize the financial potential that AI promises to unlock. As AI moves from concepts and proofs-of-concept (POCs) to a core component of business operations, the real question for business leaders is not if AI governance matters, but how it directly influences their bottom line.
Unfortunately, many organizations underestimate the importance of AI-ready data and governance, assuming they can retrofit data and AI strategies once their AI/ML models are in place. This assumption leads to project delays, underperformance, and inflated costs, all of which can be avoided with proper governance from the outset.
This blog argues that robust AI governance should be a forethought, not an afterthought. AI governance, grounded in trustworthy data, is essential for ensuring AI's financial returns and long-term business viability. Business leaders who fail to prioritize this foundational step set themselves up for failure.
The argument for AI governance is not just theoretical; it's backed by measurable financial outcomes. Research by Gartner indicates that companies with mature data and AI governance frameworks see a 21-49% improvement in financial performance, and those that also improve their data culture maturity can see improvements as high as 54%. This correlation is not coincidental.
Reliable data, made possible by robust governance, drives faster decision-making, reduces risk, and allows AI to operate efficiently. For example, a company with a well-structured AI governance framework can develop trusted AI models faster, leading to better decision-making and ultimately, better financial outcomes.
GXS Bank is a digital bank based in Singapore that launched in 2022. The bank leverages alternative data sources to offer financial products to customers underserved by traditional banks. AI is a critical piece of its strategy. As Dr. Geraldine Wong, Chief Data Officer, explains, “There’s a lot of skepticism on what AI can do. We need to trust the data that goes into the AI models. If organizations and their customers are able to trust the data that the organization is using for such models, then I think that’s a good starting point to building that trust for AI governance or responsible AI.”
Businesses that neglect data governance often struggle to make their AI projects pay off. Without proper governance, poor data quality becomes a critical issue, directly impacting the performance of AI models. Inaccurate or inconsistent data leads to flawed predictions, errors, and costly delays, ultimately resulting in financial losses. According to Gartner, “By 2027, 60% of organizations will fail to realize the anticipated value of their AI use cases due to incohesive ethical governance frameworks.” This highlights the essential role data governance plays in ensuring AI success by maintaining data quality and ethical standards throughout the AI lifecycle.
AI and Gen AI initiatives have become pervasive in enterprises today. Yet, according to McKinsey, just “18 percent [of business leaders surveyed] say their organizations have an enterprise-wide council or board with the authority to make decisions involving responsible AI governance, and only one-third say gen AI risk awareness and risk mitigation controls are required skill sets for technical talent.” This seems to imply that most organizations think that they can catch up on governance after their AI systems are operational.
However, leaders taking this approach put their companies at unnecessary risk. Without proper governance frameworks in place from the start, these companies encounter issues with data inconsistency, compliance failures, and decision-making errors that hinder AI’s effectiveness. Not only does this increase operational costs, but it also leads to missed opportunities that are critical in fast-moving markets.
One data governance leader shared, “We scratch our heads and say, ‘What data are you looking at? Where is this going? Who has access to this?’ By the time it gets to production, it’s sometimes too late, and we just have to make it work.”
Still not convinced of the value of AI governance? Just open your favorite news feed and look at the stiff penalties imposed on companies that are lax in their data, analytics, and AI governance efforts. For example, Citigroup was recently fined $136 million for failing to address data management issues dating back to 2020. Would a robust governance framework have helped prevent these penalties? Almost certainly.
CEOs and CIOs need to understand that AI governance is not just a technical issue; it’s a business issue. A lack of leadership in AI governance can derail the very initiatives designed to spur growth, productivity, and business value. When AI governance is treated as an afterthought, it results in wasted resources and missed opportunities. On the other hand, a well-structured governance framework ensures that AI investments are aligned with business goals, delivering measurable value and minimizing risks.
For leaders, the argument is simple: ignoring AI governance equates to missed financial opportunities. A proactive approach, where governance frameworks are aligned with organizational objectives from the outset, results in stronger financial returns, better risk management, and a measurable impact on the bottom line.
Business leaders, particularly CEOs and CIOs, must take an active role in setting the vision for AI governance. This includes empowering CDAOs to enforce governance policies and making AI governance a continuous, evolving process that aligns with business objectives.
At the core of any successful AI initiative lies the quality of the data it uses. AI/ML models are only as good as the data they are fed, making clean, reliable data an essential component of AI success.
Program managers must also contend with rapidly increasing data volumes to feed such models. As one data governance leader of a large financial services firm put it, “What’s happening [with AI] is the amount of new datasets being produced to feed these models for training purposes, retraining, and fine-tuning is increasing rapidly. We are going to see new data sets emerge very quickly.”
When data is compromised—whether through errors, inconsistencies, anomalies, or missing values—AI/ML models fail to deliver accurate predictions, ultimately leading to poor business outcomes.
Despite this, some organizations believe that AI systems can compensate for poor data quality, assuming that the models can somehow "clean up" messy data on their own. While AI excels at identifying patterns, it cannot transform bad data into meaningful insights. Relying on flawed data often results in an underperforming AI/ML model that wastes time, resources, and most importantly, erodes trust in AI’s effectiveness within the organization.
To ensure that AI models perform optimally, organizations must prioritize data quality practices from the outset. This includes:
Data profiling: Use automated data profiling and cleansing tools to regularly check for and resolve data integrity issues. Continuous monitoring ensures that problems are identified and addressed before they cascade into larger issues that undermine AI outputs (see the profiling sketch after this list).
Making AI lineage traceable: AI practitioners need full visibility into the AI lifecycle, including end-to-end lineage from datasets to AI models, with cataloged datasets, LLM prompts, and output data in a single source. This enables organizations to safeguard compliance, build trust, and troubleshoot issues proactively.
Centralizing metadata: Tags, governance policies, and other DQ indicators should be accessible in a single catalog for AI practitioners to reference, as these are critical inputs for model development.
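To make the first practice concrete, here is a minimal data-profiling sketch in Python, assuming pandas and a hypothetical customers.csv dataset with an assumed 5% null-rate tolerance; production teams would typically use dedicated profiling tools, but the checks are the same in spirit:

```python
# Minimal data-profiling sketch (illustrative only).
# The file name, columns, and 5% null-rate tolerance are hypothetical.
import pandas as pd

def profile(df: pd.DataFrame) -> pd.DataFrame:
    """Summarize dtype, null rate, and cardinality for every column."""
    return pd.DataFrame({
        "dtype": df.dtypes.astype(str),
        "null_pct": (df.isna().mean() * 100).round(2),
        "n_unique": df.nunique(),
    })

df = pd.read_csv("customers.csv")            # hypothetical dataset
report = profile(df)
print(f"duplicate rows: {df.duplicated().sum()}")

# Surface columns whose null rate exceeds the (assumed) tolerance before
# the problem cascades into model training.
flagged = report[report["null_pct"] > 5.0]
print(flagged)
```

Running a check like this on every new dataset before it reaches a training pipeline is what turns data profiling from a one-off audit into the continuous monitoring described above.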
Some may argue that maintaining high-quality data is the sole responsibility of data or IT teams. They certainly have a role to play, but data quality must be a shared responsibility across all business units, creating accountability and ensuring that every team understands its role in safeguarding the integrity of the data.
In any AI deployment, the risks associated with data breaches and unauthorized access to sensitive information are significant. This is particularly true when dealing with personal or confidential data. A failure to properly secure this data can lead to severe consequences, including substantial fines and long-lasting reputational damage. For example, Meta was fined $1.3B for violating E.U. privacy rules, T-Mobile was fined $60M for unauthorized data access, and AT&T was fined $13M for a sensitive data leak through a vendor. Businesses that fail to prioritize security may find themselves facing legal penalties or losing the trust of their customers.
Where did these organizations go astray? Many leaders believe that an early focus on security can slow down AI innovation, creating friction in fast-moving projects. However, neglecting security from the start can lead to far more costly problems later. Delays caused by breaches, legal battles, or the loss of public trust tend to be much more expensive than investing in secure data practices upfront. To this end, many data leaders I’ve spoken with discuss the convergence of data and AI governance. As one data governance expert explained, “We specifically are not mandated to do AI governance. But the lines are going to blur pretty closely because all of the machine learning AI is very heavily data dependent.”
To mitigate these risks, organizations should implement key security measures from the beginning. Encryption, role-based access controls, data masking, and other safeguards help ensure that sensitive data is only accessible to those who are authorized. Additionally, aligning data policies with industry regulations and frameworks, such as GDPR, HIPAA, the EU AI Act, The Blueprint for an AI Bill of Rights, and others, can help avoid compliance-related issues. Real-time monitoring systems should also be in place to detect any breaches as they happen, enabling the organization to respond quickly and minimize the impact.
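To illustrate one of these safeguards, here is a minimal sketch of role-based data masking in Python; the roles, sensitive fields, and masking rule are hypothetical assumptions, not a complete access-control implementation:

```python
# Minimal role-based masking sketch; the roles, sensitive fields, and
# masking rule below are hypothetical illustrations, not a full control.
SENSITIVE_FIELDS = {"ssn", "email", "account_number"}
PRIVILEGED_ROLES = {"compliance_officer", "dpo"}   # assumed privileged roles

def mask(value: str, keep_last: int = 4) -> str:
    """Replace all but the last few characters with '*'."""
    return "*" * max(len(value) - keep_last, 0) + value[-keep_last:]

def view_record(record: dict, role: str) -> dict:
    """Return a copy of the record, masking sensitive fields for
    unprivileged roles (a simple form of role-based access control)."""
    if role in PRIVILEGED_ROLES:
        return dict(record)
    return {k: mask(v) if k in SENSITIVE_FIELDS else v
            for k, v in record.items()}

record = {"name": "Ada", "ssn": "123-45-6789", "balance": "1024.50"}
print(view_record(record, role="analyst"))  # SSN masked: '*******6789'
print(view_record(record, role="dpo"))      # full, unmasked view
```

The design point is that masking happens at read time based on the requester's role, so sensitive values never leave the governed layer for unauthorized users.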
AI models, when not properly managed, can introduce significant ethical risks. Biases embedded in the data can lead to decisions that are unfair or discriminatory, which can harm an organization’s reputation and result in legal issues. The trust of stakeholders—whether they are customers, employees, or regulators—hinges on the responsible use of AI, and organizations must take proactive steps to manage the ethical dimensions of these systems.
There is a misconception that AI ethics can be addressed after the deployment of a model. Some organizations believe they can handle ethical concerns reactively, but this approach often proves costly. AI models learn directly from the data they are trained on, meaning any biases present in that data will be reflected and even magnified in the model’s outputs. Attempting to correct ethical issues after a model is in production is usually too late; by that point, the damage to the organization’s reputation or the risk of regulatory consequences may already have occurred. In other words, you need to bake AI ethics into your model development and deployment processes.
To address these challenges, companies should put governance structures in place to ensure ethical AI use from the start. One way to do this is by establishing an AI ethics board that can oversee the development and deployment of models, ensuring that ethical considerations are built into the process. Organizations should also implement transparent practices, allowing stakeholders to understand and trust how AI decisions are made—especially in industries where regulatory scrutiny is high. Finally, teams working with AI must be trained on the ethical implications of the technology to ensure that their practices align with the company’s values and broader societal expectations.
AI governance becomes most effective when it is integrated across all areas of an organization, bringing together data and AI teams, legal departments, and business units. This cross-functional collaboration ensures that governance policies are both comprehensive and aligned with the company's overall goals. When different departments work together, they can identify gaps or overlaps in governance, which helps prevent issues that may arise later in the AI development process.
Some might argue that involving too many teams in AI governance could slow down development and innovation. However, working in silos often results in more significant problems, such as inconsistencies in policy enforcement or incomplete governance coverage. These gaps can delay project timelines or, worse, cause AI models to fail entirely because critical considerations weren't addressed early on.
Technology such as a data intelligence platform can greatly enhance the efficiency of AI governance by automating routine tasks like data monitoring and compliance enforcement. Tools such as AWS Glue and Amazon SageMaker allow organizations to handle these processes automatically, ensuring that data is secure and easily accessible without requiring extensive manual intervention.
Automation not only saves time but also helps prevent human error, which can lead to costly mistakes. However, catalog and governance platforms such as AWS Glue, Microsoft Purview, Snowflake Polaris, and Databricks Unity Catalog are limited to the environments they were born in, which is why we recommend a tool-agnostic data governance platform.
Some might question whether investing in advanced governance tools is necessary, believing that manual processes are sufficient. However, manual governance is prone to inefficiencies and mistakes, particularly as organizations' data landscapes scale. In the long run, relying on outdated methods increases the risk of data breaches or compliance failures, both of which could be more expensive than the initial investment in automation tools.
For AI governance to succeed, it must be treated as more than just an IT or compliance responsibility. Business leaders need to establish a culture where governance is considered everyone's job, with a clear alignment between AI governance practices and the company’s broader business goals. When governance is tied to specific objectives, it becomes a tool that helps ensure AI projects contribute to overall business success.
Some believe that governance should remain the responsibility of data and IT teams. However, without company-wide ownership, governance often fails to support broader business strategies. This disconnect can lead to inefficiencies, misaligned priorities, and ultimately, underperforming AI initiatives. Taking a people-centric approach and ensuring that every part of the organization is invested in governance creates a cohesive system that supports sustainable, long-term AI growth.
To assess the effectiveness of AI governance, organizations must rely on specific key performance indicators (KPIs) that measure data quality, regulatory compliance, operational efficiency, and AI model performance. These metrics help determine whether governance efforts are producing the desired outcomes. A short sketch after the list shows how a few of them can be computed.
Bias detection: Disparate impact ratio, fairness score, demographic parity.
Fairness & transparency: Explainability score, interpretability rate, stakeholder feedback.
Regulatory compliance: Audit frequency, compliance incidents, adherence to regulations (e.g., GDPR).
AI performance:
Model accuracy: Precision, recall, F1 score.
Efficiency: Processing time per transaction, throughput rate, system resource utilization.
Response time: Latency, time-to-decision, API call response times.
LLM-specific: Perplexity, inference speed, token efficiency.
Data quality:
Accuracy: Error rate, false data points, deviation from ground truth.
Completeness: Missing data percentage, null values count, record coverage.
Consistency: Inconsistent data occurrences, data duplication, reconciliation errors.
Security & privacy:
Data protection: Encryption level, security incident rate, vulnerability detection rate.
Privacy compliance: Number of privacy breaches, user consent records, privacy audit results.
Access control: Unauthorized access attempts, role-based access updates, privileged access review frequency.
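Several of these KPIs are straightforward to operationalize. Here is a minimal Python sketch, assuming pandas and a hypothetical decision log with group, approved, and actual columns, that computes a disparate impact ratio, a missing-data percentage, and an F1 score:

```python
# Sketch of a few governance KPIs from the list above; the dataframe
# and its columns ("group", "approved", "actual") are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "group":    ["A", "A", "B", "B", "B", "A"],
    "approved": [1,   0,   1,   1,   0,   1],   # model decision
    "actual":   [1,   0,   1,   0,   0,   1],   # ground truth
})

# Bias detection: disparate impact ratio = selection rate of the least-
# selected group divided by that of the most-selected group.
rates = df.groupby("group")["approved"].mean()
disparate_impact = rates.min() / rates.max()

# Data quality: missing data percentage across the whole dataset.
missing_pct = df.isna().mean().mean() * 100

# AI performance: precision, recall, and F1 from decisions vs. ground truth.
tp = ((df.approved == 1) & (df.actual == 1)).sum()
fp = ((df.approved == 1) & (df.actual == 0)).sum()
fn = ((df.approved == 0) & (df.actual == 1)).sum()
precision = tp / (tp + fp)
recall = tp / (tp + fn)
f1 = 2 * precision * recall / (precision + recall)

print(f"disparate impact: {disparate_impact:.2f}, "
      f"missing: {missing_pct:.1f}%, F1: {f1:.2f}")
```

Tracking numbers like these over time, rather than computing them once, is what turns a KPI list into a working governance dashboard.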
Admittedly, it initially seems difficult to measure the success of AI governance. However, by focusing on clear and specific metrics such as data accuracy and process efficiency, organizations can make the impact of their governance efforts more transparent and actionable. These metrics provide tangible evidence of governance improvements and help guide future decision-making.
Alation provides the tools and support needed to turn AI governance from a theoretical concept into a practical, value-driven reality.
Alation helps companies:
Find and understand AI-ready data with intelligent search & discovery
Record, document, and collaborate on AI models with AI model documentation and model products
Improve AI safety with AI governance policies, lineage, and auditability
With Alation, data leaders get a collaborative hub to surface the most accurate, trusted datasets for AI models. As a repository of metadata, the platform is ideally suited for AI model documentation and auditable AI lineage, helping to mitigate risk. In this way, business leaders who partner with Alation are better equipped to navigate the complexities of AI governance, ensuring that their AI investments deliver real-world financial returns while maintaining security, compliance, and ethical standards.
For CEOs, CDAOs, and governance teams, AI governance is critical for realizing financial gains and minimizing risks. Failing to prioritize governance, as seen with Citigroup’s $136 million fine for unresolved data issues and T-Mobile’s $60 million penalty for unauthorized data access, can lead to costly setbacks. Leaders must prioritize data quality, security, and ethics from the start to minimize risks and drive better decision-making. By embedding governance into all functions and leveraging the right tools, organizations can achieve stronger returns on AI investments. Ultimately, leadership commitment to governance ensures long-term success and resilience in an increasingly AI-driven world.
Business leaders who fail to prioritize AI governance are setting themselves up for long-term underperformance. Now is the time to take AI governance seriously, ensuring that AI delivers real value without unnecessary risks or inefficiencies.
Organizations aiming to harness generative AI face two key obstacles: a lack of expertise and the need to ensure accurate outputs. Watch this webinar, Building Trust in AI: Best Practices for AI Governance, from IDC's Stewart Bond, to learn how to prepare your AI initiatives for success.
Read the press release on our new AI governance solution.
Curious to see how Alation can help you implement governance for AI? Book a demo with us today.
Krishnan, Akash, Brian Foster, Kalpana Tokas, and Saul Judah. 2024. “Data Governance and Management Investments Boost Financial Performance.” Gartner Inc. https://www.gartner.com/document-reader/document/5405063.
James, Sarah, and Alan D. Duncan. 2024. “Over 100 Data, Analytics and AI Predictions Through 2030.” Gartner Inc. https://www.gartner.com/document-reader/document/5519695.
Singla, Alex, Alexander Sukharevsky, Lareina Yee, Michael Chui, and Bryce Hall. 2024. “The state of AI in early 2024: Gen AI adoption spikes and starts to generate value.” McKinsey & Company. https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai#/.
Schroeder, Pete, and Tatiana Bautzer. 2024. “US regulators fine Citi $136 million for failing to fix longstanding data issues.” Reuters. https://www.reuters.com/business/finance/us-bank-regulators-fine-citi-136-million-failing-address-longstanding-data-2024-07-10/.
Satariano, Adam. 2023. “Meta Fined $1.3 Billion for Violating E.U. Data Privacy Rules.” The New York Times. https://www.nytimes.com/2023/05/22/business/meta-facebook-eu-privacy-fine.html.
Alper, Alexandra, and Eric Beech. 2024. “US fines T-Mobile $60 million over unauthorized data access.” Reuters. https://www.reuters.com/business/media-telecom/us-committee-slaps-60-million-fine-t-mobile-over-unauthorized-data-access-2024-08-14/.
Brodkin, Jon. 2024. “AT&T fined $13M for data breach after giving customer bill info to vendor.” Ars Technica. https://arstechnica.com/tech-policy/2024/09/att-fined-13m-for-data-breach-after-giving-customer-bill-info-to-vendor/.