Published on December 9, 2024
Artificial intelligence (AI) is reshaping industries, driving innovation, and opening up new economic opportunities. According to a 2023 report by McKinsey, generative AI has the potential to add between $2.6 trillion and $4.4 trillion in value to the global economy annually.
How can organizations seize this massive opportunity? Innovation demands a framework to ensure consistent processes and practices. Increasingly, leaders are turning to AI governance to deliver that framework.
In a recent webinar, leading industry analyst Stewart Bond revealed how these leaders should approach AI governance as a foundation for their AI initiatives. This blog shares key takeaways from that webinar for anyone exploring actionable strategies to advance AI governance. Let’s dive in!
AI has rapidly progressed from experimentation to deployment, with an estimated global economic impact of $19.4 trillion by 2030.
Yet, many organizations face hurdles in scaling AI responsibly. As Bond highlighted, “Breaking down the barriers requires a structured and agile approach to AI projects.”
Key governance challenges identified in the webinar include:
Data quality and transparency: Concerns about data accuracy, toxicity, and intellectual property (IP) rights are widespread.
Compliance and privacy: Especially in regulated industries, ensuring AI models adhere to policies is paramount.
Skills and collaboration: Lack of AI expertise and disjointed collaboration between IT and business units can impede progress.
David Chao, CMO of Alation, underscored the importance of transparency, compliance, and privacy, particularly when it comes to using customer data, noting, “That's when the need to access data is so pertinent, but at the same time so fraught because of the perceived risks around compliance and leakage of data, as well as around fairness of how that model is being trained.” AI governance can mitigate much of that risk, ensuring compliance while enabling innovation.
IDC’s research reveals that a unified governance model is essential for fostering transparency, accountability, and fairness in AI. Bond stressed that the business and AI strategies must intersect; these are not mutually exclusive. “At the heart of the governance is your AI technology architecture, which consists of your data, your applications, platforms, and infrastructure,” he shared. “Your governance journey can start anywhere in the framework, but many start with data and model transparency.”
Bond emphasized the importance of balance across the framework. “You need to balance that execution and what you're doing in these different areas to really help you achieve AI that's transparent and explainable, accountable, and reliable [so that it] supports diversity and fairness. You can't just focus on the infrastructure. You can't just focus on the data. You can't just focus on the applications. You need to bring all of those things together to really pull that out.”
Bond presented a unified governance framework encompassing:
Data Governance: Managing data quality, lineage, and security.
Model Governance: Documenting AI models with attributes like training data provenance, intended use, and limitations.
Stakeholder Collaboration: Bridging business, legal, and IT teams to align governance strategies with organizational goals.
As Bond put it, “AI governance is multifaceted, encompassing not only the technical governance elements, but also the ethics principles, compliance, and other stakeholder concerns.”
Organizations excelling in AI governance exhibit the following practices:
AI governance starts with high-quality, trustworthy data. Organizations that score high in data quality, cataloging, and governance are more likely to have AI applications in production.
What are these organizations doing to support these disciplines? Most of them are doing the following (a brief code sketch follows the list):
Cataloging data assets with metadata.
Monitoring data quality and flagging anomalies.
Ensuring privacy compliance with secure data policies.
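To make the first two practices concrete, here is a minimal Python sketch of cataloging a data asset with metadata and flagging simple quality anomalies. The `CatalogEntry` fields, the sample dataset, and the thresholds are hypothetical illustrations, not the schema or checks of any particular catalog product.

```python
# Illustrative sketch: catalog a data asset with metadata and flag basic
# quality anomalies. Field names, dataset, and thresholds are hypothetical.
from dataclasses import dataclass, field
from datetime import date

import pandas as pd


@dataclass
class CatalogEntry:
    """Metadata recorded for a cataloged data asset."""
    name: str
    owner: str
    description: str
    contains_pii: bool
    last_profiled: date
    quality_issues: list[str] = field(default_factory=list)


def profile_quality(df: pd.DataFrame, max_null_ratio: float = 0.05) -> list[str]:
    """Flag simple anomalies: excessive null values and duplicate rows."""
    issues = []
    for column, ratio in df.isna().mean().items():
        if ratio > max_null_ratio:
            issues.append(f"{column}: {ratio:.0%} nulls exceeds {max_null_ratio:.0%} threshold")
    duplicates = int(df.duplicated().sum())
    if duplicates:
        issues.append(f"{duplicates} duplicate rows found")
    return issues


if __name__ == "__main__":
    # Hypothetical customer table used only to demonstrate the checks.
    customers = pd.DataFrame({
        "customer_id": [1, 2, 2, 4],
        "email": ["a@example.com", None, None, "d@example.com"],
    })
    entry = CatalogEntry(
        name="crm.customers",
        owner="data-platform-team",
        description="Customer master records from the CRM system",
        contains_pii=True,
        last_profiled=date.today(),
        quality_issues=profile_quality(customers),
    )
    print(entry)
```

In practice, a data catalog and dedicated data-quality tooling would automate this kind of profiling and policy enforcement at scale rather than relying on hand-rolled checks.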
Developing model cards (centralized documentation for AI models) is becoming a best practice. These cards record numerous attributes, some of which include the following (see the sketch after this list):
Training data sources and methodologies
Model limitations and sensitivity considerations
Ownership and licensing details
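As an illustration, here is a minimal Python sketch of a model card expressed as a structured record capturing the attributes above. The field names and example values are assumptions for demonstration only; they do not represent a formal model card standard or any vendor's schema.

```python
# Illustrative sketch of a model card as a structured record.
# Field names and example values are hypothetical.
from dataclasses import dataclass, field


@dataclass
class ModelCard:
    """Centralized documentation for an AI model."""
    model_name: str
    version: str
    owner: str
    license: str
    training_data_sources: list[str]
    training_methodology: str
    intended_use: str
    limitations: list[str] = field(default_factory=list)
    sensitivity_considerations: list[str] = field(default_factory=list)


if __name__ == "__main__":
    card = ModelCard(
        model_name="customer-churn-classifier",
        version="1.2.0",
        owner="analytics-ml-team",
        license="internal-use-only",
        training_data_sources=["crm.customers", "billing.invoices"],
        training_methodology="Gradient-boosted trees trained on 24 months of history",
        intended_use="Prioritize retention outreach for at-risk accounts",
        limitations=["Not validated for newly acquired business units"],
        sensitivity_considerations=["Protected attributes excluded from features"],
    )
    print(card)
```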
Salima Mangalji of Alation emphasized the value of a data catalog for recording, documenting, and collaborating on AI models through AI model cards, “fostering collaboration through conversations and also sharing these models,” she said.
Breaking down silos between teams is vital for embedding governance across the AI lifecycle. Bond emphasized that IDC research has found that the most successful organizations are those where business and IT teams work hand-in-hand. Creating a governance council with representatives from IT, legal, risk, and business units ensures alignment on goals and compliance.
AI governance is not just a technical necessity but a strategic enabler for organizations looking to unlock AI's full potential while ensuring transparency, accountability, and compliance. As Stewart Bond highlighted, achieving this requires a unified approach to governance—one that integrates data, models, and collaboration across teams.
From investing in data intelligence and model documentation to fostering cross-functional collaboration, these practices empower organizations to mitigate risks and drive innovation responsibly.
Gain the knowledge you need to build trust in AI and transform your AI initiatives with confidence. To explore these strategies in greater depth and hear more insights from industry leaders, we invite you to watch the full webinar featuring Stewart Bond.