Published on November 5, 2024
This is part two in a two-part series exploring AI governance for business success. Read part one here.
As AI systems take on greater decision-making authority within organizations, a natural question arises: Who is truly in control, your people or the AI? In other words: Do you have a grip on your AI, or is it running on autopilot? While AI brings enormous potential for innovation and productivity gains, its power to act autonomously also increases risk. To counter these risks, business leaders must implement a governance framework that ensures human oversight in the form of accountability, transparency, and alignment with both business objectives and ethical considerations. Without this, AI systems can evolve in ways that are unpredictable and damaging.
AI governance cannot function without clear accountability structures: it is vital to define who is responsible for the design, deployment, and oversight of each AI system. The absence of clear accountability can lead to negative consequences, especially when AI decisions impact customers, business operations, or regulatory compliance.
Many organizations assume that the responsibility for AI governance lies solely with the technical teams—data scientists, engineers, or IT departments. However, relegating AI governance to technical teams ignores broader organizational risks. Decisions made by AI systems extend beyond technology. As such, accountability should be embedded at multiple levels of the organization, from data scientists and technical experts to C-suite executives who need to ensure that AI aligns with overall corporate strategy and governance.1
Without defined roles and accountability, when something goes wrong—whether it’s a biased hiring algorithm or a misstep in financial forecasting—there’s no clear answer to the question: Who is responsible? Establishing a governance structure that spans IT, legal, risk management, business, data science, AI/ML, and compliance teams ensures that there are clear lines of responsibility and that all stakeholders understand their roles in managing AI systems. Companies should establish AI governance councils that regularly review AI decisions for bias, fairness, and compliance.
The NIST AI Risk Management Framework outlines 19 governance subcategories under its GOVERN function, with GOVERN 2 speaking directly to accountability2:
GOVERN 2:
Accountability structures are in place so that the appropriate teams and individuals are empowered, responsible, and trained for mapping, measuring, and managing AI risks.
GOVERN 2.1: Roles and responsibilities and lines of communication related to mapping, measuring, and managing AI risks are documented and are clear to individuals and teams throughout the organization.
GOVERN 2.2: The organization’s personnel and partners receive AI risk management training to enable them to perform their duties and responsibilities consistent with related policies, procedures, and agreements.
GOVERN 2.3: Executive leadership of the organization takes responsibility for decisions about risks associated with AI system development and deployment.
By clarifying roles, responsibilities, and communication around AI, offering AI training, and gaining executive support, AI leaders can create an essential framework for accountability in AI development and deployment.
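One way to make GOVERN 2.1 concrete is to keep a machine-readable record of ownership for each AI system. Below is a minimal sketch in Python; the system name, roles, and review cadence are hypothetical examples, not prescribed values.

```python
# A minimal sketch of an accountability record for one AI system,
# in the spirit of GOVERN 2.1 and 2.3. All names and values are hypothetical.
accountability_record = {
    "system": "customer-churn-model",           # hypothetical system name
    "business_owner": "VP, Customer Success",   # accountable for outcomes
    "technical_owner": "Lead ML Engineer",      # responsible for build and monitoring
    "risk_reviewer": "AI Governance Council",   # reviews bias, fairness, compliance
    "escalation_path": ["technical_owner", "business_owner", "CTO"],
    "review_cadence_days": 90,                  # how often the record is re-reviewed
    "risk_training_completed": True,            # GOVERN 2.2: owners trained on AI risk
}

# A registry of records like this makes "Who is responsible?" answerable
# before an incident occurs, not after.
for field, value in accountability_record.items():
    print(f"{field}: {value}")
```

In practice, a registry like this would live alongside the model inventory or data catalog rather than in application code.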
In industries like finance, healthcare, and law, the ability to explain and interpret AI-driven decisions is a legal and ethical necessity. Regulatory bodies are increasingly demanding that organizations be able to audit and explain the inner workings of AI systems, especially when these systems influence high-stakes decisions that affect individuals' rights and opportunities.
The challenge is that many AI/ML models are inherently difficult, if not practically impossible, to interpret. How interpretable can a 50B- or 70B-parameter model ever truly be? A single decision may be the result of thousands of variables and billions of parameter interactions, making it nearly impossible to provide a clear explanation without appropriate governance structures in place.
For these reasons, leaders must invest in mechanisms that provide visibility into AI decision-making processes. This can include documenting assumptions, algorithms, and data; establishing clear model development protocols; and running performance audits that verify the system operates as intended. Failing to ensure transparency could leave organizations exposed to regulatory scrutiny, fines, or a loss of customer trust.
To improve transparency, companies can take the following actions:
Document AI models' decision-making processes: Log the algorithms used, key assumptions, data sources, and how the model's output is generated.
Implement explainability and evaluation tools: Use interpretability methods such as LIME (Local Interpretable Model-Agnostic Explanations) and SHAP (SHapley Additive exPlanations) for predictive models, and evaluation approaches such as BLEU (Bilingual Evaluation Understudy), BLEURT, and Prometheus for generative models, to help interpret and evaluate complex AI systems (a brief SHAP sketch follows this list).
Perform performance audits and monitoring: Regularly review how AI systems perform in real-world conditions and check for model drift or changes in decision patterns.
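As a concrete illustration of the first two actions, the sketch below trains a toy model, computes SHAP values for a single prediction, and logs them alongside the decision. It assumes the shap and scikit-learn packages are available; the model, features, and record fields are illustrative, not a recommended production design.

```python
# A minimal sketch: explain one prediction with SHAP and log the
# explanation alongside the decision for later audits.
# Assumes `pip install shap scikit-learn`; all names are illustrative.
import shap
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

# Toy model standing in for a production system.
X, y = make_regression(n_samples=500, n_features=5, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X, y)

# SHAP values: each feature's contribution to this prediction
# relative to the model's baseline output.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:1])  # shape: (1, n_features)

# Persist the decision record: inputs, output, and the explanation.
decision_record = {
    "model_version": "demo-rf-0.1",            # hypothetical identifier
    "input": X[0].tolist(),
    "prediction": float(model.predict(X[:1])[0]),
    "feature_contributions": shap_values[0].tolist(),
}
print(decision_record)
```

In a real deployment, records like this would be written to a model registry or audit log rather than printed, so reviewers can trace any individual decision back to its inputs and explanation.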
Transparency is essential for building trust and meeting regulatory requirements when deploying AI systems, especially in industries where decisions can have significant impacts. By documenting AI decision-making processes, using explainability tools, and conducting regular performance audits, organizations can illuminate the black box of AI, ensuring their models are not only compliant but also accountable and understandable.
As mentioned in part one, AI's ability to make decisions at scale carries the ever-present risk of amplifying biases in the data it processes. Addressing these biases requires governance structures that proactively detect, mitigate, and remove bias throughout the AI lifecycle.
Specific activities to mitigate bias in AI models include:
Conduct bias audits and fairness assessments: Regular audits of AI models are essential to detect and address bias. These audits will help identify potential unfair treatment across different demographic groups and protected classes. Model developers can run fairness assessments during the development and deployment phases to ensure that the model does not disproportionately affect any particular group.3
Use diverse and representative datasets: Unrepresentative training data is one of the most common sources of bias in AI models. To mitigate this, data scientists can use synthetic data (when additional real-world data cannot be collected) to add examples from underrepresented groups and increase data diversity.
Incorporate bias mitigation techniques: ML engineers can consider techniques such as re-weighting data from underrepresented groups or applying fairness-enhancing algorithms that adjust model predictions to prevent biased outcomes (a minimal sketch of an audit metric and a re-weighting step follows this list).
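The sketch below shows, under simplifying assumptions (a binary classifier, a single sensitive attribute, and a threshold agreed on by the governance council), what a basic fairness audit and a re-weighting step might look like. It is illustrative, not a complete fairness methodology.

```python
# A minimal sketch of a fairness audit (demographic parity gap) and an
# inverse-frequency re-weighting step. Thresholds and group labels are
# illustrative assumptions.
import numpy as np

def selection_rates(y_pred, groups):
    """Share of positive predictions per demographic group."""
    return {g: float(y_pred[groups == g].mean()) for g in np.unique(groups)}

def demographic_parity_gap(y_pred, groups):
    """Largest difference in selection rate between any two groups."""
    rates = selection_rates(y_pred, groups)
    return max(rates.values()) - min(rates.values())

def reweight(labels, groups):
    """Sample weights so each (group, label) cell carries equal total
    weight when the model is retrained."""
    weights = np.ones(len(labels), dtype=float)
    n_cells = len(np.unique(groups)) * len(np.unique(labels))
    for g in np.unique(groups):
        for lbl in np.unique(labels):
            mask = (groups == g) & (labels == lbl)
            if mask.any():
                weights[mask] = len(labels) / (mask.sum() * n_cells)
    return weights

# Toy audit: flag the model if the gap exceeds an agreed threshold.
y_pred = np.array([1, 0, 1, 1, 0, 0, 1, 0])
groups = np.array(["group_a"] * 4 + ["group_b"] * 4)
gap = demographic_parity_gap(y_pred, groups)
print(f"demographic parity gap: {gap:.2f}")   # 0.50 here -> investigate
```

Demographic parity is only one of several fairness definitions; which metric and threshold to audit against is itself a governance decision.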
Achieving fairness in AI isn't just about compliance; it's about building systems that reflect the diversity of the world they serve. By regularly auditing for bias, applying mitigation techniques, and prioritizing diverse, representative datasets, organizations can create AI models that make more equitable decisions. After all, the more diverse the data, the better equipped AI is to understand and serve a wide range of perspectives, ensuring fairness and inclusivity at scale.
Because AI models are designed to learn and adapt over time, they introduce a new layer of risk: model drift and data drift. As AI systems encounter new data, they may evolve in ways that stray from their original purpose or degrade in performance. For example, did you see the car dealer's chatbot discounting a Chevy to $1?4
Source: X post from Chris Bakke.
Assuming you don’t want clever prompt engineers outwitting your AI chatbots, leaders must ask: What actions can we take to better monitor AI systems?
Implement continuous monitoring and real-time audits: Businesses should deploy systems that enable real-time monitoring of AI performance to ensure that models function as intended over time. Setting up real-time audit mechanisms ensures that potential problems, such as deteriorating accuracy or emerging biases, are flagged and addressed before they cause harm. Some model-monitoring platforms provide this capability out of the box.
Establish post-deployment monitoring protocols: AI systems should be monitored both during development and after deployment. These protocols should track performance in real-world conditions, which involves regularly assessing AI outcomes, checking for new biases, and verifying that the system continues to align with business goals and ethical standards.
Leverage automated tools for performance and compliance monitoring: Companies should adopt automated monitoring tools to regularly evaluate the performance, fairness, and compliance of AI systems. These tools can help track key performance indicators (KPIs), identify shifting patterns, and ensure that AI models are operating within the desired thresholds (a simple drift check is sketched below).
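As one example of what automated drift monitoring can look like, the sketch below computes the Population Stability Index (PSI) between training-time scores and recent production scores. The thresholds and synthetic data are illustrative assumptions; in practice the reference and production distributions would come from your logging pipeline.

```python
# A minimal sketch of a drift check using the Population Stability Index.
# Bin count, thresholds, and the synthetic scores are illustrative.
import numpy as np

def psi(reference, current, bins=10):
    """Compare a live score distribution to the training-time one.
    Common rule of thumb: < 0.1 stable, 0.1-0.25 moderate shift,
    > 0.25 significant drift."""
    edges = np.histogram_bin_edges(reference, bins=bins)
    ref_pct = np.histogram(reference, bins=edges)[0] / len(reference)
    cur_pct = np.histogram(current, bins=edges)[0] / len(current)
    # Clip to avoid log(0) for empty bins.
    ref_pct = np.clip(ref_pct, 1e-6, None)
    cur_pct = np.clip(cur_pct, 1e-6, None)
    return float(np.sum((cur_pct - ref_pct) * np.log(cur_pct / ref_pct)))

# Toy check: training-time scores vs. this week's production scores.
rng = np.random.default_rng(0)
training_scores = rng.normal(0.40, 0.10, 10_000)
production_scores = rng.normal(0.55, 0.10, 10_000)  # distribution has shifted

drift = psi(training_scores, production_scores)
if drift > 0.25:
    print(f"PSI = {drift:.2f}: significant drift, trigger a model review")
```

A check like this can run on a schedule against each monitored feature or model score, with alerts routed to the owners defined in your accountability structure.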
Keeping AI systems on track requires constant vigilance, which can be machine-monitored but should be human-verified. As models evolve and encounter new data, they risk drifting away from their original purpose. By implementing continuous monitoring, post-deployment protocols, and automated performance tools, businesses can catch issues early, ensuring that their AI remains accurate, fair, and aligned with their goals.
Implementing a robust AI governance framework helps organizations benefit from AI while minimizing risks. This means embedding accountability across all levels, investing in transparency mechanisms, mitigating biases, and establishing continuous monitoring processes. A well-governed AI ecosystem not only protects the business from regulatory scrutiny and ethical missteps but also aligns AI systems with long-term business objectives.
Ultimately, AI governance is about ensuring that AI remains a tool, not a risk. With the right structures in place, businesses can maintain control over their AI, allowing them to innovate confidently while safeguarding stakeholder trust. Leaders who prioritize governance will not only manage risks effectively but also position their organizations for sustainable success in an AI-driven world.
Curious to learn how a data catalog can help you deliver AI governance? Book a demo with us to learn more.
1. “Artificial Intelligence: An Accountability Framework for Federal Agencies and Other Entities.” 2021. Government Accountability Office. https://www.gao.gov/products/gao-21-519sp.
2. “Artificial Intelligence Risk Management Framework (AI RMF 1.0).” 2023. NIST Technical Series Publications. https://doi.org/10.6028/NIST.AI.100-1.
3. “Introduction to model evaluation for fairness | Vertex AI.” n.d. Google Cloud. Accessed October 3, 2024. https://cloud.google.com/vertex-ai/docs/evaluation/intro-evaluation-fairness.
4. Masse, Bryson. 2023. “A Chevy for $1? Car dealer chatbots show perils of AI for customer service.” VentureBeat. https://venturebeat.com/ai/a-chevy-for-1-car-dealer-chatbots-show-perils-of-ai-for-customer-service/.