Published on February 13, 2025
The rapid rise of AI is ushering in a new era of business transformation—one filled with both unprecedented opportunities and significant risks. As AI increasingly influences decision-making, companies must navigate a shifting regulatory landscape while ensuring that AI systems are developed and deployed responsibly.
With AI models learning from vast amounts of data, the ethical challenges surrounding privacy, bias, security, and compliance are more pressing than ever. Meanwhile, evolving AI regulations remain complex and often unclear, leaving organizations struggling to define best practices. AI leaders, data scientists, and governance professionals face a pivotal moment: how can they establish ethical AI frameworks and governance models that protect both businesses and individuals in this era of constant change?
Wendy Turner-Williams, a leading expert in data governance and AI ethics, founded TheAssociation.AI to address these challenges. Her mission: to create a neutral, collaborative space where professionals across AI, data privacy, security, and governance can share knowledge, develop standards, and implement ethical AI practices at scale.
In a recent conversation with Satyen Sangani, CEO of Alation, Turner-Williams shared best practices for ethical AI compliance. Below are the key takeaways from their discussion. Some quotes have been edited for clarity.
With great power comes great responsibility. As a new generation of AI creators sets out to build powerful models, how can they ensure they do so responsibly?
According to Turner-Williams, it begins with data collection. Responsible data collection practices entail:
Ensuring data privacy and obtaining proper user consent
Avoiding datasets that perpetuate bias or discrimination
Sourcing diverse and representative data to enhance fairness
Implementing data protection mechanisms to safeguard sensitive information
AI systems are fundamentally driven by data, so ethical considerations must be embedded from the very first stage. Ensuring privacy and obtaining proper consent from individuals is crucial to avoid violating personal rights or exploiting sensitive information.
Furthermore, the training data used for AI models can propagate societal biases and discriminatory patterns unless it is carefully curated and drawn from diverse sources. As highlighted by the LaMar Institute, “Ethical use of AI training data minimizes bias, ensures data protection, and promotes fairness.” By prioritizing ethical data collection practices from the outset, organizations can lay a solid foundation for developing trustworthy and responsible AI solutions.
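To make these collection practices concrete, here is a minimal sketch in Python of a consent-and-privacy gate applied at ingestion time. The record shape, field names, and the email-only redaction rule are illustrative assumptions rather than a production pipeline.

```python
import re

# Hypothetical record shape: each record carries a user-granted consent flag
# and free-text content destined for a training corpus.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def is_collectible(record: dict) -> bool:
    """Admit a record only if the user granted explicit consent."""
    return record.get("consent_granted", False)

def scrub_pii(text: str) -> str:
    """Redact obvious identifiers (here, just email addresses) before training."""
    return EMAIL_RE.sub("[REDACTED_EMAIL]", text)

raw_records = [
    {"consent_granted": True,  "text": "Contact me at jane@example.com for details."},
    {"consent_granted": False, "text": "No consent given; this record is excluded."},
]

training_corpus = [scrub_pii(r["text"]) for r in raw_records if is_collectible(r)]
print(training_corpus)  # only the consented, redacted record survives
```

Real pipelines would extend the redaction step well beyond email addresses (names, phone numbers, device identifiers), but the gate itself, consent first and scrubbing second, is the point.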
With regulations like the EU’s GDPR, Australia’s APRA CPS 230, and evolving U.S. AI policies, organizations need robust AI governance frameworks to mitigate risk and ensure compliance. But many businesses lack clear guidelines on how to govern AI responsibly.
Turner-Williams emphasized the need for “data categories in addition to classifications” and “clear rules related to who owns that data and who doesn’t.” Establishing comprehensive data catalogs with detailed metadata and lineage tracking allows organizations to understand their data assets, identify potential risks, and configure appropriate controls and access policies.
By implementing strong data governance frameworks, organizations can ensure compliance with regulations like GDPR, which mandate granular data handling practices for responsible AI use. AI itself can assist data leaders in managing data efficiently, with emerging tools that enhance data quality, security, and compliance, further enabling ethical AI development and deployment.
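As a sketch of what “data categories in addition to classifications” can look like in practice, the snippet below models a single catalog entry with a business category, a sensitivity classification, an owner, and upstream lineage. The schema and field names are hypothetical, not the shape of any particular catalog product.

```python
from dataclasses import dataclass, field

# Hypothetical catalog entry capturing category, classification, ownership,
# and lineage: the metadata that governance rules and access policies key off.
@dataclass
class CatalogEntry:
    name: str
    category: str        # business grouping, e.g. "customer_contact"
    classification: str  # sensitivity tier, e.g. "PII" or "public"
    owner: str           # accountable party: the business or the data subject
    upstream: list[str] = field(default_factory=list)  # lineage: source datasets

entry = CatalogEntry(
    name="crm.customer_email",
    category="customer_contact",
    classification="PII",
    owner="data_subject",
    upstream=["web.signup_form", "support.ticket_intake"],
)

# Access and retention policies can then be driven by the metadata:
if entry.classification == "PII":
    print(f"{entry.name}: restrict access and honor deletion requests")
```

The upstream field is what later makes lineage questions answerable: where did this personal data come from, and which downstream systems consume it.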
For AI models to be trusted and reliable, ethical principles must be embedded within AI pipelines. Without transparency, AI systems risk amplifying biases, exposing sensitive data, or making unreliable decisions.
Best practices for ethical AI engineering include:
Data lineage tracking – Knowing where AI training data comes from and how it’s used improves accountability.
Bias detection and correction – Automated tools can flag imbalanced or discriminatory training data before it impacts model predictions (a minimal check is sketched after this list).
Explainability frameworks – AI decision-making should be auditable and understandable by humans, preventing "black box" models from causing harm.
Ethical filtering mechanisms – Ensuring AI models do not consume harmful or manipulated data safeguards fairness.
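To illustrate the bias-detection item above, here is a minimal pre-training check: it compares positive-label rates across groups in the training data and flags a large gap, a rough demographic-parity test. The field names and the 20% tolerance are illustrative assumptions; real pipelines would reach for dedicated fairness tooling.

```python
from collections import defaultdict

def label_rate_by_group(rows, group_key="group", label_key="label"):
    """Positive-label rate per protected group in the training data."""
    totals, positives = defaultdict(int), defaultdict(int)
    for row in rows:
        totals[row[group_key]] += 1
        positives[row[group_key]] += row[label_key]
    return {g: positives[g] / totals[g] for g in totals}

rows = [  # toy training labels with a protected attribute
    {"group": "A", "label": 1}, {"group": "A", "label": 1},
    {"group": "A", "label": 0}, {"group": "B", "label": 0},
    {"group": "B", "label": 0}, {"group": "B", "label": 1},
]

rates = label_rate_by_group(rows)
gap = max(rates.values()) - min(rates.values())
if gap > 0.2:  # tolerance chosen purely for illustration
    print(f"Flag for review: positive-label rates diverge by {gap:.0%} -> {rates}")
```

A check like this catches skew before it reaches the model; correction (rebalancing, reweighting, or collecting more representative data) is the harder second step.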
As Turner-Williams explains, “You can’t take a siloed approach to AI. We have to bring together expertise across engineering, governance, and ethics to implement responsible AI.”
Ethical AI development doesn’t stop after deployment. AI systems must be continuously monitored for emerging ethical issues like bias, privacy violations, or potential harms. Turner-Williams underscores the importance of understanding data ownership and necessity for business operations:
“To do something like the right to be forgotten, you not only have to understand if you own the data or not, you have to also understand if that data is required for your business.”
She illustrates this with an example: under GDPR, an IP address belongs to the customer, yet businesses often rely on IP addresses for authentication, security, and trust. Companies must define data usage at a granular level, track lineage through systems, and determine whether compliance requirements like the right to be forgotten can be implemented. Continuous monitoring ensures that ethical principles remain upheld as AI evolves.
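A minimal sketch of that decision logic, assuming catalog metadata along the lines sketched earlier, might look like the following. The catalog contents and decision strings are illustrative, not legal guidance.

```python
# Before honoring a right-to-be-forgotten request, check (1) who owns the
# data and (2) whether it is required for business operations.
# This in-memory dict stands in for real catalog and lineage metadata.
CATALOG = {
    "marketing.email":  {"owner": "data_subject", "business_required": False},
    "auth.ip_address":  {"owner": "data_subject", "business_required": True},
}

def erasure_decision(field_name: str) -> str:
    meta = CATALOG[field_name]
    if meta["owner"] != "data_subject":
        return "retain: organization-owned, the request does not apply"
    if meta["business_required"]:
        # e.g. an IP address still needed for authentication and fraud checks
        return "retain under a documented legal basis; delete once no longer needed"
    return "erase: subject-owned and not operationally required"

for f in CATALOG:
    print(f, "->", erasure_decision(f))
```

Without lineage tracking, the business-necessity question cannot even be asked reliably, which is why cataloging and continuous monitoring are prerequisites for compliance rather than afterthoughts.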
Beyond monitoring, Turner-Williams highlights other key challenges in today’s regulatory landscape:
Lack of standardization – AI laws vary across countries and industries, making compliance complex.
Unclear enforcement mechanisms – Organizations don’t always know when they’re non-compliant or what the penalties are.
Resource constraints – Smaller companies lack AI compliance teams, making risk management difficult.
To address these issues, TheAssociation.AI is working to establish practitioner-driven AI standards that guide organizations through these challenges. Turner-Williams argues for a unified AI policy approach to reduce ambiguity and streamline AI compliance efforts.
AI governance is no longer just a compliance concern—it has become a strategic business imperative. As organizations accelerate AI adoption, they must rethink who owns AI governance and how it aligns with business strategy.
According to Wendy Turner-Williams, the role of the Chief Data Officer (CDO) is still evolving, with significant variations across organizations:
“There isn’t clarity from company to company on what it means to be a CDO... A CDO sits right in the middle of business and technology. You need to be a mix of both.”
Unlike the Chief Information Officer (CIO), who focuses on infrastructure and technology, the CDO is responsible for ensuring that data is accessible, high-quality, and strategically leveraged for AI-driven decision-making. As AI governance expands, CDOs are taking on greater influence over AI risk, security, and compliance, shaping policies that balance innovation with regulatory requirements.
Looking ahead, Turner-Williams predicts a shift in C-suite reporting structures, where CISOs (Chief Information Security Officers), CPOs (Chief Privacy Officers), and even CIOs may ultimately report to the CDO—a reflection of data’s increasing centrality to business success.
As AI regulation evolves and governance, security, and ethics become deeply intertwined with corporate strategy, organizations will need strong leadership at the CDO level to navigate risk, ensure compliance, and drive responsible AI innovation. The companies that embrace a proactive approach to AI governance—led by forward-thinking CDOs—will be best positioned to build trust, mitigate risk, and harness AI’s full potential for competitive advantage.
As AI reshapes industries, organizations must take control of AI ethics, compliance, and governance—before regulators do it for them. From ethical data collection and responsible engineering to navigating AI regulations and leadership shifts, companies must proactively establish AI best practices.
Wendy Turner-Williams and TheAssociation.AI are leading the charge, offering a collaborative framework for ethical AI governance. By aligning AI strategy with trust, accountability, and regulatory foresight, businesses can embrace AI innovation responsibly—without sacrificing compliance or customer trust.
Curious to learn how a data catalog can help you drive ethical AI? Book a demo to learn more.