AI Governance: Reimagining AI as Co-Pilot
As AI adoption deepens, increasingly complex challenges and risks come into view. In a recent Guardian report, researchers demonstrated that an AI model could identify which keys people pressed, including passwords, just by listening to typing sounds. How, then, can we ensure responsible use as AI becomes more powerful and ubiquitous? In this article, we reimagine AI governance and propose that AI can play a more prominent role as our co-pilot on the path to trustworthy AI.
The Pressing Need for Ethical AI Governance
AI, and generative AI in particular, carries manifold risks. Among the most commonly cited concerns: generative AI can produce inaccurate or outright fabricated responses, and it suffers from algorithmic bias that reflects and reinforces existing societal biases.
For example, we have all heard of AI recruitment tools that disadvantage female candidates, or facial detection systems that work more accurately on Caucasian faces. AI-generated works also risk infringing copyright and violating data privacy. Most seriously, generative AI's widespread availability substantially escalates the risk of malicious use in cyberattacks and fraud.
Current Initiatives and Regulatory Frameworks
To mitigate and prevent these risks, governments and AI industry leaders worldwide are rolling out AI governance initiatives and policies. In June, the EU made the first move, advancing the AI Act, widely deemed the world's first comprehensive AI law. Then, in July, seven leading AI companies, including Amazon, OpenAI and Meta, followed suit and announced voluntary safeguards at the White House.
This fall, the new international standard ISO/IEC 42001 on AI management systems is expected to be published. Here in Singapore, where WIZ.AI is headquartered, the government takes a balanced approach, facilitating innovation while safeguarding consumer interests.
WIZ.AI’s Commitment to Responsible AI Practices
The government regularly facilitates dialogue and collaboration among the public sector, industry and academia. In a recent discussion in which WIZ.AI participated, we demonstrated our commitment to responsible AI use. Notably, our company is ISO 27001 certified, a globally recognized standard for information security management systems (ISMS). This certification reflects our commitment to the highest levels of data security and privacy.
In addition, we hold a SOC 2 Type II report, an independent audit that provides assurance on service organization controls. It evaluates the design and operating effectiveness of an organization's controls for security, availability, processing integrity, confidentiality, and privacy.
Responsible AI Governance: Current Industry Practices
Beyond the broader initiatives above, companies that develop and deploy AI systems follow diverse governance practices of their own. These practices span several key areas:
Incorporating AI Governance Guardrails
Organizations develop compliance processes for risk assessment and review. They retain proprietary data within secure or localized infrastructure under direct organizational monitoring, and assign system users roles and data access permissions based on their responsibilities.
Companies leverage privacy-enhancing technologies (PETs) to derive insights from consumer datasets while safeguarding the confidentiality of personal data. Intelligence gathering on ongoing breaches and threats is also becoming increasingly important. During the data filtering stage, organizations eliminate toxic data containing hate, abuse, or profanity, along with data carrying private information or license constraints.
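The data filtering stage described above can be sketched as a simple pipeline gate. This is a minimal illustration, not WIZ.AI's actual implementation: the blocklist terms, the PII pattern, and the record format are all hypothetical placeholders, and real pipelines use trained classifiers and far more thorough PII detection.

```python
import re

# Hypothetical blocklist and PII pattern -- illustrative only.
TOXIC_TERMS = {"hateterm1", "slur2"}
PII_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")  # US SSN-like strings

def keep_record(text: str, license_ok: bool) -> bool:
    """Return True if a training record passes the filtering stage."""
    if not license_ok:                                 # license constraints
        return False
    lowered = text.lower()
    if any(term in lowered for term in TOXIC_TERMS):   # toxic content
        return False
    if PII_PATTERN.search(text):                       # private information
        return False
    return True

records = [
    ("A helpful customer support transcript.", True),
    ("Contains slur2 somewhere.", True),
    ("My SSN is 123-45-6789.", True),
    ("Scraped text with an unclear license.", False),
]
filtered = [text for text, ok in records if keep_record(text, ok)]
```

Only the first record survives: the others are dropped for toxicity, private information, and license constraints respectively.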
Testing AIGC tools for safety, explainability, and robustness against security attacks is another crucial practice.
Uplifting AI Transparency Standards
Organizations document the data used for training, as well as how models are trained and validated, thereby increasing the explainability of AI decision-making. They also label AI-generated content, or clearly inform users when they are interacting with an AI system.
Human Validation in AI Governance
Companies keep humans in the loop to validate AI outputs before public release. They also create feedback channels for users to report issues with an AI system's functionality and behavior. Ongoing human monitoring helps detect unexpected behaviors, flaws, or vulnerabilities after model deployment.
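The validation-plus-feedback loop above can be sketched as a small routing gate. This is a hedged sketch under assumptions of my own: the 0.9 threshold and the idea of a single scalar confidence score per output are illustrative, not a description of any specific vendor's system.

```python
from dataclasses import dataclass, field

CONFIDENCE_THRESHOLD = 0.9  # assumed cutoff for skipping human review

@dataclass
class ReviewQueue:
    pending: list = field(default_factory=list)   # awaiting human validation
    released: list = field(default_factory=list)  # approved for distribution
    feedback: list = field(default_factory=list)  # user-reported issues

    def submit(self, output: str, confidence: float) -> None:
        """Route an AI output: release if confident, else hold for a human."""
        if confidence >= CONFIDENCE_THRESHOLD:
            self.released.append(output)
        else:
            self.pending.append(output)

    def report_issue(self, output: str, note: str) -> None:
        """Feedback loop: users flag problems for ongoing monitoring."""
        self.feedback.append((output, note))

queue = ReviewQueue()
queue.submit("Routine reply", confidence=0.97)
queue.submit("Ambiguous reply", confidence=0.55)
queue.report_issue("Routine reply", "tone too informal")
```

The low-confidence output lands in `pending` for a human reviewer, while the user report feeds post-deployment monitoring.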
AI as Co-Pilot in Modern AI Governance Systems
After examining current initiatives and practices for responsible AI use, we wonder: are we sometimes too human-centric, overlooking the role AI can play in its own governance? For starters, generative AI already assists with risk mitigation: companies have been using it to "red team" AI solutions and generate edge cases that test system robustness.
Real-World AI Governance Applications
Real-life examples show that generative AI's role goes beyond laboratory testing. China's Ant Group introduced an LLM-powered AI assistant for anti-fraud education, which contributed to a 10% reduction in reported fraud cases. Similarly, Waabi, a Toronto-based startup, uses generative AI to automate vehicle safety testing, allowing the company to operate at only 5% of its previous cost.
WIZ.AI leverages LLMs to detect sensitive content for better compliance, keeping sensitive-content error rates below 5%. Generative AI already serves as an assistant that mitigates risk at lower cost, but what if it took a more proactive role and became a co-pilot, achieving responsible AI more efficiently?
WIZ.AI’s Co-Pilot Experiment in AI Governance
At WIZ.AI, we decided to test this hypothesis. We trained one LLM to monitor another LLM for bias, violence, and profanity in workplace chats. The monitoring LLM successfully flagged many issues based on the given prompts, without further human involvement.
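The monitor-model pattern can be sketched as follows. To be clear, this is not WIZ.AI's system: `monitor_classify` stands in for a real call to the monitoring LLM, and here it is a naive keyword stub purely so the flow is runnable; the prompt text and chat messages are invented for illustration.

```python
MONITOR_PROMPT = (
    "Flag the message if it contains bias, violence, or profanity. "
    "Answer FLAG or OK."
)

def monitor_classify(prompt: str, message: str) -> str:
    # Placeholder for the monitoring LLM: a real system would send `prompt`
    # and `message` to the model and parse its verdict.
    bad_words = {"idiot", "punch"}
    return "FLAG" if any(w in message.lower() for w in bad_words) else "OK"

def review_chat(messages: list) -> list:
    """Return the messages the monitoring model flags for follow-up."""
    return [m for m in messages
            if monitor_classify(MONITOR_PROMPT, m) == "FLAG"]

chat = [
    "Let's sync on the release plan.",
    "You idiot, that estimate is wrong.",
    "I'll punch the numbers into the sheet.",  # benign, but flagged anyway
]
flagged = review_chat(chat)
```

The last message is a deliberate false positive: "punch the numbers" is harmless, yet naive matching flags it, which is exactly the kind of misjudgment that still sends humans digging for root causes.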
However, full co-pilot status remains out of reach: loopholes persist that require humans to investigate root causes and tweak the prompts accordingly. In some cases, humans cannot yet identify the root cause behind the AI's obviously false judgments.
Nevertheless, with further exploration and development of our LLM technology, we can envision a future in which the human-AI co-pilot model for AI governance is realized. That will happen when humans can understand generative AI's decision-making at a granular level.
Forward-Looking Vision: Universal Responsible AI Governance Code
By then, well-trained AI models will "tutor" other models on a workplace "responsible code of conduct" independently and without mistakes. Humans will only need to validate the results, freed from tedious work such as fixing errors and digging into root causes, and can shift their attention to more important tasks.
The Future of Universal AI Governance Standards
Taking an even more forward-looking perspective, a universal "responsible code of conduct" module could be developed and plugged by default into every newly launched LLM, or its far more intelligent "offspring." Region-, country- or organization-specific versions could then be built on top of the foundational module.
This process ensures AI systems are "aware" of their responsibilities and ethical guidelines upfront, ultimately freeing up human labor. Naturally, humans will stay in the loop, judging AIs' "performance" after onboarding and reviewing and iterating the "responsible code of conduct" module on an ongoing basis.
Looking ahead, we see a world where human-AI collaboration permeates every possible aspect, thereby ushering in an era of ideal co-existence between humanity and technology.
Ready to explore how AI governance can transform your organization? Discover how WIZ.AI’s responsible AI solutions can help you implement effective AI governance practices.
Book a Demo