As generative AI gains popularity, AI bias has become a major concern. High-profile cases have shown algorithms exhibiting stereotypes around race and gender. However, biases can also manifest in subtler, less obvious ways. In this article, we first discuss two less visible biases, then explore how to avoid them by embracing inclusive datasets and hyper-localized, personalized solutions.
Uncovering the overlooked biases
Selection bias
This bias occurs when training data lacks the diversity to represent all users; it stems from overlooking outliers and minorities. For example, mainstream large language models (LLMs) today are predominantly pre-trained on English text, limiting AI advances to English-speaking environments. Even within the English-speaking world, an AI system trained on US English data may fail to recognize slang words or phrases used in the UK, Australia or Singapore.
Popularity bias
This bias occurs when AI algorithms are built around the choices of the majority, so that popular choices become the only visible ones. This risks excluding people who make less mainstream choices. For instance, an e-commerce platform's recommendation algorithm might surface only bestsellers, neglecting niche interests.
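A minimal sketch of the mechanism described above (illustrative only, not WIZ.AI's code): a recommender that ranks purely by purchase count makes niche items invisible, no matter how relevant they are to some customers.

```python
# Illustration of popularity bias: ranking items purely by purchase
# count means long-tail items never appear in recommendations.
from collections import Counter

# Hypothetical purchase history: one runaway bestseller, one mid-tier
# item, and a niche item bought by only a couple of customers.
purchases = ["bestseller"] * 50 + ["mid"] * 10 + ["niche"] * 2

def top_k_by_popularity(history, k):
    """Recommend only the k most-purchased items overall."""
    return [item for item, _ in Counter(history).most_common(k)]

recs = top_k_by_popularity(purchases, k=2)
# "niche" is excluded, so customers with niche tastes see nothing relevant.
```

Mitigations typically re-rank to guarantee some exposure for long-tail items rather than optimizing for aggregate popularity alone.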
Crafting responsible and inclusive AI
To mitigate these biases, we need broader data sourcing and sampling that captures true diversity, along with localized and personalized solutions that avoid defaulting to the majority. At WIZ.AI, we are committed to making AI inclusive for all, an indispensable part of practicing responsible AI.
Hyper-localized solutions for diverse consumers
With an established and growing presence across Southeast Asian markets, WIZ.AI's challenge from day one has been that the region is fragmented both geographically and linguistically. Helping enterprises and their consumers easily and inclusively access our generative AI-powered customer engagement solutions requires multilingual capabilities. We embed hyper-localization in our products' DNA, including our flagship Talkbot. These smart bots understand multiple languages and even dialects, including Bahasa Indonesia, Thai, Tagalog and Singlish. This enables our Talkbots to hold barrier-free conversations with diverse consumers across the region.
Our hyper-localization effort also includes training localized LLMs. We recently launched an LLM for Bahasa Indonesia, and our LLM for Thai is in training. During training, we fed the model diverse real-world customer conversations that reflect local culture, dialects and language contexts from actual applications and scenarios, minimizing selection bias.
An interesting finding from our R&D team: LLMs' learning ability actually makes training easier and faster than for traditional AI models. In practice, being fully inclusive is very difficult; there will always be minority and missing cases. For example, one of our enterprise clients offers hundreds of cigarette brands, and local Indonesians may have multiple ways to refer to a single brand. With traditional AI training, we had to feed every naming variant into the training data for the system to understand it in actual conversation. With an LLM, however, if we discover a new nickname locals give to a brand, we can tell the model "new nickname = brand X" in a prompt, and it instantly picks up this knowledge without being trained all over again.
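The prompt-based approach above can be sketched as follows. This is a hypothetical illustration, not WIZ.AI's actual code: the function name, prompt wording and nickname are all invented for the example.

```python
# Hypothetical sketch: injecting a newly discovered brand nickname into
# the system prompt at inference time, instead of retraining the model.

def build_system_prompt(base_prompt: str, aliases: dict) -> str:
    """Append 'nickname = canonical brand' facts to the prompt so the
    LLM can resolve local slang without any retraining."""
    if not aliases:
        return base_prompt
    lines = ['- "{}" refers to the brand "{}"'.format(nick, brand)
             for nick, brand in aliases.items()]
    return base_prompt + "\nKnown local brand nicknames:\n" + "\n".join(lines)

# A nickname learned today is available on the very next call.
aliases = {"djarsup": "Djarum Super"}  # hypothetical local nickname
prompt = build_system_prompt("You are a customer-engagement Talkbot.", aliases)
```

The prompt would then be sent to the LLM as usual; no fine-tuning step is involved, which is what makes updates instant.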
Capturing cultural sensitivity
Beyond language, we also take local cultural norms into consideration. When we first launched our Talkbot in Indonesia for debt collection use cases, we gave it a straightforward, honest tone. For example, the bot might simply tell Indonesian consumers that if they are late with a repayment, they could be disqualified from BNPL (buy now, pay later) services or have their account on the e-commerce platform shut down, as people in Indonesia do not consider this direct approach rude.
Later, when we launched the product in Thailand, we strategically revised the conversation tone to be more polite and friendly. When Thai users are late with a repayment, our bot simply reminds them that there may be negative consequences for their credit. This is because Thai people tend to place a high value on courtesy and avoid confrontation in daily interactions.
Never let any user feel excluded
When a product feels not quite right for us, or simply a bit "off", it can indicate overlooked perspectives in its design. At WIZ.AI, we do our best to avoid these "off" moments and design our products to cater to diverse users.
Inclusion of vulnerable groups
On the back-end, our Talkbot requires no human training, and on the front-end, customers are never forced to adopt digital self-service solutions. The Talkbots work across telecommunication media, from traditional landlines and analogue phones to smartphones, ensuring that all segments of our enterprise clients' customer bases are effectively covered.
For example, one of our enterprise clients in the Philippines has hundreds of mom-and-pop stores as distributors across the country. These store owners, mostly elderly, often struggle to place orders through the online booking system on smartphones; some do not own a smartphone at all. In the past, the client had to send human agents to each store every week to collect orders in person. Now, with our smart Talkbots, the client can place automated calls to these shop owners over the telephone to collect orders, saving vast human effort every week.
Inclusive knowledge base
To create an inclusive knowledge base for training AI, WIZ.AI's experts used to devote days to studying real-life customer service recordings and brainstorming possible conversation scenarios for particular use cases. Take product returns as an example: how many different ways might users phrase the request? Now, with LLMs, we can generate hundreds of possible expressions and conversation scenarios in seconds, enabling a truly inclusive knowledge base.
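A rough sketch of this variant-generation workflow (the helper names and prompt wording are illustrative assumptions, not WIZ.AI's implementation; a real deployment would send the prompt to an actual LLM API):

```python
# Hypothetical sketch: asking an LLM for many paraphrases of a customer
# intent, then parsing them into knowledge-base entries.

def variant_prompt(intent: str, n: int = 100) -> str:
    """Build a prompt asking for n colloquial ways a customer might
    express the given intent, one per line."""
    return ("List {} different ways a customer might say that they want "
            "to {}. Include slang and informal phrasings, one per line."
            .format(n, intent))

def parse_variants(llm_output: str) -> list:
    """Split a line-separated LLM answer into clean variant strings."""
    return [line.strip("- ").strip()
            for line in llm_output.splitlines() if line.strip()]

prompt = variant_prompt("return a product", n=100)
# Parsing a (mocked) model response:
variants = parse_variants("- I want to send this back\n- Can I get a refund?")
```

Each parsed variant can then be reviewed and added to the knowledge base, replacing days of manual brainstorming with a quick generate-and-curate loop.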
Prior to deployment, the WIZ.AI team always conducts extensive user testing to gather feedback. The goal is to create a solution that is effective, inclusive and enjoyable across customer segments for our enterprise clients, providing a memorable and positive experience.
Building an inclusive culture
We believe products reflect the people who build them, and broad representation enables holistic perspectives. With this in mind, WIZ.AI hires local talent for our technology, product management and customer experience teams across the ASEAN region, including Singapore, Indonesia, the Philippines, Thailand and Malaysia.
Towards a future with democratized AI access
At WIZ.AI, we are continuously innovating to democratize AI access and make AI solutions inclusive for all. As we enter the era of AGI, we remain committed to developing AGI solutions for enterprise users worldwide and expanding AI access for our customers. Only through embracing diversity can AI enable a fair and equitable future.
In the near term, our LLM for Bahasa Indonesia will soon open for testing and may be open-sourced in the future. Further development will focus on feeding the model even more diverse data, including Indonesia's many local dialects and everyday slang.
Want to explore how WIZ.AI’s generative-AI powered, hyper-localized and omnichannel solutions can empower your customer engagement at scale? Talk to our experts today.