Specific Issues for Discussion:
In this tutorial session, we will discuss the challenges with fairness in AI systems and cover key components of the principle of fairness (equality, bias and non-discrimination, inclusivity, and reliability). These components are relevant globally but are interpreted differently across jurisdictions. For example, caste may not be a salient category in the US or Europe, but it is a key aspect of non-discrimination in India. Understanding these components is essential to developing and deploying AI systems that uphold ethical principles and prevent potential harm, while enabling continued innovation.
We will discuss how ‘fairness’ is subjective in nature and must be tailored to specific regional and social contexts. Fairness in AI and related ethical norms and regulatory frameworks have often been examined and developed with a primary focus on the US and Europe. However, unique socio-cultural contexts across the APAC region shape the notion of ‘fairness’, making it ineffective to adopt existing fairness metrics and apply them universally. Hence, in our tutorial, we will discuss what fairness entails in India and Singapore to showcase how the concept varies even across jurisdictions within Asia; a small illustrative sketch follows below.
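To make this concrete, the hedged sketch below shows how a common fairness metric, the demographic parity gap, depends entirely on which protected attribute is chosen. The function name, data, and the use of "caste" as the grouping attribute are illustrative assumptions, not part of the tutorial materials; the point is that a metric computed over US- or Europe-centric attributes can miss disparities along attributes such as caste that matter in other jurisdictions.

```python
# Illustrative sketch (hypothetical names and data): demographic parity
# gap for a binary classifier, grouped by a region-specific protected
# attribute. The same predictions can look "fair" or "unfair" depending
# on which attribute is measured.
from collections import defaultdict


def demographic_parity_gap(predictions, groups):
    """Return the gap between the highest and lowest positive-prediction
    rates across the groups defined by a protected attribute."""
    positives = defaultdict(int)
    totals = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred == 1)
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates


# Toy example: hiring-style predictions sliced by caste group.
preds = [1, 0, 1, 1, 0, 1, 0, 0]
caste = ["A", "A", "A", "B", "B", "B", "B", "A"]
gap, rates = demographic_parity_gap(preds, caste)
print(f"Positive-prediction rates by group: {rates}; parity gap: {gap:.2f}")
```

If the same predictions were grouped only by an attribute that is salient in the US or Europe, this disparity along caste lines would simply not be visible, which is why fairness metrics must be parameterised by locally relevant attributes rather than applied universally.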
Further, we will discuss case studies such as the biased AI job recommendation system in Indonesia to illustrate the complexities of fairness in AI. This will be followed by an interactive simulation exercise to showcase how different biases creep into AI models and produce discriminatory effects in society. The tutorial will end with an open discussion to gather perspectives from participants on fairness metrics in their own countries and analyse how fairness as a concept differs based on socio-cultural contexts.
Recently, through a collaborative regional dialogue with SMU, we brought together diverse stakeholders from the APAC region to discuss the multifaceted concept of fairness in AI. This session aims to build on the learnings from that dialogue.