The functional definition of Artificial Intelligence (AI) is simply software that performs a task traditionally associated with human intelligence. Yet AI has become emblematic of a new age of technology, one developing rapidly and continuously expanding its capabilities. The AI likely to be available in a decade's time is difficult to fathom.
Such uncharted territory can overwhelm policymakers, who feel they must answer new questions on complex topics in which few people in the world are expert. But is this really the situation we face? AI does raise unique questions about data use, yet it also embodies many long-standing societal debates. Privacy, morality, and legality, for example, are core issues in modern society that extend far beyond AI. In other words, AI poses not entirely new questions, but new challenges to very old ones.
From an international perspective, unregulated AI risks reproducing the biases and inequalities already present in the international system, among them the prioritisation of the English language and the hegemony of Western knowledge, each perpetuating the other. Karen Hao's 2022 article in the MIT Technology Review explores a radical alternative to an AI industry dominated by Big Tech and Big Data.
Large language models, like ChatGPT, work by predicting the words most likely to follow a given prompt and require enormous amounts of data to perform well. For Big Tech, such models are only potentially profitable in languages with many speakers, and therefore many potential users. As a result, AI, and the internet more generally, has been associated with accelerating language loss: many AI tools are available only in English, coercing people into adopting dominant languages in order to use them. Mahelona (part of Te Hiku Media, a non-profit using AI to revitalise the Māori language) describes data as a 'frontier of colonization' for its role in language loss, not dissimilar to how colonial and assimilation policies operated in the past (ibid).
This example highlights the exclusionary direction in which AI is already heading and underscores the need for more open-access and diverse AI tools. AI should serve to empower local communities and guard against the unwanted homogeneity of many technical tools. However, AI is likely to remain a profit-driven enterprise: the vast imbalance in resources devoted to emerging AI gives private companies, industry experts, and powerful individuals far more influence over how AI is used than states have.
Governments should move quickly to build consensus on the ethical and legal limits of AI technology, and collaborate globally to face these new challenges to long-standing questions of privacy and justice.
This article was originally featured on Chatham House's Common Futures Conversations platform.
Image: Mojahid Mottakin via Unsplash