AI regulation downloading: the scope and limitations of the Bletchley Declaration

By Alexandra Thurston

Rishi Sunak led the UK Government in hosting an inaugural international summit on AI safety, which resulted in the Bletchley Declaration. The summit was attended by 27 countries, among them China and India, as well as the European Union. Also present were representatives and CEOs of leading AI companies, including Elon Musk.

The UK Government cast a wide net in its statement of the summit’s intentions and aims, with goals ranging from “establishing shared agreement and responsibility on the risks [and] opportunities” of AI and a commitment to greater “international collaboration on frontier AI safety”, all the way to enhancing “greater scientific collaboration” on AI dissemination. While these cliché statements have a succinct and satisfyingly quixotic ring to them, what they tend to lack is substance, and that is something the public should be attentive to.


AI is among the fastest-growing industries and has enormous potential to transform our lives for the better. DeepMind, a UK-based company, has predicted the structures of almost all proteins known to science, a capability that promises to accelerate breakthroughs in scientific research and medicine, from combating malaria and antibiotic resistance to tackling plastic waste. AI is an invaluable tool for humanity with great potential, but we must remember that fears over AI, while still important, are mostly better directed at who is using it and how they are using it, rather than at the technology itself.

Regulation of AI use, practice, and development is a welcome step for many; however, it is important to critically assess the foundation on which these safeguarding regulations are built, as they set the tone for future development. The risks associated with AI can be damaging and dangerous, and range in severity, from bias and potential job losses to deepfakes, disinformation, online harms, and financial crime. These risks can affect a person’s physical and mental health, infringe upon individual privacy, and undermine human rights.

The summit is a response to extensive public concern over matters proposed in the AI Regulation White Paper (white papers are policy documents that set out the government’s proposals for future legislation). Within the White Paper, which will form the basis for any future legislation, there is concerningly frequent rhetoric suggesting that this summit is less a safeguarding exercise and more an instance of political and national self-promotion.


Of the eighteen points in the executive summary, only one sentence is dedicated to harms AI could cause the public that are not related to economic output. The document obstinately reiterates that the crux of the issue is getting the public to trust AI, as this “can accelerate the adoption of AI across the UK to maximise the economic and social benefits.” However, perhaps the main goal of this drive is revealed a sentence later: to “maintain the UK’s position as a global AI leader”. The latter gives the sense that public safety is being sidelined for governmental ambitions.

The immense focus on economic growth within AI regulation and safeguarding is worrying. On the basis of this White Paper, regulation and future legislation appear far too closely intertwined with economic ambitions. Framing AI regulation as a means to political and economic gain risks representing only those with vested interests, while the average citizen with concerns over privacy and other civil liberties goes unrepresented. While this first step by the UK Government towards regulating the potential dangers of AI is commendable, greater steps need to be taken to prioritise the safety of the population.

Image: Number 10 via Flickr
