
UK's AI Summit Urges AI Policy to Move Beyond Techno-Libertarianism



The UK AI Summit opened a door for the US and the UK to align on policy, move beyond the techno-libertarianism that characterized the early days of AI policymaking in both countries, and begin to develop solutions to the challenges of AI, but there are hurdles ahead. At the summit, many participants raised both positives and warnings that should be addressed in future policy frameworks.

The UK Prime Minister, Rishi Sunak, hosted the UK AI Summit with representatives and companies from 28 countries, including the US and China, as well as the EU. He invoked the long-standing partnership between the UK and the US, describing them as the world's foremost democratic AI powers. For years, until the European Union stepped in, the UK and the US worked in tandem on standards to govern artificial intelligence, supporting the Organization for Economic Cooperation and Development (OECD) AI Principles of 2019, the first global AI policy framework, and the Global Partnership on AI. But both nations started to take a back seat after the EU took matters into its own hands with the EU Artificial Intelligence Act.

Main Matters Discussed Ahead of the Summit and Expected from It

In the discussions just before the summit, the matters raised revolved around AI's role and three related questions: how the UK should include civil society, how fairness should be maintained alongside any agenda talks, and how human rights and democratic values could be kept at the centre of any proposed international regulation.

Does Inclusivity Involve Everyone?

When speaking of inclusivity, the UK Prime Minister had already stepped into hot water for holding preliminary meetings and making announcements that included only tech-giant CEOs or their representatives, leaving academics and civil society out of the conversation. The White House held a similar meeting between tech giants and US President Joe Biden, but the Biden administration is said to have sought input from civil society groups and labour leaders immediately afterwards. To address the groups left out of those discussions, the SAFE Innovation Act is said to have included opinions from civil society groups, labour leaders, practitioners, and researchers, to ensure that the AI Safety Forum serves justice to marginalized groups, who are especially prone to being impacted by AI systems.

The AI Safety Agenda Should Not Turn a Blind Eye to the AI Fairness Agenda

The next matter of prime focus was how the AI safety agenda should not turn a blind eye to the AI fairness agenda. While Sunak is said to have underscored the need for safe and reliable development of AI under an international framework, Biden made a similar point, saying companies should not deploy AI systems that are unsafe. Both leaders imply eradicating risk, but it is equally important to ensure that AI systems treat people fairly by making these systems accountable, ensuring adverse decisions are contestable, and making transparency meaningful.

Caution Ahead!

If the authorities step on the accelerator while trying to address existential risk, there is a high chance of speeding past the warning signs about AI's impact on housing, credit, employment, education, and criminal justice along the way.

Human Rights and Democratic Values Should Always Remain the Prime Pillars

The third matter is holding human rights and democratic values at prominence in the summit. The reason is that even though there are many AI policy challenges, the solutions proposed for them do not always favour or agree with democratic outcomes. ID requirements for users of AI are one example: countries and companies seek to ensure safety and security by asking for biometric data from users when AI systems are unregulated. Instead, countries should place a strong value on human dignity and autonomy by opting for systems that are less data-intensive.

Did the Outcomes of the Summit Address These Matters?

As AI continues to grow, countries around the world are trying to get ahead of the curve and establish some ground rules. The summit resulted in some important declarations and initiatives that give us a glimpse into the future of AI governance.

The Bletchley Declaration on AI Safety: Hailed and Criticized

The Bletchley Declaration on AI safety was signed by representatives and companies from 28 countries, including the US, China, and the EU, to tackle the risks of frontier AI models, the large language models developed by companies such as OpenAI. It warns that frontier AI, the most sophisticated form of the technology used in generative models such as ChatGPT, has the potential for serious, even catastrophic, harm, either deliberate or unintentional, stemming from the most significant capabilities of these AI models.

It is regarded as a world-first agreement on artificial intelligence (AI) and a landmark achievement, while also being criticized by experts for not going far enough.

 

Advice and Warnings

Tesla and SpaceX CEO Elon Musk raised fears of AI getting out of human control. He reiterated those concerns at the summit, describing advanced AI as “one of the biggest threats to humanity” given its potential to become far more intelligent than people.

European Commission chief Ursula von der Leyen said AI came with risks and opportunities, and pushed for a system of objective scientific checks and balances, with an independent scientific community, and for AI safety standards that are accepted worldwide.

US Vice President Kamala Harris said that action was needed now to address “the full spectrum” of AI risks and not just “existential” fears about threats of cyber-attacks or the development of bioweapons.  

Britain’s King Charles III said AI was “one of the greatest technological leaps in the history of human endeavour”, saying it could “accelerate our journey towards net zero and realize a new age of potentially limitless clean green energy”. But he warned, “We must work together on combating its significant risks too”.

Meta's president of global affairs, Nick Clegg, said there was ‘moral panic’ over new technologies, indicating that government regulations could face pushback from tech companies.

Mark Surman, president and executive director of the Mozilla Foundation, the organization behind the Firefox browser, also raised concerns that the summit was a world-stage platform for private companies to push their interests.

New Announcements

The UK announced it will invest £225 million (€257 million) in a new AI supercomputer, called Isambard-AI after the 19th-century British engineer Isambard Brunel. It will be built at the University of Bristol, in southern England, and the UK government said it would be 10 times faster than the UK's current quickest machine. Alongside another recently announced UK supercomputer called Dawn, the government hopes both will drive advances in fusion energy, health care, and climate modelling. Both computers aim to be up and running next summer.