
ARTIFICIAL INTELLIGENCE
Promises and challenges of Artificial Intelligence post-COVID-19
16:55 – 17:10 CEST – 6 May 2021
Conversation between
Madeleine de Cock Buning, Professor, School of Transnational Governance, EUI
Mariya Gabriel, Commissioner for Innovation, Research, Culture, Education and Youth, European Commission
The development of Artificial Intelligence (AI) boosts global economic growth and can provide unprecedented opportunities for sustainable digital development worldwide. However, AI can also bring risks or negative consequences for citizens. Concerns about AI disrupting social interactions, discriminating against people, or replacing humans at work are still widespread. AI governance requires careful steering between stimulating innovation and building societal trust. In the post-COVID-19 recovery, there is an important window of opportunity to shape the development of human-centric AI.
On 21 April 2021, the European Commission proposed new rules and actions to turn Europe into the global hub for trustworthy Artificial Intelligence (AI). The combination of the first-ever legal framework on AI and a new Coordinated Plan with Member States aims to guarantee the fundamental rights of citizens and businesses, while strengthening AI uptake, investment and innovation. New rules on machinery adapt safety requirements to further increase users’ trust. In this session we will debate the recently announced EU approach to becoming a global hub for trustworthy AI in the context of highly competitive AI innovation in China and the US.
Development: competitive innovation in human-centric AI
17:15 – 17:50 CEST – 6 May 2021
Moderator
Madeleine de Cock Buning, Professor, School of Transnational Governance, EUI
Speakers
Dominik Bösl, Professor, Hochschule der Bayerischen Wirtschaft and Founder and Chairman, Robotic and A.I. Governance Foundation
Joanna Bryson, Professor of Ethics and Technology, Hertie School
Mariya Gabriel, Commissioner for Innovation, Research, Culture, Education and Youth, European Commission
Andrea Renda, Senior Research Fellow and Head of Global Governance, Regulation, Innovation and the Digital Economy, Centre for European Policy Studies
Andrew Wyckoff, Director, Directorate for Science, Technology and Innovation, Organisation for Economic Co-operation and Development
While Europe continues to lag behind the US and China in the AI-development race, the EU is ramping up its efforts to establish a framework on AI. To be globally competitive and to accelerate investment in AI, the EC’s recent proposals aim to coordinate innovation across Member States by funding public-private partnerships and research through the Digital Europe and Horizon Europe programmes, as well as through the Recovery and Resilience Facility, which earmarks 20% of expenditure for digital. Furthermore, the EC aims to foster AI excellence ‘from the lab to the market’, for example by creating testing and experimentation facilities and digital innovation hubs, and by building strategic leadership on AI in high-impact sectors such as sustainable production and health care. By eliminating hurdles, bringing greater legal clarity, and simplifying the administrative burden and costs for companies, the new Machinery Regulation should help the safe integration of AI in consumer products ranging from lawnmowers to 3D printers.
While the US and China strive to maintain their competitive advantage in global AI leadership, will this recent EC proposal be able to guarantee competitive European innovation in the research and deployment of human-centric AI? Are the obligations placed on AI providers, aimed at the safety of citizens and the protection of human rights, proportionate and predictable, or will they potentially hinder innovation? Will a common European approach accelerate the uptake of human-centric AI across different sectors? What else is needed?
17:55 – 18:05 CEST – 6 May 2021
Video overview
State of tech by Paul Verschure, Research Professor, Catalan Institute of Advanced Studies, Institute of Bioengineering of Catalunya
Deployment: fundamental rights and biases in AI
18:05 – 18:50 CEST – 6 May 2021
Moderator
Madeleine de Cock Buning, Professor, School of Transnational Governance, EUI
Speakers
Urs Gasser, Executive Director, Berkman Klein Center for Internet & Society, Harvard University, and Professor of Practice, Harvard Law School
Miguel Poiares Maduro, School of Transnational Governance, EUI
Sandra Wachter, Associate Professor and Senior Research Fellow, Oxford Internet Institute, University of Oxford
Given the major impact that AI has on society, fundamental rights, including human dignity and privacy protection, are increasingly central to its deployment. Public and private organisations that use AI systems play a key role in ensuring that the systems they use and the products and services they offer meet appropriate standards of transparency, non-discrimination and fairness. The recently proposed EU legal framework on AI follows a risk-based approach, defining four future-proof risk levels: unacceptable risk, high risk, limited risk and minimal risk.
- Unacceptable-risk systems are those that pose a clear threat to the safety, livelihoods and rights of people (e.g. social scoring by governments); they are therefore banned.
- High-risk systems are those used in critical infrastructure (e.g. transport) that could put the life and health of citizens at risk; in access to education (e.g. scoring of exams); in employment (e.g. CV-sorting software for recruitment); in essential private and public services (e.g. credit scoring that denies citizens the opportunity to obtain a loan); in law enforcement (e.g. evaluation of evidence); and in migration, asylum and border control management (e.g. verification of the authenticity of travel documents, remote biometric identification).
High-risk AI systems will be subject to strict obligations before they can be put on the market, e.g. adequate risk assessment and mitigation systems; high-quality datasets feeding the system to minimise discriminatory outcomes; logging of activity to ensure traceability of results; and appropriate human oversight.
- Limited-risk systems such as chatbots carry specific transparency obligations: users should be aware that they are interacting with a machine so that they can take an informed decision to continue or step back.
- Minimal-risk AI systems – the vast majority – can be used freely (e.g. AI-enabled video games or spam filters). The Regulation does not intervene, as these systems represent no risk to citizens’ rights or safety.
Where are the biggest vulnerabilities for citizens when it comes to AI and fundamental rights? Will the recent EC Proposal for a Regulation on Artificial Intelligence be able to (re)build consumers’ trust? How can we build a flexible transnational regulatory framework that respects fundamental rights and public values? While different countries increasingly look to regulation as a tool to ensure ‘trustworthy’ AI and to shape its deployment by stakeholders, what transnational effort is required to steer global collaboration towards responsible uses of AI while avoiding competitive disadvantage?