EU’s Report on AI Summit 2018

AI Europe / Stakeholder summit

A European Strategy for Artificial Intelligence



The first Stakeholder Summit on Artificial Intelligence, organised jointly by the European Economic and Social Committee (EESC) and the European Commission, concluded that the EU must ensure that artificial intelligence is safe, unbiased and in line with European values. The event, which sought to discuss the next steps to advance the EU strategy on artificial intelligence, took place on 18 June in Brussels, at the EESC’s headquarters, where I took part as a stakeholder from the University of Bucharest.

In two plenary sessions and three parallel working groups, experts, speakers, panellists and participants were invited to consider the three pillars of the EU strategy on AI:

  • Legal and ethical challenges
  • The socio-economic impact
  • Industrial competitiveness

This report gathers the main points raised by the speakers and participants, giving an overview of issues related to the use of Artificial Intelligence in Europe and throughout the world.

The EESC thanks all the conference attendees for their enthusiastic participation.


  • Ariane Rodert, President of the Section for the Single Market, Production and Consumption, EESC
  • Catelijne Muller, President of the EESC Thematic Study Group on AI
  • Eric Horvitz, Technical Fellow & Director, Microsoft Research Labs
  • Aimee van Wynsberghe, Assistant Professor of Ethics and Robotics, TUDelft
  • Thiébaut Weber, Confederal Secretary, ETUC
  • Sara Conejo Cervantes, Artificial Intelligence Task Force, Teens in AI
  • Elena Sinel, Founder, Acorn Aspirations, Teens in AI
  • Mariya Gabriel, EU Commissioner for the Digital Economy and Society
  • Matthias Spielkamp, Executive Director, AlgorithmWatch
  • Clara Neppel, Senior Director, IEEE Europe
  • Mark Coeckelbergh, Professor of Philosophy of Media and Technology, University of Vienna
  • Ute Meyenberg, Vice-president, EUROCADRES
  • Lazaros Tossounidis, Strategic Advisor, CEDEFOP
  • Robert Went, Economist, Dutch Scientific Council for Government Policy (WRR)
  • Nicholas Hodac, Government and Regulatory Affairs, IBM
  • Karine Perset, Internet Economist, OECD
  • Preetam Maloor, Strategy and Policy Advisor, ITU
  • Mady Delvaux, Member of the European Parliament
  • Bjoern Juretzki, DG CONNECT, European Commission
  • Kasia Jurczak, Member of Commissioner Thyssen’s Cabinet, European Commission
  • Mario Mariniello, Digital Adviser, European Political Strategy Centre, European Commission
  • Panel moderators:
  • Ulrich Samm, EESC Member
  • Lenneke Hoedemaker, professional moderator
  • Indrė Vareikytė, EESC Member







                      Ariane Rodert, President of the Section for the Single Market, Production and         Consumption, EESC

Catelijne Muller, President of the EESC Thematic Study Group on AI


9.45           KEYNOTE SPEECH


Eric Horvitz, Technical Fellow & Director, Microsoft Research Labs



Mobilising Europe around the EU Strategy on AI


Aimee van Wynsberghe, Assistant Professor of Ethics and Robotics, TUDelft

Thiébaut Weber, Confederal Secretary, ETUC

Sara Conejo Cervantes, Artificial Intelligence Task Force, Teens in AI &

Elena Sinel, Founder, Acorn Aspirations, Teens in AI




Mariya Gabriel, EU Commissioner for Digital Economy and Society


11.15         COFFEE BREAK


11.15         PRESS POINT


                      Mariya Gabriel, EU Commissioner for the Digital Economy and Society

Mady Delvaux, Member of the European Parliament

Ariane Rodert, President of the Section for the Single Market, Production and         Consumption, EESC

                      Catelijne Muller, President of the EESC Thematic Study Group on AI





How to achieve responsible innovation and use of artificial intelligence


Chair and moderator: Ulrich Samm, EESC member


                      Co-referent: Matthias Spielkamp, Executive Director, AlgorithmWatch

                Co-referent: Clara Neppel, Senior Director, IEEE Europe

Co-referent: Mark Coeckelbergh, Professor of Philosophy of Media and Technology, University of Vienna

Co-referent: Aimee van Wynsberghe, Assistant Professor of Ethics and Robotics, TUDelft



AI & labour, education and life-long learning


Chair and moderator: Lenneke Hoedemaker, professional moderator




Co-referent: Ute Meyenberg, Vice-president, EUROCADRES

Co-referent: Lazaros Tossounidis, Strategic Advisor, CEDEFOP

Co-referent: Robert Went, Economist, Dutch Scientific Council for Government Policy (WRR)



AI for the Benefit of Humankind


Chair and moderator: Indrė Vareikytė, EESC member


Co-referent: Elena Sinel, Founder, Acorn Aspirations, Teens in AI

Co-referent: Nicholas Hodac, Government and Regulatory Affairs, IBM

Co-referent: Karine Perset, Internet Economist, OECD

                      Co-referent: Preetam Maloor, Strategy and Policy Advisor, ITU


13.00        LUNCH




In the afternoon plenary session, the ‘rapporteurs’ of the working groups will present each group’s recommendations to the EU Institutions and to the members of the other working groups.


Mady Delvaux, Member of the European Parliament

                      Bjoern Juretzki, DG CONNECT, European Commission

Kasia Jurczak, Member of Commissioner Thyssen’s Cabinet, European Commission

Mario Mariniello, Digital Adviser, European Political Strategy Centre, European Commission

15.45        CONCLUSIONS


      Catelijne Muller, President of the Thematic Study Group on AI, EESC






Ariane Rodert, President of the Section for the Single Market, Production and Consumption, EESC

Ariane Rodert presented the general workings of the EESC and of the Section for the Single Market, Production and Consumption, and spoke about the main work done on the topic of AI. She stated that the new economic models are one of the EESC’s priorities. There is a focus on finding synergies between these models: the social, circular and sharing economies and blockchain, to name but a few. She informed participants that the EESC has also set up a temporary study group on AI, with a mandate of 2.5 years.

Catelijne Muller, President of the EESC Temporary Study Group on AI

Catelijne Muller presented her work as the EESC rapporteur on the own-initiative opinion on AI adopted in May 2017, the main aim of which was to put AI’s impact on our society on the EU agenda. Ms Muller stated that AI holds great promise for addressing societal issues but also raises challenges around privacy, security, labour, ethics, etc. Many of the opportunities and challenges highlighted in this opinion were also reflected in the Commission’s recent communication on AI. In this context, the EESC has long advocated a human-in-command approach, which means that people remain at the heart of developments in Artificial Intelligence and decide on “if, when and how they want to use these technologies in their daily lives”. She proposed that all participants address these societal challenges.



Eric Horvitz, Technical Fellow & Director, Microsoft Research Labs


According to Eric Horvitz, “Artificial Intelligence is the study of computational mechanisms underlying thought and intelligent behaviour, both human and machine”. He gave a brief overview of the historical context and described the “golden pipeline” of new decision-making processes: Data -> Predictions -> Decisions. It is not only about automation, but about the humans who work with these systems. By way of example, he described how Microsoft was working on predicting outcomes such as readmissions to US hospitals that might have been preventable.

He stressed that we are now at a pivotal moment, with human-level speech and object recognition and new work on language comprehension. For example, 5 years ago translation barely worked, and today there are products such as Skype speech-to-speech translation running across many languages. On the horizon, human hands will be sensed in real time: Microsoft Research Cambridge Lab is working on the recognition of human hand gestures at a distance. According to Eric Horvitz, “it is an ethical imperative that Artificial Intelligence methods available today are pressed into service where lives are being lost by default in the current world”. About two and a half years ago, a study showed that the third most common cause of death in US hospitals is preventable human error, just after heart disease and cancer. An estimated 250 000 deaths in the US are due to preventable medical error. Why is AI not pressed into service in a value-directed way to prevent it?

Today, there is an open-world AI challenge to introducing a new system (solution), mainly because of the network (replication) effect. One system has big – albeit subtle – implications, because it can be replicated across the planet for millions of users. The challenge is not only about AI but also about how we distribute these tools today. Open-world AI challenges are linked to capabilities (how do we counter the blind spots and biases in AI systems?), values (who should be the decision-making agency, and are the values aligned with human values?) and misuse (threats to human rights, legal and ethical challenges, privacy challenges, potential harm, issues of exclusion and challenges to our attention and persuasion). The direction we should take is to advance the science of trustworthy AI. We should think about introducing phased trials (as is done for clinical trials), reporting standards, disclosure of risks (fail-safe design) and other measures.

Eric Horvitz’s concern when it comes to biases and fairness (for example, AI assisting judges) is the homogenising effect, whereby the same systems will affect many judges across the land and embed biases that may be part of the way data is collected and observed. He also mentioned some potential misuses of AI: human rights violations, risk of death or serious injury, denial of consequential resources and services, and manipulation of attention, beliefs, cognition, etc. Serious corporate responsibility is required. Two years ago, Microsoft set up the Aether Committee (AI, Ethics and Effects in Engineering & Research), comprising 7 working groups that serve as cross-company cauldrons for discussion and policy development, as well as technical development: Sensitive uses; Bias and fairness; Engineering practices; Reliability and safety; Human-AI interaction and collaboration; Intelligibility and transparency; and Human attention and cognition. One of the most interesting working groups is that on Sensitive uses, which is committed to human rights, based on the UN Declaration of Human Rights, the UN guiding principles on business and human rights, and the Microsoft Global Human Rights Statement. Its aim is to ensure the dignity of every individual, freedom from discrimination, freedom from invasions of privacy, freedom of expression and freedom of association. On the basis of case studies, it develops principles – for example, how can AI be used to enhance public safety? On community responsibility, there is a Partnership on AI to benefit people and society, started by big US tech companies. This is a balanced group that includes civil society, open AI foundations, economists, academics, AI scientists and small companies from all around the world, working on topics around best practices for AI, as well as important and interesting sub-challenges.

To conclude, Eric Horvitz stressed the absolute need to pursue principles of intelligence, because digitisation and computational algorithms help human beings to understand the nature of intelligence, as well as giving them new capabilities. We have to harness these technologies to adjust to social challenges, to continue identifying and addressing the costs, influences, ethical issues and human rights challenges associated with them, and finally to collaborate widely on technology and society with multiple stakeholders.


Aimee van Wynsberghe, Assistant Professor of Ethics and Robotics, TUDelft

Aimee van Wynsberghe’s teaser question was: “Who is responsible, for what and when?” The ethical questions related to AI can fundamentally change human relations. Ethics is not only about painting a dystopian picture; it is also the study of the good life and how we can achieve it. Can we use AI and robotics to achieve this good vision of life? We should consider the types of question we need to deal with: what issues are AI makers facing? What questions are AI users asking? Could AI ever be held accountable for the consequences of decisions that it makes?

When it comes to AI makers, we have issues such as the training data that is used, the risk of biases, checks, and how we can test the systems. From the user’s perspective, we first need to categorise users – it is not only companies that are purchasing AI but also regular citizens. Companies will be interested in reliability and data protection, but users also want to know how much trust they can place in a system (for example, should we trust the system or the people who are making it?). We also have to consider the idea of excessive trust – we tend to believe that technology is neutral and more powerful and smarter than humans. Education for users is essential, not only through schools teaching about AI, but also for individuals who will be introduced to AI in their professions. We need to bridge the gap between people who are working in a technical sphere or at academic level and those who will ultimately use this technology.

Aimee van Wynsberghe ended her speech by encouraging those present to participate in the working group on ethics by asking: “What should robots and AI do for us?” How should we make systems so that they fulfil this vision of a good life? What should the defining features of AI be 10 years from now?


Thiébaut Weber, Confederal Secretary, ETUC

Thiébaut Weber informed participants that his organisation, the ETUC, is working on a strategy for the fourth industrial revolution and the future of work. He stated that we live in a society with limited natural resources, and that not all countries have the same means to invest in technology linked to AI and robotics. We should take these inequalities into account when speaking about AI and robotics from today’s perspective. Public investment is crucial. Legislators will have to assess existing legislation and think about new legislation to make sure it is up to date. Negotiations with employers on how AI and robotics are implemented in the workplace are essential.

Thiébaut Weber presented four main ideas for discussion in the workshop. The first addressed the role of all stakeholders, social dialogue and how the latter can help shape this concept of work 4.0. We should anticipate job losses and discuss them now. We have to take action to deal with this transition and prepare for the new jobs that will be created. The ETUC is calling for a European transition fund, a kind of European globalisation fund “plus”, not only to repair the damage done by globalisation but also to help anticipate these changes in the sectors affected. Co-design strategies need to be discussed. What do we do with the new productivity gains, and how do we manage work-sharing and a better work/life balance? Countries with better results (in terms of wages, working conditions, etc.) are those with better social dialogue.

We should invest in collaborative, inclusive AI and robotics. Robots should support workers and not replace them (for example, exoskeletons in the building sector to decrease the impact of arduous work). Thiébaut Weber gave the example of e-health and pointed out that, in some countries where there are insufficient nurses, remote monitoring could be a solution. The AI we want to develop should help combat climate change, manage energy and water resources and fight discrimination. The human-in-command approach advocated by the EESC and the ETUC should become a reality. We need to find the right balance between big data and data protection. Education and training will be a very important element, and low-skilled workers should also have real access to training policies. Everywhere in Europe and in every kind of company (big companies, SMEs, etc.), people have a right to be trained.

Thiébaut Weber concluded by saying that we can still use our human intelligence to shape a future of work in which technology serves us.


Sara Conejo Cervantes, Artificial Intelligence Task Force, Teens in AI & Elena Sinel, Founder, Acorn Aspirations, Teens in AI

Elena Sinel presented her company, Acorn Aspirations, and its latest initiative, Teens in AI. This initiative has generated interest in many countries (South Africa, Japan, Nigeria and China, for example) by looking at how we can empower young people to create AI and understand how it works. Teens are invited to learn through hackathons, bootcamps and accelerators and to explore different topics, such as ethics in AI, human-centred AI, reinforcement learning and many others. They are also invited to find practical solutions to issues such as achieving the UN sustainable development goals through the use of AI technology. One of the winning ideas of the last hackathon was, for example, the “footprint”, a tool that collects all the data about your carbon footprint and gives you recommendations (all based on machine learning).

According to Sara Conejo, the technology of the future will be centred mainly on AI and it is very important to get young people on board. She described the project on which she has been working for the last 6 months, which involves robots learning to play different mobile games via machine learning. She also wrote a report for the UK’s House of Lords on AI and how it can be applied in everyday society.

The last working group, on Industrial competitiveness and the benefits of AI for society, should discuss how to bring together people who would not normally find each other, with the aim of solving major societal challenges. The purpose is to bring together not only stakeholders but also people who are facing these problems in their everyday lives. It is important to discuss these topics now, as everyone in the world will be affected by AI. AI has the potential to advance humanity, but it may also pose societal risks. People from across the world should be empowered to debate how AI is affecting them and to solve problems they identify in their local communities through Responsible & Ethical AI.

Some practical actions include developing a framework that can be used in any school in any country, developing an online mentoring platform that will allow any young person to connect with experts, mentors and other young people across the world, and running AI hackathons, bootcamps and accelerator programmes across the world with the aim of promoting a healthy debate and empowering communities to develop Responsible AI. Above all, more young people should have the opportunity to be part of conversations about AI.


Mariya Gabriel, EU Commissioner for the Digital Economy and Society

Artificial Intelligence carries great potential benefits in most areas of our lives, from healthcare to transport and climate change, and the world has entered a new era of technological change. Europe needs to lead this revolution, by building on its world-class research and innovation communities and its strong industry. However, huge investment and joint efforts are needed. At the heart of the EU’s AI strategy there is an ambitious investment plan – EUR 20 billion of investment in AI over the next 3 years (from both the public and private sectors), and then EUR 20 billion per year in the following decade.

The public-sector investment aims to strengthen research and innovation, to upgrade AI research infrastructure, to develop AI applications in key sectors, to improve access to data and to facilitate industrial testing and uptake. The European Commission will facilitate access to the relevant AI resources, including knowledge, data repositories, computing power, tools and algorithms, by supporting the development of an AI on-demand platform, which will be launched next January. The aim is to equip digital hubs so that they are in a good position to help non-tech companies and public administrations to understand how AI can transform their business models, to test and integrate AI solutions and to access the necessary skills. Bridging the skills gap is one of the most important topics for the Commissioner personally. We need more data scientists, engineers and philosophers who understand AI. All citizens should benefit from technological advances and be able to participate in a trustworthy AI-powered economy. Many of the initiatives on skills proposed by the Commission should help to increase the number of experts in the digital sphere and to equip all workers with the right skills for the evolving labour market.

Under the next MFF, the Commission plans to further increase its investments in AI, mainly through two programmes: the proposal for a new Digital Europe programme and the proposal for the Horizon Europe programme. The Structural Funds will also complement these investments. For the first time, the MFF covers the new Digital Europe programme, with a budget of EUR 9 billion, which will support industrial deployment and strengthen Europe’s strategic digital capacities. AI is one of the main areas covered, with a budget of EUR 2.5 billion, and the funding will specifically target testing and experimentation facilities, which the digital innovation hubs will make available to enterprises and public administrations. The Digital Europe programme will also support the development of advanced digital skills (EUR 700 million).

High-performance computing is a key technology for the digital transformation of our industry and society and a strategic asset for competing in the data economy. Europe should be a super-computing power that achieves its economic potential. The goal is to make the EU one of the top three super-computing powers in the world, with its own super-computer running European software by 2023. The European Commission is working closely with many European countries to establish a EuroHPC Joint Undertaking, which will acquire world-class super-computers to be made available to European users from academia, industry, SMEs and the public sector (seven Member States were involved last year; now there are 20).

We need a European cybersecurity approach in order to build secure AI systems and defend existing systems from massive AI-enabled attacks (massive personal data theft and epidemic-scale spread of malware, for example). Most Member States have expressed their support for the European AI strategy and have signed declarations of cooperation on AI. The next item on the agenda is the coordinated plan on AI, which should be agreed by the end of this year. Europe must ensure that technology is developed in line with the EU’s values and fundamental rights, with safety and ethical principles such as accountability and transparency. We have led the way with the GDPR and now we have to take the lead towards responsible AI.

In order to ensure the responsible development of AI, the European Commission has appointed 52 world-class experts to the new High-Level Group on AI. This group is tasked with coordinating work on ethical guidelines for AI, and with making strategic recommendations to the European Commission on the ethical and legal implications of AI, as well as on policy options for the future. Wider discussions will take place through the AI Alliance and a dedicated platform has been launched to encourage the participation of all the stakeholders concerned, from different branches of industry and civil society.

The Commissioner concluded her speech by stressing that all stakeholders’ participation must be ensured in order to seize the new opportunities, while tackling the new challenges brought about by the AI-powered world. Europe has to be a leader in the new technological revolution, and shape it through the participation of its societies and citizens.


The aim of the three parallel working groups was to engage the participating stakeholders around the three pillars of the European AI Strategy. The goal was to dive deeper into these specific pillars and come up with practical recommendations or strategies.


How to achieve responsible innovation and use of artificial intelligence

This working group was moderated by EESC Member Ulrich Samm, and explored possible ways of achieving responsible innovation, deployment and use of AI. Speakers and participants addressed challenges such as the ethics, bias, safety, cybersecurity, privacy, transparency and accountability of AI systems.


Clara Neppel, Senior Director, IEEE Europe

Clara Neppel spoke about the work done some time ago by her organisation on the ethics of autonomous and intelligent systems and mentioned initiatives on blockchains, quantum computing and many others.

She presented three dimensions of the ethical landscape:

  • a code of ethics for software developers – professional guidelines
  • behavioural impacts – impacts on business, research, etc.
  • ethical concerns of the technology itself – the technological impact


An ethically aligned design initiative was launched last year, with contributions from over 1 000 global experts, and presented recommendations on legal aspects, autonomous weapons and affective computing.

A step in the right direction towards solving the challenges related to the ethics of AI would be to start working on standards. Thirteen standards are already available, for example on how to incorporate values into system design, consideration of algorithmic bias, employer data governance, personal AI agents and a wellbeing metric for AI. Courses for businesses are also available (covering issues such as the economic benefits of including ethics at the system design stage). There are certain focus areas: one concerns the impact of technology on individuals and society (AI systems must reflect our values). Another is dependability (that is to say, any services delivered must be trusted).

In 10 years, AI should aim mainly to advance humankind and also the environment, making the most of technology while respecting individuals and their autonomy.


Aimee van Wynsberghe, Assistant Professor of Ethics and Robotics, TUDelft

Aimee van Wynsberghe focused on the unpredictability of AI and asked whether uses of AI should be restricted. For example, can AI be used in healthcare, or in interactions with children? Are we allowed to bring it to developing countries? We are also starting to understand that our own cultural biases are beginning to find their way into the output of algorithms. Should we be allowed to use AI when there are real consequences for individuals, or should we stop using it until we work out how to verify and test for these biases, and then move forward?

The role of the media in this conversation is very important. How can we reduce the sensationalised picture of AI? Language in this context is crucial.

Mark Coeckelbergh, Professor of Philosophy of Media and Technology, University of Vienna

According to Mark Coeckelbergh, it is important to invest not only in technologies but also in people. People’s lives should be made the number one priority, with directions for technologies following only afterwards. How can we connect the different people who have a stake in this and guarantee that everybody is heard? The line between AI and other smart technologies is unclear, and a discussion that is broader in scope than AI in a narrow sense is therefore needed.

Citizens should understand what AI is and what AI will do. Education is very important in this context – education not only for the labour market but also in a wider sense, as we also deal with these technologies in our daily lives. The best way to ensure that ethics is part of AI is to ensure that AI works together with humans and that humans remain in charge of the technology.

Matthias Spielkamp, Executive Director, AlgorithmWatch

Matthias Spielkamp presented AlgorithmWatch, a civil society organisation. He stated that terminology is very important and that we should use terms such as automated decision-making or decision support when speaking about AI. Sometimes, less technologically advanced systems can have a big impact on human behaviour and on the options available to humans. A project looking into dominant credit-scoring companies is currently running in Germany. Matthias Spielkamp pointed out that we are not talking about AI ethics but about ethical approaches to AI, with philosophical questions, in that real decisions in a human sense cannot be made by machines (since they lack autonomy, free will and intention). It is very important to have an interdisciplinary approach, involving different types of stakeholders.


AI & labour, education and life-long learning

This working group was moderated by Lenneke Hoedemaker, a professional moderator, and explored possible labour-market and workplace strategies to ensure that AI is implemented in a way that augments humans rather than replaces them. Speakers and participants also discussed life-long learning strategies and how we can educate our children for a world with AI.

Ute Meyenberg, Vice-president, EUROCADRES

Ute Meyenberg stated that changes in the labour market should be anticipated. Work has already changed and all sectors are affected (especially banking, manufacturing, the health sector, energy and mining, retail and public services), but our mind-sets have changed in recent years and we are aware that these changes are happening. AI is generally considered to be a competitiveness factor in the workplace: without AI, we cannot be competitive. Perceptions vary, however, depending on the interest group. For example, directors and managers are 70% positive about AI, as they think it is good for their jobs, while only 40% of employees think that AI is positive for their jobs. Civil society’s perception of AI focuses mainly on cybersecurity, data protection, ethics, regulation, governance and accountability.

We have seen a skills shift and we will need more creative skills, high cognitive ability and social intelligence. An ongoing learning process and continuous training are essential. In-house training,in its various forms, is very important. Having time for training is the key. We should also guarantee equal access to training for high- and low-skilled workers. The gender gap should be also addressed.

Lazaros Tossounidis, Strategic Advisor, CEDEFOP

Lazaros Tossounidis stated that the ultimate goal is cohesion policy. Social cohesion is needed to achieve growth, and to achieve growth we need to invest massively in IT and to support education. Our duty is to influence public authorities and EU authorities and, as the Commissioner mentioned in her speech, transparency and accountability are essential.

The global situation is hugely complex. Change is happening very fast and the EU has been continuously lagging behind for the past 25 years. We do not currently control what is happening and do not know how to change our curricula and education. This requires a lot of funding, and stakeholders and social partners have a big responsibility.

Productivity gains should be reinvested in training and upskilling. Raising awareness is also crucial. How people will react to the current global situation is very difficult to predict and will determine the way forward.

Robert Went, Economist, Dutch Scientific Council for Government Policy (WRR)

Robert Went emphasised that jobs are bundles of tasks and nobody does only one thing in his or her job. The chance that your job will be fully automated is therefore almost zero.

Studies differ widely in their predictions: according to McKinsey, for example, less than 5% of jobs will totally disappear in the coming 20 years, while according to an OECD study the figure will be 11.4% in the Netherlands. A recent study in the American Economic Review concluded that, on average, a job entails 22 to 30 tasks; parts of these tasks can be taken over by robots, but never all of them at the same time. In the Netherlands, this fear of losing jobs is now changing.

It is essential to communicate with people before designing new machines. Machines designed for elderly people can make them even lonelier, for example. There is a need to carry out research before designing an AI machine. People and machines need to work together, not as adversaries.

There was a contract in the cleaning industry in the Netherlands in 2017 to design robots that are good for both the workers and the company. Robots should increase ownership of work. If you want to use AI for some tasks, you need to re-engineer everything around it and you have to ask the people who will work with it how to do this in the best way. If you do this from the outside, it will fail and you will lose people’s support. The quality of a job is essential and one characteristic of a “good” job is that you are not treated as an extension of a machine.


AI for the Benefit of Humankind

This working group was moderated by EESC Member Indrė Vareikytė and explored the opportunities for using AI to solve major societal challenges in areas such as healthcare, climate change, disaster relief, poverty and inequality, in order to tap the (hidden) opportunities of AI and to bring together people who would not normally find each other.

Elena Sinel, Founder, Acorn Aspirations, Teens in AI

Elena Sinel made the point that everyone in the world will be affected by AI and that people should have the opportunity to discuss how AI is affecting them and to solve the problems they identify in their communities through responsible and ethical AI. Young people should contribute to any AI initiatives. It is crucial to adapt the curriculum and to move towards problem-solving and project-based learning.

Actions to be put in place include developing a framework that could be used in any school and any country (human-centred design to identify problems, encourage responsible code, with AI and machine-learning enabling problem-solving). An online mentoring platform that allows young people to connect with expert mentors and other young people across the world should be put in place. We also have to teach children how to solve the problems the world is facing and not only how to code.


Nicholas Hodac, Government and Regulatory Affairs, IBM

As a starting point, Nicholas Hodac stated that all jobs will be affected by AI. There is a need for an international framework for AI, and companies need to realise that this is their responsibility. IBM is focusing on augmented intelligence: not man versus machine, but man and machine. The purpose of the AI system the company is building needs to be clear. The company has to be able to explain its algorithm or the product cannot be placed on the market.

The fundamental problem in Europe is the educational system, which is static. In the US, IBM has launched what it calls P-TECH schools (pathways to technology schools), which target young students like Sara Conejo. The same should be done in Europe.

AI can better predict natural catastrophes. Companies should be more proactive and responsible in how they approach AI. If we succeed in addressing all the challenges, we can build a form of AI that will help humans be better humans.

Karine Perset, Internet Economist, OECD

Karine Perset referred to a number of benefits to the economy and society. Many OECD countries are facing rising inequalities, and therefore the benefits of AI should be broadly shared and benefit all people. She gave the example of the space sector, where AI can preview and analyse satellite data faster than humans and can help scientists. This has a huge impact on education, because human scientists need to be able to conceptualise problems and to provide feedback for algorithms.

The OECD has developed guiding principles for AI in society, building on many principles that already exist. They are also looking into measuring AI literacy skills, investment in AI start-ups (investment doubled between 2016 and 2017), focus areas for AI scientific publications, AI patenting activity and many others.

Preetam Maloor, Strategy and Policy Advisor, ITU

Preetam Maloor spoke about the AI for Good global summit organised by the ITU last May. He emphasised that AI can accelerate progress towards the sustainable development goals (SDGs). A multi-stakeholder approach should be adopted right from the start (in other words, not only involving AI scientists). The summit focused on topics such as AI and satellite imagery (for example, a project on predicting deforestation), AI and health (image-based malnutrition detection and AI-based snake identification), AI and smart cities (using AI tools to help the diversity of cities thrive) and trust in AI (open-sourcing the challenge of engineering and earning trust in AI for good, and the inclusiveness and fairness of data sets).



The conclusions from the first working group were presented by Aimee van Wynsberghe.

The following topics were discussed:

1.   What are we talking about?

  • AI cannot be isolated from other technological developments (big data, robotics, IoT, etc.)

2.   Who are we talking about?

  • consumers are at the centre of the discussions (how can we protect them?)
  • the focus was given to human rights issues
  • use of these technologies in a humanitarian context

3.   The role of education
  • how to provide education that provides new skills for the younger generation
  • education in new skills for the working population

4.   The role of ethics?

  • definition of ethics (European ethics, or the ethics of political or religious leaders?)
  • can this be achieved by a code of ethics or ethics standards alone, through a dialogue among individuals, or through a participatory design process?

5.   Language and the role of the media

  • a distinction should be made between automated decision-making and automated decision support, rather than using the term AI
  • the role of the media in this discussion is very important

The following reactions were given by Mady Delvaux, Member of the European Parliament:

  • as ethics is difficult to define, there has to be large-scale participation with a long-term perspective – new practical questions will arise
  • skills committees with specialists/users are needed in various institutions
  • individuals will have more responsibilities than before and in this regard, education is essential – how should we teach and train teachers?
  • ethical guidelines should be at the forefront of all this – they will determine standards, certification, etc.

The following reactions were given by Bjoern Juretzki, DG CONNECT, European Commission:

  • continued dialogue on ethics, as new issues are discovered, is essential, with a multi-stakeholder approach
  • the language used when speaking about AI must be chosen carefully: AI is probably not the best term, but it is more understandable than automated decision-making, for example

The following topics were raised in the discussion with the audience:

  • the human-in-command approach is also about making the decisions taken by algorithms easier to explain
  • it is important to acknowledge the limits of technology and of human beings
  • research on AI should be public
  • healthcare is one of the priorities of the Commission (it will provide funding for hospitals of the future that use tested new technologies, for example)


The conclusions from the second working group were presented by EESC Member Laure Batut.

The main ideas were divided into three topics:

  1. Psychological context
  • AI stresses people: the definition is not clear, it is too complex and the changes are taking place too rapidly
  • the dialogue concerning robots has not yet truly started
  • AI is not an end in itself, but should be a vector to guarantee better jobs

  2. The place of humans in society and work
  • humans are underestimated – some human tasks are very complex and robots are not able to fulfil them yet – playing the piano, for example
  • the configuration of our lives in society and at work is going to change
  • the place of humans in the workplace needs to be redefined – maximum security should be safeguarded
  • State governance also needs to be redefined
  • is the collaborative economy our future or a tool for more slavery?

  3. Proposals for actions

  • The key word is training, and people’s training should start now
  • social dialogue needs to be deepened when it concerns where and how to train employees
  • public authorities will have missions to fulfil, such as reducing disparities in the level of development in digital skills in different Member States
  • new methods should be found to fight against the gender gap, language problems and the education gap
  • new methods should also be found for how to earn money, how to tax earnings and how to redistribute wealth
  • AI is too important to leave it in the hands of big companies

The following reactions were given by Mady Delvaux, Member of the European Parliament:

  • if the interaction between human and AI is intelligent, the results can be impressive
  • we should pool all the ideas on AI at the European level and not leave them in the hands of particular Member States
  • big multinational companies are doing tasks which were in the past reserved for Member States
  • it is very difficult for teachers to prepare children for jobs that do not yet exist
  • lifelong learning is the key – a good diploma is no longer enough
  • the most difficult workers to train are those without basic education

The following reactions were given by Kasia Jurczak, Member of Commissioner Thyssen’s Cabinet, European Commission:
  • we have to prepare the opportunities and the discussion on what this European model should look like
  • older workers, particularly when it comes to digital skills, need to be prepared for the transition
  • upskilling pathways that encourage Member States to give second-chance opportunities to adults
  • the question of training the self-employed – who will pay for this?
  • businesses should offer learning opportunities – dialogue with businesses is very important for adapting curricula
  • AI will be mainstreamed (see, for example, the skills needed in the agricultural sector and other fields)
  • the benefits that AI can bring to the workplace – offering an adapted workplace for people with disabilities
  • AI is also used in skills forecasting – a CEDEFOP project is currently trawling through vacancies available on websites to determine skill shortages

The following topics were raised by the other speakers and the audience:

  • we have to be careful not to overestimate the impact of technologies – Germany, for example, has the highest robot density per 10,000 manufacturing employees but also among the lowest unemployment rates
  • we underestimate the human dimension – humans are much more capable than we sometimes think
  • interactive interfaces with AI technologies should be built
  • AI should help us to replace humans in dirty, boring and dangerous jobs (now robots are able to do jobs such as inspecting grids, milking cows, etc.)
  • AI will change the market much more than we think and jobs are more likely to change than to be taken over completely
  • how do we keep autonomy in jobs?
  • another big question is: how can AI reshape the relationship between employer and employees? – AI empowers everyone, but not to the same extent or at the same pace


The conclusions from the third working group were presented by EESC member Indrė Vareikytė.

The speakers and participants of this working group agreed that education should be the top priority when it comes to AI. The following remarks were made:

  • a shift in educational systems is needed – it should not be so much about how to teach children to code but more about teaching them how to think, how to solve problems through AI
  • curricula are outdated and need to be improved: they have not prepared young people for today’s world
  • greater emphasis should be placed on creativity, critical thinking and on adaptive and life-long learning
  • companies should be encouraged to invest in human capital from an early age
  • young voices are under-represented in AI-related debates
  • we have to embrace AI from the ethical point of view and guidelines should be reached at international level
  • a strong focus should be on the responsibility of companies – if you are not able to explain your algorithm, you should not be putting the product on the market
  • more good storytelling is needed
  • the key to progress is to trust AI
  • there is a lack of customers with an advanced understanding of AI
  • there are great talents and companies in Europe and we have a real opportunity to lead this topic

The following reactions were given by Mady Delvaux, Member of the European Parliament:

  • Europe is performing very well in many areas – high-quality research, a competitive industry, the best position in robotics
  • if we want to be successful, we need to invest in human resources
  • the quality of data is a problem – programmers are usually young white men and not older black women, and this creates biases – this might be very difficult to change, except with education and training

The following reactions were given by Mario Mariniello, Digital Adviser, European Political Strategy Centre at the European Commission:

  • the mindset of engineers needs to change
  • education should reach everybody, not only the “smart” part of the population
  • the rise in populism is also due to uncertainty about what the future holds
  • the retraining of poorer workers is essential
  • Member States that have national strategies in place should also support the Commission’s work – a European strategy is more powerful
  • standards based on privacy by default or transparency are needed
  • the approach to competition policy should be rethought to better capture potential competitors such as small companies (competitors of the future that are not currently subject to the control of antitrust authorities) – to block acquisitions that might be harmful in the future

The following topics were raised by other speakers and the audience:

  • practical aspects are missing from the education systems
  • in the short-term, we should also focus on how to make European industry more competitive
  • GDPR is a blessing for European industry, which has to be responsible
  • European AI is not lagging behind the US – in many ways it is far ahead – but it lacks demanding customers in Europe
  • an EU AI industry is needed to make Europe strong and responsible
  • storytelling is essential and there are many topics to put forward
  • a good example of vocational and education training can be seen in the Basque country, where training is technology-based and linked to businesses
  • the question of legal personality was mentioned – the product liability directive should be amended in this context
  • the preventive role of our liability laws is very important – they prevent us from doing harm