The term “artificial intelligence” is coined by John McCarthy, Marvin Lee Minsky, Nathaniel Rochester and Claude Elwood Shannon in their 1955 proposal for a summer research workshop at Dartmouth College. The workshop, which took place a year later, is widely considered the birthplace of AI as a research field.
Stanley Kubrick’s movie “2001: A Space Odyssey” features HAL 9000, a sentient supercomputer endowed with a highly developed AI. HAL 9000 is a dependable crew member of “Discovery One” on its mission to Jupiter, and regularly displays humanlike qualities in his exchanges with human counterparts like Dr. David “Dave” Bowman.
Decades before Big Tech companies enter the field, Ernst Dickmanns and a team of German engineers develop the first autonomous vehicle. Their self-driving Mercedes-Benz van can navigate traffic at speeds approaching 90 kilometers per hour. Today, thousands of self-driving vehicles are being tested on roads in cities and regions worldwide, and some forecasts predict that 800,000 autonomous vehicles will be produced in 2030.
Alice, the first chatbot to use natural language processing, is developed by Richard Wallace. It builds on the work of Joseph Weizenbaum, who created the very first chatbot, ELIZA, in the mid-1960s. Alice’s and ELIZA’s legacies live on in today’s Siri and Alexa. The number of digital voice assistants is predicted to reach 8.4 billion in 2024 – more than the entire human population.
AlphaGo, an AI system developed by Google’s (now Alphabet’s) DeepMind, defeats world champion Lee Sedol at Go, a complex board game – a decade earlier than predicted. Go had long been considered a difficult challenge for AI, as it requires intuition, creativity, and strategic thinking, in other words, abilities typically associated with the human brain.
According to a survey of machine learning researchers, AI is projected to be better than humans at translating languages by 2024, writing school essays by 2026, driving trucks by 2027, selling goods by 2031, writing bestselling books by 2049, and conducting surgery by 2053. Even if the development of Artificial General Intelligence (AGI) is not just around the corner, machines will increasingly outperform humans at various tasks in the coming years.
If we are to shape the future positively, we have to think about the future. This is all the more important when it comes to AI, which, as a general-purpose technology, will increasingly affect each and every aspect of our lives. AI has the potential to create vast benefits for the well-being, prosperity, and security of our societies. At the same time, however, it poses significant ethical and regulatory challenges for governments, companies, and citizens alike.
This is why we hosted a workshop on the future of AI governance at the 2021 Paris Peace Forum. With the help of different foresight methods, we challenged our participants to connect future developments to decisions that are being made today. Coming from various geographical and professional backgrounds, participants created four “artefacts from the future” – objects that may exist in the year 2040 and that represent four AI governance challenges that the group identified as most pressing.
How should platforms determine what information is provided to users? Is misinformation/disinformation flagged and removed, assigned a warning label, deprioritized in ranking, demonetized, debunked, or something else entirely? What socio-technical process is used to identify misinformation and disinformation? Together, these questions constitute a new form of governance over speech and the exchange of ideas – one that is mediated by policies, technologies, incentives, and other affordances controlled by private platforms rather than governments. It thus requires different sectors to work together.
In 2040, an app called Veraphoria informs its users when they are exposed to fake content, helping them distinguish true from false. The World Truth Council has been established to determine what is true and what is false. For its standards to be trustworthy and robust, relevant stakeholders cooperate in the Council, and the realities of existing power dynamics are acknowledged and navigated to prevent the institution from reinforcing inequalities.
Even with intensive measures to ensure the safety and security of technologies in 2040, we can expect an increase in the number of cyber incidents, hacks, and publications of fake content. Apps like Veraphoria can help protect against such malicious activities.
The call for professional responsibility stems from the fact that the actions of the individuals who design, develop, and deploy AI-based systems have a direct impact on the ethics of those systems. Multistakeholder collaboration, as embodied in the World Truth Council that authenticated Veraphoria, encourages these individuals to consult and work with relevant stakeholder groups.
As AI technologies prevail in more and more areas of our daily lives, it is essential that AI-powered systems operate in line with social norms and benefit humans. Veraphoria assumes that citizens have an inherent desire for knowledge and exposure to the truth, and therefore supports them by identifying fake content.
Invest in new and existing multistakeholder governance mechanisms to build capacity, exchange information, and provide incentives to tech platforms to do the same.
States outsource more and more functions to AI systems, potentially including sensitive areas like border management, healthcare, and the provision of social services. For this to happen in a democratic system rather than some future dystopia, people would need to place a high level of trust in these technologies. Transparent and accountable systems are therefore needed so that citizens understand the basis of decisions that affect them and, when things go wrong, have access to effective mechanisms for grievance and remedy.
The CitiCred bracelet was adopted in 2040 by a country named Artifiland to manage its borders. Artifiland’s citizens use the bracelet instead of a passport; it also includes a social credit system. Bracelets carry “trust beads”, whose colours indicate each citizen’s level of trustworthiness. However, Artifiland is facing the emergence of “deviant” CitiCred bracelets, which are sold on the black market and allow people to go off the grid. The bracelet symbolizes that technology can be both beneficial and harmful, depending on how it is used and on the agreement reached between a government and its population.
Who is responsible for damage caused by deviant CitiCred bracelets? As states and companies delegate an increasing number of sensitive tasks to AI-powered technologies, policymakers will have to develop strong legal frameworks if such technologies are to be accepted by democratic societies.
Transparency, understood as the possibility of overseeing AI operations, and explainability, that is, the translation of technical aspects into intelligible formats, are essential for trust in AI and other technologies. If Artifiland (or any other country) were to outsource border management and other functions to an AI system, it would have to ensure a high degree of transparency and explainability in the underlying technology to garner the support of its citizens.
AI systems must be safe and secure: they must do no harm to users and must be protected against abuse by malicious or unauthorized parties. The CitiCred bracelet contains sensitive information and gives a lot of control to whoever can hack it or create a deviant version. Artifiland therefore constantly monitors and updates its guidelines to ensure safety and security.
Work towards standards (national, regional, international) that ensure AI systems are designed transparently and that accountability is clear, in order to increase trust in technologies.
A core function of AI is to increase efficiency and save costs, which means that AI is often deployed in less resource-rich settings. The communities affected by a technology often have little influence over its design and implementation, a problem compounded by the fact that a system may not perform equally well across contexts due to biased or unrepresentative training data. Additionally, the interests and incentives of data controllers (companies, states) and data subjects (citizens) are not always aligned – resulting in potential privacy harms, lack of transparency, lack of individual and community control over data, and lack of access to the monetary benefits accrued from the use of one’s data.
In 2040, you may find this invitation to a Data Potluck just outside your home. It comes from a future in which communities own their data, and it invites you to imagine local communities with the authority and technical capacity to collect their own data and build their own AI systems. The need for citizen empowerment is thus at the heart of the Data Potluck invitation. Members of the community are asked to bring their medical history data and, upon arrival, must say “grassroots” at the door as a control mechanism.
For technologies to serve the needs of the people, it is essential that humans remain in control of the design and implementation of AI systems. An important prerequisite for this is greater data and technology literacy among decision-makers and citizens alike. Local events like the Data Potluck are a means to this end.
The right to privacy is enshrined in the Universal Declaration of Human Rights. AI technologies that run on huge amounts of data are used in surveillance, healthcare and social systems, thereby raising privacy concerns. Sharing and processing data locally, as in the Data Potluck, promises to safeguard privacy and reflect community interests.
Algorithmic bias is a key challenge for any technology running on big data. It risks aggravating existing imbalances and inequalities, especially to the detriment of minorities and underrepresented communities. Entrusting data collection to communities themselves may help ensure that all societal groups are represented equally.
Think about ways to ensure inclusive AI governance and consider bottom-up approaches, in particular.
Since the 2010s, many states and international organisations have published their visions for (global) regulation and standards governing the development and application of AI systems. While many initiatives have been led by industrialised countries, comprehensive worldwide regulation would ideally be based on broad input. In addition, in a democratically governed system, citizens would have ownership of their data and influence over how their data are used.
The Global Citizen app imagines a future in which citizens are regularly and actively consulted about the uses of AI, so that their experiences and perspectives shape the governance of the technology. The app was designed in 2040 to empower people whose lives are heavily affected by AI systems. Users can view the data that AI technologies use for decision-making, amend their personal data, and correct the shortfalls of AI systems.
If technologies are to serve the needs of the people, it is essential that humans remain in control of the design and implementation of AI systems. The Global Citizen app accounts for this requirement by allowing all stakeholders to participate in AI governance, for instance by amending their personal data.
Transparency, understood as the possibility of overseeing AI operations, and explainability, that is, the translation of technical aspects into intelligible formats, are essential for trust in AI technologies. As AI increasingly replaces (or complements) humans in decision-making processes (justice, critical infrastructure, self-driving vehicles, etc.), people must be able to understand how these decisions are made.
Who is accountable for the decisions taken by an algorithm? The Global Citizen app enables citizens to give feedback and complain about AI-made decisions affecting them. However, it comes with extensive terms and conditions. This captures some of the implications and pitfalls of the 2040 context identified by the participants (exacerbated inequality, lack of inclusivity, distrust).
Expert voices often predominate in policy circles. Consider mechanisms to engage people who are impacted by technology, even if they have not taken part in its development.
We hope you enjoyed your journey into the future. As you have witnessed first-hand, there are many challenges ahead for AI governance. But 2040 is not as far away as you may think. Many of the questions raised by the artefacts have to be answered by policymakers today.
Now that you have looked into the future, what do you see as the most pressing issue for AI governance?
To contribute to the debate on improving the global governance of AI, Körber-Stiftung hosted a digital workshop during the 2021 Paris Peace Forum. The two-day workshop explored future trends in global AI governance in policy areas such as healthcare, critical infrastructure, border management, elections, and autonomous weapons in the context of horizontal trends like bias, privacy, cyber security, North-South divides, and the geopolitical implications of emerging tech.
During three consecutive sessions, over 50 participants from diverse geographical and professional backgrounds employed various futures approaches including the Futures Wheel and the creation of artefacts from the future. The sessions were facilitated by experts from the School of International Futures.
In Session 1, participants used Futures Wheels to explore the first-, second-, and even third-order impacts of provocations crafted to stimulate critical thinking about how various applications of AI can be used (and abused) by an array of actors. As a method, the Futures Wheel provides a structured approach for exploring cascading effects across different domains (social, economic, etc.).
In Session 2, participants created a design brief that served as inspiration for two designers to render two-dimensional artefacts from the future.
Session 3 challenged the participants to imagine the impact of the four artefacts on policy across three time horizons: one year out, five years out, and in 2040. The session focused on the risks, resources, and relationships inherent in and emanating from the artefacts, with an eye towards how different actors, communities, and stakeholders might be engaged further on the future of global AI governance.
The final presentation brought together participants from all sessions to reflect on lessons learned and insights from the products as well as the process itself.
Körber-Stiftung is a private foundation that takes on current social challenges through its areas of activity: Innovation, International Dialogue, and Vibrant Civil Society. Founded in 1959 by its initiator, the entrepreneur Kurt A. Körber, the foundation is now active nationally and internationally from its sites in Hamburg and Berlin.
The Berkman Klein Center is a research center at Harvard Law School. Its mission is to explore and understand cyberspace; to study its development, dynamics, norms, and standards; and to assess the need or lack thereof for laws and sanctions.
In a world requiring more collective action, the Paris Peace Forum is a platform open to all seeking to develop coordination, rules, and capacities that answer global problems. Year-round support activities and an annual event in November help better organize our planet by convening the world, boosting projects, and incubating initiatives.
1956: The Force Awakens, Marvin Minsky, Claude Shannon, Ray Solomonoff and other scientists at the Dartmouth Summer Research Project on Artificial Intelligence, 1956.
Photo: Margaret Minsky, by courtesy of the Minsky Family
1968: “I’m Afraid, Dave”, HAL 9000, 2001: A SPACE ODYSSEY, 2 April 1968.
Photo: AA Film Archive/MGM
1986: Cars Without Drivers, research project PROMETHEUS (1986 to 1994), test vehicle based on a Mercedes-Benz van; VITA sub-project, a precursor to DISTRONIC PLUS and the automatic PRE-SAFE® brake, 5 September 2006.
Photo: Mercedes-Benz Group
1995: Chatbots Speaking Up, old and dirty CRT computer monitor, 13 November 2016.
Photo: Norasit Kaewsai via iStock
2016: Machine Beats Human, SKOREA-SCIENCE-COMPUTERS-AI, 13 March 2016.
Photo: ED JONES/AFP via Getty Images
2022 and Onwards – Will AI Take Over?, blurred photo of daily life on a city square, 8 April 2018.
Photo: Lina Moiseienko via iStock
Veraphoria, Dominique Vassie, November 2021/July 2022. Find out about her work at www.dominiquevassie.com. Background: blank billboard on a subway station wall, 30 January 2012.
Photo: sorendls via iStock
CitiCred Bracelet, Dominique Vassie, November 2021/July 2022. Find out about her work at www.dominiquevassie.com. Background: woman’s hand with a smartwatch on her wrist, 12 October 2020.
Photo: sorendls via iStock
Data Potluck, Laura Dudek, November 2021/July 2022. Find out about her work at laura-dudek.com.
Photo: Yonghyun Lee via Unsplash
Global Citizen, Laura Dudek, November 2021/July 2022. Find out about her work at laura-dudek.com. Background: top view of a man’s hand holding a white phone with a blank screen on his thigh, 12 March 2019.
Photo: Farknot_Architect via iStock