Artificial intelligence
is developing rapidly.

1950
Old black-and-white photograph from the 1950s of a group of young computer scientists sitting together casually in the grass and smiling.

1956: The Force Awakens

The term “artificial intelligence” is coined by John McCarthy, Marvin Lee Minsky, Nathaniel Rochester, and Claude Elwood Shannon in a proposal for a summer research conference. The conference, which took place a year later, is widely considered the birthplace of AI as a research field.

1960

1968: “I’m Afraid, Dave”

Stanley Kubrick’s movie “2001: A Space Odyssey” features HAL 9000, a sentient supercomputer endowed with a highly developed AI. HAL 9000 is a dependable crew member of “Discovery One” on its mission to Jupiter, and regularly displays humanlike qualities in his exchanges with human counterparts like Dr. David “Dave” Bowman.

Stylization of the red user interface light of the HAL 9000 computer from Stanley Kubrick’s movie “2001: A Space Odyssey”.
1980
Stylization of a futuristic 1980s prototype car cockpit with clunky electronics in plastic housings and blue wireframed regions of computerized parts.

1986: Cars Without Drivers

Decades before Big Tech companies entered the field, Ernst Dickmanns and a team of German engineers develop the first autonomous vehicle. Their self-driving Mercedes-Benz van can navigate traffic at speeds approaching 90 kilometers per hour. Today, thousands of self-driving vehicles are being tested on roads in cities and regions worldwide. Forecasts predict that 800,000 autonomous vehicles will be produced in 2030.

1990

1995: Chatbots Speaking Up

Alice, the first chatbot to use natural language processing, is developed by Richard Wallace. It builds on the work of Joseph Weizenbaum, who created the very first chatbot, ELIZA, as early as 1965. Alice’s and ELIZA’s legacies live on in today’s Siri and Alexa. The number of digital voice assistants is predicted to reach 8.4 billion in 2024 – higher than the entire human population.

A 1990s 17-inch computer screen with a blue wireframe visualization of a chatbot speaking out of it: Hello!
2010
A Go gameboard with a blue visualization of dots showing an AI’s possible next moves.

2016: Machine Beats Human

AlphaGo, an AI system developed by Google’s (now Alphabet’s) DeepMind, defeats a human champion of Go, a complex board game. This is a decade earlier than predicted. Go had long been considered a difficult challenge for AI, as it requires intuition, creativity, and strategic thinking: in other words, abilities typically associated with the human brain.

2022

2022 and Beyond: Will AI Take Over?

AI is projected to be better than humans at translating languages by 2024, writing school essays by 2026, driving trucks by 2027, selling goods by 2031, writing bestselling books by 2049, and conducting surgery by 2053. Even though the development of Artificial General Intelligence (AGI) is not just around the corner, machines will increasingly outperform humans in various tasks in the coming years.

A still CCTV image of people walking across a public square in a city, overlaid with blue wireframe tracking visualizations.

We don’t know what
lies ahead.

But now is the time
to set rules.


It’s time
for AI
Governance.

2040

Thinking Ahead with
Artefacts from the Future

If we are to shape the future positively, we have to think about the future. This is all the more important when it comes to AI, which, as a general-purpose technology, will increasingly affect each and every aspect of our lives. AI has the potential to create vast benefits for the well-being, prosperity, and security of our societies. At the same time, however, it poses significant ethical and regulatory challenges for governments, companies, and citizens alike.

This is why we hosted a workshop on the future of AI governance at the 2021 Paris Peace Forum. With the help of different foresight methods, we challenged our participants to connect future developments to decisions that are being made today. Coming from various geographical and professional backgrounds, participants created four “artefacts from the future” – objects that may exist in the year 2040 and that represent four AI governance challenges that the group identified as most pressing.

Four Issues for the Future of AI Governance

01

Multistakeholder Cooperation

02

Transparency, Accountability, and Trust

03

Cooperative Data Governance

04

Regulations and Standards
Stylized animation of randomly appearing dots, giving a sense of a computing process.

01

Multi
Stakeholder
Cooperation

How should platforms determine what information is provided to users? Is misinformation/disinformation flagged and removed, assigned a warning label, deprioritized in ranking, demonetized, debunked, or something else entirely? What socio-technical process is used to identify misinformation and disinformation? Together, these questions constitute a new form of governance over speech and the exchange of ideas – one that is mediated by policies, technologies, incentives, and other affordances controlled by private platforms rather than governments. It thus requires different sectors to work together.

What can be done?

Invest in new and existing multistakeholder governance mechanisms to build capacity, exchange information, and provide incentives to tech platforms to do the same.

Stylized animation of randomly appearing dots, giving a sense of a computing process.

02

Transparency,
Accountability,
and Trust

States outsource more and more functions to AI systems, potentially including sensitive areas like border management, healthcare, and the provision of social services. For this to happen in a democratic system rather than some future dystopia, the people would need to place a high level of trust in these technologies. Therefore, transparent and accountable systems are needed so that citizens understand the basis of decisions that impact them and, when things go wrong, effective systems for grievance and remedy are readily available.

What can be done?

Work towards standards (national, regional, international) that ensure AI systems are designed in a transparent manner and that accountability is clearly assigned, in order to increase trust in these technologies.

Stylized animation of randomly appearing dots, giving a sense of a computing process.

03

Cooperative
Data
Governance

A core function of AI is to increase efficiency and save costs. This means that AI is often deployed in less resource-rich settings. The communities impacted by technology often have little influence over a given system’s design and implementation, a problem multiplied by the fact that the system may not operate equally well across contexts due to biased or unrepresentative training data. Additionally, the interests and incentives of data controllers (companies, states) and data subjects (citizens) are not always aligned – resulting in potential privacy harms, lack of transparency, lack of individual and community control over data, and lack of access to the monetary benefits accrued from the use of one’s data.
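
This dynamic is easy to demonstrate. The sketch below is a minimal, fully synthetic illustration (not from the workshop; the data, the model choices, and the scikit-learn dependency are all invented assumptions): a model trained on data from one context performs markedly worse in another context whose underlying relationships differ.

```python
# Synthetic illustration (not from the report) of the point above: a model
# trained on data from one context can perform markedly worse in another
# whose feature-outcome relationship differs. All data are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def make_context(shift, n=2000):
    """Generate features and labels; `shift` changes the decision boundary."""
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] + shift * X[:, 1] > 0).astype(int)
    return X, y

# Training data comes entirely from context A.
X_a, y_a = make_context(shift=1.0)
model = LogisticRegression().fit(X_a, y_a)

# Deployment happens in context B, where the relationship differs.
X_b, y_b = make_context(shift=-1.0)
print(f"Accuracy in context A: {model.score(X_a, y_a):.2%}")
print(f"Accuracy in context B: {model.score(X_b, y_b):.2%}")
```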

What can be done?

Think about ways to ensure inclusive AI governance and consider bottom-up approaches, in particular.

Stylized animation of randomly appearing dots, giving a sense of a computing process.

04

Regulations
and
Standards

Since the 2010s, the world has seen many states and international organizations publish their visions for (global) regulation and standards in the development and application of AI systems. While many initiatives have been led by industrialized countries, comprehensive worldwide regulation would ideally be based on broad input. In addition, in a democratically governed system, citizens would have ownership of their data and influence over how their data are used.

What can be done?

Expert voices often predominate in policy circles. Consider mechanisms to engage people who are impacted by technology, even if they have not taken part in its development.

2040

2030

2022


Back to
Present.

Welcome back to 2022!

We hope you enjoyed your journey into the future. As you have witnessed first-hand, there are many challenges ahead for AI governance. But 2040 is not as far away as you may think. Many of the questions raised by the artefacts have to be answered by policymakers today.

Now that you have looked into the future, what do you see as the most pressing issue for AI governance?

What should be the priority of decision-makers today?

1. Multistakeholder Cooperation
2. Transparency, Accountability, and Trust
3. Cooperative Data Governance
4. Regulations and Standards

Policy Recommendations

Based on the discussions held during the Paris Peace Forum and the ideas emanating from them, we identified four key issues for decision-makers to attend to: Multistakeholder Cooperation; Transparency, Accountability, and Trust; Cooperative Data Governance; and Regulations and Standards. All these aspects require immediate attention and forward-looking policy action to ensure that technology serves the people. More specifically, we recommend that regulators and decision-makers consider the following.

Multistakeholder engagement and cooperation is essential to robust AI governance now and will continue to be so in the future. Effectively governing digital global goods, as embodied in the Veraphoria artefact, cannot be achieved by states alone but depends upon various actors joining forces. But not all engagement is created equal: success requires meaningful participation by all material stakeholders on the issue under consideration. In structuring such fora, decision-makers should consider three things.

First, they should capitalize on existing channels. To avoid diverting precious time and resources, policymakers should prioritize AI governance concerns in organizations and efforts in which they already participate, rather than establishing new platforms for engagement, like the World Truth Council. One strong existing example of this is the Globalpolicy.ai initiative, through which a number of intergovernmental organizations with AI mandates are collaborating to share information and resources.

Second, they should take robust, respectful inclusion seriously. Just as policymakers take advantage of existing networks, it is crucial to consider voices that are not yet adequately represented in the AI governance conversation. This is particularly apposite as regards historically marginalized communities, with whom some policymakers may not have strong existing relationships. Stakeholder outreach should be conducted respectfully, avoiding tokenism and remaining mindful of any interests the relevant communities have already communicated through organizations like the Indigenous Protocol and Artificial Intelligence Working Group.

Finally, timing is crucial. Multistakeholder cooperation may be advantageous throughout the development and implementation of policy solutions, from identifying and prioritizing issues through deployments and later amendments. Consultations should be structured appropriately for each stage of the process to ensure inputs that are offered can be thoroughly considered and addressed.

A high degree of transparency about the processes leading to decisions is essential for trust in AI systems. The degree of required transparency is contingent on the use case and the severity of the consequences of a mistake. But understanding a decision is also highly context-dependent. An explanation suitable for an expert in machine learning may be unsuitable for the implementing domain expert (e.g. an immigration or customs official) or the impacted layperson. Information that is sufficient for achieving understanding in an urgent setting (e.g. a medical emergency) may be insufficient in another (e.g. avoiding discrimination in a border management system; cf. the CitiCred Bracelet). For this reason, policymakers should expect providers and users of high-risk AI systems to produce information that meets the needs of multiple audiences, may vary depending on the situation, and enables scrutiny across multiple dimensions.

Some of the most advanced AI and machine learning techniques do not produce interpretable explanations for the generation of a particular decision. Whether such systems should be used in high-risk settings at all is a matter of ongoing public debate. The European Commission’s proposed AI Act requires high-risk AI systems to enable users and human overseers to interpret their outputs. It is still unclear if this requirement categorically excludes the use of uninterpretable techniques, or if the construction of post-hoc models that explain the behavior of the underlying model will be accepted as sufficient. As AI systems are potentially beneficial to citizens in high-stakes contexts, policymakers should carefully consider whether the benefits of utilizing techniques that are not fully interpretable outweigh the risks in some instances.
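
To make the notion of a post-hoc explanation concrete, the sketch below shows one common pattern: fitting a small, interpretable “surrogate” model to the predictions of an opaque one and measuring its fidelity, i.e. how often the surrogate agrees with the black box. This is an illustrative example only, not language from the AI Act or material from the workshop; the dataset, the model choices, and the scikit-learn dependency are assumptions.

```python
# Illustrative sketch of a global post-hoc "surrogate" explanation, one of
# the techniques the AI Act debate refers to. A simple, interpretable model
# is fitted to mimic an opaque model's predictions; its "fidelity" measures
# how faithfully it reproduces the black box. Dataset and models are
# arbitrary examples; assumes scikit-learn is installed.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The opaque, high-performing model (stand-in for an uninterpretable system).
black_box = RandomForestClassifier(n_estimators=200, random_state=0)
black_box.fit(X_train, y_train)

# Post-hoc surrogate: a shallow decision tree trained on the black box's
# *outputs*, not the ground truth, so it approximates the model's behavior.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_train, black_box.predict(X_train))

fidelity = accuracy_score(black_box.predict(X_test), surrogate.predict(X_test))
print(f"Surrogate fidelity to black box: {fidelity:.2%}")
print(export_text(surrogate, feature_names=list(X.columns)))
```

Whether a faithful surrogate of this kind would satisfy an interpretability requirement is precisely the open question raised above.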

The digital economy uses interconnected data in ways that often have societal effects, but data is governed as though it were a solely personal matter. It is the aggregation of data and its connection to other data that enables new insights and predictions to be made with artificial intelligence, machine learning, and statistical models. The laws and regulations that control the capture and use of data preserve the rights of individual data subjects, address direct or indirect privacy or other harms that have accrued to individuals, and mediate the relationship between data subjects and data processors. But the societal impacts of these technologies are often diffuse, occurring outside the boundaries of the relationship between the data subject and the data controller. Crucially, tremendous value is derived from the relationships of data to other data: observations from a small sample enable inferences about whole populations, the addition of new data from others can yield insights about a data subject that they themselves were unaware of, and data voluntarily provided by an individual may impact the interests of another person who is not a data subject (Viljoen, 2021). Going beyond individual protections and addressing the cumulative effects of the data economy is a crucial challenge for the current generation of policymakers.
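
The relational point can be made concrete with a minimal, fully synthetic sketch (an illustration of the argument, not material from the workshop; all numbers are invented and scikit-learn is assumed): a model trained only on data that volunteers chose to disclose ends up supporting confident inferences about a person who disclosed nothing.

```python
# Minimal synthetic illustration of the relational point above (not from the
# report): data volunteered by some people enables inferences about others.
# A model learns the link between observable traits and a sensitive attribute
# from volunteers, then predicts that attribute for a non-volunteer who never
# shared it. All numbers are made up.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Volunteers: observable traits (e.g. shopping patterns) plus a sensitive
# attribute they chose to disclose.
n = 1000
traits = rng.normal(size=(n, 3))
sensitive = (traits @ np.array([1.5, -2.0, 0.5]) + rng.normal(size=n) > 0).astype(int)

model = LogisticRegression().fit(traits, sensitive)

# A non-volunteer never disclosed the sensitive attribute, but their
# observable traits suffice for a confident statistical inference.
non_volunteer = rng.normal(size=(1, 3))
prob = model.predict_proba(non_volunteer)[0, 1]
print(f"Inferred probability of sensitive attribute: {prob:.2f}")
```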

Governments, companies, and citizens have – as embodied in the Data Potluck artefact – already begun to develop alternatives to top-down data governance models through data cooperatives, trusts, and other mechanisms. These nascent efforts vary in form and purpose, but enable greater participation by individuals and groups in decisions concerning their data (Ada Lovelace Institute, 2021). Policymakers should invest in supporting such efforts within their regions or subject area remits in order to foster a rich ecosystem of complementary approaches to data governance. Promising avenues include supporting academic research in this area, establishing regulatory sandboxes, and collaborating internationally on the development of technical and governance mechanisms.

As regulatory bodies across the globe begin to take on the challenge of AI, procurement is a natural starting point and a powerful lever for policymakers to pull. Governments possess considerable purchasing power – even if the AI applications they are requisitioning are less complex than the Global Citizen artefact – which translates to influence. Numerous jurisdictions, beginning with Canada in 2019, have already sought to employ procurement regulations in the governance of AI technologies. There is robust guidance available from the World Economic Forum and the European Union, and further information may be available through the OECD.AI Policy Observatory. One advantage of procurement as a regulatory strategy is that it is equally viable for regional and municipal government entities. In structuring AI procurement policies, decision-makers should consider at least three things.

First, the public sector has considerable standard-setting power. It can thus model best practices for private entities purchasing similar systems, such as conducting impact assessments prior to acquiring an AI-based product.

Second, government procurement policies shape products and can influence the market for AI solutions in several ways. For example, they can be an engine of transparency. Canada’s regulation requires the government to publish custom source code it acquires; this supports auditability and increases the knowledge base available to advance innovation. More generally, if governments use their purchasing power to insist on rights-protective, responsible technological solutions, vendors are incentivized to build products that comply with those requirements, and other downstream buyers benefit from the same changes.

Lastly, public actors should be mindful of their broader influence on other stakeholders. As policymakers begin to see domestic benefits from responsible AI procurement policies, they may consider encouraging allies and partners to adopt them as well, for instance in the context of regional fora, negotiations, and trade agreements.

“Digital Futures: Co-Designing AI Governance”

To contribute to the debate on improving the global governance of AI, Körber-Stiftung hosted a digital workshop during the 2021 Paris Peace Forum. The two-day workshop explored future trends in global AI governance in policy areas such as healthcare, critical infrastructure, border management, elections, and autonomous weapons, set against horizontal trends like bias, privacy, cybersecurity, North-South divides, and the geopolitical implications of emerging tech.

Video: Körber-Stiftung’s Ronja Scheler discussing the format and essence of the workshop with Trisha Shetty and Justin Vaisse.

During three consecutive sessions, over 50 participants from diverse geographical and professional backgrounds employed various futures approaches including the Futures Wheel and the creation of artefacts from the future. The sessions were facilitated by experts from the School of International Futures.

In Session 1, participants used Futures Wheels to explore the first-, second-, and even third-order impacts of provocations crafted to stimulate critical thinking on how various applications of AI can be used (and abused) by an array of actors. As a method, Futures Wheels provide a structured approach for exploring cascading effects across different fields (social, economic, etc.).

In Session 2, participants created a design brief that served as inspiration for two designers to render two-dimensional artefacts from the future.

Session 3 challenged the participants to imagine the impact of the four artefacts on policy across three time horizons: one year out, five years out, and the year 2040. The session focused on the risks, resources, and relationships inherent to and emanating from the artefacts, with an eye towards how different actors, communities, and stakeholders might be engaged further on the futures of global governance for AI.

The final presentation brought together participants from all sessions to reflect on lessons learned and insights from the products as well as the process itself.

Video: What does our digital future look like? Conclusions from the Foresight Workshop.

Doaa Abu Elyounes, Consultant, Bioethics and Ethics of Science Section, UNESCO, Paris
Tunde Adegbola, Executive Director, African Languages Technology Initiative, Ibadan
Carolina Aguerre, Director, Center for Technology and Society Studies (CETyS), University of San Andres, Buenos Aires
Urvashi Aneja, Associate Fellow, Asia-Pacific Programme, Chatham House, Goa
Dirk Aßmann, Division Manager, German Corporation for International Cooperation (GIZ), Frankfurt
Shahar Avin, Senior Research Associate, Centre for the Study of Existential Risk, Cambridge
Anna Bacciarelli, Program Officer, Open Society Foundations, London
Marco-Alexander Breit, Head of Artificial Intelligence Task Force, German Federal Ministry for Economic Affairs and Energy, Berlin
Joanna Bryson, Professor of Ethics and Technology, Hertie School of Governance, Berlin
Marcela Capaja, Senior Specialist for Strategic Futures, Natural England, York (Facilitator)
Mojca Cargo, Senior Manager, Public Sector Engagement, GSMA, London
Duncan Cass-Beggs, Counsellor for Strategic Foresight, Organisation for Economic Co-operation and Development, Paris
Lucie Courtade, Programme Manager International Affairs, Körber-Stiftung, Berlin
Veronika Datzer, Mercator Fellow, European Commission, Brussels
Keefer Denney-Turner, Research Associate, CyberPeace Institute, Geneva
Antoine Doucet, Professor, University of La Rochelle, La Rochelle
Laura Dudek, Graduate Student, Royal College of Art, London (Designer)
Alex Engler, Fellow - Governance Studies, The Brookings Institution, Washington D.C.
Ilaria Fevola, Legal Officer, Article 19, London
Sophie-Charlotte Fischer, PhD Candidate, Center for Security Studies, ETH Zürich, Zurich
Jessica Fjeld, Assistant Director, Cyberlaw Clinic, Berkman Klein Center, Harvard Law School, Cambridge, MA
Frank Gavin, Professor and Director, Henry A. Kissinger Center for Global Affairs, The Johns Hopkins University - Paul H. Nitze School of Advanced International Studies (SAIS), Washington DC
Nico Geide, Deputy Head of Unit, Digital Transformation and Mobility, Federal Foreign Office, Berlin
Daniel Giorev, Head of Unit, Sustainable Development Policy and Global Partnerships with the UN and IFIs, European Commission, Brussels
Constanza Gomez Mont, Founder and Chief Executive Officer, C Minds, San Francisco
Martin Hullin, Chief of Operations and Technology Officer, Datasphere Initiative, Internet & Jurisdiction Policy Network, Paris
Anushka Jain, Associate Counsel: Surveillance & Transparency, Internet Freedom Foundation, New Delhi
Amy Johnson, Affiliate, Berkman Klein Center, Harvard Law School, Cambridge, MA
Caitlin Kraft-Buchman, CEO and Founder, Women at the Table, Alliance for Inclusive Algorithms, Geneva
Robert Kirkpatrick, Executive Director, UN Global Pulse, New York
Jean Koïvogui, CEO and Founder, TECHNATIUM, Massy
Manuel Lafont Rapnouil, Director, Policy Planning, French Ministry for Europe and Foreign Affairs, Paris
Stephanie Laulhe Shaelou, Professor of European Law and Reform, Chair of Research and Innovation, University of Central Lancashire Cyprus, Larnaka
Sriganesh Lokanathan, Data Innovation & Policy Lead, UN Pulse Lab, Jakarta
Roman Mazur, Director and Head of the AI Institute, Polish University Abroad, London
Nora Müller, Executive Director International Affairs, Körber-Stiftung, Berlin
Maricela Muñoz, Government Fellow, Geneva Center for Security Policy, Geneva
Adam Nagy, Project Coordinator, Cyberlaw Clinic, Berkman Klein Center, Harvard Law School, Cambridge, MA
Elina Noor, Director, Political-Security Affairs and Deputy Director, Asia Society Policy Institute, Washington, D.C.
Angela Paulk, Consultant / Faculty, DGA ltd / Columbia University, London
Karine Perset, Head of Unit, OECD Artificial Intelligence Policy Observatory, Organisation for Economic Co-operation and Development, Paris
Golestan Radwan, Advisor for Artificial Intelligence, Egyptian Ministry of Communications & Information Technology, Cairo
Yamunna Rao, Project Manager, Global Solutions Initiative Foundation, Berlin
Megan Roberts, Director of Policy Planning, UN Foundation, Washington D.C.
Fatima Roumate, Associate Professor for International Economic Law, Mohammed V University, Rabat
Ronja Scheler, Programme Director International Affairs, Körber-Stiftung, Berlin
Johann Schutte, Foresight Specialist, School of International Futures, Cape Town (Facilitator)
John Sweeney, Transformative Foresight Lead, School of International Futures, Kuşadası (Facilitator)
Jun-E Tan, Independent Policy Researcher, Kuala Lumpur
Jutta Treviranus, Director and Professor at Inclusive Design Research Centre, OCAD University, Toronto
Cat Tully, Managing Director, School of International Futures, London
Anthony Ukwenya, Medical doctor, Teaching Hospital, Zaria
Phumzile Van Damme, Head, South African Local Government Elections Anti-Disinformation Project, Oslo; Munich Young Leader 2021
Arianna Vannini, Chief Economist and Principal Advisor, DG International Partnership, European Commission, Brussels
Dominique Vassie, Freelance Artist, London (Designer)
Lucio Vinhas De Souza, Policy Advisor, STRATPOL Division, European External Action Service, Brussels
April Ward, Policy Consultant and PhD Candidate, University of Lincoln, Lincoln (Facilitator)
Bruce Watson, Research Professor, Centre for AI Research (CAIR); Fellow, School for Data Science and Computational Thinking, Stellenbosch
Astrid Ziebarth, Senior Fellow Tech & Society, German Marshall Fund of the United States, Berlin
General

Jessica Fjeld, Nele Achten, Hannah Hilligoss, Adam Nagy, Madhulika Srikumar:
“Principled Artificial Intelligence: Mapping Consensus in Ethical and Rights-Based Approaches to Principles for AI.”
SSRN (papers.ssrn.com), Rochester, NY, January 15, 2020.

Roxana Radu:
“Steering the Governance of Artificial Intelligence: National Strategies in Perspective.”
Policy and Society, 40 (2): 178–93, 2021.

Angela Daly, Thilo Hagendorff, Li Hui, Monique Mann, Vidushi Marda, Ben Wagner, Wei Wang, Saskia Witteborn:
“Artificial Intelligence Governance and Ethics: Global Perspectives.”
2019.

Multistakeholder Cooperation

Cameron F. Kerry, Joshua P. Meltzer, Andrea Renda, Alex Engler, Rosanna Fanni:
“Strengthening International Cooperation on AI: Progress Report.”
Washington, DC: Brookings Institution, 2021.

Transparency, Accountability, and Trust

Cynthia Rudin, Joanna Radin:
“Why Are We Using Black Box Models in AI When We Don’t Need To? A Lesson From an Explainable AI Competition.”
Harvard Data Science Review 1, no. 2, 2019.

Cooperative Data Governance

Ada Lovelace Institute, AI Council:
“Exploring Legal Mechanisms for Data Stewardship.”
London: Ada Lovelace Institute, 2021.

Salome Viljoen:
“Democratic Data: A Relational Theory for Data Governance.”
SSRN Electronic Journal, 2020.

Regulations and Standards

Carlos Ignacio Gutierrez, Gary Marchant:
“Soft Law 2.0: Incorporating Incentives and Implementation Mechanisms into the Governance of Artificial Intelligence.”
OECD.AI Policy Observatory, July 13, 2021.


Introduction

1956: The Force Awakens, Marvin Minsky, Claude Shannon, Ray Solomonoff and other scientists at the Dartmouth Summer Research Project on Artificial Intelligence, 1956.
Photo: Margaret Minsky, courtesy of the Minsky Family

1968: “I’m Afraid, Dave”, HAL 9000, 2001: A SPACE ODYSSEY, 2 April 1968.
Photo: AA Film Archive/MGM

1986: Cars Without Drivers, Research project PROMETHEUS (1986 to 1994), test vehicle based on a Mercedes-Benz van. VITA sub-project, a precursor to DISTRONIC PLUS and the automatic PRE-SAFE® brake, 5 September 2006.
Photo: Mercedes-Benz Group

1995: Chatbots Speaking Up, Old and dirty CRT computer monitor, 13 November 2016.
Photo: Norasit Kaewsai via iStock

2016: Machine Beats Human, SKOREA-SCIENCE-COMPUTERS-AI, 13 March 2016.
Photo: ED JONES/AFP via Getty Images

2022 and Beyond: Will AI Take Over?, Blurred photo of daily life on a city square, 8 April 2018.
Photo: Lina Moiseienko via iStock

Issue 01

Veraphoria, Dominique Vassie, November 2021/July 2022. Find out about her work at www.dominiquevassie.com. Background: Blank billboard on a subway station wall, 30 January 2012.
Photo: sorendls via iStock

Issue 02

CitiCred Bracelet, Dominique Vassie, November 2021/July 2022. Find out about her work at www.dominiquevassie.com. Background: Woman’s hand with a smartwatch on the wrist, 12 October 2020.
Photo: sorendls via iStock

Issue 03

Data Potluck, Laura Dudek, November 2021/July 2022. Find out about her work at laura-dudek.com.
Photo: Yonghyun Lee via Unsplash

Issue 04

Global Citizen, Laura Dudek, November 2021/July 2022. Find out about her work at laura-dudek.com. Background: Top-view mockup image of a man’s hand holding a white phone with a blank screen on his thigh, 12 March 2019.
Photo: Farknot_Architect via iStock

About

Körber-Stiftung is a private foundation that takes on current social challenges in areas of activities comprising Innovation, International Dialogue and Vibrant Civil Society. Inaugurated in 1959 by the entrepreneur and initiator Kurt A. Körber, the foundation is now nationally and internationally active from its sites in Hamburg and Berlin.

The Berkman Klein Center is a research center at Harvard Law School. Its mission is to explore and understand cyberspace; to study its development, dynamics, norms, and standards; and to assess the need or lack thereof for laws and sanctions.

In a world requiring more collective action, the Paris Peace Forum is a platform open to all seeking to develop coordination, rules, and capacities that answer global problems. Year-round support activities and an annual event in November help better organize our planet by convening the world, boosting projects, and incubating initiatives.

Content and Editors

Lucie Courtade, Ronja Scheler (Körber-Stiftung), Jessica Fjeld, Adam Nagy (Berkman Klein Center)

Concept, Development & Design

Wigwam eG

Disclaimer

Körber-Stiftung, as the content provider, is responsible for its own content which it makes available in accordance with § 5 Para. 1 of the German Interstate Agreement on Media Services. A distinction must be made between links to content maintained by other providers and the foundation’s own content. Körber-Stiftung is responsible for this external content only if it has positive knowledge of such content (i.e. also of illegal or criminal content) and it is technically possible and reasonable to prevent its use (§ 5 Para. 2 German Interstate Agreement on Media Services). No such illegal content was known to Körber-Stiftung at the time of creating the links.

All content is provided for general information only. No liability is accepted either for the correctness or for the completeness of the content. Any use of the information provided is at the user’s own risk. Körber-Stiftung – subject to mandatory statutory provisions – assumes no liability for this. This applies in particular to compensation for damages.

All copy, images and graphics as well as the layout of this website are subject to copyright. The information made accessible here may be copied, forwarded or reproduced only with acknowledgement of copyright (Körber-Stiftung). The unauthorized use of individual content or complete pages will result in both prosecution and civil proceedings.