Coping with AI Disruption

The dominant discourse on artificial intelligence disruption is still largely framed in technical terms, focusing on safety, control and alignment. However, this perspective underestimates the deeper transformation currently unfolding: AI is not merely a technological innovation but a profound reconfiguration of social order, governance structures and human self-understanding.

Prof. Dr. David Krieger, co-director of the IKF, argues in this essay that coping with AI disruption requires abandoning control-oriented paradigms in favour of a socio-technical understanding of AI as a distributed network of human and non-human actors. Instead of seeking mastery over AI systems, societies must develop procedural frameworks that enable ongoing negotiation, accountability and integration within complex and evolving networks.

"Coping with AI disruption does not mean understanding every algorithm, but demanding institutional accountability, participating in the design of governance frameworks for acceptable procedures."

David J. Krieger, philosopher, social scientist and co-director of the Institute for Communication and Leadership IKF in Lucerne, Switzerland, said, “The typical framing of AI disruption discourse is as a technical problem, asking us, ‘How do we make AI systems safe, controllable or value-compliant?’ This overlooks the fact that AI is primarily a societal and cultural challenge that requires new forms of social organization, governance, responsibility and human self-understanding.”

AI disruption cannot be solved in the traditional sense. Coping with AI means learning to live with non-humans as social partners, with distributed agency and with post-human network norms.

Societies must replace the dream of control, autonomy and individuality with social practices of ongoing integration grounded in procedural governance and collective responsibility. In this view, the AI future becomes less of a technical issue than a continuous social process, mirroring the open-ended nature of society itself.

Coping with AI disruption does not mean understanding every algorithm, but demanding institutional accountability, participating in the design of governance frameworks for acceptable procedures, recognizing one’s role as a network participant and resisting anthropomorphic myths that obscure the constructive relations among humans and non-humans.

It is important to emphasize that AI is not a bounded, individual system that can be dealt with in isolation from society. Instead, AI must be understood as a socio-technical network, a dynamic constellation of humans, non-humans, institutions, regulations, economic incentives, data infrastructures, algorithms and much more. This conceptual shift has profound implications for how individuals and societies can respond to AI-induced disruption.

For societies, the most important coping strategy is abandoning the illusion of technical containment. Just as automobiles cannot be blamed in isolation for traffic deaths, pollution or urban sprawl, AI cannot be held solely responsible for social harm or benefit. Responsibility is distributed across designers, deployers, users, regulators, markets and diverse cultural expectations.

This implies that societies must:

  • Develop collective responsibility frameworks rather than scapegoating AI.
  • Treat AI governance as an ongoing institutional practice, not a one-time regulatory fix.
  • Accept that AI disruption reflects pre-existing social conflicts, inequalities and power asymmetries rather than creating them ex nihilo.

For individuals, this means:

  • Admitting that AI is not a mere tool, or an object opposed to human subjectivity, but a social partner.
  • Recognizing that AI is not an external force acting upon society but something in which both humans and non-humans are already entangled as users, data sources, workers, citizens and decision-makers.
  • Realizing that coping thus involves understanding one’s own role in AI networks rather than imagining oneself as a passive victim or sovereign controller.

In light of the above assumptions, there are three levels of coping, each requiring different strategies.

1) Technical safety and robustness: At this level, AI is still treated as a tool, as one technology among others. Societal coping involves engineering safeguards, testing, verification and reliability standards. While necessary, this level is insufficient on its own. Safety measures cannot address misuse, power concentration, or unintended systemic effects, nor can they address cultural transformation.

2) Prevention of misuse: The assumption at this level is that disruption arises from human actors using AI for harmful purposes such as economic exploitation, surveillance, manipulation, crime, or terrorism. Coping requires institutional oversight, legal accountability and political coordination, especially at transnational levels. Individuals cannot shoulder this burden alone; democratic societies must not only strengthen but also reconceptualize regulatory measures.

3) Social integration of AI: Once AI becomes an autonomous or semi-autonomous actor, societies face not a tool problem but a coexistence problem. Disruption now affects foundational concepts: responsibility, agency, accountability, labor, autonomy, self-determination and even the meaning of intelligence itself. Coping means that societies must prepare for a post-human world not by attempting to retain humanist values and asserting human dominance over AI, but by learning how to integrate non-human actors into a new form of social order. It must be admitted that traditional concepts such as fairness, justice, dignity or freedom are vague and context-dependent, culturally pluralistic, historically and socially contested, and inapplicable to a post-humanist, global network society.

On the other hand, moral consensus cannot be outsourced to AI and encoded in algorithms. Attempts to encode ‘the good’ risk freezing contested norms, amplifying dominant interests and creating brittle systems that fail under novel conditions. Rather than demanding that AI embody final moral truths, societies must develop procedural mechanisms that allow norms to be negotiated, revised and contested over time. Not substantive values but procedural values should guide coping strategies. Instead of attempting to define what AI should aim for, societies should define how socio-technical networks ought to operate. This approach mirrors democratic constitutionalism in that the legitimacy of socio-technical networks derives not from outcomes but from processes.

Such procedural values could be:

  • Taking account of all affected actors, prioritizing risk analysis, preventing tunnel vision and catastrophic oversimplification.
  • Producing stakeholders rather than victims or perpetrators, thus enabling participation rather than passive subjection.
  • Prioritizing and instituting bottom-up governance frameworks in transparent, revisable ways rather than through top-down, inflexible government regulation.
  • Balancing local and global concerns, acknowledging scalability without erasing contextual specificity.
  • Separating powers, preventing concentrations or asymmetries of informational, economic, or political control.

For societies, this translates into governance architectures that are adaptive, pluralistic and reflexive. For individuals, it implies participation, contestation and literacy rather than blind trust or rejection.

Given the impending post-labor economy, it is to be expected that AI will initially exacerbate existing power asymmetries, trading productivity gains for mass unemployment, weakened labor bargaining power and extreme capital concentration.

Coping strategies in this domain could be:

  • Rethinking the market as the fundamental mechanism of society’s material reproduction and designing new productive and distributive mechanisms.
  • Rethinking the relationship between labor, income, social participation, and identity. Human existence and self-understanding need not be defined by labor, as they have been for most people over the last 5,000 years.
  • Experimenting institutionally with opening closed systems into open networks, in organizations across all areas of society as well as in politics.

"We do not need a new enlightenment to regain human autonomy from the dominance of functional systems as the European Enlightenment once freed the individual from feudal and clerical domination. We need to shift from fantasies of control to situated agency and cooperative integration in complex socio-technical networks. Coping with AI disruption does not mean understanding every algorithm, but demanding institutional accountability, participating in the design of governance frameworks for acceptable procedures, recognizing one’s role as a network participant and resisting anthropomorphic myths that obscure the constructive relations among humans and non-humans.”

(Source: Institutions Must Lead Now in Building Up Human Resilience for the AI Age - Imagining the Digital Future Center)

Prof. Dr. David Krieger

David J. Krieger studied philosophy, theology, and religious studies at the University of Chicago. He earned his doctorate with a dissertation developing a communication-theoretical methodology for intercultural and interreligious understanding in a global society.

In Switzerland, he initially held an interim professorship in religious studies at the University of Lucerne, where, together with Andréa Belliger, he co-founded the Institute for Communication and Culture (IKK) in 2000. In 2006, he moved to the Institute for Communication Research, which he had co-founded in 1988 together with Christian Jäggi as the first private social science research institute in Switzerland.

Since 2006, he has also been Co-Director of the IKF, which was renamed the Institute for Communication & Leadership to reflect its increasing focus on continuing education activities. David J. Krieger is the author of numerous academic publications in the fields of philosophy, sociology, ethics, and religious studies.

Contact: david.krieger@ikf.ch

CAS Digitale Transformation
CAS AI & Future of Work
CAS AI Expert in Learning
CAS KI erfolgreich umsetzen im Gesundheitswesen
Online CAS Digital Ethics & Governance
Online CAS Digitale Transformation

We can be reached at any time by e-mail at info@ikf.ch or by telephone on +41 41 211 04 73. Or book a consultation appointment! We look forward to hearing from you.

Your IKF Team
