Insights

The Challenge of Trust in AI: Examining the EU’s Liability Directive

In September 2022, the European Commission unveiled a proposal for an “AI liability directive,” intending to revamp the legal landscape surrounding artificial intelligence. The core objective of this directive is to introduce a set of novel regulations within the existing European Union (EU) liability framework, specifically targeting instances where AI systems cause harm. The overarching aim is to ensure that individuals affected by AI-related incidents receive a level of legal protection commensurate with that afforded to victims of conventional technologies throughout the EU.

At the heart of this proposal is the creation of a “presumption of causality.” Essentially, it establishes a legal presumption that an AI system caused the harm in question, easing the burden of proof for those seeking compensation. Additionally, the directive empowers national courts to order the disclosure of evidence when a high-risk AI system is suspected of having contributed to harm.

Liability in the medical domain is one of the most complex challenges for designing AI applications

Nonetheless, the proposal is not without its detractors. Stakeholders and academics are actively scrutinizing several aspects of the proposed liability regime. Their concerns span the adequacy and efficacy of the rules, their alignment with the concurrently negotiated Artificial Intelligence Act, potential repercussions for innovation, and the intricate interplay between EU-level and national regulations. It is a complex legal landscape in which such scrutiny remains a vital part of the process.

Artificial intelligence (AI) is taking center stage in various sectors like healthcare, transportation, and agriculture, where it helps make better decisions. For instance, AI aids in diagnosing diseases, powers self-driving cars, and improves farming practices. It’s like having a digital assistant that’s helping out everywhere.

The European Commission recognizes the potential of AI but also understands the importance of keeping AI in check with EU values and principles. In their 2020 White Paper on Artificial Intelligence, they promised to take a ‘human-centric’ approach to AI. They want to make sure AI doesn’t go rogue and harm people or businesses.

The interplay between AI and robotics introduces a new dimension to the liability challenge.

Now, when things go wrong with AI, figuring out who’s responsible and who should pay is a real puzzle. This gets even more complicated when AI teams up with the internet of things and robotics. The result? People and businesses in the EU are a bit wary of trusting AI. They like the idea of AI making their lives easier, but they’re also worried about things going south.

In fact, a recent survey in the EU found that 33% of companies feel that dealing with who’s responsible for AI-caused problems is one of the big headaches when it comes to using AI. So, there’s potential, but also some growing pains when it comes to AI adoption in the EU. It’s like having a powerful tool in your hand, but you’re not entirely sure if it’s safe to use.

It’s not just the people who use AI tools and make decisions with them who are concerned; it’s also the individuals and companies developing AI systems. They’re worried about the liability for the results their AI systems produce. As a result, most AI developers are hyper-focused on minimizing these liability risks right from the beginning, even during the early stages of planning and design. This cautious approach often results in AI systems that offer a narrow range of outputs, all carefully tailored to minimize potential risks. The goal is to create applications that are as safe as possible.

However, there’s a potential downside to this approach. When AI systems only provide answers that are absolutely foolproof and fully secure in every aspect, it might raise questions about their reliability. Trusting a system that’s overly rigid, even if it appears quite versatile on the surface, can be a bit of a conundrum. In essence, there’s a trade-off between safety and flexibility in AI development.

The influence of artificial entities and their AI-driven communication may surpass existing legal liability definitions. (This image was generated by DALL·E 2, following the prompt: “Show me a person who never makes mistakes.”)

Taking this a step further, creating a completely unleashed AI that carries no risks for its developers and prioritizes speed of development above all else could lead to violations of human rights. This is particularly concerning if such an approach places something like the “survival of mankind” as its highest value. There is a real risk that, as AI development progresses, such a requirement could be enforced, especially in authoritarian political systems.

Given these considerations, the ongoing trust dilemma involving human instincts and intuition is likely to persist for an extended period. The alternative, which entails fully embracing AI without caution, seems too bleak to gain widespread trust at this point. That’s the current outlook.

But the evolution of AI is reshaping the concept of trust. Recent scientific research suggests that trust in AI is not a static or absolute concept but a dynamic and relative one. As AI continues to advance, trust is intertwined with transparency, user experience, ethics, and human-AI collaboration. The pace of AI progress is indeed redefining our understanding of trust, and our relationship with technology, with trust becoming a multifaceted, adaptive element in this evolving landscape.

The dividing line between virtual characters and real people is getting ever thinner

New developments in AI-based video creation are rapidly blurring the line between reality and simulation. With the advent of animated photographs, talking characters can now be created that are almost indistinguishable from real human beings. It’s a special time in the field, with most experts predicting that fully realistically moving and talking AI-generated people could become commonplace within just one to two years. This technological leap is expected to have a profound impact on society. Teams will soon have virtual members, and employees will have virtual colleagues. But the potential applications of this technology extend far beyond the workplace. In the realm of entertainment, for instance, we can expect to see a revolution in the way films, TV shows, and video games are made. Therapy and counseling could also be revolutionized, with virtual therapists offering a level of personalization and accessibility that is currently impossible. In politics, consulting services, banking, and beyond, we could see dramatic changes as people form relationships and forge partnerships with their virtual counterparts. It’s a brave new world: the possibilities are endless, but so are the risks for human society.

Scholastic Logic and the Future of Artificial Intelligence: Implications for Human Existence and Rights

Scholastic logic, a discipline rooted in the Aristotelian logic developed over two millennia ago, is known for its strict dichotomy of true and false. In contrast, the schools of Indian philosophy offer a more nuanced approach to the logical evaluation of facts. Philosophical discourse in this context has traditionally relied on sensory perception as evidence, as opposed to epistemology. The scholastic tradition employed intuition to determine the truth or falsity of a statement such as “every human being is an individual”. To do so, one need only ask whether there exists a human being who is not an individual (the answer is no), or whether every human being has a demonstrable individual characteristic (the answer is yes). In this way, statements could be tested for their truth content.
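This counterexample test can be stated compactly in modern first-order notation. The short formalization below is added purely for illustration and is my own rendering of the passage, not part of the scholastic source material:

```latex
% A minimal formalization of the test described above (modern notation,
% added for illustration): the universal claim holds exactly when no
% counterexample exists.
\forall x\,\bigl(\mathrm{Human}(x) \rightarrow \mathrm{Individual}(x)\bigr)
\;\Longleftrightarrow\;
\neg\,\exists x\,\bigl(\mathrm{Human}(x) \wedge \neg\,\mathrm{Individual}(x)\bigr)
```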

Different schools of epistemology explore various philosophical questions, such as the possibility of living in a simulated reality or the extent to which subjective reality is shaped by personal views, ideas, and biological determinants. These are complex and multifaceted issues, requiring careful consideration and in-depth analysis. While a brief statement cannot do justice to such questions, it is clear that if simulations of equivalent life and views, no less well-founded and even superior to human perception, penetrate our own perceptions, views, and reflections, our conventional ways of understanding and ordering the world could be shaken. This could have far-reaching consequences for the value of human life and individual existence, as well as for people’s respect for each other.

While some fear the overpowering potential of artificial intelligence, there are indications that if AI surpasses human expression, human rights might lose their position as the most essential values. This could have significant consequences for the future of society. Moreover, with the rise of artificial characters in virtual worlds, questions arise about their status and rights. As there are no specific regulations or rights for these characters, it is unclear what this means for people in the “real world” if they begin to coexist with them in the virtual world. The individual expressions of AI-generated characters no longer differ from those of real people, as the following gallery underlines.

Do notions of Western Orientalism have an impact on the way AI generators modify pictures from the Middle East?

The concept of “Orientalism” is a term coined by Edward Said to describe the way in which Western cultures have historically depicted and understood the people and cultures of the Middle East and Asia. These depictions often involve stereotypes and simplifications that reinforce Western superiority and exoticize the “Orient.”

While AI generators themselves do not hold cultural biases or preconceptions, they are only as unbiased as the data they are trained on. If the training data for an AI image generator includes primarily images of the Middle East that have been produced through an Orientalist lens, then the output of the generator may also reflect these biases.
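To make the point concrete, here is a deliberately tiny, hypothetical sketch in Python. The caption data and attribute names are invented for illustration and do not describe any real generator’s training set; the point is only that a model which learns conditional frequencies from a skewed corpus will reproduce that skew in what it generates.

```python
# Minimal toy sketch (not the pipeline of any real generator): it illustrates
# how a skewed training corpus propagates into generated outputs. The
# "captions" below are invented placeholder data, not a real dataset.
from collections import Counter
import random

# Hypothetical caption corpus: regions paired with visual attributes.
# The "middle_east" captions are deliberately skewed toward one attribute,
# mimicking an Orientalist bias in the collected data.
training_captions = (
    [("middle_east", "headscarf")] * 80
    + [("middle_east", "business_suit")] * 20
    + [("europe", "headscarf")] * 10
    + [("europe", "business_suit")] * 90
)

def fit_conditional(corpus):
    """'Training' here is nothing more than estimating conditional
    frequencies, the simplest possible stand-in for a generative model."""
    counts = {}
    for region, attribute in corpus:
        counts.setdefault(region, Counter())[attribute] += 1
    return counts

def sample_attribute(model, region, rng=random):
    """Sample an attribute for a region in proportion to learned counts."""
    attributes, weights = zip(*model[region].items())
    return rng.choices(attributes, weights=weights, k=1)[0]

model = fit_conditional(training_captions)

# The generator reproduces the skew of its data: roughly 80% of sampled
# "middle_east" outputs carry the over-represented attribute.
samples = Counter(sample_attribute(model, "middle_east") for _ in range(1000))
print(samples)
```

The same logic scales up: diversifying the corpus changes the frequencies the model can learn, which is exactly why curating training data matters.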

In recent years, there have been several examples of AI image generators producing stereotypical or offensive depictions of people from the Middle East, such as automatically adding headscarves to images of women or darkening skin tones. These examples suggest that the Orientalist biases of the training data are being reflected in the output of the AI generators.

It is therefore important for the creators and users of AI image generators to be aware of the potential for bias and to work actively to diversify and de-bias the training data. This includes incorporating images produced by artists and photographers from the Middle East themselves and avoiding images that reinforce Orientalist stereotypes.

Occasionally, AI-generated images show irritating content, but usually this is a result of distorted visual patterns.
There is no indication that the AI generator DALL·E 2 could identify any “Middle East” context.

Bisociation through AI Image Generation: From a 19th-century oil painting depicting a lakeside promenade at Lake Garda to the Golden Gate Bridge

The process of bisociation through the use of an AI image generator is showcased in the transformation of a painting by Dutch artist Herman Jacob van der Voort in de Betouw (1847-1902) into an expressionistic image of the Golden Gate Bridge after 20 generations of variations.

Bisociation is a creative process where two unrelated or distant elements are combined in a way that generates a new and unexpected idea or solution. Similarly, pro-active work with AI generated images involves exploring and combining different variations of an initial image in a way that generates new and unexpected visual outcomes.

In both cases, the process of combining seemingly unrelated elements creates a new perspective or insight. The constant flow of variations in AI-generated images provides a fertile ground for the human mind to make unexpected connections and associations.

However, it’s important to note that the AI-generated images are created through mathematical algorithms and don’t have conscious intentionality or understanding of their content. The same applies to the impressive images created on the basis of the initial image, van der Voort’s painting.

The role of the creator in this process is significant, as they are the ones who make choices throughout the process that shape the final outcome. The choices made by the creator in selecting which variations to keep, discard, or further modify can be seen as a parallel bisociative creation process in cooperation with the AI.

The creator’s subjective interpretation and aesthetic preferences also come into play in this process. In this way, the process of bisociation through the use of an AI image generator is not solely reliant on mathematical algorithms, but is a collaborative process between the AI, the creator, and the viewer’s subjective interpretation and attribution of meaning. The result is a new and original image that is the product of both technological innovation and human creativity.

The variations of the AI generator (DALL·E 2) are “blind” in terms of content. Any assumed meaning is purely subjective meaning-making by the viewer.

The basis of AI images: Deep Learning or Deep Eclecticism?


We keep hearing that AI cannot know more than it has been trained to know via its algorithms, yet most people already sense that this is not the whole truth. The almost incomprehensible number of vectors, approaching the trillion mark, and the near-infinite variety of possible combinations do not fit the idea that the system has simply been handed concrete learning content. And yet, at least for AI image generators, this cannot be doubted: fed with millions upon millions of images and their labels, the generated pictures never escape the model of a complex combinatorics. That combinatorics, however, is so complex that no human consciousness can grasp all the factors that led to a particular combination.

Thinking in combinations, that is, assuming a per se eclectic modus operandi, has a direct effect on perception. Like a counterfeit coin, we hold the offering sideways against the light, as it were, and scan its surface; we just cannot bite on it yet. The result remains subject to the examining gaze, at least for the time being. Then, however, an interaction can occur between the object of investigation and the investigator: no cheap copy, no dull imitation, no rigid superimposition is revealed. What emerges instead is a freely acting play of very different materials, loosely associable with sources about whose true existence we know nothing. Why does the city of Paris, in the modification sequence of an old Kodachrome photograph of anglers on the Seine, wrap itself like Christo’s Reichstag? Why do images emerge that resemble illustrations of “Waiting for Godot”? With the dried-up rivulet in the cartoon pastel at the end, is concern about climate change already inherent in AI? One does not have to be an AI expert to know that all these associations are ultimately nonsense. But why does our imagination refuse to deny the imaginer an imagination? Perhaps because this already hints that our imaginings and the AI have no inherent connection, but produce one, at best, only in cooperation.

The second AI modification of the source image creates an impression that the city has been wrapped by Christo and Jeanne-Claude, but this is a coincidental association resulting from confusion between AI-arranged patterns and Christo’s wrapping material.
The transformation of downtown Paris into a dried-out desert landscape was not the result of any human intention, but rather an outcome of pre-defined contextualizations generated by AI processes or vector logics.
The source image of the sequence is a Kodachrome photo taken by an unknown photographer in 1962.

Meeting Batman in the desert: how AI image sequences reinterpret clichés

Starting with a 1970s tourist shot of an elderly couple taking a break beside their limousine in a desert-like landscape, the AI sequencing works through all the motif elements in the sequential episodes: the luxury sedan, the woman in the desert, High Noon, water towers amid deadly drought, until finally, in the emptiness of the desert, first Batman and then Superman appear. “What was the AI thinking?” is a question we have long known to be empty. By means of language (desert, woman, man, limousine), associations and variations appear that seem to come from the surrounding spectrum of topics, that is, also from the linguistic: water, danger, fiction, encounter. As light and clichéd as these “associated” elements initially appear, they continue to develop in a manner at once straightforward and multifaceted: vectors point the way. They are vectors of meaning that orient themselves on linguistic meaning, constantly enter into new constellations, and create motifs; at the same time they are formal vectors that orient themselves on colors, proportions, surfaces, and much more. No one can think or know what connections they will finally make in the “deep learning” process, just as someone who points in a direction does not and cannot know what will appear, in time, along the way in that direction. In this respect, the user’s semi-conscious exploratory course and his careful or spontaneous micro-decisions are directly reflected in the process of the vector constellations, yet they show a substantially different quality. It is, so to speak, an analogous process.

We shouldn’t think the AI “thought” anything in order to turn the following image into this variation. Or should we, after all?
A Kodachrome slide from around 1970 was the source of the sequence.

Exploring Meaning and Interpretation: The Explosive Journey of a Tin Box at the Bosphorus

It’s fascinating how a single image can undergo so many rapid transformations, resulting in completely different versions each time. The first version with pastel-colored geometric shapes provides a clear image of the character discovering an old tin can on the Bosphorus beach, but it lacks vibrancy, making it somewhat dull. The second version, with a diffuse landscape of shards and debris, uses colors that remind one of 3-D glasses, adding vibrancy and depth to the image.

However, the third iteration takes things to a completely different level by dissolving the original motif and transforming the metal can into a giant tube in a comic style. The Bosphorus now becomes a sunny fantasy landscape, and a figure appears but ultimately fades away into the diffuse background.

With each new generation of pictorial material, the diffuseness takes refuge in surfaces, faces, colors, and diffusely structured crystallization points, eroding any impression of unambiguous meaning. This erosion of meaning is fascinating and irritating at the same time.

A dream landscape comes up two generational steps after the initial image of a rusty metal box at the Bosphorus.
Initial source image.

Interaction of meaning and visual structures: is the simulation of AI perception consistent or coincidental?

Perception is a term that has limitations when applied to AI functions. It refers to processes of the human brain, whereas “deep learning” imitates neural processes without being identical to them. When “deep learning” image generators present image sequences or offer variations, the images are based on algorithms and vectors predetermined by programmers. The results, however, can defy the programmers’ assumptions and follow a machine logic of their own. The interaction between the generated images and human perception can result in a flow that goes beyond visual correspondence and content logic. For example, tracing the modification sequences of an accident event (such as a person falling or jumping off a bridge) in unpredictable ways can reveal the quality of this machine logic as a simulation of perception.
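As a purely illustrative sketch, and assuming nothing about the internals of DALL·E 2 or any other real generator, the Python toy below shows how “variations” can emerge from nothing more than small random perturbations of a latent vector passed through a fixed decoder. Every name in it (LATENT_DIM, MIXING, decode) is hypothetical and invented for this example.

```python
# Toy sketch only: small perturbations of a latent vector produce drifting
# "variations" without any intention behind them. The decoder here is a
# fixed random matrix, a stand-in for a real learned network.
import numpy as np

rng = np.random.default_rng(seed=0)
LATENT_DIM = 512                                        # hypothetical latent-space size
MIXING = rng.standard_normal((LATENT_DIM, 8 * 8 * 3))   # stand-in "decoder weights"

def decode(latent: np.ndarray) -> np.ndarray:
    """Map a latent vector to a tiny 8x8 RGB 'image' via the fixed matrix."""
    return np.tanh(latent @ MIXING).reshape(8, 8, 3)

# Start from one latent vector (standing in for the source image) and derive
# successive variations by adding small Gaussian noise at each step.
source = rng.standard_normal(LATENT_DIM)
latent = source.copy()
for step in range(1, 4):
    latent = latent + 0.3 * rng.standard_normal(LATENT_DIM)
    image = decode(latent)
    drift = float(np.linalg.norm(latent - source))
    print(f"variation {step}: shape={image.shape}, drift from source={drift:.2f}")
```

Each step is meaningless in itself; whatever significance the drifting images acquire is supplied by the viewer, which is exactly the point the sequences above illustrate.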

At first glance, the modifications to the original image may appear to have only slight differences. However, these differences can lead to gradually evolving projections by the viewer and user, resulting in choices that have an ever-increasing impact on the subsequent images in the sequence.