English Articles

Artificial Intelligence and Applied Ethics – Risks and Solutions

Let’s take a closer look at the risks of artificial intelligence and offer potential solutions. Let’s get started:

Risks in the use of AI systems from an ethical perspective and potential solutions

A lack of transparency concerning the training data and algorithms of AI systems, which essentially make up their „artificial intelligence“, is a pressing problem, especially in the case of deep learning models, i.e. generative AI such as ChatGPT, which can be complex and difficult to interpret and control. This opacity obscures the underlying logic of AI systems as well as their decision-making processes. However, if we humans can no longer understand in detail how an AI system arrives at its results, i.e. its „output“, humans lose access and the ability to control and thus also the ethical basis of the results. This means that generative AI systems, which have already been improved, can still generate output that is misleading, incorrect in terms of content or even questionable or reprehensible from an ethical point of view. Let’s take the example of so-called AI hallucinations: This is false or misleading output. It can occur when it is based on patterns that are objectively false, or when the AI processes prejudice, hate speech and misinformation in its output. AI hallucinations can come in various forms – sentence contradictions, contradictions in prompts or contradictions in facts. For example, Google’s AI Bard chatbot once falsely claimed that the James Webb Space Telescope had captured the world’s first images of a planet outside our solar system – a harmless example. There are many such examples, not all of which are so harmless. Healthy skepticism and verification of AI-generated output will always be essential for an informed approach to AI systems.

A consequence of this lack of transparency may also be that AI systems unintentionally perpetuate or reinforce biases when they rely on biased training data or biased algorithms. For example, the facial recognition algorithms from Microsoft, IBM and Face++ all exhibited biases when it came to recognizing the gender of people; these AI systems were more able to accurately recognize the gender of white men than those with darker skin tones. A 2020 study examining the voice recognition systems of Amazon, Apple, Google, IBM and Microsoft also found that they had a higher error rate when transcribing the voices of “blacks” than “whites”. Another example: Amazon stopped using AI to recruit and hire employees because the algorithm favored male applicants over female ones. The reason for this was that Amazon’s system was trained with data collected over a period of 10 years and predominantly from male applicants. The algorithms learned the (biased) pattern from the historical data and made predictions for the future which led to these types of applicants most likely to get the job. As a result, the hiring decisions made by the AI system were found to be biased against female and minority applicants. To minimize discrimination and ensure fairness, it will therefore become increasingly important from an ethical perspective to invest in and sufficiently monitor the development of unbiased algorithms and diverse sets of training data.

Another danger due to a lack of transparency is that inequality in society– especially economic inequality – could be deliberately exacerbated, as AI development is dominated by a few large companies and governments. Large companies could thus accumulate more and more capital and power, while smaller companies struggle to compete. Such a concentration of power can therefore limit the diversity of AI applications, which can have negative consequences for competition. The promotion of decentralized and collaborative AI development (open-source models) could be a possible key to avoiding concentration of power and all its unethical excesses. In addition, legislative measures and initiatives to promote economic justice – such as retraining programs for workers affected using AI systems whose jobs become redundant, or the adaptation of social safety nets and inclusive AI development that ensures a more equal distribution of opportunities – could help to combat inequality.

The risk complex „lack of transparency“ in the broadest sense also includes AI-generated content that spreads false information and can manipulate public opinion, such as deepfakes. There is no doubt that a transparency imperative is crucial to safeguarding the integrity of information in the digital age. We also need measures to detect and combat AI-generated misinformation, as well as sanctions/consequences where deepfake creators do not comply

In a Stanford University study on the most pressing dangers of AI, the researchers explained:

“AI systems are being used in the service of disinformation on the internet, giving them the potential to become a threat to democracy and a tool for fascism. From deepfake videos to online bots manipulating public discourse by feigning consensus and spreading fake news, there is the danger of AI systems undermining social trust. The technology can be co-opted by criminals, rogue states, ideological extremists, or simply special interest groups, to manipulate people for economic gain or political advantage.”

In addition to the lack of transparency that may even be „intended“ by developers of AI systems, an unintended lack of transparency can also pose an enormous risk. This can arise due to the complexity, autonomy, adaptability and the associated lack of human control of the AI systems, which can exhibit unexpected behavior or make decisions with unforeseeable consequences. Depending on how such complex AI systems are deployed, this unpredictability can lead to outcomes that negatively impact individuals, organizations or society. Only robust testing, validation and monitoring processes will help developers and researchers identify and address these types of issues before they escalate and lead to devastating damage. To minimize these risks, which can have existential implications for humanity depending on the application of an AI system, the AI research community will need to actively participate in safety research, collaborate on ethical guidelines and promote transparency in the development of AI systems. Without a doubt, it must be ensured that AI serves humanity and does not pose a threat to our very existence!

Another risk of AI-supported automation will be that it can lead to job losses in various sectors – not necessarily, and not only for low-skilled workers. It may take some time before new fields of employment are created through the development and application of AI and thus new jobs are created again using AI in the upcoming period of upheaval. However, there are also studies that suggest that AI systems will improve the nature of work rather than destroy jobs – according to a UN study from mid-2023. One way or another, however, our own willingness and ability to adapt will remain an important issue here. As AI technologies evolve and become more efficient, workers will need to adapt and acquire new skills to remain relevant in the changing work landscape.

Another risk not to be underestimated is the over-reliance on the „output“ of AI systems. There will inevitably be a certain degree of loss of creativity, critical thinking and human intuition. An AI system is based on the training data with which it has been „fed“ and therefore draws on an existing pool of data and algorithms – but does not actually add anything completely new and is limited in this respect by its pool of training data, existing information and algorithms determined by their programming. In contrast to human beings, AI systems that exclusively follow causal processes lack transcendence, i.e. the ability to „transcend“ the determinacy anchored in it by its programmers/creators). A balance will therefore need to be struck between AI-supported decision-making and human input in order to preserve our human, transcendent, cognitive and creative abilities.

AI technologies that collect and analyze large amounts of personal data, which they „siphon off“ from all available sources, pose another significant risk. The processing of personal data, i.e. data that can be clearly assigned to an individual, also raises data protection and security concerns. Who wants to become a „transparent person“, that could easily become the target of political persecution, manipulation, exploitation, fraud (scam) or exposure using powerful AI systems? To eliminate the risks to privacy, strict data protection regulations will have to be put in place and everyone will have to ensure that their own data is handled securely.

As AI technologies become more sophisticated, so do the associated security risks and previously unimagined opportunities for misuse. Hackers and malicious actors can harness the power of AI to develop highly sophisticated cyberattacks, bypass security measures and exploit vulnerabilities in public service systems such as power plants, nuclear power stations, etc.

The emergence of AI-based systems, such as self-driving cars, robots, drones, but also weapon systems, also raises concerns about the associated dangers, especially when we consider the potential loss of human control in critical decision-making processes. To mitigate these risks, government organizations will need to develop best practices for the safe development and use of AI and promote international cooperation to create global standards and regulations to protect against AI safety threats.

If you look at the social capabilities of today’s AI systems, which make it difficult to recognize whether you are dealing with a human or an AI, this can also be a blessing, especially in the care of the elderly or disabled and in the fight against loneliness. Modern AI systems are already able to engage in meaningful and playful exchange. However, the increasing dependence on AI-driven communication and interaction could also lead to a loss of empathy, social skills and human relationships. To preserve the essence of our social nature and evolve as humans in an ethical sense, we will have to strike a balance between technology and human interaction and should not „disconnect“ from society and our neighbors out of convenience or other reasons. Actually, applied empathy would be the „human“ imperative of the hour. No AI system, however human it may be in its appearance and habitus, will be able to replace the sincere, empathetic and well-intentioned exchange of ideas between two people – especially considering the effects that such exchanges have on their psyche.

Conclusion and outlook

We are still in the early stages of AI. Every country in the world and every major company that can afford it is thinking about using or developing AI systems – as quickly as possible to avoid competitive disadvantages. This is an enormous risk in itself, as the „speed“ of the development leaps means that all the risks described in this article are much more likely to materialize, as we simply do not take enough time to properly regulate and „manage“ the risks. For a great example of the incredible speed at which new, even more powerful AI systems are entering our world, watch this video about the use of ChatGPT-4o, a multimodal AI system that helps a boy with his math problems. In April 2024, when the first version of this article was published, multimodal AI systems were not yet available to the public… now in June 2024, only 2 months later, they are already available to all.

A kind of AI arms race is currently raging, which could lead to the development of AI technologies with potentially harmful consequences for us all. Humanity is not yet accustomed to dealing with AI; it has not yet developed the necessary immunity and healthy skepticism towards AI-generated content – if we are honest with ourselves, this is even the case with purely human-generated content – that is needed to deal with the „output“ and, above all, the almost unbelievable efficiency of these systems in a sensible, trustworthy and, above all, ethical manner.

AI is a new technology that, depending on its area of application – and there will only be a few areas of life in the future that will not be influenced or even controlled by AI in some way – will undoubtedly change humanity significantly. Meaningful change needs to be sensibly managed and steered in the right direction, guided by a legal framework based on fundamental ethical principles. It is therefore crucial to develop new legal frameworks and regulations to address the unique risks and issues arising from AI technologies. Legal systems need to evolve quickly to keep pace with technological advances and define the obligations when dealing with AI systems with the aim of protecting the rights of all. A first step in this direction has been taken, at least in the European Union. The so-called EU „AI Act“ (regulation on artificial intelligence) will come into force in 2024. These first comprehensive AI regulations worldwide govern areas such as the transparency of AI systems, the use of AI in public spaces and for use in high-risk systems. They are intended to ensure the safety and ethical use of AI systems while safeguarding the fundamental rights and values of the EU. Strict requirements apply to AI models with major impacts and systemic risks, including model assessment, risk mitigation and incident reporting.

Anchoring moral and ethical values in AI systems, especially in decision-making contexts with significant consequences, represents one of the greatest challenges. For us to master the age of AI – from which there is no turning back – researchers and developers in particular should prioritize the ethical implications of AI technologies.

In this context, it should also be mentioned that in 2023, shortly after the public launch of ChatGPT, more than a thousand technology researchers and leading figures, including Apple co-founder Steve Wozniak and even Elon Musk, who is not exactly known as a „warner“, called on AI development centers to pause the development of advanced AI systems. The letter states that AI systems pose „profound risks to society and humanity„. However, in the letter, these leaders also state:

Humanity can enjoy a flourishing future with A.I.,” the letter said. “Having succeeded in creating powerful A.I. systems, we can now enjoy an ‘A.I. summer’ in which we reap the rewards, engineer these systems for the clear benefit of all and give society a chance to adapt.

Author: Michael Winkler. Mr. Winkler has been working for 15 years as an in-house lawyer with a focus on „International and European Business Law“ for a globally active American company that occupies a leading position in the development and marketing of AI solutions.