Artificial Intelligence: Getting More and More Real
Author: Michael Vogel
With algorithms like ChatGPT, the use of artificial intelligence (AI) is entering a new phase. This makes testing and security all the more important.
Microsoft is doing it. Google is doing it. Meta is doing it. China’s internet giant Baidu is also doing it: In the technology industry, chatbots equipped with AI are considered the next big thing. Media coverage has put the spotlight squarely on ChatGPT, a so-called language model: a trained algorithm that is able to hold conversations with humans. ChatGPT is backed by OpenAI, a start-up that receives massive financial support from Microsoft.
ChatGPT can write song lyrics, poems, or newspaper articles. It can program computer games or draw causal conclusions from pictures. People only have to provide a few pieces of information to the AI chatbot – and the algorithm does the rest.
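How little input that can be is easiest to see when the model is addressed programmatically. The following is a minimal, purely illustrative sketch, assuming an OpenAI API key and the openai Python package in its pre-1.0 interface; the model name and the prompt are examples, not taken from this article:

```python
# Minimal sketch, assuming an OpenAI API key and the openai Python
# package (pre-1.0 interface); model name and prompt are illustrative.
import openai

openai.api_key = "sk-..."  # placeholder – use your own key

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user",
               "content": "Write a four-line song lyric about road safety."}],
)
print(response.choices[0].message.content)
```

A single short instruction is the entire input; everything else happens inside the language model.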
Easy access to vast amounts of information
Such powerful language models did not appear overnight or out of thin air. Many people have been developing them over the past few years, continuously improving them. On its own, each improvement is rather small. In this respect, AI chatbots like ChatGPT are no breakthrough, nothing fundamentally new. What is new, however, and what is now causing so much debate, is how they can be used to tap into the vast amount of information online: People can access this information in an intuitive and natural-language way – they no longer have to type in more or less appropriate search terms and use the fragments of information shown in the results to create their own overview. Thanks to tools like ChatGPT, the computing capacities required are available to virtually anyone.
Microsoft and Google have announced that they will gradually integrate AI support into their respective Office programs. Microsoft plans to integrate a “Copilot” assistant into programs like Word, Excel, and Teams. At Google, the workspace programs Docs, Slides, and Gmail will receive equivalent AI support. AI helpers will play an even greater role in the use of the two companies’ respective search engines. An example of AI support could be the automatic creation of presentations based on text documents.
Getting the job done faster with AI chatbots
“Algorithms like ChatGPT are ultimately just tools that make it possible to get the job done faster,” says Xavier Valero, Head of AI & Advanced Analytics at DEKRA. “Yet people and companies must always ask themselves what risks and dangers may be associated with the use of AI, and weigh them accordingly.” For example, not every monotonous task can necessarily be automated.
In any case, the opportunities are manifold, even if their full extent is not yet clear. Numerous routine tasks – many of them unpopular – can be automated with AI. Recognizing patterns in very large amounts of data is extremely difficult for humans, but not for AI. Nor does AI get tired after working all day. People in virtually every industry are now coming up with ideas about what AI could improve or even tackle for the first time.
The ethical and societal risks that AI poses
The downside lies in the ethical and societal risks that humanity faces with these algorithms. Most of the AI systems being discussed in public are designed in such a way that their decision-making is no longer transparent. This can become a problem, for example in medical, financial, or legal matters. These algorithms can also discriminate against people because their training data was discriminatory. Essentially, they cannot distinguish between true and false. They have no ethical compass. And they may end up processing sensitive data in undesirable ways.
To return to the example of the automatically generated presentation: it is still the human’s responsibility to ensure that no confidential information appears in the presentation – or becomes available to the AI for potentially even more extensive use.
Like any technology, these algorithms can also be abused: for misinformation campaigns using almost perfectly faked texts, images, or videos. In the future, AI tools will also play an inglorious role in cyberattacks. Because an AI algorithm communicates so eloquently, people may believe they are dealing with a real person – and thus trust it more than is perhaps wise. Last but not least, it is completely unclear what consequences the widespread use of AI will have for the job market – as is the case with any new technology.
DEKRA focuses on responsible use of AI
“This is why DEKRA wants to help ensure that these algorithms are used safely and responsibly,” says Valero. “And, thanks to our know-how, we are well positioned to do so.” The AI & Advanced Analytics division has greatly expanded its competencies in big data, AI, cybersecurity, and algorithms in recent years. “We currently have 13 AI projects underway for our own company and for customers, and our current roadmap includes an additional 70 projects,” says Valero.
One example of a completed project that uses the same technology as ChatGPT is the analysis of test reports from a chemical plant. The AI analyzed the entire inventory of existing reports to answer important questions – for example: when were people the cause of an incident, and when was technology?
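DEKRA has not published the project’s implementation details. Purely as an illustration of the principle, the following Python sketch sorts report snippets by probable root cause with an off-the-shelf zero-shot language model from Hugging Face – a stand-in, not the actual system:

```python
# Illustrative stand-in: classify incident descriptions as human or
# technical root cause using a general-purpose zero-shot model.
from transformers import pipeline

classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

reports = [  # invented example snippets
    "Valve V-113 failed to close because the actuator spring had corroded.",
    "The operator skipped a checklist step before opening the feed line.",
]

for report in reports:
    result = classifier(report, candidate_labels=["human error",
                                                  "technical failure"])
    # Labels and scores come back sorted best-first, so index 0
    # is the model's verdict for this report.
    print(f"{result['labels'][0]:>17} ({result['scores'][0]:.2f}): {report}")
```

Zero-shot classification needs no labeled training data, which makes it attractive for a first pass over a large archive of reports; a production system would more likely be fine-tuned on domain examples.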
Another project involved anonymizing video data recorded by surveillance cameras in public places. The algorithm’s first task was to recognize sensitive data in the videos, such as license plates or faces of pedestrians. Next, the AI had to obscure these critical areas in each frame. Since the cars and people were in motion, the algorithm had to follow them throughout the video sequence. Machine learning methods made this possible.
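Here, too, the concrete implementation is not public. What follows is a minimal sketch of the basic idea, with OpenCV’s classical face detector standing in for the machine learning models actually used; license plates would need a separate detector:

```python
# Hypothetical sketch of a video anonymization pipeline, not DEKRA's code.
# A classical Haar-cascade face detector stands in for the machine
# learning models the project actually used.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture("surveillance.mp4")     # illustrative file name
fps = cap.get(cv2.CAP_PROP_FPS) or 25.0        # fall back if fps is unknown
writer = None

while True:
    ok, frame = cap.read()
    if not ok:
        break
    if writer is None:
        h, w = frame.shape[:2]
        writer = cv2.VideoWriter("anonymized.mp4",
                                 cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Re-detecting in every frame lets the blur follow moving pedestrians,
    # a simple substitute for explicit object tracking.
    for (x, y, fw, fh) in cascade.detectMultiScale(gray, 1.1, 5):
        roi = frame[y:y + fh, x:x + fw]
        frame[y:y + fh, x:x + fw] = cv2.GaussianBlur(roi, (51, 51), 0)
    writer.write(frame)

cap.release()
if writer is not None:
    writer.release()
```

Re-detecting in every frame is the simplest way to keep up with motion; dedicated trackers would bridge the frames in which a detector briefly misses a face.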
“We are also involved in worldwide workgroups that are developing standards for testing and certifying algorithms,” says DEKRA expert Valero. “More and more jurisdictions are working on comprehensive regulation – among them the EU, the USA, Japan, and the UK. The EU in particular is taking on a pioneering role.” There, the Commission, Parliament, and member states are negotiating the so-called AI Act, which is expected to be passed at the end of the year at the earliest. This will be followed by an implementation phase in national law. The existing draft provides for classifying algorithms into different risk categories, which will also entail graduated obligations for transparency, quality, and risk management. DEKRA’s testing and safety expertise will be sought after for the practical implementation in companies.