Artificial? Yes. Intelligent? Not really
Why AI still needs humans
AI does not understand. AI does not think. AI is a stochastic parrot with a tendency to hallucinate. Currently available algorithms and applications are still a long way from superintelligence. This makes it all the more important for AI users to be aware of their responsibilities. By the editor of Computing Deutschland.
There are three levels of AI, according to IBM:
- basic or narrow AI (specific task/purpose)
- strong or general AI, where the AI can perform new tasks in a different context
- super AI or artificial superintelligence
We are currently between basic and strong AI, probably closer to basic in most cases. Physical limits on infrastructure are slowing progress towards the higher levels; in particular, power supply is a problem that cannot be solved with money alone.
Secondly, there is a lack of algorithms and applications that allow AI models to adapt confidently to different contexts, cultures and areas of knowledge and to learn independently. The situation is unlikely to change anytime soon. Meanwhile, the development of super AI comes with a huge amount of risk.
AI does not think
Let’s step back and consider what we’re dealing with. AI is mathematics: statistics, analytics, probability theory, calculation, pattern recognition.
The training of large language models (LLMs) is fundamentally based on pattern recognition. The more patterns available, the more accurate the results of a model are in application.
But there’s a sweet spot. Too many patterns leads to less accurate answers. It's like us humans: the more choices we have, the harder it is to make a decision.
We will look at why this is the case and what solutions are currently available in a separate article later.
AI does not understand
GenAI has been described as a stochastic parrot. But the latest generation of LLMs from Meta, Anthropic and OpenAI are edging closer to the second stage and are beginning to understand contexts and the meaning of individual words. This means the stochastic parrot analogy is now a little less relevant. But there is still a very big difference between meaning and sense. AI does not yet know this.
Meaning is defined. Meaning arises from the intention behind the use of the word or from a specific interpretation in the context of the statement. Meaning is independent of the user, i.e., it is more objective. Algorithms can handle this.
Sense is context-dependent and, moreover, strongly influenced by the subjective situation of a consumer. This is currently still too much for any AI.
Accordingly, we are (very) far from artificial superintelligence. Super AI currently only exists in futuristic films or science fiction literature: the intelligent robots from Westworld, HAL 9000 in 2001: A Space Odyssey, the high-tech skyscraper from Game Over, Sonny from I, Robot, or the mechas and supertoys like Teddy from A.I.
Don't be fooled by the smileys and friendly chatter of your favorite AI bot. That, too, is due to mathematics and underlying programming. Copilot and co do not understand real emotions.
AI is most comparable to an interpreter. Like a computer program that reads and analyses source code or script line by line, an AI application reads and analyses your prompt and generates appropriate results based on previously stored instructions.
Incidentally, this is also one of the biggest security risks with AI. Prompt injections can cause AI to make dangerous statements or disclose sensitive information. On the developer side, basic prompts can be used to manipulate the behavior and results of a model. If you want to know more about this, check out Eva Wolfnagel's talk Our Words Are Our Weapons at 37C3.
AI doesn't make mistakes
So you mean superintelligence does exist after all?
No.
AI is designed to produce results – no matter what. AI would rather make something up than “admit” it doesn't know something. Not producing a result would be a “mistake”. Hallucination is the biggest problem with GenAI and one that may not be solvable.
Will AI replace humans?
Perhaps. Eventually. But it still needs us – at all levels. This starts with conceptualisation and doesn't end with the prompt. AI needs help with data acquisition. This data must be prepared and processed for more relevant and accurate results. AI models must be developed or found and implemented. AI needs maintenance and must be continuously developed and optimised.
In some areas, however, the AI currently available is already vastly superior to us and can make us more efficient in areas such as information gathering, analysis, automation and translation.
AI can make our everyday lives less stressful and free our minds for more strategic or creative tasks. In cybersecurity, it is a great tool for combating alarm fatigue. It can relieve us of tedious, repetitive tasks and reduce manual intervention. It can show us new approaches or perspectives and help us recognise more and larger connections. AI can spot and fix spelling and grammatical errors, and may improve the quality of texts.
But it’s important to recognise the limitations of this powerful tool, how the way we interact with AI influences the quality and relevance of the result, and the risks involved. A misplaced (or malicious) prompt could trigger a serious security incident. Inbar Raz demonstrated how easy this is in his impressive presentation 15 Ways to Break Your Copilot at DefCamp 2024.
Kerstin Mende Stief is the editor of Computing’s brand new sister site Computing Deutschland