Although Artificial Intelligence (AI) systems that handle language and images have only recently become widely available, the technology dates back to the 1960s and has been applied in many fields for decades.
Over the past six decades, there have been repeated predictions that computers would achieve human-level intelligence, predictions that echo the nightmare scenarios about AI we have heard in recent years.
Can AI spontaneously evolve, become conscious, and pose an existential threat to humans? A group of researchers has concluded that this cannot happen anytime soon; in fact, according to the group, LLMs are fundamentally safe.
According to a new study by researchers from the University of Bath and the Technical University of Darmstadt, Large Language Models (LLMs) can only follow instructions and cannot independently develop new skills. Because LLMs are inherently controllable and predictable, the researchers argue, they do not pose an existential danger.
In experiments, the researchers tested LLMs on tasks the models had never encountered before. They found that LLMs were very good at following instructions and showed strong language proficiency. They also excelled when shown only a few examples, such as when answering questions about social situations.
However, the LLMs could not go beyond their instructions or master new skills without more explicit direction. Although they display some surprising behavior, they are simply following their programming. In other words, there are no godlike machines here: these models cannot evolve into something beyond what they were built to be.
That said, the team did not dismiss the potential threats AI poses. The risk to workers around the world, for instance, is very real.
As always, the danger lies not with the machines but with the people who program and control them. As AI systems grow more sophisticated, the technology has the frightening potential to manipulate information, generate fake news, commit fraud, spread falsehoods, and suppress the truth.
After all, it isn’t the computers we need to worry about but the humans behind them. As long as people have bad intentions, technology will carry danger.