Many of the pioneers who began building artificial neural networks didn’t really know how they worked, and we don’t know today either.
During a year-long stay in London in 1956, the mathematician and theoretical biologist Jack D Cowan, then in his early twenties, paid a visit to Wilfred Taylor and his strange new “learning machine”. On arrival he was taken aback by the “huge bank of apparatus” that confronted him. Cowan could only stand and watch “the machine doing its thing”. It appeared to be performing an “associative memory scheme” – it seemed able to learn how to find connections and retrieve data.
Cowan was watching an early analogue form of a neural network: what looked like unwieldy blocks of circuitry wired together by hand in a tangle of cables and boxes, a forerunner of today’s most advanced artificial intelligence, including the much-discussed ChatGPT, which can generate written content in response to almost any prompt. ChatGPT’s underlying technology is a neural network.
Cowan and Taylor stood watching the machine at work without really knowing how it was pulling off its task. The answer to Taylor’s mysterious machine brain lay somewhere in its “analogue neurons”, in the associations formed by its machine memory and, crucially, in the fact that its automated functioning couldn’t really be fully explained. It would take decades for these systems to find their purpose and for that power to be unlocked.
According to IBM, “neural networks – also known as artificial neural networks (ANNs) or simulated neural networks (SNNs) – are a subset of machine learning and are at the heart of deep learning algorithms.” The name, as well as its form and structure, are “inspired by the human brain, mimicking the way biological neurons signal to one another.”
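To give a rough sense of what that neuron-inspired signalling means in practice, here is a minimal sketch of my own (not taken from IBM or from any particular system) of a single artificial “neuron” in Python: it weights its inputs, sums them, and passes the result through a non-linear activation, loosely echoing the way a biological neuron fires once its inputs cross a threshold.

```python
import math

def artificial_neuron(inputs, weights, bias):
    """One artificial "neuron": a weighted sum of inputs passed through
    a non-linear activation (a sigmoid here). The weights stand in for
    the learned associations between signals."""
    weighted_sum = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-weighted_sum))  # sigmoid squashes to (0, 1)

# Example: the same inputs produce a stronger output when the weights
# (the "learned" associations) favour them.
print(artificial_neuron([0.5, 0.8], [0.9, 0.4], bias=-0.2))
```

A full network is, in essence, many of these simple units wired together in layers, with the weights adjusted during training.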
There may have been some scepticism about their worth at the outset, but as the years have passed, AI fashions have swung firmly towards neural networks. They are now widely regarded as the future of artificial intelligence, with far-reaching consequences for us and for what it means to be human. Echoes of these concerns could be heard in recent calls to pause new AI developments for six months so that confidence in their consequences could be established.
It would be a mistake to dismiss neural networks as being only about flashy, eye-catching new gadgets. They are already well established in our lives. Some applications are very practical. As early as 1989, a team at AT&T Bell Laboratories used back-propagation techniques to train a system to recognise handwritten postal codes. Microsoft’s recent announcement that Bing searches will be powered by AI, making it your “copilot for the web”, illustrates how the things we discover and how we understand them will increasingly be a product of this type of automation.
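Back-propagation is the technique that lets a network adjust its internal weights by passing the error at its output backwards through the layers. The toy sketch below (a deliberately tiny illustration of the general idea, nothing like the scale of the postal-code system) trains a small two-layer network on the XOR problem using nothing but NumPy.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: XOR, a problem a single layer cannot solve on its own.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of four units, then a single output unit.
W1 = rng.standard_normal((2, 4)); b1 = np.zeros((1, 4))
W2 = rng.standard_normal((4, 1)); b2 = np.zeros((1, 1))
lr = 0.5  # learning rate

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

for step in range(5000):
    # Forward pass: inputs flow through the hidden layer to the output.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: the output error is propagated back through the
    # layers (the chain rule), giving a gradient for every weight.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Nudge every weight a little in the direction that reduces the error.
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(out.round(2))  # typically ends up close to [[0], [1], [1], [0]]
```

The postal-code system of 1989 applied this same basic idea, only to images of digits rather than four rows of toy data.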
Using vast quantities of data to find patterns, AI can be trained to do things like image recognition at speed, which is why it has ended up being used in facial recognition, for instance. This ability to identify patterns has led to many other applications, such as predicting stock markets.
Neural networks are also influencing how we interpret and communicate. Google Translate, created by the Google Brain Team, is another well-known neural network application.
Nor would you want to play chess or shogi against one. Their grasp of rules, and their recall of strategies and all previously recorded moves, makes them exceptionally good at games (although ChatGPT seems to struggle with Wordle). The systems that are troubling human Go players (Go being a notoriously tricky strategy board game) and chess grandmasters are built on neural networks.
Yet their reach goes well beyond these examples and continues to expand. At the time of writing, a patent search restricted to mentions of the exact phrase “neural networks” produced 135,828 results. With this rapid and ongoing proliferation, the chances of fully explaining AI’s influence are growing ever slimmer. These are the questions I have been examining in my research and in my new book on algorithmic thinking.
Mysterious layers of ‘unknowability’
Looking back at the history of neural networks tells us something important about the automated decisions that define our present, and about those that may have a still more profound impact in the future. Their presence also tells us that we are likely to understand the decisions and impacts of AI even less over time. These systems aren’t simply black boxes; they aren’t just hidden parts of a system that can’t be seen or understood.
It is something different, something rooted in the aims and design of these systems themselves. There has long been a fascination with the unknowable. The more opaque a system is, the more authentic and advanced it is thought to be. It is not just about the systems becoming more complex, or about intellectual property restricting access (although these are part of it). Rather, the ethos that drives them has an active interest in “unknowability”. The mystery is even written into the very form and language of the neural network. They come with deeply stacked layers – hence the phrase “deep learning” – and within those depths lie the even more mysterious-sounding “hidden layers”. The mysteries of these systems lie deep below the surface.
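Those hidden layers are not hidden in any conspiratorial sense: they are simply the intermediate stages between a network’s input and its output. The short sketch below (again my own illustration, using random untrained weights rather than any real model) shows that you can print a hidden layer’s values easily enough; the difficulty is that they are just arrays of numbers with no obvious human-readable meaning.

```python
import numpy as np

rng = np.random.default_rng(0)

def dense_layer(x, n_out):
    """A fully connected layer with random (untrained) weights and a
    ReLU activation. In a real network these weights would be learned."""
    w = rng.standard_normal((x.shape[-1], n_out))
    return np.maximum(0, x @ w)

x = rng.standard_normal(4)            # an input vector
hidden_1 = dense_layer(x, 8)          # first hidden layer
hidden_2 = dense_layer(hidden_1, 8)   # second hidden layer: the "depth"
output = dense_layer(hidden_2, 1)     # final output

# The hidden activations are perfectly visible as numbers...
print(hidden_1)
# ...but what those numbers "mean" is another matter entirely.
print(output)
```

Stacking more of these layers is what makes a network “deep”; interpreting what each of them has come to represent is where the explanations run out.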
There is a real risk that the greater the impact artificial intelligence comes to have on our lives, the less we will understand how or why. Today there is a strong push for AI that is explainable: we want to know how it works, and how it arrives at its decisions and outcomes. The European Union is so concerned by potentially “unacceptable risks” and even “dangerous” applications that it is advancing a new AI Act intended to set a global standard for “the development of secure, trustworthy and ethical artificial intelligence”.
Those new laws will be built on a need for transparency, demanding that “for high-risk AI systems, the requirements of high-quality data, documentation and traceability, transparency, human oversight, accuracy, and robustness are strictly necessary to mitigate the risks to fundamental rights and safety posed by AI”. This is not just about things like self-driving cars (although systems that ensure safety fall into the EU’s category of high-risk AI); it is also a worry that systems will emerge in the future that have implications for human rights.
This is part of a broader need for AI transparency so that its behaviours may be examined, audited, and evaluated.
Another example is the Royal Society’s policy briefing on explainable AI, which states that “policy debates around the world are increasingly seeing calls for some form of AI explainability, as part of efforts to embed ethical principles into the design and deployment of AI-enabled systems.”
However, the history of neural networks suggests that we are likely to move further away from that goal rather than closer to it in the future.