In their current form, machine learning algorithms seem to excel at certain kinds of problems but do less well at others. It is one thing to comb through countless strategies to produce a winning move in chess or Go; quite another to nail the perfect movie recommendation (Netflix, of course, being an early adopter of machine learning).
In finance, hedge funds have used machine learning since the late 1980s. One popular approach is to gauge investor sentiment, as measured by things like hashtags on Twitter. The limitations of such approaches are shown by the Eurekahedge AI Hedge Fund Index, which tracks the returns of 13 hedge funds using machine learning: its average annual return over the past five years has been 5.5 percent, compared with 12.5 percent for the S&P 500.
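For the technically curious, a toy sketch of how such a Twitter-sentiment signal might be computed is given below, in Python. The word lists, sample tweets and scoring rule are invented purely for illustration and are not drawn from any actual fund's methodology; a real pipeline would be far more elaborate.

import re

# Toy illustration of a hashtag/keyword sentiment signal: score each tweet against
# small positive and negative word lists, then average the scores into one number.
# All words and tweets below are invented for illustration only.

POSITIVE = {"bullish", "rally", "beat", "growth", "buy"}
NEGATIVE = {"bearish", "selloff", "miss", "recession", "sell"}

def tweet_score(text: str) -> int:
    # Return +1, -1 or 0 depending on which word list dominates the tweet.
    words = set(re.findall(r"[a-z]+", text.lower()))
    pos = len(words & POSITIVE)
    neg = len(words & NEGATIVE)
    return (pos > neg) - (neg > pos)

def sentiment_signal(tweets: list[str]) -> float:
    # Average the per-tweet scores into a crude signal between -1 and +1.
    if not tweets:
        return 0.0
    return sum(tweet_score(t) for t in tweets) / len(tweets)

sample = [
    "#bullish on tech, earnings beat expectations",
    "#selloff continues, looking bearish into the close",
    "#rally in energy names, time to buy?",
]
print(f"sentiment signal: {sentiment_signal(sample):+.2f}")

Even this crude version hints at the difficulty: the score says nothing about whether the chatter is relevant, informed or already priced in.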
In healthcare, where data analytics is playing an increasingly important role, machine learning algorithms also tend to be frustrated by the noisy nature of the data, to the point where few rigorous studies have demonstrated superiority over expert-based methods.
The problem of bias in other areas such as recruitment is well documented. Amazon had to terminate one program because it consistently recommended hiring men, presumably because the people previously hired had also been mostly male.
Superintelligence
In general, computers seem highly efficient at finding patterns in anything from CVs to hospital visiting times, but less good at assessing whether those patterns are relevant or meaningful. Machine learning algorithms therefore do well at analysing closed games with well-defined rules, such as chess, but must be used with care when it comes to complex real-world problems. On the other hand, humans aren’t perfect either – so perhaps the solution is to combine the two.
According to the philosopher Nick Bostrom, head of Oxford’s Future of Humanity Institute, such a merger of human and machine can lead to what he calls a ‘superintelligence’ that outperforms either humans or machines acting alone. The problem is how to integrate humans and machines so that they genuinely work in synergy.
A merger of human and machine can lead to a ‘superintelligence’ that can outperform either humans or machines acting alone
One example of such a project is the MSI Brain system of Mitsui Sumitomo Insurance, which the company’s CEO, Shinichiro Funabiki, described for World Finance as “a fusion of human and artificial intelligence, combining customer relationship management with sales force automation. The agent is able to uncover the customer’s potential needs through analysis of massive amounts of data, with MSI Brain then suggesting what insurance products to propose and in what way.” The aim is to “create a sustainable system in which AI and people grow together.”
Such hybrid systems may even play a role in geopolitics. As former NORAD chief Terrence O’Shaughnessy wrote of the artificial intelligence program known as Strategic Homeland Integrated Ecosystem for Layered Defense (SHIELD), it “pools this data and fuses it into a common operational picture. Then, using the latest advances in machine learning and data analysis, it scans the data for patterns that are not visible to human eyes, helping decision-makers understand adversary potential courses of action before they are executed.” One question, of course, is how computers themselves will evolve, particularly if and when quantum computers see widespread application. Many of the companies that currently lead in big data, such as Google and Amazon, along with governments and state-led consortia, are investing billions in the development of such computers.
Quantum chimera
As political scientists James Der Derian and Alexander Wendt note, there is “a growing recognition – in some quarters an apprehension – as quantum artificial intelligence labs are set up by tech giants as well as by aspiring and existing superpowers that quantum consciousness will soon cease to be a merely human question. When consciousness becomes a chimera of the human and the artificial, not only new scientific but new philosophical and spiritual cosmologies of a quantum bent might well be needed if we are to be ‘at home in the universe’.” A trope often explored in sci-fi movies – as in The Terminator, where Cyberdyne Systems creates the self-aware Skynet – may no longer seem so far-fetched: computers may really start to think for themselves.
Or even host life. Bostrom is perhaps best known for his simulation hypothesis, which holds that, since computers could one day simulate conscious minds, “we would be rational to think that we are likely among the simulated minds rather than among the original biological ones.” The hypothesis is taken seriously by people including Elon Musk, who probably uses it to justify the Tesla share price. Personally, I hold out hope that we are not just apps on some future teenager’s phone. However, it seems likely that the boundary between humans and machines will continue to evolve in fascinating ways.