In May 2017, researchers at Google Brain announced that they had created an Artificial Intelligence called AutoML. This AI was unique for one reason: it could generate its own AI — something that seemed eerily similar to the plot of the acclaimed PS4 game Horizon Zero Dawn.
Recently, Google researchers decided to test AutoML with its biggest challenge yet, and in response AutoML created a “child” AI that managed to outperform all of the human-made AIs built for the same tasks.
AutoML develops a child AI network for a particular task. For this child AI specifically — called NASNet — the task was to recognise objects such as humans, cars, kites, traffic lights and backpacks in real-time video. The propose-evaluate-improve feedback loop behind it is an example of ‘reinforcement learning’.
AutoML would then evaluate NASNet’s performance and use that data to improve the child AI so that it could recognise images more accurately. When tested on an image classification benchmark, NASNet scored an 82.7% accuracy rate at predicting images. It was 4% more efficient than comparable human-built systems, and a less demanding version of NASNet outperformed the best similarly sized models for mobile platforms by 3.1%.
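The loop described above — propose a child architecture, measure its accuracy, feed that result back to pick better candidates — can be sketched in miniature. To be clear, this is a toy illustration under stated assumptions, not Google’s method: AutoML’s real controller is a recurrent network trained with policy gradients over GPU-days, whereas here the search space, the `evaluate` scoring formula, and the epsilon-greedy controller are all invented stand-ins.

```python
import random

random.seed(0)

# Toy search space: each "architecture" is a (layers, filters) pair.
SEARCH_SPACE = [(layers, filters)
                for layers in (2, 4, 8)
                for filters in (16, 32, 64)]

def evaluate(arch):
    """Stand-in for training a child network and measuring validation
    accuracy. Real NAS spends enormous compute here; this fixed formula
    simply rewards deeper, wider architectures for illustration."""
    layers, filters = arch
    return 0.5 + 0.03 * layers + 0.002 * filters

def search(steps=50, epsilon=0.3):
    """Epsilon-greedy controller: mostly exploit the best architecture
    found so far, occasionally explore a random one. This mimics only
    the shape of the propose-evaluate-improve loop, not AutoML's
    actual reinforcement-learning controller."""
    best_arch, best_acc = None, 0.0
    for _ in range(steps):
        if best_arch is None or random.random() < epsilon:
            arch = random.choice(SEARCH_SPACE)   # explore a new child
        else:
            arch = best_arch                     # exploit the best so far
        acc = evaluate(arch)                     # the "reward" signal
        if acc > best_acc:
            best_arch, best_acc = arch, acc
    return best_arch, best_acc

best_arch, best_acc = search()
```

Each iteration plays the role of spawning and grading one child network; the accuracy acts as the reward that steers future proposals, which is the essence of the reinforcement-learning framing.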
While that may not sound like much, the point is that these AIs can improve. They can learn, and though that learning happens along strict, delineated software lines, it does raise a number of concerns. These are not necessarily concerns about homicidal robots, but rather about AIs whose programmes can be used by corporations and states for a wide range of self-serving — and potentially ruinous — purposes such as mass surveillance or military operations.
Whether these concerns are warranted remains to be seen, but the speed at which Artificial Intelligence technology is advancing calls for more preparation and for urgent measures to be taken to control it.
Seeing such advanced tech developed or acquired by powerful entities should not be taken lightly, even if at first glance it appears to benefit us.