A funny video is in circulation showing Tesla Autopilot mistaking the moon for a blinking yellow traffic light and consequently trying to slow the vehicle down.
Hey @elonmusk you might want to have your team look into the moon tricking the autopilot system. The car thinks the moon is a yellow traffic light and wanted to keep slowing down. 🤦🏼 @Teslarati @teslaownersSV @TeslaJoy pic.twitter.com/6iPEsLAudD
— Jordan Nelson (@JordanTeslaTech) July 23, 2021
This is not the first hack that can push the AI into incomprehensible decisions. A few years back, for example, it was revealed that placing a few stickers here and there could make the AI unable to properly identify traffic lanes – this "fake lane" attack can cause a Tesla to drive in the wrong lane and, potentially, against the tide, err, into the oncoming traffic.
In yet another experiment, a Model S equipped with a MobilEye EyeQ3 camera and Tesla's Traffic-Aware Cruise Control (TACC) was tricked into misreading a 35 mph speed limit sign as an 85 mph one, using a small piece of black adhesive tape.
While the Tesla AI undergoes a lot of adversarial machine learning training (including learning to properly handle encounters with kangaroos), it is becoming clear that this phenomenon falls firmly into the category of analog "hacks" that can make the AI go crazy, qualifying as a counter-artificial-intelligence method.
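The underlying idea of such attacks can be sketched in a few lines. Below is a minimal, purely illustrative FGSM-style adversarial perturbation against a toy logistic-regression "speed sign classifier" – every weight, feature, and the epsilon value are made-up assumptions, not anything from Tesla's or MobilEye's actual pipeline:

```python
# Illustrative sketch of an adversarial perturbation (FGSM-style).
# A tiny logistic-regression "classifier" decides 35 mph (0) vs 85 mph (1);
# all weights and inputs here are hypothetical toy values.
import numpy as np

w = np.array([1.5, -2.0, 0.5])   # hypothetical learned weights
b = 0.1                          # hypothetical bias

def predict_prob(x):
    """Probability the toy model assigns to class '85 mph'."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

# A clean input the model correctly classifies as '35 mph' (prob < 0.5).
x_clean = np.array([-1.0, 0.5, 0.2])

# FGSM: for true label '35 mph', the loss gradient w.r.t. x is
# proportional to w, so nudging each feature by epsilon * sign(w)
# pushes the model toward the wrong class -- the digital analog of
# a small strip of black tape on the sign.
epsilon = 0.9
x_adv = x_clean + epsilon * np.sign(w)

print(f"clean:       p(85 mph) = {predict_prob(x_clean):.3f}")
print(f"adversarial: p(85 mph) = {predict_prob(x_adv):.3f}")
```

With these toy numbers the clean input scores well below 0.5 while the perturbed one scores well above it, i.e. a small, structured nudge flips the classification – which is exactly why a bit of tape can turn a 35 into an 85.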
The trouble is that reliance on self-driving cars, and on AI in general, is spreading into the mainstream far faster than the understanding of the related dangers and threats. So while there are now Tesla cars everywhere, counter-AI specialists are nowhere to be found…