Back in December 2020, the Australian defence portal Defence Connect published a groundbreaking article titled “Counter AI”. The article was penned by Dan Whitham and David Leibowitz, two researchers at Penten, an Australian cybersecurity firm.
Their writing offers a healthy, balanced view of the emerging need for a sound counter-AI capability in an era when almost every nation on Earth possesses some kind of military AI, either in development or already deployed.
They correctly identify a range of real-world applications where AI is already gaining the upper hand over purely human operations, and they also acknowledge how heavily present and future AI-driven military capabilities rely on sensors.
Their proposal comes with a bouquet of ideas for spoofing and combating AI applications, which likewise represent a balanced view of what needs to be done to tackle the issue at hand.
They mention altering objects so as to confuse and spoof video and vision sensors, using decoys, and applying appropriate camouflage, while acknowledging that an AI fusing multiple sensors may still see through the “fog of war”.
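To make the sensor-spoofing idea concrete, here is a toy sketch (my own illustration, not from the article) of an FGSM-style adversarial perturbation against a hypothetical linear classifier. The weights, features, and epsilon value are all invented for demonstration; real attacks target deep vision models, but the principle of nudging the input against the model's gradient is the same.

```python
# Toy adversarial-perturbation sketch against a linear classifier.
# All values below are hypothetical; this only illustrates the principle
# that a small, directed change to the input can flip a model's decision.

def classify(w, b, x):
    """Linear classifier: returns 1 if w.x + b > 0, else 0."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

def sign(v):
    return 1.0 if v > 0 else (-1.0 if v < 0 else 0.0)

def fgsm_perturb(w, x, eps):
    """Shift each feature by eps against the gradient direction.
    For a linear model, the gradient of the score w.r.t. the input is w."""
    return [xi - eps * sign(wi) for wi, xi in zip(w, x)]

w = [0.8, -0.5, 0.3]            # hypothetical learned weights
b = 0.0
x = [1.0, 0.2, 0.9]             # original input, classified as class 1
x_adv = fgsm_perturb(w, x, eps=0.7)

print(classify(w, b, x))        # -> 1
print(classify(w, b, x_adv))    # -> 0: same object, decision flipped
```

The attack needs no change to the model itself, only to what its sensor perceives, which is exactly why physical camouflage and object alteration translate into the digital domain so directly.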
They also point out the possibility of data poisoning as a means to combat AI, by inserting adversarial data or images into the datasets used to train AI algorithms. By its very nature this latter option requires an offensive approach, so they correctly conclude that combating AI demands both a defensive and an offensive posture.
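A minimal sketch of how such poisoning might work, assuming a deliberately simple nearest-centroid classifier and invented example data (none of this comes from the article): injecting even a single mislabeled sample near the decision boundary shifts a class centroid enough to misclassify a chosen target input.

```python
# Toy data-poisoning sketch: a nearest-centroid classifier trained on
# clean vs. poisoned data. Labels, coordinates, and the "tank"/"decoy"
# scenario are hypothetical, chosen only to illustrate the mechanism.

def centroid(points):
    n = len(points)
    return [sum(p[i] for p in points) / n for i in range(len(points[0]))]

def nearest_centroid(train, x):
    """train: list of (features, label). Predict the label whose class
    centroid is closest to x."""
    by_label = {}
    for feats, label in train:
        by_label.setdefault(label, []).append(feats)
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(by_label, key=lambda lbl: dist2(centroid(by_label[lbl]), x))

clean = [([0.0, 0.0], "decoy"), ([1.0, 0.0], "decoy"),
         ([4.0, 4.0], "tank"),  ([5.0, 4.0], "tank")]
target = [2.6, 2.0]                          # input near the boundary

# One mislabeled sample slipped into the training set:
poisoned = clean + [([2.5, 2.0], "decoy")]

print(nearest_centroid(clean, target))       # -> "tank"
print(nearest_centroid(poisoned, target))    # -> "decoy"
```

The point the authors make follows naturally: planting poisoned samples presupposes access to an adversary's training pipeline or data sources, which is an offensive act rather than a purely defensive one.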
Of particular interest is their conclusion that practice should offset academic research, because in the military field purely conceptual solutions might not work at all, given the uncertainty that comes with the “fog of war”.