The danger of losing control of an Unmanned System (US) operated by artificial intelligence.

Aliaksei Stratsilatau, UAVOS Lead developer.

As experts in the field of automatic control systems and control algorithms, and therefore in robotic behavior, we believe it is too early to implement artificial intelligence in military systems.

Artificial intelligence (AI), as implemented today, is a strict set of algorithms. In practice, implementation is usually limited to video processing, which can hardly be called artificial intelligence; even the term "AI" itself can be interpreted in many ways.

An "AI" algorithm is a set of simple yet precise instructions performed by a computer. Quite often, if a piece of software contains several layers of algorithms, programmers are bold enough to call it "AI software", forgetting that without an evolutionary aspect it cannot be called AI, and that this evolutionary component is incredibly difficult to program and control.
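As a toy illustration of this point (our own sketch, not drawn from any particular product), consider a "detection" pipeline built from several layers of ordinary, deterministic rules. Every threshold and decision below is fixed by the programmer; nothing learns or evolves, however the stack might be marketed:

```python
# A toy "object detector" made of layered, deterministic algorithms.
# Each layer is a plain rule with fixed parameters; calling the
# stack "AI" does not change its fully predictable nature.

def threshold_layer(pixels, cutoff=128):
    """Layer 1: binarize brightness values against a fixed cutoff."""
    return [1 if p > cutoff else 0 for p in pixels]

def cluster_layer(bits):
    """Layer 2: count runs of consecutive bright pixels ("blobs")."""
    blobs, in_blob = 0, False
    for b in bits:
        if b and not in_blob:
            blobs += 1
        in_blob = bool(b)
    return blobs

def decision_layer(blobs, min_blobs=2):
    """Layer 3: a fixed rule dressed up as a 'decision'."""
    return "target detected" if blobs >= min_blobs else "clear"

# One scan line of synthetic brightness values.
frame = [10, 200, 220, 15, 12, 190, 210, 30]
print(decision_layer(cluster_layer(threshold_layer(frame))))
```

The same input always yields the same output, which is exactly what distinguishes such layered software from a system with a genuine evolutionary component.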

We believe that any judgment about the danger of AI-controlled weapons is misleading precisely because genuine AI is not at the core of the systems being judged. In the near future, it is very unlikely that anyone will succeed in this area, given the current state of development and the processing speeds achieved even by the best processors.

AI is defined by its capacity to evolve independently of its designer; by its very nature, it is out of control. Our current software verification and certification systems do not allow for unpredictable results at any level, least of all at the software level, and at any stage of the product lifecycle.
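The certification problem can be sketched in a few lines of Python (a minimal illustration of our own, assuming a simplified proportional controller, not any real autopilot code). A conventional controller is a pure function, so a certification test can be repeated and its output compared; a self-modifying controller changes its own parameters in flight, so two runs on identical inputs can diverge:

```python
import random

def fixed_gain_controller(error, gain=0.8):
    """Certifiable behavior: a pure function, same input -> same output."""
    return gain * error

class AdaptiveController:
    """An 'evolving' controller that adjusts its own gain as it runs.

    Repeating a test on identical inputs no longer guarantees identical
    outputs, which defeats the repeat-and-compare approach that software
    verification and certification rely on.
    """
    def __init__(self, gain=0.8):
        self.gain = gain

    def command(self, error):
        out = self.gain * error
        # Toy "learning" step: drift the gain based on noisy feedback.
        self.gain += 0.01 * random.uniform(-1, 1) * error
        return out

# The deterministic controller passes a trivial repeatability check.
assert fixed_gain_controller(2.0) == fixed_gain_controller(2.0)
```

The adaptive controller's internal state after a test run depends on the run itself, so its future behavior cannot be pinned down at certification time.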

For this reason, AI cannot be, and in our opinion should not be, allowed in military applications.