A UN Security Council report describes an incident in which, possibly for the first time, a drone attacked humans without any operator input.

Most military drones are programmed for specific functions, and recent advances grant them a degree of autonomy that remains tethered to their programming or their operator.

Drones may have attacked humans already

In March, a Kargu-2 quadcopter, an AI-capable drone built by the Turkish military tech company STM, allegedly attacked withdrawing soldiers loyal to Libyan General Khalifa Haftar, the Independent reported.

The finding comes from the United Nations Security Council's Panel of Experts on Libya, which has not yet confirmed whether the quadcopter's attack caused any fatalities. The incident has renewed calls for a global ban on autonomous robots that can decide to kill, ideally before such weapons proliferate.

The UN-recognized Government of National Accord drove the Haftar Affiliated Forces (HAF) out of Libya's capital, Tripoli, over the course of the year, and experts said the drone may have been operational since January 2020.

According to the UN report, as cited by Big Think, "Unmanned combat aerial vehicles (UAVs) or lethal autonomous military equipment such as the STM Kargu-2 were used to trace logistics convoys and retreating HAF and attack them remotely."

The drone is designed to loiter and uses machine learning-based object classification to select and engage targets, according to STM. These drones also have a swarm function that links up to twenty of them, allowing them to operate as one while hunting a target.
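STM has not published how its classifier works, but the idea of machine learning-based object classification driving an engage/don't-engage decision can be illustrated with a minimal, entirely hypothetical sketch. The class names, feature space, centroids, and confidence threshold below are all made-up assumptions for illustration, not a description of any real system.

```python
# Hypothetical sketch: a trained model scores a detected object and the
# system "engages" only high-confidence matches to a target class.
# Every name, value, and threshold here is an illustrative assumption.

def classify(features, centroids):
    """Nearest-centroid classifier: return (label, confidence)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    distances = {label: dist(features, c) for label, c in centroids.items()}
    label = min(distances, key=distances.get)
    # Crude confidence: how much closer the best match is than the runner-up.
    ranked = sorted(distances.values())
    confidence = 1.0 - ranked[0] / (ranked[1] + 1e-9)
    return label, confidence

# Toy "trained" centroids in a 2-D feature space (size, speed).
CENTROIDS = {"truck": (8.0, 60.0), "person": (1.0, 5.0), "building": (20.0, 0.0)}
TARGET_CLASSES = {"truck"}       # classes the system is allowed to engage
CONFIDENCE_THRESHOLD = 0.5

def should_engage(features):
    label, conf = classify(features, CENTROIDS)
    return label in TARGET_CLASSES and conf >= CONFIDENCE_THRESHOLD

print(should_engage((7.5, 55.0)))  # near the "truck" centroid -> True
print(should_engage((1.2, 4.0)))   # near the "person" centroid -> False
```

Even in this toy form, the danger the experts describe is visible: whatever lands closest to a "target" centroid gets engaged, whether or not the training data carved up the world correctly.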

NPR noted in a report: "These military drones that attack humans are programmed to attack targets without the need for data connection between operator and the munition: in essence, a real 'fire, forget and find' capability."

Read also: Top Secret Unmanned Aircraft System Built by Lockheed Martin, Is it Ready for Flight?

What do experts think of UAVs?

Several prominent robotics and AI experts and public figures, including Elon Musk, Stephen Hawking, and Noam Chomsky, have already advocated for a ban on "offensive autonomous weapons," such as those that can search for and kill specific people based on their programming.

Researchers have warned that the data used to train such autonomous systems to recognize and classify objects, including buses, cars, and people, may not be sufficiently robust or comprehensive, and that the AI system may consequently learn erroneous lessons.

Another warning concerns the "black box" nature of machine learning, a form of AI decision-making so opaque that it carries inherent risk: a mistake could lead a UAV to attack the wrong target for reasons that cannot be immediately understood.
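The "black box" concern can be shown with a deliberately tiny sketch. The weights and features below are invented for illustration: the point is that the decision falls out of opaque learned numbers, and nothing in them tells a human *why* one input was labeled a target and a very similar one was not.

```python
# Toy illustration of opaque decision-making: "learned" weights that are
# meaningless to a human reader still fully determine the outcome.
# The weights and feature vectors are made-up assumptions.

WEIGHTS = [0.73, -1.41, 2.08, 0.19]

def is_target(features):
    score = sum(w * x for w, x in zip(WEIGHTS, features))
    # The system acts on the sign of the score; it cannot say "because...".
    return score > 0

# Two superficially similar inputs land on opposite sides of the boundary.
print(is_target([1.0, 0.5, 0.2, 0.0]))  # True
print(is_target([1.0, 0.9, 0.2, 0.0]))  # False
```

A real deep network has millions of such parameters instead of four, which is precisely why a misclassification can be impossible to explain after the fact.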

National security consultant Zachary Kallenborn, who specializes in drones, stated that there is an elevated risk of something going awry: a drone's AI can misinterpret its inputs or issue the wrong command, with dangerous results.

He added that communication between AI units can produce cascading errors that are shared by every drone in a swarm, something that could be extremely serious, causing the whole swarm to act in unpredictable ways.
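The cascading-error scenario Kallenborn describes can be modeled with a toy simulation, not a depiction of any real swarm protocol. The assumption here, purely for illustration, is that units trust and rebroadcast a peer's "target confirmed" flag, so one unit's mistake spreads along communication links to the entire swarm.

```python
# Toy cascading-error model: one drone wrongly flags a target, and the
# flag propagates along communication links until every unit shares it.
# The units, links, and trust rule are illustrative assumptions.

def propagate_error(units, links):
    """Return the set of units carrying the (wrong) flag after it spreads."""
    flagged = {u for u, has_error in units.items() if has_error}
    changed = True
    while changed:
        changed = False
        for a, b in links:
            if a in flagged and b not in flagged:
                flagged.add(b)
                changed = True
            if b in flagged and a not in flagged:
                flagged.add(a)
                changed = True
    return flagged

# Four drones in a chain; only d1 has (wrongly) flagged a target.
units = {"d1": True, "d2": False, "d3": False, "d4": False}
links = [("d1", "d2"), ("d2", "d3"), ("d3", "d4")]
print(sorted(propagate_error(units, links)))  # all four end up flagged
```

The sketch shows why swarm-wide error sharing is qualitatively worse than a single malfunctioning drone: one bad classification becomes the whole swarm's belief.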

If anyone was killed by a UAV acting on its own, it would be the first known case of a military drone autonomously attacking humans.

Related article: Kim Jong Un Develops Suicide Drones for Spying, Remote Attacks