
The UN is Afraid of Killer Robots, Here’s Why

Thanks to rapid advances in AI, autonomous weapons systems, or killer robots in colloquial terms, may soon become a reality. In response, the UN has adopted a resolution addressing the dangers these systems pose. Such weapons can acquire targets without any human involvement whatsoever, which makes them an especially dangerous outcome of the current AI race.


Harvard Law lecturer Bonnie Docherty recently spoke out about this issue. She described autonomous weapon systems as systems that rely on sensor inputs rather than human input to determine targets. Such systems have in fact been used multiple times already, though not in forms as sophisticated as those now on the horizon.

Systems used during the ethnic conflict in the Nagorno-Karabakh region were able to identify targets on their own. The same can be said of systems deployed in the Libyan conflict, some of which are referred to as loitering munitions. These weapons can hover over the battlefield and deploy their payloads as soon as an enemy target is detected, even if no human ordered the strike.

Needless to say, autonomous weapon systems come with a whole host of ethical concerns. They can reduce the taking of human life to a matter of numbers and data, which many consider to be crossing a line.

Algorithmic bias is also essential to consider, since these systems could end up discriminating against people based on their ethnicity, gender, and other characteristics. Even disabled individuals could end up being targeted, with AI-based targeting systems unable to make the appropriate distinctions in such circumstances.

Apart from ethical considerations, legal concerns have also arisen. Machines might not be able to differentiate between military combatants and people present on the battlefield in a civilian capacity. Human judgement is essential here, particularly when weighing civilian casualties against military objectives.

This weighing is formalized in the proportionality test, wherein a human decision-maker determines whether the anticipated military advantage justifies the expected civilian loss of life. For all of its advancement, AI can't yet be programmed to exercise that kind of judgement.

This raises another important question. If an AI can't exercise judgement, how can it be held accountable for any atrocities or crimes against humanity it commits? At the same time, the operator of the system can't easily be held accountable either, since they're not technically the one that ordered the attack.

So far, attempts to ban autonomous weapon systems have met stiff resistance from countries like Russia. Even the US and the UK have proposed non-binding resolutions in order to leave the door open for future use of these systems should the need arise. Indeed, a number of countries prefer non-binding resolutions, with each of them, coincidentally, developing autonomous weapon systems of their own.

As it currently stands, the UN is collecting the views of member states and civil society on the matter. 164 member states voted in favor of the resolution, and it will be interesting to see where things go from here. According to the UN Secretary General, a new treaty might come as early as 2026. If it fails to win the required number of votes, the potential loss of life could be staggering. Unlike landmines and other long-established munitions, these are not yet tried and tested weapons, which might make securing agreement harder in the long run.
