Autonomous Robots: Protection from Grief or Its Ultimate Source?
Weeping relatives, clothed in black, gather around a folded flag, the last remaining symbol of who their loved one used to be. War has claimed far too many people for this scene to be unfamiliar. In a crisis, one must act as one sees fit, but do decisions made in the heat of the moment always pay off? The evidence lies before our eyes. As humans, we are entitled to our faults, but those same faults do not make for a good soldier. What if there were a super-soldier that never tired and never let its judgement be swayed? It would be unparalleled. Thanks to the advance of technology, this vision of the future may not be far away. But is it the best choice for humanity?
The Need for Autonomous Robots
Fear, indecision, anger, vengefulness. As humans, we are prone to emotions that can be our downfall on the battlefield. Every decision we make is influenced by outside stimuli that may or may not prove a detriment. In wartime, situations unfold so rapidly that our brains cannot process everything going on, and the result is rushed decisions. An adeptly programmed robot could take in external information and compute a viable response far faster, narrowing the margin of error. Driven by emotion, our passions can get the best of us. The most accomplished soldier can be stricken by a need for revenge or by paralyzing fear, react badly, and commit what is known as a war crime: an action taken during wartime that violates the ethical standards set by international conventions. Robots, with no emotions of their own, could avoid the atrocities that violate those internationally recognized standards. As technology progresses, we can no longer rely on the customary weapons of a century ago. Why not increase our safety with pre-programmed, weaponized protectors? But can a robot adhere to a code of ethics without emotion? And if it acts out of turn, who is responsible?
Can Robots Make Truly Ethical Decisions?
If the military were to release fully automated robots into the field, what would ensure that a robot could tell friend from foe and recognize an act of surrender? Which decisions would the robot be permitted to make on its own? Not every wartime situation, especially one involving civilians, follows a formula. In each instance, a proportionate amount of force should be used to control the situation. How would a fully autonomous robot determine the force needed?
Robots, depending on their flexibility, can be programmed with an “artificial moral agent.” It would designate targets deemed “worthy of moral consideration,” such as out-of-action soldiers, civilians, and those identified as “friendly.” With several modes of action to choose from, an approximation of ethical judgement would emerge. Because situations are unpredictable, decisions such as opening fire could be reserved for humans, or full responsibility could be vested in the robot. More extensive moral agents could eventually be programmed in, giving the machine a sense of judgement to guide its reaction to a situation.
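To make the idea concrete, the sketch below shows, in deliberately simplified Python, one way such an “artificial moral agent” might be structured: protected categories are never engaged, and uncertain cases are deferred to a human operator. The Contact fields, the confidence threshold, and the three possible actions are illustrative assumptions for this essay, not a description of any real military system.

```python
from dataclasses import dataclass
from enum import Enum, auto


class Action(Enum):
    HOLD_FIRE = auto()
    DEFER_TO_HUMAN = auto()
    ENGAGE = auto()


@dataclass
class Contact:
    """A detected entity, as classified by the robot's sensors (hypothetical)."""
    is_friendly: bool       # identified as allied forces
    is_civilian: bool       # non-combatant
    is_surrendering: bool   # out-of-action / surrendering combatant
    confidence: float       # classifier confidence, 0.0 to 1.0


def moral_agent_decision(contact: Contact, confidence_threshold: float = 0.95) -> Action:
    """Toy 'artificial moral agent': never engage protected classes,
    and defer any lethal decision to a human when uncertain."""
    # Protected categories are never valid targets.
    if contact.is_friendly or contact.is_civilian or contact.is_surrendering:
        return Action.HOLD_FIRE
    # Low classifier confidence escalates the decision to a human operator.
    if contact.confidence < confidence_threshold:
        return Action.DEFER_TO_HUMAN
    # Only a confidently identified hostile combatant may be engaged.
    return Action.ENGAGE


if __name__ == "__main__":
    print(moral_agent_decision(Contact(False, False, True, 0.99)))   # HOLD_FIRE
    print(moral_agent_decision(Contact(False, False, False, 0.60)))  # DEFER_TO_HUMAN
```

Even this toy version exposes the core design question raised above: where the threshold sits, and whether the final engage decision belongs to the machine or to a human.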
How Far Would the Autonomy Progress?
Militaries already employ semi-autonomous technology that lets a person control a machine from afar while still contributing to the effort. However, such a connection can have detrimental psychological effects on the soldiers who operate these machines. A strong emotional bond may form between user and machine, which limits the system's effectiveness. Operators often develop disorders rooted in the daily decisions they make and the lethal consequences that unfold miles away. The result is a disconnect between action and consequence in the operator's mind, and that disconnect takes a toll on mental health.
The main concern is fully autonomous robots and their range of decision-making. How much power an essentially uncontrolled robot should have is still debated. After all, would you put other people's lives in the hands of something over which you had minimal or no control?
Eliminating the Need for Humans
If automated soldiers were sent to scout and fight on the front instead of people, the need for recruits would drop dramatically. This could be viewed as either a beneficial side effect or a detrimental one. A large expense of the army is the cost of soldiers, both during and after service. Keeping a single soldier in Afghanistan costs around 850,000 dollars per year, the figure is rising, and it does not account for after-service life. Automated soldiers would be a significant cost-saver over time. As technology progresses, the demand for human soldiers would keep shrinking until people became nearly obsolete on the battlefield. In a world of automated soldiers, humans would be, by far, the weakest link.
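As a rough illustration of the scale involved, the snippet below compares the cited 850,000-dollar annual cost of a deployed soldier with a hypothetical robot whose purchase price and yearly upkeep are assumed figures chosen only to make the arithmetic concrete.

```python
# Rough cost comparison. The $850,000/year figure is the one cited above;
# the robot's purchase price and upkeep are purely illustrative assumptions.
SOLDIER_ANNUAL_COST = 850_000      # dollars per deployed soldier per year (cited above)
ROBOT_PURCHASE_COST = 2_000_000    # assumed one-time cost per unit
ROBOT_ANNUAL_UPKEEP = 150_000      # assumed maintenance per unit per year


def cumulative_cost(years: int) -> tuple[int, int]:
    """Return (soldier_total, robot_total) after the given number of years."""
    soldier = SOLDIER_ANNUAL_COST * years
    robot = ROBOT_PURCHASE_COST + ROBOT_ANNUAL_UPKEEP * years
    return soldier, robot


for years in (1, 3, 5, 10):
    soldier, robot = cumulative_cost(years)
    print(f"{years:>2} yr: soldier ${soldier:,} vs robot ${robot:,}")
```

Under these assumed numbers the robot breaks even after roughly three years; the point is not the specific figures but that a one-time purchase plus upkeep can eventually undercut a large recurring annual cost.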
The Dangers of Armed Robots: Who Would Take Responsibility?
In any manufacturing process, there is always room for error. A robot could malfunction and unleash mayhem, possibly killing many people. With a program as intricate and complex as an artificial moral code, the work would have to be scrupulous. Any kink or defect in the system could cost lives. Violations of ethical codes are not taken lightly and could spell the end of an automated-systems program. Any mistake made by an automated system would likely be a disaster.
With the possibility of defects comes the question of who takes responsibility. A robot's actions reflect on its programmers, but they are not the only ones involved; its operator and its manufacturer are also potentially liable. Legally, it remains uncertain who is to blame for a malfunction.
Conclusion
Autonomous armed robots could be the solution to a many-layered problem, or they could wreak havoc on the whole human race. Many prominent developers and scientists oppose giving autonomous robots full control, fearing disasters and violations of ethics. Perhaps the idea hits a little too close to home, since it is the subject of many dystopian works. There is always the undeniable possibility of the robots going haywire, being hacked by enemy combatants, or, less likely but more dramatic, becoming self-aware and waging war on humanity. Future or fantasy, the progression of autonomous armed robots is still up for debate.