The Rise of Killer Drones That Can Think for Themselves
By Rania Khalek, AlterNet
Posted on October 2, 2011

There must be a crazy-haired mad scientist roaming the U.S. military’s research laboratories unsupervised. That’s the most reasonable explanation for the military's latest advancement in drone technology.

Drones, also known as Unmanned Aerial Vehicles (UAVs), are flying robots remotely operated by pilots thousands of miles away, allowing soldiers to spy, survey, and obliterate the so-called enemy at the press of a button, much like a video game, except that in the real world, people die. As though video-game warfare weren't disturbing enough, it appears the military has gone even further, attempting to remove human control from the equation.

According to the Washington Post’s Peter Finn, the U.S. military is a decade or so away from deploying an army of pilotless drones capable of collaborating with one another to hunt down, identify, and annihilate an enemy combatant entirely on their own, without any human guidance. The U.S. military has teamed up with the Georgia Tech Research Institute to test these autonomous aerial drones, which will use facial-recognition software to identify targeted individuals.

In other words, in the very near future, automated flying robots, instead of human pilots, will make decisions on whether or not to launch an attack to annihilate human beings on the ground based on biometrics software. I can think of a half-dozen science fiction movies (Terminator, anyone?) where allowing the machines to call the shots, particularly when dealing with life and death, backfired on their human overlords.

But let’s put the Hollywood “robots take over” fear aside for a moment and look at the technology behind these robot killers.

Superhuman Powers: Programmed Reasoning and Biometrics

According to the Unmanned Aircraft Systems Flight Plan 2009-2047, published by the U.S. Air Force, "Human senses, reasoning, and physical performance will be augmented using sensors, biotechnology, robotics, and computing power."

Spencer Ackerman at Wired explains that programmed reasoning is already taking place with algorithms that mimic the reasoning process of human pilots to avoid air collisions. Earlier this year, the X-47B, a UAV built by Northrop Grumman for the U.S. Navy’s Unmanned Combat Air System Carrier Demonstration (UCAS-D), successfully launched and landed from an aircraft carrier autonomously. The program also plans to demonstrate autonomous aerial refueling by 2014.
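
Ackerman's example can be made concrete: sense-and-avoid logic of this kind often reduces to predicting the closest point of approach between two aircraft on straight-line tracks and flagging a conflict if they will pass too close. Below is a minimal sketch of that calculation; the separation threshold, look-ahead window, and function names are illustrative assumptions, not details of the X-47B's actual software.

import numpy as np

SEPARATION_M = 500.0  # assumed minimum safe separation, in meters

def cpa_conflict(p1, v1, p2, v2, horizon_s=60.0):
    """Return True if two constant-velocity tracks pass within
    SEPARATION_M of each other in the next horizon_s seconds."""
    dp = np.asarray(p2, float) - np.asarray(p1, float)  # relative position
    dv = np.asarray(v2, float) - np.asarray(v1, float)  # relative velocity
    denom = float(dv @ dv)
    # Time of closest approach, clamped to the look-ahead window.
    t = 0.0 if denom == 0.0 else max(0.0, min(horizon_s, -float(dp @ dv) / denom))
    miss = float(np.linalg.norm(dp + dv * t))  # separation at that moment
    return miss < SEPARATION_M

# Two aircraft 1 km apart, flying head-on at 100 m/s each: conflict.
print(cpa_conflict([0, 0, 1000], [100, 0, 0], [1000, 0, 1000], [-100, 0, 0]))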

As for biometrics, today’s unmanned drones perform endless hours of surveillance, hovering, watching, and gathering data from below. The Army wants to enhance that surveillance power by arming drones with the technology to track down and identify specific individuals, in a process called Tagging, Tracking, and Locating (TTL).

According to Danger Room’s Noah Shachtman, the Army has awarded several contracts to high-tech firms to equip its fleet of drones with top-of-the-line facial-recognition software and state-of-the-art programs capable of recognizing “potentially hostile behavior and intent,” which essentially tasks the firms with transforming existing drones into TTL machines.

The Army is requesting “Long Range, Non-cooperative, Biometric Tagging, Tracking and Location” systems that “prove the ability to track object of high value in any weather and when only appearing momentarily throughout the area of interest.”

Progeny Systems Corporation won a contract to develop a “drone-mounted” TTL system capable of distinguishing between identical twins. According to Shachtman, Progeny “is one of several firms that has developed algorithms for the military that use two-dimensional images to construct a 3D model of a face.”

From a single image with just 50 pixels between a person’s eyes, Progeny can build a 3D model of the person’s face. Once the model is “enrolled” into its system, an image with only 15 to 20 pixels between the eyes is all that is needed to identify the individual in the future. Progeny also utilizes “soft biometrics” capable of “digital stereotyping,” which can track a person too far away for facial recognition based on gender, ethnicity, height, and weight.
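
In outline, that is a two-stage enroll-then-match pipeline gated on image resolution. The sketch below illustrates the flow; the pixel thresholds come from the figures above, but the feature extraction is a toy stand-in, and none of these names reflect Progeny's actual system.

import numpy as np

ENROLL_MIN_EYE_PX = 50  # pixels between the eyes needed to build a model
MATCH_MIN_EYE_PX = 15   # pixels between the eyes needed to match one

gallery = {}  # person_id -> enrolled feature vector (stand-in for a 3D model)

def toy_features(image):
    # Toy stand-in for 3D reconstruction: a normalized intensity histogram.
    hist, _ = np.histogram(image, bins=32, range=(0, 255), density=True)
    return hist

def enroll(person_id, image, eye_px):
    if eye_px < ENROLL_MIN_EYE_PX:
        return False  # too little facial detail to build a model
    gallery[person_id] = toy_features(image)
    return True

def identify(image, eye_px):
    if eye_px < MATCH_MIN_EYE_PX:
        return None  # would fall back to "soft biometrics" instead
    probe = toy_features(image)
    best, best_dist = None, float("inf")
    for pid, feats in gallery.items():  # nearest enrolled model wins
        dist = float(np.linalg.norm(feats - probe))
        if dist < best_dist:
            best, best_dist = pid, dist
    return best

face = np.random.randint(0, 256, (64, 64))  # fake 64x64 grayscale image
enroll("subject-1", face, eye_px=50)        # high-resolution image: enrolled
print(identify(face, eye_px=18))            # lower-resolution match: "subject-1"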

Charles River Analytics garnered a contract to develop the “Adversary Behavior Acquisition, Collection, Understanding, and Summarization (ABACUS)” system, which will use a combination of intelligence gathered from informants, UAV surveillance, and wiretapped phone calls to formulate “intent-based threat assessments of individuals and groups.”
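
The contract language gives no method, but fusing several intelligence sources into a single assessment is commonly modeled as a weighted combination of per-source scores. The following is a minimal sketch of that idea; the sources, weights, and scoring scale are invented for illustration and say nothing about Charles River Analytics’ actual approach.

# Assumed per-source confidence weights; the contract specifies none.
WEIGHTS = {"informant": 0.5, "uav_surveillance": 0.3, "wiretap": 0.2}

def threat_score(evidence):
    """Weighted average of per-source threat estimates, each in [0, 1]."""
    total = sum(WEIGHTS[src] * score for src, score in evidence.items())
    norm = sum(WEIGHTS[src] for src in evidence)
    return total / norm if norm else 0.0

# An informant is fairly sure; overhead surveillance is ambiguous.
print(threat_score({"informant": 0.9, "uav_surveillance": 0.4}))  # ~0.71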

In case that doesn’t work, another firm, Modus Operandi, Inc., is using its Army contract to design “Clear Heart,” a program that determines “the likelihood of adversarial intent” based on an individual’s behavior and apparent emotions.

Rules of Engagement

Since drones enable soldiers to assassinate individuals by remote control from thousands of miles away, their increased use in Iraq and Afghanistan—along with countries such as Pakistan, Yemen, and Somalia, where the U.S. has not officially declared war—has led to a great deal of controversy over ethics and international law. Therefore, the introduction of autonomous killing machines further complicates the already shaky rules of engagement.

Although it's not applied consistently, an international legal framework does exist to hold people accountable for human rights violations and war crimes. However, there is no parallel legal structure that regulates the behavior of autonomous robotic weaponry, which is advancing at a faster rate than we can understand. For example, if a drone malfunction leads to civilian deaths, who is held responsible? The machine? The programmer? The commander who approved the use of the machine? This is where it gets confusing.

Ronald Arkin, director of Georgia Tech’s Mobile Robot Laboratory and author of Governing Lethal Behavior in Autonomous Robots, argues that lethal robots can and should be programmed to make ethical decisions and follow international law in warfare. He even suggests robots will behave more ethically than humans because they lack emotions, meaning they won’t make reckless decisions that harm civilians out of anger, vengeance, or fear of death.
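
Arkin’s proposal amounts to inserting an explicit rule check (what he terms an “ethical governor”) between a weapon’s targeting logic and its trigger. Here is a heavily simplified sketch of what such a check might look like; the constraints, inputs, and 0-to-1 scales are invented for illustration and are not Arkin’s actual architecture.

def ethical_governor(target_is_combatant, expected_civilian_harm,
                     military_necessity):
    """Permit a lethal action only if every encoded constraint holds.
    Harm and necessity are scored on an assumed 0-to-1 scale."""
    if not target_is_combatant:                      # principle of distinction
        return False
    if expected_civilian_harm > military_necessity:  # proportionality test
        return False
    return True

print(ethical_governor(True, 0.2, 0.8))   # permitted
print(ethical_governor(True, 0.9, 0.3))   # withheld: disproportionate harm
print(ethical_governor(False, 0.0, 1.0))  # withheld: not a combatant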

However, Arkin’s hypothesis has yet to be proven. In the meantime, a group of robotics specialists and human rights advocates formed the International Committee for Robot Arms Control (ICRAC) out of serious concern over the proliferation and advancement of robotic weapons technology without an international framework or doctrine to abide by. The ICRAC is calling for the international community to institute an arms-control regime to reduce the dangers associated with killer machines.

According to the Unmanned Aircraft Systems Flight Plan 2009-2047, humans will retain the authority to override the system during a mission. That’s good news, considering that every so often, a robot goes crazy and spontaneously empties its magazine, a not-so-hypothetical scenario that took place in South Africa in 2007 when an antiaircraft cannon mysteriously malfunctioned, killing 9 soldiers and seriously injuring 14. Nonetheless, Peter Singer, author of Wired for War, told the BBC, “We can turn the system off, we can turn it on, but our power really isn't true decision-making power. It's veto power now.”
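
Singer’s distinction between decision-making power and veto power can be made concrete: in a veto-power architecture, the machine selects and commits to an action on its own, and the operator’s only role is to countermand it within a window. The toy loop below illustrates that structure; it is purely illustrative and not drawn from any real system.

import time

VETO_WINDOW_S = 2.0  # assumed time the operator has to object

def machine_decides():
    return "engage target"  # the system, not the human, picks the action

def human_vetoes():
    return False  # stand-in for polling an operator's override switch

action = machine_decides()          # decision power belongs to the machine
deadline = time.monotonic() + VETO_WINDOW_S
vetoed = False
while time.monotonic() < deadline:  # the human holds only veto power
    if human_vetoes():
        vetoed = True
        break
    time.sleep(0.1)

print("overridden by operator" if vetoed else "executing: " + action)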

Lethal Robots on the Rise

As robotic technology quickly advances, human willingness to gradually surrender control to autonomous thinking machines isn’t inherently bad. If used appropriately, robotics could potentially save lives when geared toward search and rescue operations, medical equipment and treatments, or the destruction of mines and other explosives, all of which are already taking place. However, these tasks are far different from the increasingly likely possibility of weaponized autonomous systems assassinating human targets.

When UAVs were first deployed in 2001, there were 50 drones in the Pentagon's unmanned arsenal. Fast-forward ten years, and the Pentagon’s inventory has soared to over 7,000 unmanned aerial vehicles in a variety of shapes and sizes--and that doesn't even include the 15,000 driverless vehicles on the ground. According to National Defense Magazine, there are over 2,000 unmanned robots deployed alongside human ground troops in Afghanistan.

Christian Caryl writes in the New York Review of Books, “[T]he U.S. aerospace industry has for all practical purposes ceased research and development work on manned aircraft. All the projects now on the drawing board revolve around pilotless vehicles."

Earlier this year, The Guardian reported that an internal report from the U.K.’s Ministry of Defence warned that drone technology was leading to an "incremental and involuntary journey towards a Terminator-like reality" and suggested that Britain quickly institute rules and regulations for "acceptable machine behavior" to avoid a disastrous future.

Perhaps raising alarms about a horrific future where humans are hunted by Terminator killing machines is hyperbolic. But are autonomous drones really worth the risk?
