05-24-2007, 10:05 AM   #1
scottperezfox
Senior Member
Join Date: Dec 2005
Location: Brooklyn
Posts: 384
Futuristic Robot Overlords.

Original Story Here

May 22, 2007

Bots with Brains: Future Robotic Overlords?

By David R. Butcher
Science fiction has portrayed machines capable of thinking and acting for themselves with a mixture of anticipation and dread, but what was once the realm of fiction has once again become the subject of serious debate as robots grow more intelligent.

In 1981, Kenji Urada hopped a safety fence at a Kawasaki plant to carry out maintenance work on a robot. While he was working on the machine, the robot's powerful hydraulic arm pushed the 37-year-old Japanese factory worker into a grinder.

Urada is often cited as the first recorded person to die at the hands of a robot, although Robert Williams was killed by a robot two years earlier. Since both deaths, and despite the introduction of improved safety mechanisms, there have been many more gruesome industrial fatalities involving robots crushing humans, smashing their heads and even pouring molten aluminum over them.

Since even nonthinking robotic machines can clearly be fatal, and since robots are emerging from the factory floor into homes and workplaces and developing to the point where they can make their own decisions, there are growing demands that they be bound by ethical laws.

South Korea, which spends about $80 million a year to develop robots, predicts there will be a robot in every household in little more than a decade. This is not necessarily worth writing home about — robot vacuum cleaners, which can “decide” for themselves when to move from room to room, as well as robotic toys and lawnmowers, are already in many households — except that the country’s Ministry of Commerce, Industry and Energy’s robot team also predicts these robots would develop “strong intelligence.”

Indeed, the creation of a superhumanly intelligent artificial intelligence (AI) system could be possible within 10 years, with an “AI Manhattan Project,” Dr. Ben Goertzel, CEO and Chief Scientist of AI firm Novamente LLC and bioinformatics firm Biomind LLC, recently wrote.

In the late 1990s, there was Deep Blue, the IBM computer programmed to process millions of alternative chess positions per second. With its massive computational ability, an AI technique known as “brute force,” Deep Blue could instantly analyze every move chess prodigy and world champion Garry Kasparov made and (theoretically) compute the best countermeasure. Sure enough, Deep Blue won the first game of their match, becoming the first computer to defeat a reigning world champion at chess under regulation time controls.
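
To make the “brute force” idea concrete, here is a minimal, purely illustrative minimax sketch in Python. The toy game tree and its leaf scores are invented for this example, and the sketch has nothing to do with Deep Blue’s actual engine, which searched far deeper on custom hardware with a handcrafted evaluation function.

[CODE]
# Purely illustrative sketch of brute-force game-tree search (minimax).
# A node is either a numeric leaf score or a list of child nodes; the
# tree below is a made-up toy example, not a chess position.

def minimax(node, maximizing=True):
    """Score a node by exhaustively searching every line of play."""
    if isinstance(node, (int, float)):            # leaf: static evaluation
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# The maximizing player picks the branch whose worst-case reply is best.
tree = [[3, 12], [2, 4], [14, 1]]
print(minimax(tree))  # prints 3: branch [3, 12] guarantees at least 3
[/CODE]

Real chess programs keep the same backed-up-score idea but add alpha-beta pruning and fast position evaluation so they can look many moves ahead.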

Software robots — basically, complicated computer programs — already make important financial decisions. Whose fault is it if they make a bad investment?

But there’s a more sinister aspect that is being debated.

Autonomous robots, which are able to make decisions without human intervention, are increasingly being applied to military roles. There is also the DARPA Grand Challenge, a robotics contest to build a driverless car capable of completing a 132-mile off-road course; this year, rather than navigating across the desert, the vehicles will be required to negotiate a 60-mile course through simulated urban traffic in less than six hours.

The development and deployment of these autonomous robots raise difficult questions, according to Professor Alan Winfield of the University of the West of England.

“If an autonomous robot kills someone, whose fault is it?” Professor Winfield asks.

Speaking ahead of a public debate at the Dana Centre, part of London’s Science Museum, scientists expressed concern about the use of decision-making robots, particularly for military use, BBC News reported last month. And a group of leading roboticists called the European Robotics Network (Euron) has even started lobbying governments for legislation.

At the top of their list of concerns: safety.

Robots were once confined to specialist applications in industry and the military, where users received extensive training on their use, but they are increasingly being used by ordinary people. And as these robots become more intelligent, it will become harder to decide who is responsible if they injure, or kill, someone: the designer, the user, the robot itself?

Currently, experts in South Korea are drawing up an ethical code to prevent humans abusing robots, and vice versa. The committee’s ethical code draws in part from the “Three Laws of Robotics,” introduced by renowned author Isaac Asimov in the 1940s and often used since in works of science fiction by other authors:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey orders given to it by human beings except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Robot laws have risen to popular debate yet again because robots are becoming more mainstream and, more importantly, because the discussion is now grounded in real science rather than science fiction.

Earlier this month, RoboBusiness 2007, the international business development event for the mobile robotics and intelligent systems industry, showcased a motley mix of consumer, commercial and military robots in Boston, Mass. At the conference, Carnegie Mellon University announced its 2007 Robot Hall of Fame inductees, comprising both real and fictional robots. For the first time, the jury selected more robots from science fact than from science fiction: three of the four robots chosen by the jury of 25 leading roboticists were built by actual scientists.

In many ways, the Deep Blue chess match was inconclusive in settling the “human” versus “artificial” intelligence debate. If Kasparov’s humanity — his ability to reason — was his strength, so was it also his weakness, The News-Journal recently noted. Before game six, Kasparov was mentally drained and played cautiously, and when he made a disastrous mistake early in the game, he resigned himself to losing. Unlike computers, humans feel.

The issue of robot rights was again addressed last December after a speculative paper commissioned by the British government suggested robots might one day be smart enough to demand emancipation from human owners and raised the possibility that they might have to be treated as citizens.

Yet, to paraphrase Alden March Bioethics Institute director Glenn McGee’s recent column in The Scientist, we are much closer to making stronger, more intelligent robots than we are to creating a code of ethics to guide our stewardship of robo-peers.

Will thinking robots ever become Data-like androids or humanoid Cylons, capable of interacting as smarter human peers? Will the world's Roombas and RoboSapiens one day tire of their servitude and attempt to unleash Judgment Day on their foolish masters?

Are you anxious for or dreading the rise of intelligent robots?

VIDEO via YouTube
05-25-2007, 02:12 PM   #2
ncfcyank19
Senior Member
Join Date: Jul 2006
Location: San Diego, CA
Posts: 691
Just found this article on Digg. Seems we might need to eliminate some scientists (I like to call them traitors) and push back this industry a few decades before this gets any worse.