When I think of robots or androids, only one stands out in my mind: Data from Star Trek: The Next Generation. Data exhibited remarkably human behavior and, true to Asimov's conception, was a 'positronic' robot. He implicitly followed the Three Laws of Robotics, penned by Asimov many decades ago. These laws were as follows (I've sketched their priority ordering in a bit of code after the list):
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey any orders given to it by human beings, with the exception of orders conflicting with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
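To make that ordering concrete, here is a minimal, purely illustrative Python sketch. Every name in it (the Action class, permitted_by_three_laws, and its fields) is my own invention, not anything from Asimov or from real robotics software; the only point is that a lower-numbered law always overrides the ones below it.

```python
# Toy model of the Three Laws as a strict priority hierarchy.
# All names here are hypothetical, invented for this sketch.

from dataclasses import dataclass

@dataclass
class Action:
    description: str
    harms_human: bool = False          # would executing this harm a human?
    prevents_human_harm: bool = False  # would *not* executing it harm a human?
    ordered_by_human: bool = False     # was this ordered by a human?
    endangers_robot: bool = False      # would executing this destroy the robot?

def permitted_by_three_laws(action: Action) -> bool:
    # First Law: never harm a human, and never stand by while one is harmed.
    if action.harms_human:
        return False
    if action.prevents_human_harm:
        return True   # the First Law obliges action, overriding everything below
    # Second Law: obey human orders (First Law conflicts already excluded above).
    if action.ordered_by_human:
        return True
    # Third Law: self-preservation, subordinate to the first two laws.
    if action.endangers_robot:
        return False
    return True

# A human orders the robot into a furnace: the Third Law objection is
# overruled by the Second, so the robot obeys.
print(permitted_by_three_laws(
    Action("enter a furnace on orders", ordered_by_human=True, endangers_robot=True)))  # True
print(permitted_by_three_laws(Action("push a human", harms_human=True)))  # False
```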
Most people are familiar with these laws, but what many do not know is that, in Asimov's own words, they are analogous to the design principles behind any good tool. He postulated:
1. A tool must be safe to use.
2. A tool must perform its function efficiently unless this would harm the user.
3. A tool must remain intact during its use unless its destruction is required for its use or for safety.
In one of his short stories, "Evidence", Asimov explained the moral grounding behind these laws. He observed that human beings are generally expected to refrain from harming one another (except under extreme duress, such as war, or to save a greater number), which parallels a robot's First Law. Likewise, society expects individuals to obey instructions from recognized authorities such as doctors, which parallels the Second Law. Finally, humans are generally expected to avoid harming themselves, which parallels the Third Law.
In the past decade, technology has come a long way, and researchers are now actually drafting a Robot Ethics Charter. Such a document would be heavily influenced by the Three Laws, but there is enough criticism of them out there to make one think twice.
Modern experts tend to agree that the Laws make for a good story line but present practical problems. The First Law, they argue, is flawed because it states that a robot cannot 'through inaction, allow a human to come to harm': a robot has only finite knowledge, and if a hazard is missing from its data bank, the robot will fail to recognize it and so allow a human to come to harm. Further, when humans harm humans (as in wars or accidents), the laws would imply that robots must take charge of humanity in an effort to prevent it from harming itself.
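To see the finite-knowledge flaw in miniature, consider this hypothetical sketch (KNOWN_HAZARDS and is_harmful are invented names): the robot can only flag dangers it has been told about, so anything missing from its data bank is silently treated as safe.

```python
# Toy illustration of the finite-knowledge problem: harm detection is a
# lookup against a finite set of known hazards.

KNOWN_HAZARDS = {"fire", "fall", "electric shock"}

def is_harmful(situation: str) -> bool:
    # The robot can only recognize hazards it has been told about.
    return situation in KNOWN_HAZARDS

# A hazard missing from the data bank slips through, so the robot
# "through inaction" allows the human to come to harm.
print(is_harmful("fire"))            # True  -> robot intervenes
print(is_harmful("toxic gas leak"))  # False -> robot does nothing
```

No amount of rule-following fixes this case; the gap is in the robot's knowledge, not in its obedience to the Laws.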
The above criticisms aside, robots are a very real part of our future, and there are those who feel some ground rules should be set. In the July/August 2009 issue of IEEE Intelligent Systems, Robin Murphy and David D. Woods proposed "The Three Laws of Responsible Robotics":
1. A human may not deploy a robot without the human-robot work system meeting the highest legal and professional standards of safety and ethics.
2. A robot must respond to humans as appropriate for their roles.
3. A robot must be endowed with sufficient situated autonomy to protect its own existence as long as such protection provides smooth transfer of control, which does not conflict with the First and Second Laws.
The above three laws are, again, only suggestions. Whatever ground rules are set, they will have to be updated and improved constantly as technology progresses. Someday we may have creations that follow the idealistic Three Laws of Robotics, but for now we will have to contend with the practicalities of the philosophy and the limitations of our technology.