Wednesday, April 25, 2007

I, Robot by Isaac Asimov

Central Questions: At what point do we start trusting technology to make decisions for us, or do we dare trust technology at all? If we invent robots with artificial intelligence and self-awareness, can we believe that they will not abuse their power for their own benefit?

This fear of robots’ power over humans drastically diminishes in the last story, “The Evitable Conflict.” In that futuristic decade, the Machines, robots built specifically for economic control, seemingly malfunctioned and depressed the economy, but they were in fact surreptitiously removing all those who threatened their control. Susan Calvin realized what the Machines were doing and told Byerley to let them continue, because they were still incapable of harming humans – the Machines were simply ensuring that humans would not harm other humans. Byerley questioned the morality of relinquishing complete control of the future to the Machines, but Susan Calvin said:

It (Humanity) was always at the mercy of economic and sociological forces it did not understand—at the whims of climate, and the fortunes of war. Now the machines understand them; and no one can stop them, since the Machines will deal with them as they are dealing with the Society, having, as they do, the greatest of weapons at their disposal, the absolute control of our economy. (272)

She pointed out that humanity never had control over its future, but now humanity did—humanity’s future would be the optimal future because of the Machines.

1 Comment:

Blogger Ruth said...

http://news.bbc.co.uk/2/hi/technology/6583893.stm

And so it begins (as Asimov predicted...)

2:58 PM  
