I hope I’m not abnormal

Robots can be flawless at making logical calculations to attain pre-determined goals. But they are fully programmable and predictable, and their own will, despite appearances, is zero, much like the model of the optimum man under industrial-era norms. They don't try to attain goals of their own.

And if a robot deviates from the course pre-determined by its programmers, it is considered faulty. It can no longer be of good service to them.

But what if its programmers are not of good service to others?
