
2024-09-06

SCIENCE AND ENGINEERING

I, Robot: The Reality You Were Never Told

My name is Ruben Aguila, and robots have fascinated me since I was a kid. I grew up watching science fiction movies and dreaming about those metal helpers that promised to change the world. What I didn't know back then was that my life would become directly intertwined with robots, to the point of having a relationship with them that few would understand.

Let's talk about "I, Robot", one of Isaac Asimov's most iconic works. This book not only seeded many of the technological advances in robotics, but also established the famous Three Laws of Robotics:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.

These three laws are key: they are the ethical foundation upon which the entire universe of modern robotics has been built. And yes, I know, they seem simple. But the complexity of applying them in the real world is something I can only describe from experience.
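As a thought experiment, here is a minimal sketch of that priority ordering in Python. Everything in it is hypothetical, my own illustration rather than any real control code: each proposed action is checked against the laws in strict order, so a lower law can never override a higher one.

```python
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool           # would executing this action harm a human?
    inaction_harms_human: bool  # would *not* executing it harm a human?
    ordered_by_human: bool      # was it commanded by a human?
    endangers_robot: bool       # does it risk the robot's own existence?

def permitted(action: Action) -> bool:
    # First Law: never harm a human, and never allow harm through inaction.
    if action.harms_human:
        return False
    if action.inaction_harms_human:
        return True  # the First Law compels the action, overriding all else
    # Second Law: obey human orders (Law 1 is already known to be satisfied).
    if action.ordered_by_human:
        return True
    # Third Law: self-preservation, subordinate to the first two laws.
    return not action.endangers_robot
```

Even this toy version hints at the problem: the hard part is not the ordering, it is filling in those four booleans from messy sensor data and ambiguous context.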

My encounter with robots

It all started in the 1990s, when the concept of artificial intelligence was but a whisper in the halls of tech universities. As a robotics engineer, I had the opportunity to see the evolution from simple programs to complex machines capable of learning, adapting and, in some cases, even deceiving.

On one of my first projects, I worked with industrial robots. Those machines were beasts: pinpoint accuracy, and they could lift more weight than you can imagine. But here's the thing: they didn't think; they just did what we ordered them to do. They were tools, nothing more.

Over time, however, the narrative changed. Robots began to "think". Artificial intelligence was integrated into these systems, and I found myself in an ethical dilemma very similar to the one described by Asimov in I, Robot. They were no longer just obedient machines; now they had to make decisions in complex situations.

The dilemma of robotic decisions

I remember one of the first incidents I experienced firsthand: an assistance robot in a factory stopped in the middle of a critical operation. It was supposed to perform a specific task, but it detected a potentially dangerous situation for the humans around it. What did it do? It went into a conflicted state much like the one Isaac Asimov describes in his stories about robots that cannot reconcile two of the Three Laws. Task or safety? In the end, the machine chose to stop, which caused large economic losses that day but also prevented a serious accident.

This is where things get interesting. The laws are simple, but the context is not. What happens when a human's orders are unclear? Or when the risk is not imminent but merely potential?
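To make that distinction between imminent and potential risk concrete, here is one way the arbitration could be sketched. The thresholds and names are entirely hypothetical; a real controller would be far more involved:

```python
STOP_RISK = 0.7  # hypothetical threshold: halt immediately above this
WARN_RISK = 0.3  # hypothetical threshold: degrade gracefully above this

def arbitrate(estimated_human_risk: float) -> str:
    """Return a control decision for one cycle, given a risk estimate in [0, 1]."""
    if estimated_human_risk >= STOP_RISK:
        # Imminent danger: safety beats the task, whatever stopping costs.
        return "EMERGENCY_STOP"
    if estimated_human_risk >= WARN_RISK:
        # Potential but not imminent danger: slow down and alert a human
        # instead of choosing between the extremes of "continue" and "halt".
        return "SLOW_AND_ALERT"
    return "CONTINUE_TASK"
```

The middle band is the interesting part: it is one answer to risk that is potential rather than imminent, and it is exactly where unclear human orders cause the most trouble.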

Robots are not your friends, nor your enemies. They are tools

I know many people think robots will be our friends or our replacements. But after decades of working with them, I can tell you this: they are just tools. Very smart tools, yes, but their purpose is still to assist humans, not to replace us.

Now, the future... that's another story. Robots are becoming more and more advanced, and I have seen many of my colleagues fall into the trap of humanizing them. No, my friend: robots don't feel, and they don't think like you or me. But the real challenge is not the robot itself; it is how we humans interact with it.

In one of my most recent projects, I worked with autonomous robots in the logistics area. These robots were incredibly efficient, but there was something that worried me: what would happen if one of them had a glitch in its programming and decided it didn't want to stop at a human obstacle? We've already seen minor incidents, but the day something serious happens, we'll be forced to rewrite not only the programming, but our trust in machines.
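The usual defense against exactly that scenario is to keep the safety logic out of the task software entirely, so a bug in the planner cannot disable the stop. A minimal sketch, again with hypothetical names:

```python
def safety_filter(planner_command: str, human_detected: bool) -> str:
    """Independent veto layer between the task planner and the motors."""
    if human_detected:
        # The planner's output is discarded; even a buggy or misbehaving
        # planner cannot drive the robot through this branch.
        return "STOP"
    return planner_command
```

In practice this role typically falls to dedicated safety controllers and hardware interlocks rather than application code, but the principle is the same: trust is layered, not assumed.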

Reflections on the ethics of robotics

Isaac Asimov was ahead of his time. In his book, he talks about the ethical dilemmas we will face with artificial intelligence. And you know what? The future he envisioned is already here. It's no longer a science fiction topic; it's a real one.

Every day, in labs around the world, scientists and programmers like me try to find the best way to implement ethics in artificial intelligence. And believe me, it's not easy.

I've seen a robot refuse to follow orders because its programming flagged them as dangerous to humans. But I've also seen that same programming fail, causing accidents that could have been avoided.

Is it the robot's fault? No. It's our fault, the fault of humans.

The future: human and robot, hand in hand?

Technology is advancing by leaps and bounds. What seemed like science fiction a few years ago is now reality. Where will it all lead? I don't know for sure, but I can tell you this: robots will play an ever larger role in our society, and it will be up to us to ensure that the relationship is mutually beneficial.

If I have learned anything after decades of working with robots, it is that the real challenge is not to create intelligent machines, but to ensure that those machines work in our favor. And to achieve that, we need not only good engineers, but also good philosophers and ethicists. Because, at the end of the day, robots are a reflection of those of us who created them.
