Getting machines to work alongside people requires an understanding of the ‘safety zones’ of the human body. A robot, as things stand, does not understand that hitting an eye has far worse consequences than hitting an arm.

It only knows that it should not make contact at all; it cannot analyse consequences. It does not make ethical decisions, only probabilistic ones. This is precisely where the work of ‘cobot’ developers is focused right now.

The challenge is to free collaborative robots from the enclosed spaces in which they usually operate: to take them out of their virtual cages and let them interact freely while fully respecting the safety and integrity of the people around them, and to do so by understanding the repercussions of every action and every mistake.
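To make the idea of severity-aware safety zones more concrete, here is a minimal sketch of how a controller could weight body regions by the severity of potential contact and slow the robot down accordingly. This is not any vendor's actual API; the region names, severity weights, distances and speed formula are illustrative assumptions, loosely in the spirit of the per-region biomechanical limits defined in collaborative-robot standards such as ISO/TS 15066.

```python
from dataclasses import dataclass

# Hypothetical severity weights per body region (1.0 = most severe).
# Real standards define per-region force and pressure limits; these
# numbers are purely illustrative.
BODY_REGION_SEVERITY = {
    "eye": 1.0,
    "skull": 0.95,
    "chest": 0.7,
    "hand": 0.5,
    "forearm": 0.4,
    "upper_arm": 0.35,
}

@dataclass
class ProximityReading:
    region: str        # body region closest to the robot's tool
    distance_m: float  # separation distance in metres

def allowed_speed(reading: ProximityReading,
                  nominal_speed: float = 1.5,
                  stop_distance_m: float = 0.05) -> float:
    """Scale the robot's speed by how severe contact with the nearest
    body region would be: the more severe the region and the closer it
    is, the slower the robot is allowed to move."""
    severity = BODY_REGION_SEVERITY.get(reading.region, 1.0)  # unknown region: assume worst case
    if reading.distance_m <= stop_distance_m:
        return 0.0  # too close: stop regardless of region
    # Linear slowdown with distance, further reduced by severity.
    distance_factor = min(reading.distance_m / 1.0, 1.0)
    return nominal_speed * distance_factor * (1.0 - 0.8 * severity)

# A hand 30 cm away allows faster motion than an eye at the same distance.
print(allowed_speed(ProximityReading("hand", 0.30)))  # ~0.27 m/s
print(allowed_speed(ProximityReading("eye", 0.30)))   # ~0.09 m/s
```

In practice, collaborative-robot standards frame this as speed-and-separation monitoring or power-and-force limiting rather than a single severity weight, but the underlying idea is the same: not all contact is equal, and the controller must know which regions matter most.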

Robots helping human beings achieve their objectives, and humans driving actions so that robots can do the same. It is about accepting that we can be technologically more human if we understand the value of dividing tasks differently and augmenting each other's skills.
