Can artificial intelligence have a conscience?
If a robot "drinks" alcohol, it could start to act like a drunken person, provided it has chemical and other detectors that confirm that what it drinks is alcohol, and a module in its program that lets it behave as if intoxicated. But what would anybody do with a robot that emulates people who drink too much? The fact is that human-shaped robots could also be used in covert intelligence and law-enforcement operations, and if they can emulate human behavior perfectly, they can serve as body doubles in undercover intelligence work.
Can machine intelligence have a conscience? Is it possible for a machine to feel shame when it does something wrong? The fact is that an artificial intelligence could record the occasions when it pushes somebody or otherwise does something wrong. In such a register, the artificial intelligence would store a mark every time it failed to do what it should.
And if that is possible, the system could distinguish different levels of errors or mistakes, each causing a different kind of mark in the register. If the artificial intelligence made a mistake that risked a human life, that would trigger a report and a system shutdown. And here we come to one of the most troubling things about computer programs: the computer feels no shame for what it does, even when it makes horrible mistakes.
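The register described above could be sketched as follows. This is a minimal illustration, not a real robot API: the severity scale, the entry format, and the shutdown rule are all assumptions drawn from the paragraph.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import IntEnum

class Severity(IntEnum):
    """Assumed severity scale; the post only says 'different levels'."""
    MINOR = 1      # e.g. bumping into furniture
    MODERATE = 2   # e.g. pushing a person
    CRITICAL = 3   # a mistake that risks a human life

@dataclass
class MistakeRegister:
    """Hypothetical 'conscience register': every error is logged, and a
    CRITICAL entry forces a report and a system shutdown."""
    entries: list = field(default_factory=list)
    running: bool = True

    def record(self, description: str, severity: Severity) -> None:
        # Store a timestamped mark for the mistake.
        self.entries.append((datetime.now(timezone.utc), severity, description))
        if severity >= Severity.CRITICAL:
            self.report_and_shutdown(description)

    def report_and_shutdown(self, description: str) -> None:
        print(f"REPORT: critical mistake - {description}")
        self.running = False  # system shutdown

reg = MistakeRegister()
reg.record("bumped a chair", Severity.MINOR)
reg.record("blocked a fire exit", Severity.CRITICAL)
```

Note that nothing in this sketch resembles shame: the register is just data, which is exactly the point the post makes next.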
In this case, the system simply acts as its databases order it to. That means the robot can say "I'm sorry" and then continue exactly as before. The thing is that the robot does not realize what it says or what it does. When it touches or pulls somebody, the pressure on its sensors activates a certain part or table in the database, and the robot just says that it's so sorry.
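The mechanical apology could be as simple as a lookup table keyed by the sensor event. The event names, the pressure threshold, and the canned phrases below are invented for illustration; the point is that nothing in the code models understanding or remorse.

```python
# Hypothetical mapping from a contact event to a canned utterance.
RESPONSES = {
    "touch": "I'm sorry.",
    "pull": "I'm so sorry.",
}

def on_contact(sensor_event: str, pressure: float) -> str:
    """Return the canned apology for a contact event above an
    (assumed) pressure threshold; otherwise stay silent."""
    if pressure > 0.5 and sensor_event in RESPONSES:
        return RESPONSES[sensor_event]
    return ""

print(on_contact("pull", 0.9))  # the robot 'apologizes' mechanically
```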
But the fact is that the robot may otherwise act against that apology if other parts of its databases order it to move or do something else. And what if the master orders the artificial intelligence to commit a crime? The thing is that the robot should resist an order to take a gun and rob a bank. But in that case, the robot would need the willingness to resist. And that means the robot has a will of its own: if a robot can resist the orders of its master, that is will.
So should every robot sold to civilians have a base program in its microchip that forbids it from breaking the law? In that case, we must also accept that the robot should be able to defend its master. But how far can it go? If the mission of the machine is to do everyday jobs for its human master, it should not hand over the merchandise it carries to just anybody.
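The "base program" idea could be sketched as a hard-coded deny-list checked before any order is executed. The forbidden actions below are assumptions for illustration; a real legal filter would obviously need far more than keyword matching.

```python
# Hypothetical deny-list baked into the robot's base program.
FORBIDDEN = {"rob", "steal", "assault"}

def accept_order(order: str) -> bool:
    """Reject any order whose first word (the action) is on the
    deny-list; accept everything else."""
    action = order.split()[0].lower()
    return action not in FORBIDDEN

print(accept_order("carry groceries home"))  # True
print(accept_order("rob the corner bank"))   # False
```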
But what if somebody tries to simply take those things from the robot, or to harm the robot itself? How far is that kind of machine allowed to go in such cases? And what if the robot has a so-called shadow protocol? That would mean that when the robot sees armed people who are not recognized as authorized weapon carriers, it can attack them immediately, even if its visible job is just cutting the grass.
A shadow protocol means that robots might have skills and sensors their owners do not know about. One of the biggest differences between humans and robots is that a robot needs no training. The only thing required to upgrade a janitor robot into a combat robot is a USB stick carrying the new action module. If the robot has instructions to keep that module secret, it will keep it secret. The part of the programming that is hidden so that end users do not know of its existence is called the "shadow protocol".
They might have infrared systems and sonar whose existence the robot must not reveal until they are needed. The robot might have a telemetry link connecting it to official databases, and if it sees firearms in the hands of people who have not been allowed to carry them, it would begin a weapon-disarming process. And that kind of robot could cause a very big surprise.
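The telemetry check described above reduces to one lookup: is the armed person in the registry of licensed carriers? The registry, the identifiers, and the two response names below are all invented for this sketch.

```python
# Assumed registry of licensed weapon carriers, reachable via telemetry.
LICENSED_CARRIERS = {"officer-4412", "guard-0907"}

def on_firearm_detected(person_id: str) -> str:
    """Decide the robot's response when its vision flags a firearm:
    ignore an authorized carrier, otherwise begin disarming."""
    if person_id in LICENSED_CARRIERS:
        return "ignore"
    return "start_disarming"

print(on_firearm_detected("guard-0907"))   # ignore
print(on_firearm_detected("unknown-001"))  # start_disarming
```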
https://curiosityanddarkmatter.home.blog/2021/01/11/can-artificial-intelligence-have-a-conscience/