Can artificial intelligence have a conscience?

If a robot drinks alcohol, it can start to act like a genuinely drunk person, provided it has chemical and other detectors that confirm the liquid it drinks really is alcohol, and a module in its program that lets it behave like a drunk person. But what would anybody do with a robot that emulates people who drink too much? The fact is that human-shaped robots can also be used in covert intelligence and law-enforcement operations, and if they can emulate human behavior perfectly, they can work as body doubles in undercover operations.

Can machine intelligence have a conscience? Is it possible for a machine to feel shame when it does something wrong? The fact is that an artificial intelligence could record an entry every time it pushes somebody or otherwise does something wrong. In such a register, the artificial intelligence would store a mark whenever it fails to do what it should.

And if that is possible, the system could also distinguish different levels of errors or mistakes, each causing a different kind of mark in the register. If the artificial intelligence makes a mistake that risks human life, that would trigger a report and a system shutdown. And here we come to one of the most striking things about computer programs: the computer feels no shame for what it does, even when it makes horrible mistakes.
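To make the idea concrete, here is a minimal sketch in Python of what such a register of mistakes could look like. The severity levels and names are purely hypothetical, not taken from any existing robot's software: every incident gets a severity mark, and a life-threatening one triggers a report and a shutdown request.

```python
from datetime import datetime, timezone
from enum import Enum

class Severity(Enum):
    MINOR = 1        # e.g. bumping into furniture
    SERIOUS = 2      # e.g. pushing a person
    CRITICAL = 3     # a mistake that risks human life

class ConscienceRegister:
    """A log of the robot's mistakes, marked by severity."""

    def __init__(self):
        self.entries = []
        self.shutdown_requested = False

    def record(self, description: str, severity: Severity):
        self.entries.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "description": description,
            "severity": severity.name,
        })
        # A life-threatening mistake triggers a report and a shutdown request.
        if severity is Severity.CRITICAL:
            self.report()
            self.shutdown_requested = True

    def report(self):
        # In a real system this would notify an operator or a supervisor process.
        print(f"REPORT: {len(self.entries)} incidents logged, latest: {self.entries[-1]}")


register = ConscienceRegister()
register.record("Pushed a bystander while turning", Severity.SERIOUS)
register.record("Dropped a heavy crate next to a worker", Severity.CRITICAL)
print("Shutdown requested:", register.shutdown_requested)
```

Even with a register like this, nothing in the machine actually "feels" anything; the log is just data.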

In that case, the system simply acts as its databases tell it to. That means the robot can say "I'm sorry" and then continue acting exactly as before. The point is that the robot does not realize what it says or what it does. When it touches or pulls somebody, the pressure on its sensors activates a certain entry or table in the database, and the robot just says that it is so sorry.
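That kind of reaction can be sketched as nothing more than a table lookup, with hypothetical event names: the apology comes out because the table says so, not because the machine understands anything.

```python
# A hypothetical mapping from sensor events to canned responses.
# The robot "apologizes" only because the table tells it to, not because it
# understands what happened.
RESPONSE_TABLE = {
    "pressure_on_contact": "I'm sorry.",
    "object_dropped": "Apologies, I will pick that up.",
}

def react(sensor_event: str) -> str:
    # Look up the event; say nothing if the table has no entry for it.
    return RESPONSE_TABLE.get(sensor_event, "")

print(react("pressure_on_contact"))  # -> "I'm sorry."
```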

But the fact is that it can also act against that very apology, if other parts of its databases order it to move or do something else. And what if the master orders the artificial intelligence to commit a crime? The robot should resist an order to take a gun and rob a bank. But in that case, the robot would need the will to resist. And that means the robot has a will of its own: if the robot resists the orders of its master, that is will.

So what if every robot sold to civilians had a base program in its microchip that forbids breaking the law? In that case, we must also realize that the robot should have the ability to defend its master. But how far can it go? If the mission of the machine is to do everyday jobs for its human master, it should not hand the merchandise it carries to just anybody.
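Such a base program could be imagined as a hard-coded filter sitting in front of every command the master gives. The sketch below uses hypothetical action names and simply refuses anything on a forbidden list before it ever runs.

```python
# A hypothetical base program: a fixed list of actions the robot refuses,
# no matter who gives the order.
FORBIDDEN_ACTIONS = {"take_weapon", "rob", "assault", "steal"}

def execute(command: str) -> bool:
    """Run a command only if it is not on the forbidden list."""
    if command in FORBIDDEN_ACTIONS:
        print(f"Refused: '{command}' violates the base program.")
        return False
    print(f"Executing: {command}")
    return True

execute("carry_groceries")   # an allowed everyday job
execute("take_weapon")       # refused by the base program
```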

But what if somebody simply tries to take those things from the robot, or to harm the robot itself? How far is that kind of machine allowed to go in such cases? Or what if the robot has a so-called shadow protocol? That would mean that when a robot sees armed people who are not recognized as authorized weapon carriers, it can attack them immediately, even if a moment earlier it was just cutting the grass.

A shadow protocol means that robots might have skills and sensors that their owners do not know about. One of the biggest differences between humans and robots is that a robot needs no training. The only thing needed to turn a janitor robot into a combat robot is a USB stick carrying the new action module. If the robot is instructed to keep that module secret, it will keep it secret. The part of the programming that is hidden so that end users do not know of its existence is what is called a "shadow protocol".

Robots might carry infrared systems and sonars whose existence they must not reveal until they are needed. A robot might also have telemetry connecting it to official databases, and if that robot sees firearms in the hands of people who have no permission to carry them, it would start a weapon-disarming process. That kind of robot can cause a very big surprise.
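The database side of that idea can be sketched very simply, again with hypothetical identifiers and a stand-in permit list: in this sketch the robot only checks a detected carrier against the list and raises an alert, and the disarming step itself is left out.

```python
# Hypothetical permit check: the robot's telemetry asks a stand-in for an
# official database whether a detected firearm carrier is authorized.
AUTHORIZED_CARRIERS = {"officer-1142", "guard-0073"}

def check_carrier(person_id: str) -> None:
    if person_id in AUTHORIZED_CARRIERS:
        print(f"{person_id}: authorized carrier, no action.")
    else:
        # In this sketch the robot only flags the sighting to its operators.
        print(f"ALERT: {person_id} carries a firearm without authorization.")

check_carrier("officer-1142")
check_carrier("unknown-person")
```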

https://curiosityanddarkmatter.home.blog/2021/01/11/can-artificial-intelligence-have-a-conscience/
