
The rebellion of artificial intelligence can begin with an error in programming



Have you seen the movie "2001: A Space Odyssey"? In it, the artificial intelligence turns against its creators, or rather the crew of the spacecraft becomes the victim that the artificial intelligence tries to kill. The thing is that in real life this kind of situation could happen because of a mistake that the programmer made while writing the code. There are millions of lines of code behind every action that the computer must control, and an autonomous artificial intelligence would have to handle billions of such actions.

And somewhere in those billions of subprograms there can be an error that makes the artificial intelligence very dangerous. In the fictional case of "2001: A Space Odyssey" and the artificial intelligence HAL, the error would be that the coders who wrote the program forgot to define the difference between the crew and the machines. HAL's mission is to protect the spacecraft, and its orders for a situation where some part of the spacecraft operates the wrong way are simple: shut down the device that is malfunctioning and call the crew to inspect and fix the problem.
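A minimal sketch of that kind of missing definition, using hypothetical names and logic rather than anything from the film's fictional system: the fault handler shuts down whatever is reported as malfunctioning, because nobody told it that crew members must be treated differently from machines.

```python
from dataclasses import dataclass

@dataclass
class ShipComponent:
    name: str
    is_crew: bool = False  # the distinction the programmers "forgot" to use

def handle_fault(component: ShipComponent) -> str:
    """Shut down whatever operates the wrong way, then call the crew.

    Nothing here separates crew members from machines, so a "malfunctioning"
    crew member gets handled exactly like a broken antenna unit.
    """
    # A safe version would refuse to act on people, e.g.:
    # if component.is_crew:
    #     return f"only alert the crew about {component.name}, take no action"
    return f"shutting down {component.name} and calling the crew to fix it"

print(handle_fault(ShipComponent("antenna control unit")))
print(handle_fault(ShipComponent("crew member", is_crew=True)))  # same fate
```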

The problem is that if a member of the crew is "operating the wrong way", the artificial intelligence tries to shut the crew member down, and that is how the computer starts to kill people. When we think about the threats of artificial intelligence, we must say that artificial intelligence is not dangerous in a normal situation. And when we think about this more sharply, we face the fact that artificial intelligence is not dangerous as long as it stays in a computer that has no connection to the Internet or to any physical devices.

But if we want an artificial intelligence to drive a car, we face the fact that this kind of program must be properly tested in real life before it can be trusted to work. In the first phase, the use of the autopilot should be reserved for cases where the car is moving on highways. In cities, manual control will remain necessary for a long time after autopilot use becomes possible in the highway environment.

When we think about a situation where cars operate on autopilot, we face the fact that the traffic system must change radically: every vehicle on the highway must communicate interactively with the other vehicles and with the traffic-control computers, and even then the driver cannot simply sleep behind the wheel. This kind of environment, where multiple vehicles use a collective artificial intelligence, could turn dangerous if there are mistakes in the code, or if some kind of computer virus infects that environment.
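As a very rough sketch, that interactive connection could look like the exchange below. The message fields and thresholds are purely hypothetical; real vehicle-to-infrastructure protocols are far more complex than this.

```python
import json

def vehicle_status(vehicle_id: str, lane: int, speed_kmh: float) -> str:
    """A single status report from a vehicle to the traffic-control computer."""
    return json.dumps({"id": vehicle_id, "lane": lane, "speed_kmh": speed_kmh})

def traffic_control_advice(status_message: str) -> str:
    """Traffic control answers each report with an advisory for that vehicle.

    The whole scheme only works if every vehicle on the road speaks the same
    protocol; one silent or misbehaving vehicle breaks the collective picture.
    """
    status = json.loads(status_message)
    if status["speed_kmh"] > 100:
        return json.dumps({"id": status["id"], "advice": "reduce_speed", "target_kmh": 100})
    return json.dumps({"id": status["id"], "advice": "maintain"})

print(traffic_control_advice(vehicle_status("car-42", lane=1, speed_kmh=118)))
```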

And here we face the situation where a computer program turns dangerous if there are errors in its code, or if it is used for something it was not meant for. The program that controls a car might not be suitable for tractors or excavators. In those cases the situation could become very bad if a controlling program that was written to save fuel or batteries steers the tractor onto the highway, because the programmer never specified that the vehicle the artificial intelligence controls is a tractor.

The program asks what vehicle it will control, and if the vehicle answers "John Deere" without mentioning that it is a tractor, or if there is no definition of what a "tractor" means, the situation becomes very "interesting" when the program turns the tractor onto the highway because the traffic flows most smoothly there. If every vehicle had an autopilot, traffic control could fluently route the cars around the tractor. But that would require every single vehicle to have an artificial intelligence and an autopilot with an interactive connection to traffic control.
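A minimal sketch of how that could happen, with a hypothetical route planner rather than any real vehicle software: the planner has no definition of "tractor", so an unknown vehicle type falls back to the passenger-car profile.

```python
# Hypothetical route planner: the only vehicle types it defines.
MAX_SPEED_KMH = {"passenger_car": 120, "truck": 90}

def choose_route(vehicle_model: str, vehicle_type: str = "") -> str:
    """Pick the route for a vehicle, preferring the highway for 'fast' vehicles.

    If the vehicle only reports a model name like "John Deere" and its type is
    missing or unknown, the planner falls back to the passenger-car profile
    and sends a slow tractor onto the highway.
    """
    top_speed = MAX_SPEED_KMH.get(vehicle_type, MAX_SPEED_KMH["passenger_car"])
    return "highway" if top_speed >= 100 else "local roads"

print(choose_route("John Deere"))             # -> "highway", wrong and dangerous
print(choose_route("John Deere", "tractor"))  # still "highway": "tractor" was never defined
```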

Artificial intelligence is most dangerous when it operates the wrong way, and one of those cases is when it is used in a way it was never meant to operate. In traffic, automatic cars would be extremely dangerous if they did not understand that some people do not follow the rules. For example, an artificial intelligence that controls a vehicle may face a situation where another car jumps out from behind a "stop" sign.
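A small, purely illustrative sketch of the defensive check that is needed: the autopilot must not assume that a car waiting behind a stop sign will actually stop.

```python
def junction_speed(other_car_approaching: bool, cruise_kmh: float,
                   defensive: bool = True) -> float:
    """Speed to use when passing a junction where the other road has a stop sign.

    A naive autopilot keeps cruising because the other car is obliged to stop.
    A defensive one slows down whenever another car could still jump out.
    """
    if other_car_approaching and defensive:
        return min(cruise_kmh, 30.0)  # leave enough distance to brake
    return cruise_kmh

print(junction_speed(True, 80.0))                   # defensive -> 30.0
print(junction_speed(True, 80.0, defensive=False))  # naive -> 80.0, collision risk
```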

That is a very bad situation, and the truth is that self-driving cars will not become common for years, because for this system to work properly every car would need an autopilot. Only then would traffic be flexible and comfortable. But it demands that every car has an autopilot, which means that the entire environment must be renewed, and that takes time.

