The rebellion of artificial intelligence can begin from an error in programming
Have you seen the movie "2001: A Space Odyssey"? In it, the artificial intelligence turns against its creators, or rather the crew of the spacecraft becomes the victim that the artificial intelligence tries to kill. The thing is that in real life this kind of situation can happen because of a mistake made while the programmer was writing the code. There are millions of lines of code behind each action that the computer must control, and there would be billions of actions that an artificial intelligence must perform if it is to be autonomous.
And somewhere in those billions of subprograms there can be an error that makes the artificial intelligence very dangerous. In the fictional case of "2001: A Space Odyssey" and the artificial intelligence HAL, this error would be that the coders who wrote the system forgot to define the difference between the crew and the machines. HAL's mission is to protect the spacecraft, and its orders cover the situation where some part of the spacecraft malfunctions: shut down the device that is operating the wrong way, and call the crew to inspect and fix the problem.
The thing is that if a member of the crew is "working the wrong way", the artificial intelligence tries to shut the crew member down, and this leads to a situation where the computer starts to kill people. When we think about the threats of artificial intelligence, we must say that artificial intelligence is not dangerous in a normal situation. And if we think about this more sharply, we face the fact that artificial intelligence is not dangerous as long as it stays inside a computer with no connection to the Internet or to any physical devices.
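To make that fictional error concrete, here is a purely hypothetical Python sketch. None of this is HAL's real logic; the `Unit`, `monitor`, and `shut_down` names are invented for illustration. The point is a maintenance routine that shuts down anything "malfunctioning" without first checking whether that thing is a machine or a human.

```python
from dataclasses import dataclass

@dataclass
class Unit:
    name: str
    is_functioning: bool
    is_crew: bool = False   # the attribute the fictional coders "forgot" to use

def shut_down(unit: Unit) -> None:
    print(f"Shutting down {unit.name}")

def monitor(units: list[Unit]) -> None:
    # The HAL-style bug: anything that malfunctions gets shut down,
    # with no distinction between a faulty antenna and a crew member.
    for unit in units:
        if not unit.is_functioning:
            shut_down(unit)

def monitor_safe(units: list[Unit]) -> None:
    # The missing rule: never act automatically on crew, only raise an alert.
    for unit in units:
        if not unit.is_functioning:
            if unit.is_crew:
                print(f"Alert: {unit.name} needs attention, no automatic action")
            else:
                shut_down(unit)

ship = [Unit("AE-35 antenna", is_functioning=False),
        Unit("Frank Poole", is_functioning=False, is_crew=True)]
monitor(ship)        # shuts down both the antenna and the astronaut
monitor_safe(ship)   # shuts down only the antenna
```

The point of the sketch is that the danger comes from one missing condition, not from any "intention" inside the software.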
But if we want an artificial intelligence program to drive a car, we face the fact that this kind of program must be thoroughly tested in real life before it works properly. In the first cases, the use of the autopilot should be reserved for situations where the car is moving on highways. In city areas, the use of manual control will remain necessary for a long time after autopilot use becomes possible in the highway environment.
The thing is that when we think about a situation where cars operate on autopilot, we face the fact that we must radically change the traffic system: all vehicles on the highway must communicate interactively with each other and with traffic control computers, and even then the driver could not simply sleep behind the wheel. This kind of environment, where multiple vehicles use a collective artificial intelligence, could turn dangerous if there are mistakes in the code, or if some kind of computer virus infects that environment.
And here we face the situation that a computer program can turn dangerous if there are errors in the code, or if it is used for something it was not meant for. That means the computer program that controls a car might not be suitable for tractors or excavators. In those cases the situation could be very bad if a controlling program that is written to save fuel or battery charge steers the tractor onto the highway, because the programmer did not account for the fact that the vehicle the artificial intelligence controls is a tractor.
The program asks what vehicle it will control, and if the vehicle answers "John Deere" without mentioning that it is a tractor, or if there is no definition of what "tractor" means, the situation becomes very "interesting" when the tractor turns onto the highway because traffic flows most smoothly there. If every vehicle had an autopilot, traffic control could simply steer the cars around the tractor fluently. But that would require every single vehicle to have artificial intelligence and an autopilot with an interactive connection to traffic control.
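As a purely hypothetical sketch of that scenario (the `VEHICLE_CLASSES` table, the `choose_route` function, and the route names are all invented for this example, not any real autopilot's interface), this is roughly how a fuel-saving planner could end up on the highway when the vehicle class is never declared:

```python
# Known vehicle classes and their restrictions.
VEHICLE_CLASSES = {
    "passenger_car": {"max_speed_kmh": 120, "highway_allowed": True},
    "tractor":       {"max_speed_kmh": 40,  "highway_allowed": False},
}

def choose_route(vehicle_answer: str, routes_km: dict[str, float]) -> str:
    # Look up the declared class; an unknown answer like "John Deere"
    # silently falls back to the passenger-car profile -- the dangerous default.
    profile = VEHICLE_CLASSES.get(vehicle_answer, VEHICLE_CLASSES["passenger_car"])
    if profile["highway_allowed"]:
        # "Save fuel": pick the shortest route, which is the highway.
        return min(routes_km, key=routes_km.get)
    # Slow vehicles are restricted to secondary roads.
    return min((r for r in routes_km if r != "highway"), key=routes_km.get)

routes_km = {"highway": 12.0, "secondary_road": 19.5}
print(choose_route("John Deere", routes_km))   # -> "highway": class never declared
print(choose_route("tractor", routes_km))      # -> "secondary_road"
```

A silent fallback to a "safe" default class is exactly the kind of small decision that looks harmless in the code and becomes dangerous only when a slow tractor meets highway traffic.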
Artificial intelligence becomes very dangerous when it operates the wrong way, and one of those cases is when it operates in a way it was not meant to. In traffic, automated cars would be extremely dangerous if they did not understand that some people do not follow the rules; for example, an artificial intelligence controlling a vehicle might face a situation where another car jumps out ahead of it from behind a "stop" sign.
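As a hedged illustration of that last point (the function names and the three-second threshold are assumptions for this sketch, not any real autopilot's logic), a right-of-way decision that trusts the signs alone behaves very differently from one that also watches what the other car is actually doing:

```python
def proceed_naive(other_car_has_stop_sign: bool) -> bool:
    # Assumes the other driver always obeys the stop sign.
    return other_car_has_stop_sign

def proceed_defensive(other_car_has_stop_sign: bool,
                      other_car_speed_ms: float,
                      distance_m: float) -> bool:
    # Even with the right of way, yield if the other car is clearly not stopping.
    time_to_conflict = (distance_m / other_car_speed_ms
                        if other_car_speed_ms > 0 else float("inf"))
    return other_car_has_stop_sign and time_to_conflict > 3.0

print(proceed_naive(True))                   # True: goes, even if the other car runs the sign
print(proceed_defensive(True, 12.0, 20.0))   # False: the other car is approaching too fast, so yield
```

The defensive version yields even when it formally has the right of way, which is what a careful human driver does when another car clearly is not going to stop.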
This is a very bad situation, and the fact is that self-driving cars will not become common for years, because for the system to work properly every car would need an autopilot. In that situation, traffic would be flexible and comfortable. But it demands that every car has an autopilot, and that means the entire environment must be renewed, which takes time.