Skip to main content

The artificial intelligence would not have moral breaks for its actions.

The artificial intelligence would not have moral breaks for its actions.

The thing, what makes robots the ultimate tool for many things is that they don't have a consequence, and they ever doubt the orders, what they have got from their leaders. And this makes those things very dangerous. In fact, robots can be physical machines, but they can also be artificial intelligent programs, what is collecting shares from the stock market. The thing what we ever thought before is that those computer programs might have an algorithm, what tells that everything, what cannot be uncovered does not exist. When we are thinking this kind of computer programs, we might think that they can't harm human life, but there is an exception, what would make computer programs very dangerous.


The thing is the ability to collect data automatically from the different sources, and then connect that data to the operational profile, what the artificial intelligence could use to maximize the profits, what it would get to the owners of that thing. If there are no limits for the actions, what is allowed to that artificial intelligence, the results would be devastating. In this scenario that artificial intelligence makes mistakes and buys the shares of the company, that would operate ethnically wrong way. One of those companies is the mercenary or war business companies, what is working for international companies and non-democratic governments. In that case, we might think, that artificial intelligence notices that action, what is not acceptable.


But if the program has taught to make a profit with any means, the case that somebody would uncover that kind of investment would cause the situation that artificial intelligence would buy contract killers or even try to kill that person by hacking the cars, who uncovers the secret. The thing is that the computer program uses machine learning to create tactics against those people, who dangers its investments. The thing is that if there is not a moral level in artificial intelligence, that kind of programs can make things, what would normal people ever do.


And the reason for that is the artificial intelligence ever excuse the orders. The orders to this program would be to make a profit without caring results, and this kind of things might cause the situation, that the artificial intelligence would start to care only the profit, what it can get, and the computers have no moral. They don't care anything else than the missions, what they have got. And this makes them so effective weapons.

Comments

Popular posts from this blog

Could this be the thing, that some people don't want you to know about zombies?

Could this be the thing, that some people don't want you to know about zombies? The simplest way to control zombies, which are made by using tetrodotoxin, which is called "zombie poison" is to use the automatic dispenser, which is normally used in diabetes treatment. In this case, the dispenser would be loaded by using tetrodotoxin. If the level of this poison is right, the person would be totally under the control of other people. And if the system cannot detect this poison there is a possibility to make the laboratory experiment when the staff will follow the decrease of the tetrodotoxin, and then the system can use a simple clock, which will inject poison to the body of a victim after a certain time. This would be an effective tool in the hands of military and counter-terror operators. Those zombies can be captured enemy operators, who will send back, and then they just kill their ex-partners. By using genetically engineered bacteria could be possible to create t...

The story of (Northrop?) TR3A "Black Manta"

    Image I The story of (Northrop?) TR3A "Black Manta" Sometimes there have been discussions that the mysterious TR3A "Black Manta" was a study project for the B-2 bomber.  That means that TR3A was used as the flying mockup for testing the aerodynamic and control solutions for the B-2 "Spirit" bomber. The thing that supports this thing is that the manufacturer of both planes is Northrop corporation.  Along with "Aurora" was flying one other mysterious aircraft, TR3A "Black Manta". The system is described as the "Stealth VTOL" (Vertical Take-Off and Landing) aircraft. Unless the mysterious TR3B, the Northrop TR3A is the conventional VTOL-aircraft, what is using regular jet engines, but what has the stealth-body.  Image II The thing is that the stealth-reconnaissance aircraft, which uses a similar technology with the Northrop B-2 bomber would be an interesting part of the weapon arsenal. There is a possibility that the TR3A was...

The cyborg lichen can be one of the most exotic visions of what the high-tech civilization might look like.

.     The cyborg lichen can be one of the most exotic visions of what the high-tech civilization might look like. Above is the image of the nano-submarine. There is introduced an idea, that this kind of system would equip with living neurons, which makes it like some kind of artificial bug or mosquito. The idea is that the robot can take the nutrient by using the robot tube, which acts like a proboscis. The neurons can get nutrients and the rest of the machine can use small fuel cells or the energy to that system can deliver by using the radio waves. Futurologists are thinking about the ideas, what the hybridization of the neurons and machine would look like? Could the hyper-technical civilization look like the group of midget submarines? Those submarines might take nutrients for the neurons, what is living in them. The purpose of the midget submarines, which might be size less than a couple of millimeters would maximize the survivability of the neurons. One of the most intere...