Monday, January 31, 2022

Dimensions, bosons, and Philosopher's stones

The 4D equivalent of a cube is known as a tesseract, seen rotating here in four-dimensional space, yet projected into two dimensions for display. (Wikipedia, Four-dimensional space)


What does dimension mean?


The fourth dimension is a theoretical thing that cannot be modeled in a 3D universe. The point is that when the energy level of a particle starts to grow, it gains attributes that it did not have before. We could say that the Higgs boson is "only" an extremely high-energy electron.

Some researchers believe that the Higgs boson is at least close to the particle that comes next after the photon and the other gauge bosons. The photon is a gauge boson, while the Higgs boson is a scalar boson. What makes the Higgs boson remarkable is that it is the first in the line of scalar bosons.

We can think of a dimension as the space or room where particles and wave movement have a certain energy level. When the energy level rises high enough, the particle slips into another dimension. Brane theory explains that as the distance between the energy levels of the observer and the particle.

When the difference between the energy levels of the particle and the observer grows, it becomes harder to see the particle. So the particle drifts away from the visible area.

Conversely, if the energy of a particle flows away, it turns flatter and flatter until it reaches a 2D form. So 4D material would be particles oscillating at a frequency that is invisible to us.

If the difference in oscillation frequency between 3D material and higher- or lower-energy material is large enough, the interaction between those materials gets weaker whenever the distance between their energy levels increases.



Modern Philosopher's stones


WARP bubbles and Higgs bosons are the modern Philosopher's stones. They could be used to adjust the energy level or the size of quarks. Turning material dark is theoretically a very easy thing: the energy must just be pulled out of the quarks.

That should make the material invisible. And when the material is wanted back in its original form, that can happen by pumping energy back into the quarks.

One thing that can turn material invisible is the WARP bubble. When a WARP bubble surrounds particles that are not moving, the energy starts to travel away from the particles. So a particle first expands, and then it turns flat.

The idea of dark matter is that its quarks have a different size than those of visible material. Theoretically, adjusting the size of quarks is quite easy. A WARP bubble is enough, and the lifetime of that bubble determines how much energy flows away from the material.

Higgs material is a theoretical form of matter. It is simply material that has been turned into a different size. The tool for that could be a flat or low-energy Higgs boson that pulls a small part of the material's energy into itself.

When the WARP bubble is removed, the quantum fields press the particle flat or to a different size than other particles. Another theoretical thing that can turn quarks flat is the Higgs boson. When a Higgs boson turns flat, it forms something like a small quantum cup that pulls energy into itself.

If that flat Higgs boson is brought near material or quarks, energy travels into that quantum cup. That turns the material into "Higgs material". The quarks in Higgs material have a different size than the quarks of visible material, and that makes Higgs material invisible to us. Theoretically, returning Higgs material to visible form is easy: the lost energy must just be pushed back into the quarks.


Image 1) https://en.wikipedia.org/wiki/Four-dimensional_space

Image 2) https://en.wikipedia.org/wiki/Particle_physics


https://thoughtsaboutsuperpositions.blogspot.com/

What are cognitive learning systems?

   

Image: Pinterest


Cognitive systems are systems that handle problems like humans do. They are learning systems that increase their capacity all the time. When such a system faces a problem for the first time, it asks a human operator to make the solution. Then the system creates a matrix of that solution.

When the system faces a similar case, it can use the solution that the operator made, automatically: when its sensors meet parameters similar to those connected with the stored solution, it applies the solution made for the previous case. The thing that makes cognitive systems autonomous is the number of solutions stored in their databases. The idea is that when a sensor faces a situation similar to one stored in a database, the system can autonomously connect that database to the AI.

The data that the system uses can be, for example, a film connected to the database. This is one way to make the system learn things, and the idea is similar to training humans. When the system faces a problem for the first time it asks for help, but as the number of stored solutions increases, the system becomes more independent.
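As a minimal sketch of this ask-store-reuse loop (the class, the similarity rule, the threshold, and the operator stub are illustrative assumptions, not taken from any specific product), the logic could look roughly like this:

```python
# Minimal sketch of the ask-store-reuse loop described above.
# The similarity rule, threshold, and operator stub are assumptions.

def similarity(a: dict, b: dict) -> float:
    """Fraction of sensor parameters that match between two situations."""
    keys = set(a) | set(b)
    return sum(a.get(k) == b.get(k) for k in keys) / len(keys) if keys else 0.0

def ask_operator(situation: dict) -> str:
    """Stand-in for asking a human operator what to do."""
    return f"operator-defined response for {sorted(situation)}"

class CognitiveSystem:
    def __init__(self, threshold: float = 0.8):
        self.cases = []              # stored (situation, solution) pairs
        self.threshold = threshold   # how similar a case must be to reuse it

    def handle(self, situation: dict) -> str:
        # 1) Look for a stored case similar to the current sensor readings.
        best = max(self.cases, key=lambda c: similarity(c[0], situation), default=None)
        if best and similarity(best[0], situation) >= self.threshold:
            return best[1]           # reuse the earlier solution automatically
        # 2) Unknown situation: ask the operator and store the new case.
        solution = ask_operator(situation)
        self.cases.append((situation, solution))
        return solution

system = CognitiveSystem()
system.handle({"door": "closed", "alarm": True})   # first time: asks the operator
system.handle({"door": "closed", "alarm": True})   # later: answers on its own
```

The more cases the operator has answered, the more often handle() succeeds without asking, which is the growing independence described above.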

The problem is where the system can collect data. AI can use machine learning for many purposes, and robotics is the most impressive area where AI can be effective. But there are two types of robots: a robot can be physical, or it can be an algorithm, some kind of bot that collects virtual data from the Internet. Such an algorithm can follow a homepage and then transmit the data to the user.

A good example is an application that follows certain aircraft or air routes by using public databases. That kind of bot can keep a record of how often a certain flight is late, and it can also follow many airfields to see whether certain airlines are late more often than others.
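A flight-delay bot of that kind could be sketched as below; fetch_flight_status() is a hypothetical placeholder, since no real public database or API is named here:

```python
# Sketch of a flight-delay bookkeeping bot. fetch_flight_status() is a
# placeholder for whatever public flight database would actually be used.
from collections import defaultdict

def fetch_flight_status(flight_number: str) -> dict:
    """Hypothetical lookup; a real bot would call a public flight-data source."""
    return {"flight": flight_number, "delay_minutes": 12}

class DelayTracker:
    def __init__(self):
        self.records = defaultdict(list)   # flight number -> list of delays

    def observe(self, flight_number: str) -> None:
        status = fetch_flight_status(flight_number)
        self.records[flight_number].append(status["delay_minutes"])

    def late_share(self, flight_number: str, limit: int = 15) -> float:
        """How often the flight has been later than `limit` minutes."""
        delays = self.records[flight_number]
        return sum(d > limit for d in delays) / len(delays) if delays else 0.0

tracker = DelayTracker()
tracker.observe("AY1331")
print(tracker.late_share("AY1331"))
```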

AI can also collect data from internal systems. It can follow cases like sick leave and then search for things that predict sick leaves, such as Christmas parties or other events. The thing that makes AI a very powerful tool is that it can search data from multiple systems.

It can search emails and connect information from surveillance cameras to that data. Image recognition systems make AI more powerful than ever before. The system can see whether there are changes in a person's walking style or other body language, and it can report those changes to security personnel.


https://thoughtandmachines.blogspot.com/

Sunday, January 30, 2022

Russia transfers "Iskander" tactical ballistic missiles near the border of Ukraine.




The situation at the Ukrainian border is turning hotter. Russian highly mobile SRBMs (Short-Range Ballistic Missiles) have been transported near the Ukrainian border. Those highly mobile battlefield systems are extremely powerful, and they can be moved to the operational area fast. The missiles are 9K720 Iskanders (NATO reporting name SS-26 "Stone"). There is one thing that makes those tactical missiles very "interesting".

They are hard to find but "easy to destroy". One thing that could be effective against those systems is a "Predator" drone that patrols in the sky and then destroys the missile launchers. But that is extremely difficult to do in the practical world. There may be infrared and radiation suppressors on the transporter vehicles, so they are not easy to find. And they are supported by anti-aircraft and jet-fighter combinations. That means those "Iskander" missiles are hard to destroy.

Those mobile launchers can be brought to the operational area by airplane, and the launcher trucks may have a parachute-delivery capability. That means large strategic cargo planes can drop those missiles into a combat zone where they can wipe enemy forces away in seconds. There is also the possibility that those tactical missiles have some kind of "police role": they can be used to destroy rebellious troops as well.

There are many types of warheads for those missiles. Nuclear, bunker-busting, and cluster warheads, along with HE and thermobaric options, make these advanced and deadly missiles capable and respected systems. In nuclear strategy, tactical missiles are supported by aircraft and more powerful strategic missiles, so if the tactical systems fail, the next step is to use full-scale thermonuclear devices.

Why have those highly mobile nuclear-capable missiles never faced criticism? Why are they more acceptable than some "Predator" drones? Missiles are the most powerful weapons, and there is always a risk that battlefield missile systems will be stolen.

Predator drones have faced criticism, but those tactical missile systems have not caused any kind of discussion. Tactical nuclear-capable missiles might seem like battlefield-only weapons, but the fact is that nuclear warheads make them very capable and feared systems.


https://en.wikipedia.org/wiki/9K720_Iskander

https://en.wikipedia.org/wiki/General_Atomics_MQ-1_Predator

https://en.wikipedia.org/wiki/General_Atomics_MQ-9_Reaper



Sunday, January 23, 2022

Maybe the human brain is the key to making a more energy-friendly and powerful quantum computer.



Researchers at Riken are testing free-energy-based theories for creating models of self-learning neural networks. There is a link to that article below this text. Energy-efficient quantum computers are the key to self-operating robots.

But when we want to make an effective self-learning quantum network, we can connect energy to the data that is driven through the system. Another thing that would make those neural networks effective is that every single part of the intelligent neural network would itself be intelligent.

The parts could share excess energy with other parts of the neural network, which decreases the need for electricity. That can make those quantum computers more energy-friendly, and it makes it possible to build smaller quantum computers.

A neural network consists of many small components that act as a whole, and the most well-known neural system is the human brain. One neuron might not be very impressive, but tens of billions of neurons turn that neural system into the most powerful quantum computer in the world.


https://scitechdaily.com/the-free-energy-principle-explains-the-brain-optimizing-neural-networks-for-efficiency/


Image: https://scitechdaily.com/the-free-energy-principle-explains-the-brain-optimizing-neural-networks-for-efficiency/


The principle of AI is that it must do only the things that programmers made it to do on purpose.


The principle of AI developers is that they don't want to make any kind of rebellious robots. Artificial intelligence must do what programmers want, without any nasty surprises.

The AI must not have every power and ability that is possible to build; those systems must have only the wanted abilities. In the same way, nuclear weapons must not deliver the most powerful blast they could create; a nuclear weapon must deliver the wanted and needed power. The thing that makes AIs safe is that they operate in limited areas. But self-learning platforms can make those systems multipurpose in the same way, able to operate across as large a sector as humans.

The same idea applies to the world of AI: AI platforms must do only the things that are wanted. The possibility of making robots that can visit shops for people and paint their houses can seem very nice. But those robots must have limits. There must be a system that allows a robot to refuse and make a report if somebody orders it to do something illegal.


The robot is not the same as the AI.


The AI is a computer program or algorithm that can do many things. The robot is the platform that performs physical operations under the control of the AI. So the robot body itself would not be independent; that physical system can cooperate with supercomputers by using WLAN systems.

When developers create an AI, they are creating a system that can do the things its controller wants. They define the segment of things the AI must do. There is no AI that can do all possible things.

Modular AI can do many things. In modular AI, many separate algorithms can operate independently. There might be different algorithms reserved for military, law enforcement, and civilian work. So the same robot body can handle all the missions that humans can, and the only limit is which skill segment it is allowed to use.

The fact is that modular AI lets robots operate in many areas. One example is the robot servant. There are certain rooms in a house, and there is a series of actions, a certain module, for every room. When the robot moves to the living room, it downloads the living-room module, which contains the actions meant for that room.

When the robot moves to the kitchen, there might be an electronic calendar. When the robot's owner stores some task in the calendar, that mission is transferred to the AI that controls the robot body; the owner might, for example, give the order to prepare a certain food.

First, the robot or its controlling AI can search the Internet for the recipe. The robot would then check the freezer for the needed raw materials, and if some raw material is missing, the robot visits the shop. Putting the required algorithms into modules prevents the AI's code from growing too large.
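A rough sketch of that per-room module idea (the room names, actions, and class are invented for illustration) could look like this:

```python
# Sketch of the per-room module idea: the robot loads only the action
# module for the room it is in. Room names and actions are made up.

ROOM_MODULES = {
    "living_room": ["vacuum_floor", "water_plants"],
    "kitchen":     ["read_calendar", "check_freezer", "cook"],
}

class ServantRobot:
    def __init__(self):
        self.active_module = []

    def enter_room(self, room: str) -> None:
        # Download only the module for this room, keeping the loaded
        # code small instead of carrying every skill all the time.
        self.active_module = ROOM_MODULES.get(room, [])

    def can_do(self, action: str) -> bool:
        return action in self.active_module

robot = ServantRobot()
robot.enter_room("kitchen")
print(robot.can_do("cook"))          # True
print(robot.can_do("vacuum_floor"))  # False until it enters the living room
```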


https://thoughtandmachines.blogspot.com/

New technology makes it possible to create multirole systems that can be powerful tools in both civilian and military work.




Image 1


Things like flying submarines have become reality, at least in small-size systems. Modern quadcopters can operate both underwater and in the air. A quadcopter can have laser sensors at the ends of its engine stems; those sensors let the quadcopter stay away from walls. Such quadcopters could even dive into the Mariana Trench.

The technology is based on the idea that the ball where the instruments sit can be made by using carbonite glass. That carbonite layer can be steamed onto the outer layer of a carbon-fiber or titanium ball. The hatch must open outward, because water pushes the hatch inward.



Image 2


That is so-called artificial diamond. The engine stems can be connected to the ball by using magnets, and the control must happen through a wireless system. That makes it possible to avoid breaking through the shell of the ball, and because there are no holes or scratches where water can flow in at extreme pressure, that quadcopter can dive deeper than regular systems.

Small legged robots can travel over hard terrain. They can also be delivered to the area by quadcopters. A robot can have infrared cameras and chemical sniffers, and it can search for toxic waste. If it has a Geiger sensor, it is able to find radioactive material. Those robots can go into caves and near volcanic craters, and they can also be used as theodolites for mapping areas, especially caves.




Image 3) 


Artificial intelligence-controlled weapons are coming.


A walking robot that cooperates with other systems can be used as a recon tool. Those robots can also be used to put detonators on enemy vehicles, or they can carry internal explosives or small antitank weapons.

When we think about things like smart rifles and so-called intelligent bullets, there are two ways to make them.

The first is a semi-intelligent system where the bullet has a recognition system, such as an RFID-based radar receiver. The intelligent aiming system takes an image of the target, the rifle follows the flight of the bullet, and the system puts a mark on the scope's screen showing how the shooter should turn the gun. Alternatively, the guidance system can be inside the bullet.

The bullet can have internal guidance electronics and a laser seeker, with small wings controlling its flight and turning it toward the target. But the power of microchips keeps increasing and their price keeps falling. That means the intelligent bullet may switch to infrared or image homing very soon, which would make those bullets fire-and-forget systems.

The camera simply takes an image of the target, and the system downloads it to the computer that is in the bullet. Then the system just launches the weapon. The intelligent rifle might know the distance to the target and fire the bullet when the range is optimal.


Image 1) https://spectrum.ieee.org/legged-robots-anymal


Image 2) https://oscarliang.com/flying-quadcopter-under-water/


Image 3) Pinterest



Thursday, January 20, 2022

The problem in innovation meetings is how to make people share their knowledge.



In innovation meetings, two interests are often set against each other. The personal interest of an individual worker might be to hide ideas, while the benefit of the company is that the person shares the ideas that can turn into innovations.

One thing about teamwork is that if people stay in the same workplace for their entire career, that sounds fine. When people enjoy their work, they know their work and all its stages very well. But there is one negative side to that way of working: if people never change their workplace, it becomes a problem for the company to get new ideas.

If people have ideas but don't want to share them, those ideas are never even known. Sharing ideas plays a key role in innovation meetings. Silent knowledge is an interesting thing, but people should let others know about their knowledge.

There might be many wise people on Earth, but if they never share their thoughts and ideas, and that knowledge remains silent, it does not benefit the workplace or the company. The problem with silent knowledge is that it is capital for a person, and people do not want to share their skills with people they see as competitors.

Personal skills and knowledge bring value to a person, and those things matter when the person wants things like promotions. That limits the will to share knowledge which might have great value for the company.

When a company looks for new ideas in innovation meetings, it needs visible knowledge. The term "visible knowledge" means ideas and innovations that a company and its leaders can benefit from while making products. Silent knowledge does them no good. Innovation meetings are meant for getting information on how to make a company's products and services better and more addictive.

The problem with fixed-period workers is that they might not have motivation for sharing their ideas. A new employee does not necessarily bring ideas to the company. Sharing ideas requires that the person wants to share them, but also that the workgroup will receive those ideas. If those people are not listening, that does not motivate anyone to participate in the innovation process. And innovation plays a key role in the productization process: every company requires new products and concepts so that it remains competitive.

Otherwise there are no new ideas in the workplace. But the problem is that fixed-term employees may not want to bring their full know-how to the working team. People who work for fixed periods feel like outsiders, and employees with permanent employment relationships might feel that fixed-term workers are competitors.

The natural question is: why would a fixed-term employee want to bring their full know-how to the company? How do you motivate those people to share their ideas with the workgroup? If there is no way to get a permanent working relationship with the company, the person will not want to do anything extra for the team. And there is the possibility that the team would not listen to those opinions anyway.

Tuesday, January 18, 2022

The new biological and chemical weapons are more stealthy than ever before





A virus that causes multiple sclerosis has been found by the U.S. military. The thing is that these kinds of viruses can be used for creating brand new, non-lethal bioweapons. Those biological weapons can make an enemy unable to fight, and those weaponized microbes can also be used as control tools.

Dangerous prisoners, for example, could be kept in order by infecting them with viruses that cause something like multiple sclerosis or an allergy. If a person's skin is itching all the time, it is hard to concentrate on things like following commands.

The new nerve agents would not kill a victim instantly. The origin of those chemicals is the Soviet-era chemical weapon called Kolokol-1. That opioid is lethal if there is no counter-agent. The thing that makes Kolokol-1 or some of its derivatives lethal is that they prevent the neuron from resetting itself; the neuron keeps sending signals until it dies.

Normally, a nerve agent just blocks the action of one enzyme, the enzyme that breaks down neurotransmitters. If that enzyme is not operating, the electricity of the neurons causes neuromuscular cramps that kill a person in about 10-30 seconds.

The reason why the Novichok agent is so lethal is that there could be a transporter enzyme that takes the chemical, like VX, straight to the heart nerves, causing a heart attack in seconds. Novichok is packed in crystals, and those crystals prevent sensors from detecting the chemical agent. When a sound impulse or some acid breaks those crystals, they release the nerve agent.

There is also "super cannabis". That nanotechnology-based chemical consists of two carbon pyramids. A transporter chemical moves those carbon bits into the synapses between neurons, and then those neurons are not able to shut down their electrical activity.

But there is also the possibility of a nanorobot version of that opioid. One version would be a nanotechnical molecule that can enter the nervous system and then act as a radio antenna. When it is stimulated by electromagnetic radiation, it stimulates neurons, and the electric signals that stress the neuron finally cause death.

A genetically manipulated amoeba or some bacterium could transport those nanomachines into the human brain. If those amoebas are microchip-controlled, the nanomachines can be delivered to precisely the wanted place. The electric shocks could keep the person awake until their brain runs out of neurotransmitters. Systems based on microchips or nanomachines do not set off detectors.



https://phys.org/news/2022-01-molecular-device-cells-bioelectric-fields.html


https://scitechdaily.com/u-s-military-evidence-that-epstein-barr-virus-causes-multiple-sclerosis/


https://en.wikipedia.org/wiki/Kolokol-1


https://en.wikipedia.org/wiki/Novichok_agent


https://en.wikipedia.org/wiki/VX_(nerve_agent)


Friday, January 7, 2022

Hacking the brains is needed for advanced robotics.

 




Hacking the brain and creating synthetic memories is one of the biggest conspiracy theories in the world. The thing is that these things are under research, and the purpose of the technology, where brains are hacked or cracked, is to fix brain damage. Hacking the brain is theoretically very simple: researchers must only record the EEG from the brain and then connect that EEG signal to certain images. That makes it possible to read dreams and affect them.

The person would look at images of things like animals or humans while the EEG is recorded. Then the person watches a longer film, and the EEG recorded during that process is compared with the EEG records taken while the person looked at the images. The only problem is if those EEG curves are unique to each person. But if the EEG curves are similar, there is the possibility to download things like dreams onto a computer screen.
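As a grossly simplified sketch of that compare-and-label step (real EEG decoding needs far more preprocessing and machine learning than this; the data here is random and purely illustrative), labeled EEG templates could be matched against segments of a later recording by correlation:

```python
# Toy sketch: labeled EEG templates (recorded while the person looked at
# known images) are compared against a later recording segment, and the
# best-correlating label is reported. Purely illustrative.
import numpy as np

def best_label(segment: np.ndarray, templates: dict) -> str:
    """Return the label whose template correlates best with the segment."""
    scores = {label: np.corrcoef(segment, tmpl)[0, 1]
              for label, tmpl in templates.items()}
    return max(scores, key=scores.get)

rng = np.random.default_rng(0)
templates = {"dog": rng.normal(size=256), "house": rng.normal(size=256)}

# A later recording segment that happens to resemble the "dog" template.
recording_segment = templates["dog"] + 0.3 * rng.normal(size=256)
print(best_label(recording_segment, templates))   # -> "dog"
```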

Brain hacking offers ultimate possibilities in many areas, from science to the military. There is the possibility to educate people simply by inputting data into their brains. And false memories make it possible to give combat experience to soldiers before they even go to combat. But these kinds of systems are extremely dangerous in the wrong hands.

There are visions of an external robot body. Brain hacking gives those systems the ability to communicate with the nervous system without limits. That kind of system requires brain hacking to work perfectly.

Hacking the brain is needed for developing prostheses that react to nerve signals. The system must "only" connect certain EEG curves to certain movements. The same system can be used to control robots, and especially human-looking robots, by using brain waves.

The reason why a human-shaped robot is easy to control by using the EEG is that its manipulators and senses have a match between the brain and the robot body. That makes it possible for the robot to emulate the movements that the operator makes. If the system uses signals taken from the brain areas that control the person's movements, then controlling that kind of system requires that the person is actually moving.

So a system that follows the EEG signal relies on the brain tracks that activate the neurons that move muscles. The problem with that kind of external body is that the person using it must move while using the robot, because the system uses the signals taken from the movement centers.


The series of neuron activations when a person moves their hands:

1) The will to move the hands is sent

X) to the transmitter neuron, which sends that data to

2) the neuron that controls the muscles.


So the series of actions that this kind of system needs in order to move a robot while the person sits in a chair is this:


1) The will to move a hand or leg is sent

X) to the transmitter neurons, and then to

2) the robot control level.


The problem is how to aim the signals of the transmitter neurons at the robot. Those signals must be rerouted to the robot without activating the neurons that control the muscles. But if the operator is allowed to move all the time, that is not a problem.

Of course, the muscles could be disabled with some medication, but that is dangerous. That means a system that controls external bodies by using the EEG is a challenging but fascinating mission.

BCIs (Brain-Computer Interfaces) are the key to BMIs (Brain-Machine Interfaces), systems that control robots by using EEG. Normally the BCI is connected to the speech center in the human brain, and it transforms EEG into speech. The same system can be used to control robots: the same signals that control the speech synthesizer can be routed to control the robot. So when the operator thinks "robot, move forward", the robot moves forward.

The problem is: how does the AI know which commands are meant for the robot? The solution is taken from dog training. When trainers give commands to dogs, they must separate the commands from common noise.

Certain gestures or certain words precede those commands; their purpose is to tell the dog that the commands are meant for it. Robots have the same problem: they must separate the commands that are given on purpose from other words.

Before every command there is a confirmation word, and the purpose of that leading word is to route the command to the robot.

Every order given to the robot follows the control word, so that the AI recognizes that the command is meant for the robot. The same system can use voice commands, and the BCI is only an enhanced version of this system.
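A minimal sketch of that confirmation-word routing, with an invented wake word and command set, could look like this:

```python
# Sketch of the "confirmation word" routing described above: only commands
# that begin with the robot's wake word are forwarded to the robot; the
# wake word and command set are made-up examples.

WAKE_WORD = "robot"
KNOWN_COMMANDS = {"move forward", "stop", "turn left"}

def route(utterance: str) -> str:
    words = utterance.lower().strip()
    if not words.startswith(WAKE_WORD):
        return "ignored (ordinary speech, not meant for the robot)"
    command = words[len(WAKE_WORD):].strip(" ,")
    if command in KNOWN_COMMANDS:
        return f"robot executes: {command}"
    return "robot: I cannot do that thing"

print(route("I think we should move forward with the plan"))  # ignored
print(route("robot, move forward"))                           # executed
```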

The operator can use virtual-reality sets and headphones to connect themselves to the robot. But there is also the possibility of connecting the robot to the body by using a neural connection that allows controlling robots directly with the EEG.


https://futurism.com/could-we-hack-our-brains-to-gain-new-senses


https://scitechdaily.com/cracking-the-neural-code-to-the-brain-how-do-we-provide-meaning-to-our-environment/


Image: https://scitechdaily.com/cracking-the-neural-code-to-the-brain-how-do-we-provide-meaning-to-our-environment/


Thursday, January 6, 2022

Classification of databases makes it possible to create more complex artificial intelligence.

  





The idea of sorting databases under certain topics is taken from the library. In libraries, every book is sorted under certain main classes, and that makes it easier to find the right books on the shelves. In the same way, if all the actions of the AI are sorted under certain classes, it is easier to find those databases. Databases are the heart of artificial intelligence: the tables of the databases hold every single action the AI should take.

So those databases should be sorted by topic, because that makes it easier for the system to find the right tables. Otherwise the system must check the entire database, and that takes time. If the databases are sorted under main classes, that saves time.

One way to make a database easier to run is to sort the data or data tables under certain attributes. If we think about a robot that plays tennis, a sorted database means that when the robot is ordered to play tennis, it first searches for the table where the needed data is located. When it finds that table, it can find everything it needs from the sub-tables stored under the topic "tennis".

Sorting the database under certain topics makes the AI more powerful. It also limits the number of tables that need to be compiled when the AI gets a mission. The idea is similar to a library: when the staff searches for something, they do not look through every single book. All books are sorted into classes, which makes them easier to find. If somebody wants a geography book, the staff only has to find the main class "geography", and then the book is easy to find.

In the same way, if the database is sorted into classes, with sub-tables that hold the needed actions for each class, the AI does not have to compile the entire database all the time. When the operator orders the robot to go play tennis, the robot might first look up its location by using GPS and then see from the map where the tennis court is.

Then the robot might ask whether it needs to take a racket with it, or whether one is already at the tennis court. The AI recognizes from the map that there are streets between its location and the tennis court, so the robot must load the street-walking class into its memory; all the needed actions are stored in those databases.

Then it can go to the tennis court and download the class "tennis-playing" into its memory. That example shows how sorting and classifying data makes the AI's work easier than compiling every single database or table whenever the robot needs to do something.
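A toy sketch of that class-and-sub-table layout (class names and actions are invented) shows how the robot loads only one topic at a time:

```python
# Sketch of the class/sub-table idea: actions are grouped under topic
# classes so the robot loads only the class it needs instead of scanning
# the whole database. Class names and actions are illustrative.

ACTION_CLASSES = {
    "street_walking": {
        "cross_road": ["stop", "look_both_ways", "walk"],
        "follow_pavement": ["keep_right", "avoid_obstacles"],
    },
    "tennis_playing": {
        "serve": ["toss_ball", "swing_racket"],
        "return_shot": ["track_ball", "move_to_ball", "swing_racket"],
    },
}

def load_class(topic: str) -> dict:
    """Load only the sub-tables stored under one topic class."""
    return ACTION_CLASSES.get(topic, {})

# The robot first loads "street_walking" for the trip, then swaps it
# for "tennis_playing" at the court, never holding both at once.
active = load_class("street_walking")
print(list(active))               # ['cross_road', 'follow_pavement']
active = load_class("tennis_playing")
print(active["return_shot"])      # ['track_ball', 'move_to_ball', 'swing_racket']
```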

Sorting and classifying the databases makes it possible to run more complex databases and AI than regular linear programming models do. Classification makes the use of the databases more flexible than a normal linear model of programming.


https://thoughtandmachines.blogspot.com/

Wednesday, January 5, 2022

And then the dawn of machine learning.

   


Image: Pinterest


Machine learning, or autonomously learning machines, is the newest and most effective version of artificial intelligence. Machine learning means that the machine can autonomously increase its data mass, sort the data, and make connections between databases. That ability makes machine learning somewhat unpredictable, and it makes robots multi-use systems that can do the same things as humans.

A reflex robot is a very fast-reacting machine. Its limited operational field guarantees that a very large number of databases is not needed, which means the system does not have to search for the right database very often. That makes it very fast, but if it goes outside its field it is helpless.

When we think of robots that can do only one thing, like playing tennis, they can react very fast in every situation that is connected with tennis. There is a limited number of databases, and that means the robot acts very fast.

When a robot or AI makes a decision, it systematically searches every single database, and if details match the observed action, that activates the database or command series stored there. But the thing that makes this type of computer program problematic is that as the number of stored actions increases, the system slows down.

If we want to make a robot that can perform multiple actions, that requires multiple databases, and searching every database for a match to the situation takes a certain amount of time. So complicated actions require complicated database structures. Compiling complex databases takes time because every computer has limits, and in the case of a street-operating robot, the system compiles data that its sensors transmit to its computers.

The conditions that this kind of system must handle might involve unexpected variables like fog or rain, and for those cases the system needs fuzzy logic to solve problems. In that case, only the frames of the cases are stored in the databases by the system's creators, and the system combines those frames with the data sent by the sensors.


A waiter robot can be used as an example of machine learning.


A good example of a learning machine is a waiter robot that learns the customers' wishes. The robot stores the customer's face in its memory when it asks whether the customer wants coffee or tea. Then the robot asks "anything else?", and at that point it can introduce the menu.

Then the customer can place an order. There are certain parameters in the algorithm that are stored in the waiter robot's memory. The robot of course stores that data in the database. The reason is simple: the crew needs that information so they can make the right things for the customer. But that data can also be used to calculate how many items the average customer orders after the question "anything else?".

The robot can also store the face in the database so that it can calculate how often that person visits the cafeteria. Then the robot can simply store the orders under the customer's face and learn how often a person orders something. If some customer always orders certain products, the robot can send a pre-order to the kitchen so that they can prepare that type of order. When a customer visits often and orders the same thing every time, the robot can start to say "do you want the same as usual?". For that, the system requires a parameter defining how many times in a certain period counts as "often". That was an example of a learning system.
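A small sketch of that "same as usual?" rule could look like this; the visit and share thresholds are arbitrary assumptions standing in for the "how often is often" parameter mentioned above:

```python
# Sketch of the "same as usual?" rule: orders are stored per customer,
# and when one product dominates the history often enough, the robot
# offers it proactively. The threshold values are arbitrary assumptions.
from collections import Counter, defaultdict

class WaiterRobot:
    def __init__(self, min_visits: int = 5, min_share: float = 0.8):
        self.orders = defaultdict(Counter)   # customer id -> product counts
        self.min_visits = min_visits         # how many visits count as "often"
        self.min_share = min_share           # how dominant the product must be

    def record_order(self, customer_id: str, product: str) -> None:
        self.orders[customer_id][product] += 1

    def greeting(self, customer_id: str) -> str:
        history = self.orders[customer_id]
        visits = sum(history.values())
        if visits >= self.min_visits:
            product, count = history.most_common(1)[0]
            if count / visits >= self.min_share:
                return f"Do you want the same as usual ({product})?"
        return "Coffee or tea? Anything else?"

robot = WaiterRobot()
for _ in range(6):
    robot.record_order("face_0042", "cappuccino")
print(robot.greeting("face_0042"))
```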


https://thoughtandmachines.blogspot.com/

What does it mean for AI to understand?





What does understanding mean anyway? We can do many things and still not understand them. When we try to think about what understanding means, we face an ultimate question: do we actually understand anything? The fact is that I can put a seven-year-old child in front of a text about quantum physics, and that child might read the text quite well. But does that child understand those words? I could also use the Google text-to-speech application to do the same thing, and the application would do its job well.

The AI can do many things, and if the application doesn't know about things like the summation sign, the programmer needs to store the character for it in the computer. The summation sign is stored in the database, along with the text connected to that mark. So the sigma mark triggers the words that are connected to it.


1) To know means: that an actor knows what to do in a certain situation.


2) To understand means: to realize why actors must do things in a certain way.


When we make a robot do something, it does the things we programmed into it. If we want to make a tennis robot that plays tennis with us, we might make a robot that hits the ball. The robot might have gesture control.

If the ball comes toward the robot, it hits it. The robot must have some algorithms for how it aims at that ball. If the referee signals a pass to the robot, it must know the gesture and then make the pass, or the series of movements that makes the pass. The robot must calculate many things, like the right hit point and power. But there is one problem: does it understand anything?


Pseudo-understanding means that the AI can give pre-programmed answers to certain questions.


It knows how to react to the ball and to the simple gestures made by the person sitting in the chair. But could that robot play tennis in a real match? Does it separate the referee from the audience, who might show similar gestures? The robot must "know" that it should not follow any signals other than the referee's. So the robot knows how to hit a ball.

The ball acts as a trigger that activates a certain series of movements. The robot might have orders about where the hit must go and where the ball should not go. The robot would not strike the ball outside the court, because that is programmed into it. And if somebody asks why the robot doesn't hit the ball outside the court area, it can answer: "that's dangerous".

That is, if the programmer has put that answer in the robot's computer. Or it might answer "that's prohibited" whenever a person asks it to do something that is not programmed in its memory. If somebody asks the robot to hit a ball at people or vehicles, the robot might say "it's prohibited" and make a report to its operators.

The reflex robot recognizes that some action matches the notes that are stored in the database. After that, the action triggers the database, and the database starts the response to that action.

Those actions are programmed into the robot's programs by programmers. The robot does the same things all the time. There is a series of triggers that are activated by certain actions, so the robot has reflexes: a certain action activates a certain type of reaction.

Reflex automation is simple to make. When somebody says "good morning" to the computer, it might answer by saying "good morning". The computer might also have a voice or image scanner that connects a certain workspace to the person. If the computer uses an infrared camera or an ultrasound-based system, it can recognize a person even if that user has grown a beard or has the flu. The idea of those deeper-than-surface systems is to rely on the static components of the human body.

Of course, the system can ask the person to identify themselves. The command that the operator gives for access to the workspace might be "I'm Eric, open my workspace", with the operator's own name in place of that name. That means the system can also recognize if somebody tries to impersonate that operator: the system recognizes the face but asks for the operator's name, which uncovers whether the person is trying to use some other user's account. This is one version of artificial intelligence called "reflex automation".
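A minimal sketch of such reflex automation, with the face and name data invented for illustration, could look like this:

```python
# Sketch of reflex automation: fixed trigger-response pairs plus the
# face-versus-name check described above. Names and data are made up.

RESPONSES = {"good morning": "good morning"}

FACE_TO_USER = {"face_0042": "Eric"}   # what the camera recognized

def reflex(trigger: str) -> str:
    """Pure reflex: a stored trigger activates a stored response."""
    return RESPONSES.get(trigger.lower(), "I cannot do that thing")

def open_workspace(recognized_face: str, spoken_name: str) -> str:
    """The face must match the name the person claims, otherwise refuse."""
    expected = FACE_TO_USER.get(recognized_face)
    if expected and expected.lower() == spoken_name.lower():
        return f"Opening workspace for {expected}"
    return "Identity mismatch, access denied and reported"

print(reflex("good morning"))
print(open_workspace("face_0042", "Eric"))    # opens
print(open_workspace("face_0042", "Alice"))   # refused
```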

The thing is that the machine has some kind of model in its memory, and when some action fits one of those models, it activates certain actions in the system. This type of system is effective. Artificial intelligence-controlled robots can do many things, like activating traffic lights or bringing tea or coffee to certain people. They know how to respond to a certain command or action, but those computer programs don't know why that response is given.

A robot or computer program has a series of reactions that tell it how to respond to something. If it is asked something that is outside its databases, the robot might say "I cannot do that thing", or it can say that it is transmitting the problem to the system supervisor, who makes an algorithm for it. And that is where machine learning starts to dawn.

Tuesday, January 4, 2022

Is the EM-drive a joke or not?

   





Is the EM-drive some kind of weapon? That kind of electromagnetic loudspeaker could be installed on killer satellites whose mission is to destroy communication or other important support systems like positioning or recon satellites. Microwaves can damage the electronics of other satellites.

The thing about EM drives is that they are all purely theoretical, and their thrust at sea level is less than one billionth of that of regular rockets. They are all hypothetical systems, whether we are talking about wave-movement-based systems, so-called photon rockets, or their cousins, microwave-based EM drives. Photon rockets have been researched since the 1970s, and the idea is that only photons can reach the speed of light.
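As a rough back-of-the-envelope illustration of why that thrust is so weak (the one-gigawatt figure is only an example): an ideal photon rocket converts radiated power P into thrust

\[
F = \frac{P}{c} \approx \frac{10^{9}\,\mathrm{W}}{3\times 10^{8}\,\mathrm{m/s}} \approx 3.3\,\mathrm{N},
\]

so even a gigawatt of radiated power gives only a few newtons of thrust.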

The fact is that all the engines introduced here are hypothetical or theoretical, and a photon or other EM drive would never operate in the atmosphere. The only operational area of those engines would be far away from the atmosphere.


If we shoot photons into a WARP bubble, they would travel faster than other photons. In the same way, microwaves could cross the speed of light inside a WARP bubble.

The idea of EM drives such as photon or microwave engines as faster-than-light thrusters is based on shooting photons into a WARP bubble. In that case, the photons or microwaves traveling in the WARP bubble travel faster than the speed of light outside the WARP bubble.

A small WARP bubble might have been created accidentally by researchers. There is the possibility of creating an electron-sized WARP bubble and then shooting photons through it. That would make it possible for photons to travel faster than light for short moments.


The problem with photon rockets is their weak thrust, and the system is useful only at speeds near the speed of light. Photon rockets, like microwave engines, are systems planned for use in interstellar flight. So wave-movement-based EM engines are not suitable even for interplanetary flight. There are three types of photon rockets planned in theoretical research.


1) The so-called non-coherent photon engine, which uses normal photons. That is the system normally called a photon rocket. The thrust of that engine can be increased by using antimatter; in that case, annihilation creates the thrust and the photons that are needed in the last part of the acceleration.


2) The laser-beam or coherent photon engine. That engine uses laser rays, and it can give more thrust than a non-coherent photon engine. Lasers can also be used to push things like solar sails, but in this text the focus is on laser engines whose laser is in the craft.


That system would shoot laser rays through the rocket's reaction chamber. It is planned for use with antimatter engines; the antimatter would react in the combustion chamber.

The laser ray can be used to increase the temperature in the combustion chamber, and it can be used along with ion rockets. In that case, ions travel through the ion accelerator and the laser increases their energy level.

Then the laser beam would give the last push to the craft. The laser beam can also be used to vaporize some gas, and then the system can be used to travel between planets. But for interstellar flight, some other device such as an antimatter engine is needed.


3) Microwave engines. With microwaves alone, the thrust of those systems is similar to a free photon engine. But microwaves can be used to replace combustion in the combustion chamber; in that case, a gas like liquid methane or hydrogen gives the thrust. On its own, the EM engine is practically useless.

But if vaporizing material is injected, the vaporized gas gives the thrust for that engine. Microwaves can also be used to vaporize silicon or some other solid material for use in ion engines. The ion engine would ionize that vapor, and then it can be driven into the magnetic accelerator.

Is the EM engine some kind of microwave weapon, as I asked at the beginning of this text? The microwave engine could also be a microwave weapon. If some killer satellite were equipped with a microwave cannon similar to the so-called EM-drive, it could damage other satellites quite easily.


https://bigthink.com/starts-with-a-bang/no-warp-bubble/


https://www.extremetech.com/extreme/329631-scientists-havent-created-a-warp-bubble-but-theyre-a-bit-closer-to-testing-one


https://me.ign.com/en/tech/192998/news/scientists-take-a-step-towards-building-a-real-life-warp-drive-by-accident


https://www.nasa.gov/centers/glenn/technology/warp/warp.html


https://www.space.com/can-emdrive-space-propulsion-concept-work


https://www.techtimes.com/articles/265295/20210912/nasa-warp-drive-explained.htm


https://www.techtimes.com/articles/269047/20211207/worlds-first-warp-bubble-discovered-serendipity-darpa-researchers-find-strange.htm


https://en.wikipedia.org/wiki/Alcubierre_drive


https://en.wikipedia.org/wiki/Antimatter


https://en.wikipedia.org/wiki/Antimatter_rocket


https://en.wikipedia.org/wiki/Beam-powered_propulsion


https://en.wikipedia.org/wiki/EmDrive


https://en.wikipedia.org/wiki/Laser_propulsion


https://en.wikipedia.org/wiki/Nuclear_photonic_rocket


https://en.wikipedia.org/wiki/Photon_rocket


The link to PDF files of photon engine:


Image: https://www.businessinsider.com/rocket-lab-launches-photon-satellite-2020-9?r=US&IR=T




Let's go back to entangled tardigrades, Captain James T. Kirk, and his matter transporter.




How could people be teleported to another planet safely? One version would be to send toward the promising star a chamber whose atoms are quantum entangled and superpositioned. In the simplest version, the chamber just releases robots that transfer data by using light-year-long superpositioned and entangled particles.

Because superpositioned and entangled particles are like extremely long sticks, they could make it possible to get around the cosmic speed limit: when one particle moves, the other moves too. But biotechnology can also be used for this.

Then the chamber would make clones of the people, and those clones would start to communicate with their creators. This is one version of long-range teleportation. Maybe someday we could teleport even humans, but the first teleported organisms could be tardigrades.

Tardigrades might be the first teleported organisms. Making quantum-entangled tardigrades might not be as fantastic as the futuristic Captain James T. Kirk from the fictional Star Trek series, where the starship Enterprise travels around the universe faster than light.

In that series, Captain Kirk uses matter teleportation every day. The question is whether we can someday make a real teleportation system that turns the fiction of Star Trek true. Biotechnology and other possibilities could be used to make the superposition of the space crew in our futuristic mission.

There are three possible ways to make teleportation. Robots offer the possibility of virtual teleportation, which makes it possible to create a VR connection between the robot and human operators.


1) The crew might use "zombie bodies": robots that land on the planet, which the crew members can then simply use through virtual reality. Or the crew might have the ability to communicate with the external bodies by using EEG-based remote control.

In that case, our spaceship would have two parts. First comes the robot deliverer, and then the hypothetical manned spacecraft follows that robot carrier. That system can be called virtual teleportation, and the technology is already in use.

Those robots can be made of steel or carbon fiber, or they can be biorobots. A biorobot is a genetically engineered body controlled by microchips implanted in its nervous system. That kind of robot can operate on the planet while the crew stays safe. The crew can communicate with the landed robots and use them as remote-control tools.

There is also the possibility that our futuristic spacecraft has bio chambers that always hold clones of the crew members. When the original person dies, the memories of that person would be copied to the clone.


2) Ion-based teleportation. An ion beam would be sent to a chamber where the ions are re-ordered. In that system, the person would be turned into ions, and then the system sends the ions to a chamber that is equipped with an invisibility cloak.

There the highly advanced, futuristic system puts the ions back in the same order they were in when the hypothetical craft sent them to the chamber. This type of system is interesting, and it might become real in the future.


3) Quantum entanglement or superposition-based systems. Those systems create quantum entanglement using the atoms of the body, which means the person would be in two places at one time.

But there is also the possibility of making this kind of quantum entanglement by using organisms on the planet. The quantum entanglement can be made directly with the nervous system, or it can be created by using a technical device. Quadcopters can install small microchips in the organisms that are delivered to the planet, and then those organisms can transmit data from the surface.


https://bigthink.com/hard-science/human-teleportation/


https://www.iflscience.com/physics/tardigrade-might-be-first-animal-to-be-quantum-entangled-and-live/

https://en.wikipedia.org/wiki/Teleportation


Image: https://bigthink.com/hard-science/human-teleportation/


https://thoughtsaboutsuperpositions.blogspot.com/


The moonbase plays a big role in solar system colonization.

   




A moonbase might not look as impressive as a Mars or Jupiter base. But on a moonbase, researchers can find out how to build structures in space conditions. The moonbase is a perfect place for making social and other types of tests for long-term flights; if something goes wrong, the mission can be aborted immediately. It is also possible to test life-support and other systems there.

A Moon station can also be used for testing systems like computers, independently operating robots, and AI, and other things that must be trusted and easy to operate. If something goes wrong on the Moon, it is far easier to fix than if something goes wrong on Mars.

And from the Moon, returning to Earth is easier than from Mars or the asteroid belt. The moonbase also offers a safe place for loading and starting the nuclear reactors of nuclear-powered rockets. When a nuclear-powered NERVA or ion rocket starts near Earth and something goes wrong, there is the possibility that radioactive debris falls to Earth.

So if those engines are started in lunar orbit, any radioactive debris falls onto the Moon instead. The Moon also offers a shield against the EMP pulses that form if an Orion-type system, where small nuclear bombs push the craft toward the outer solar system, is used. Ion rockets can also cause problems for satellites if the high-energy ions hit them.

Electrostatic and microwave-based systems, where the propellant is vaporized by microwaves and then shot backward by magnetic accelerators, are among the most capable systems. The ion engine requires vaporized material like iron vapor, but other ions would also be useful.

The problem is how to vaporize the iron or silicon. One version is to use a hybrid system where the material is vaporized by microwaves and then ionized. The system then drives those ions backward along a magnetic track.

Things like water-jet rocket engines are rocket engines that use water or hydrogen as propellant. The idea of those systems is to replace the nuclear reactor used in NERVA engines with something that is not so radioactive. That makes the microwave engine a useful tool.

Liquid hydrogen or water is pumped into the nozzle, and microwaves vaporize it. The full benefit of those systems comes if the liquefied gas has a low vaporization point. Microwaves can be used to expand the propellant just as combustion or nuclear engines do. The moonbase is ideal for testing that kind of system.


https://scitechdaily.com/off-earth-manufacturing-using-local-resources-to-build-a-new-home-on-another-world/


https://en.wikipedia.org/wiki/EmDrive


https://en.wikipedia.org/wiki/NERVA


https://en.wikipedia.org/wiki/Project_Orion_(nuclear_propulsion)


Image: https://scitechdaily.com/images/Future-Moon-Base-1536x864.jpg


https://thoughtsaboutsuperpositions.blogspot.com/

Sunday, January 2, 2022

The spaceflight to Jupiter and beyond

 






Image 1) Artist's concept of Jupiter Icy Moons Orbiter.


NASA is looking for contractors for the "Prometheus" or JIMO (Jupiter Icy Moons Orbiter) mission system. Those nuclear-powered unmanned space probes open the road to sample-return missions to the outer solar system.

The experience gained there opens the road to manned missions to the outer solar system. Those missions require more powerful and advanced technology than missions to Mars.


What if the EEG of the sleeping crew could be used to control the spacecraft?


A scientific team has talked messages into people's lucid dreams.

The ability to control dreams is one of the most interesting visions in the world of neuroscience. Dreams are the state in which the brain can use most of its capacity, and that time could be used to turn the brain into a productive machine. The ability to control dreams would make it possible to create an obsession to do something. And if we create a system that can interact with sleeping people, it becomes possible to communicate with people while they are under anesthesia or even in a coma.

Theoretically, it is possible to make a system that projects people's dreams onto computer screens; that requires "only" a system that decodes the EEG for the computer. The ability to communicate with the brain while a person sleeps could play a key role in future spaceflights to the outer solar system. The computer could input data into the brains of the crew members while they are on the extremely long journey to Jupiter.

On that kind of journey, long-term anesthesia is required to keep social problems to a minimum. So in that kind of system, the brains of the crew members could be used to communicate with the computers, which means those people would control the craft by using EEG waves while they are sleeping. Jupiter is maybe the limit for nuclear rockets.


xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx


How could a futuristic probe bring samples from Jupiter to Earth?


It is possible to combine a flyby and an orbital mission by dropping a sub-craft into planetary orbit from the flyby craft. But that sub-craft requires braking time. The idea is that the Jupiter craft makes a flyby of Jupiter.

During that part of the mission, it drops the orbiter craft. Then it can fly on to Saturn and use that planet as a gravitational sling that takes it back to Jupiter. There the smaller probe docks with the main flyby probe, and the craft returns to Earth.


xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

The journey there and back requires nearly 4000 days. The flight time to Jupiter is about 600 days if the system makes a flyby, but positioning the craft into an orbital trajectory requires about 2000 days, and that is one of the biggest problems in spaceflight. The journey to Jupiter and back takes about 11 years, even if the craft only visits that planet. That is an extremely long time, and it is one of the limits for plans to fly to other planets.

But if we want to fly farther, the problem is aging. The aging of the crew's DNA must somehow be prevented if we want to send living astronauts to another planet or solar system. There is the possibility that, in the distant future, biotechnology can repair the damage in the crew's DNA. In that system, the crew's DNA would be stored in the digital memory of an artificial intelligence. 

Nanomachines could then replace the damaged DNA as the crew members age. If the aging of the crew can be prevented by some method other than cooling them toward zero kelvin, it becomes possible to use their brains as part of the computer system. 




Image 2) Daedalus spacecraft concept (Wikipedia)


The Daedalus 3 plan. 


As I have written many times, in the 1970s the British Interplanetary Society created the Daedalus project: a hypothetical unmanned spacecraft meant to travel to Proxima Centauri or Barnard's Star. Daedalus may never take astronauts to other solar systems, but the craft and its fusion engine could be a powerful tool for traveling inside our own solar system. 


Some theoretical mission profiles have nevertheless been made for visiting Proxima and Alpha Centauri. 


The flight time to Proxima Centauri would be extremely long if the craft must settle into orbit around that star. The original Daedalus design aimed at about 12% of the speed of light with its fusion engine, and even a hypothetical antimatter-driven version reaching 25% of the speed of light would still be slow on interstellar scales. That means the hypothetical Daedalus would probably be an unmanned system controlled by very advanced artificial intelligence. A flyby would be a very good option, because in that case the craft would not have to brake when it arrives in the Proxima system. 

The idea is that the craft would make a flyby but release a sub-probe into orbit around Proxima Centauri. The sub-probe would collect samples from the Proxima system. The main craft, or another Daedalus, would then use Alpha Centauri as a gravitational sling to turn back toward Earth, and during that pass the sub-probe would return and dock with the Daedalus. 
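
For scale, here is a minimal sketch of the one-way cruise time to Proxima Centauri at a constant speed, ignoring acceleration and braking phases. The 12% and 25% figures are the fusion and hypothetical antimatter cases mentioned above.

DISTANCE_LY = 4.24  # light-years from Earth to Proxima Centauri

def travel_years(fraction_of_c):
    """One-way flight time in years at a constant fraction of light speed."""
    return DISTANCE_LY / fraction_of_c

for v in (0.12, 0.25):
    print(f"{v:.0%} of c -> about {travel_years(v):.0f} years one way")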


https://www.independent.co.uk/news/science/dreams-talk-lucid-messages-b1804671.html


https://en.wikipedia.org/wiki/Project_Daedalus


Image 1 ) https://www.jpl.nasa.gov/news/nasa-selects-contractor-for-first-prometheus-mission-to-jupiter


Image 2) https://en.wikipedia.org/wiki/Project_Daedalus


https://thoughtsaboutsuperpositions.blogspot.com/

New robots bring new tricks.

  





Robots that emulate fish are more efficient than robots that use propellers. If a robot has fins and a tail like a fish, it does not get tangled in algae and water plants as easily as propeller-driven robots do. The problem with small robots is that they have limited power sources. Larger nuclear-powered submarines could also have fins and pike-like structures; that kind of submarine might have a stealth drive alongside its propellers. 

Structures that are used and tested in miniature submarines can later be applied to larger-scale submarines. Cuttlefish-like submarine designs have also been tested; those kinds of systems can use water jets to move fast underwater. 

The robot can then use its fins in areas where the system needs to operate precisely. Such robots can be built in various sizes, and they can be used to recover material from the bottom of the ocean with manipulator arms. 




Those somewhat spider-like machines could be used on secret missions to recover things like nuclear reactors or lost nuclear bombs, and they could also be used to recover chemical waste and similar material. 

Flying humanoid robots are another piece of modern technology. The same technology created for those systems can also be used to lift people. Humanoid flying robots could someday explore planets like Venus and Saturn's moon Titan, but they could also be dropped to explore the jungles of Papua New Guinea. 

New technology like large quadcopters can be used to lift people off the ground. Backpack helicopters have also been tested from time to time: the tail rotor sits at the end of a telescopic tail, and the main rotor is mounted in the middle of the backpack. Those kinds of systems have problems with their power supply. 
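
A minimal momentum-theory sketch shows why the power supply is the bottleneck for person-lifting craft. The mass, rotor size, and rotor count are assumed example values, and a real vehicle needs well above this ideal figure.

import math

RHO_AIR = 1.225   # kg/m^3 at sea level
G = 9.81          # m/s^2

def ideal_hover_power_kw(total_mass_kg, rotor_radius_m, rotor_count):
    """Ideal (induced) hover power for a multicopter from momentum theory."""
    thrust = total_mass_kg * G
    disk_area = rotor_count * math.pi * rotor_radius_m ** 2
    return (thrust ** 1.5) / math.sqrt(2.0 * RHO_AIR * disk_area) / 1000.0

# Example: 180 kg of craft plus pilot, four rotors of 0.6 m radius (assumed values).
print(ideal_hover_power_kw(180.0, 0.6, 4))  # tens of kilowatts even in the ideal case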




Quadcopters are therefore easier to control, and they can be connected to wireless game controllers. Those systems are reasonably safe: if the battery runs out, the craft can have an autorotation mode, and autorotation lets the system return to the ground safely. 

Quadcopters that can lift humans could be used to land special operations teams on the ground and then fly back to a VTOL aircraft. The same quadcopters could also lift window cleaners to the roofs of buildings. But in the same way, burglars could use quadcopters to lift themselves and their equipment onto the roofs of their targets. The fact is that this kind of technology is effectively open: anyone can buy a quadcopter from a shop, and then the person only needs the engineering skills to enlarge that system. 

Italian "iron man robot". (https://www.engadget.com/)

https://interestingengineering.com/cuttlefish-like-robots-are-far-more-efficient-than-propeller-powered-machines


Saturday, January 1, 2022

Next-generation nanotechnology can be independently operating molecule-sized robots.




"A DNAzyme (red) uses its binding arms to dock at a specific location on an RNA strand (yellow) and then cleaves it at its core. High-resolution, real-time NMR, Electron Paramagnetic Resonance, and Fluorescence Spectroscopy."

"As well as Molecular Dynamics Simulations are used to identify the structure. And catalytic mechanisms of the DNAzyme. Credit: HHU/Manuel Etzkorn" (https://scitechdaily.com/dnazymes-how-active-dna-biocatalysts-that-destroy-unwanted-rna-molecules-work/)


Active DNA catalysts are the gateway to DNA-controlled, independently operating, molecule-sized nanomachines. 


DNAzymes, or active DNA biocatalysts, are next-generation tools for destroying unwanted RNA. The DNA molecule is one of the tools that can be used to control molecule-sized machines. DNA is like a chemical computer program, and if researchers can make synthetic DNA that does exactly what they want, they have the ultimate tool for making genetically engineered cells. 
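
As a toy illustration of the "chemical computer program" idea, here is a minimal sketch of how a DNAzyme of this kind finds its target: its two binding arms are complementary to the RNA on either side of the cleavage site. The sequences and the left/right pairing convention are invented assumptions for illustration, not a real therapeutic design.

# DNA base -> RNA base it pairs with
COMPLEMENT = {"A": "U", "C": "G", "G": "C", "T": "A"}

def rna_target_of(dna_arm):
    """RNA segment (5'->3') that a DNA binding arm would pair with."""
    return "".join(COMPLEMENT[base] for base in reversed(dna_arm))

def find_cleavage_sites(rna, upstream_arm_dna, downstream_arm_dna):
    """Positions where both arm-binding segments sit side by side in the RNA.
    Simplifying assumption: upstream_arm_dna pairs with the RNA just before the
    cut and downstream_arm_dna with the RNA just after it."""
    upstream_site = rna_target_of(upstream_arm_dna)
    downstream_site = rna_target_of(downstream_arm_dna)
    sites = []
    for i in range(len(rna) - len(upstream_site) - len(downstream_site) + 1):
        j = i + len(upstream_site)
        if rna[i:j] == upstream_site and rna[j:j + len(downstream_site)] == downstream_site:
            sites.append(j)  # cleavage happens between the two arm-binding regions
    return sites

# Example with made-up sequences: the cut lands at position 8 of this RNA strand.
rna = "AAGGAUCCUAGCUAGGAA"
print(find_cleavage_sites(rna, upstream_arm_dna="GGATCC", downstream_arm_dna="TAGCTA"))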

Those synthetic DNA bits could be used to make cells immune to HIV, and DNA-controlled enzymes could also be used to destroy dangerous viruses in the blood and tissues. DNA-controlled enzymes are a new and powerful branch of nanotechnology, and those enzymes can be programmed to do many things that make medicine more effective than ever before. 

DNA-controlled enzymes could make it possible to remove virus genomes from cells that are already infected. The same enzymes could carry ricin molecules into cancer cells. The problem is that DNA-controlled enzymes are still under development. 

DNAzymes could also be used to carry small pieces of metal or some other material into cells, and those pieces could then be made to resonate with EM radiation. Intelligent enzymes could likewise be used to destroy poisons, which means this type of technology could be used to break down nerve gases and other chemical weapons. That is why this kind of research interests the military as well as civilians. 

DNA-controlled molecules could clean water and break down oil and even explosives in the ground. Artificial DNA molecules could be used to make highly independent, molecule-sized nanomachines. 

DNA-controlled, radio-wave-activated enzymes are a next-generation Bellerophon and Chimera in one package. In the wrong hands, those kinds of tools would make the legendary and horrifying Novichok nerve agents seem like candy. The technology could be used to create weapons more horrifying than ever before. 

But in the same way, those new intelligent chemicals make it possible to save lives. DNA-controlled nanorobots could clean chemicals out of the body, and they could also destroy things like cancer in places where it is otherwise impossible to reach. 


https://scitechdaily.com/dnazymes-how-active-dna-biocatalysts-that-destroy-unwanted-rna-molecules-work/


Image:https://scitechdaily.com/dnazymes-how-active-dna-biocatalysts-that-destroy-unwanted-rna-molecules-work/


Primitive animals like insects can be used for modeling new types of data handling tools for small-size robots.

   



Image 1:

Why are scientists interested in how a fly navigates in 3D space? Primitive animals do many things with only a couple of neurons, and that process can be copied into the computers used to navigate things like drones. Flies and butterflies also migrate, and that ability can be copied into drones as well. Understanding how those neurons operate makes it possible to build more autonomous and independent robots. The point of comparison with computers is that creatures like butterflies can do many of the things computers do. 
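
A minimal sketch of the kind of computation that work points to is path integration: with just a heading and a speed signal, a handful of state variables can keep track of position. This toy version is only an illustration, not the fly-brain model from the cited study.

import math

def integrate_path(steps):
    """steps: iterable of (turn_radians, distance). Returns the final (x, y) position."""
    x = y = heading = 0.0
    for turn, distance in steps:
        heading += turn                       # update heading from the turn command
        x += distance * math.cos(heading)     # move along the current heading
        y += distance * math.sin(heading)
    return x, y

# Example: fly forward one unit, turn left 90 degrees, fly forward one unit again.
print(integrate_path([(0.0, 1.0), (math.pi / 2, 1.0)]))  # approximately (1.0, 1.0)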

The difference is that butterflies and flies need far less space than computers: the system a monarch butterfly uses over its lifetime is more compact than some supercomputers. If we want to make a long-range drone that does the same things as a monarch butterfly, it requires outsourced computing. Theoretically, it is quite easy to make a robot butterfly with an operating range of even thousands of kilometers. 

In the daytime that robot could use miniaturized solar panels, and at night it could use miniaturized fuel cells running on ethanol or methanol. But this type of robot still needs outsourced computing. The butterfly may have fewer neurons than a supercomputer has microchips, yet the butterfly does more with its neurons. 
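
A back-of-the-envelope sketch of that day/night energy budget is below. Every number in it is an assumption made up for illustration, not a measured value for any real micro-drone or fuel cell.

FLIGHT_POWER_W = 0.5          # assumed continuous power needed to stay airborne
SOLAR_PANEL_W = 1.0           # assumed daytime output of the miniaturized panels
DAY_HOURS, NIGHT_HOURS = 12.0, 12.0
METHANOL_WH_PER_GRAM = 1.7    # rough usable energy from a small methanol fuel cell (assumed)

night_energy_wh = FLIGHT_POWER_W * NIGHT_HOURS
fuel_needed_g = night_energy_wh / METHANOL_WH_PER_GRAM
day_margin_wh = (SOLAR_PANEL_W - FLIGHT_POWER_W) * DAY_HOURS

print(f"Night flight needs about {night_energy_wh:.1f} Wh -> roughly {fuel_needed_g:.1f} g of methanol")
print(f"Daytime surplus available for computing and radio: about {day_margin_wh:.1f} Wh")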

Image 2:



xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx


Modern microchips can turn any neuron into a biocomputer. 


There is the possibility of turning the brains of bugs into a biological computer. The system requires "only" that microchips similar to those used to control the actions of bacteria are installed in the bugs' neurons. 

The data that should be processed in those neurons can be input to them through those microchips. That kind of system could be used to turn living bugs into remote-controlled zombies or cyborgs, and it could also be used to create more powerful data handling units. 

Computers have used light to control cyborg bacteria, and similar technology could be used to transmit data to neurons. But new nanotechnical microchips are more powerful tools, and they could transmit data to neurons by taking advantage of magnetite crystals in the neurons. 

Alternatively, those microchips could interact with axons: they could extend a nano-sized electric wire into the axon and transmit data there with electric impulses. 

https://www.futurity.org/cyborg-bacteria-cybergenetics-1275212/

xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx


Researchers can use knowledge of how the nervous system of insects works to create more economical and powerful computers. 


But times are changing. It may become possible to implant nanotechnical microchips in a bug's nervous system. Those systems could give the bugs orders and then store the nerve signals the bugs produce. Afterward, a larger drone could call the bugs to it and download the memory of those microchips. The ability to encode and decode the EEG of bugs is one of the most amazing visions there is, and it would make it possible to use those creatures as biological surveillance systems. 

Making hybrid computers in which living neurons are connected to microchips is one of the most fascinating ideas in computer science.

Computers connected to living neurons would be among the most powerful tools in history. If we want to create a system that uses fuzzy logic, living neurons are the best option. The problem is that the abilities of that kind of hybrid computer are unknown. What makes regular computers safe is simple: operators can select and load controlled data into them. 

If we connect computers to living neurons, we are creating a cyborg. A cybernetic organism means the system might have consciousness and a will, like every other creature. That kind of system also has the ability to defend itself. How the system interacts with people depends on what kinds of tools it has, and on whether the system knows how to use those tools. 

Living neurons are what give computers a so-called wild-card ability. Genomes control the behavior of a species, and if neurons taken from insects are put into combat drones, the drone might believe it is a monarch butterfly. So before that kind of system is put into use, the neurons must be cleaned. 

That means heritable memories and fears must be removed from their DNA, and researchers must do this before we can start making hybrid microchips in which an organic part is connected to regular microchips. The regular microchips are needed to act as the port between the neurons and quantum systems. The problem with quantum computers is that they are large systems. 


https://scitechdaily.com/navigation-neuroscience-how-a-flys-brain-calculates-its-position-in-space/


https://thenextweb.com/news/scientists-created-biological-quantum-circuit-grisly-experiment-tardigrades


https://en.wikipedia.org/wiki/Monarch_butterfly



Image 1:)https://scitechdaily.com/navigation-neuroscience-how-a-flys-brain-calculates-its-position-in-space/


Image 2:) https://en.wikipedia.org/wiki/Monarch_butterfly


https://thoughtsaboutsuperpositions.blogspot.com/



Computer researchers published a new algorithm that revolutionizes web management.

The new database structures require new and powerful tools to manage databases in non-centralized solutions. The new data structures can be ...