How to make a computer program that checks facts?
A fact-checking computer program is conceptually simple. When a speaker says something, the speech recognition component would pick out checkable claims, such as figures from an organization's annual report, and the artificial intelligence would look up the corresponding data on the web. The retrieved data would then be compared with the numbers the speaker uses; the software can parse the relevant web pages to separate the numbers from the rest of the content, and the interface would show the real figures to the operators.
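The comparison step described above can be sketched in a few lines. This is a minimal illustration, assuming the transcript has already been produced by speech recognition and the reference figure has already been retrieved from the web; all names, figures, and the tolerance value are hypothetical.

```python
import re

def extract_numbers(text: str) -> list[float]:
    """Pull plain numbers out of a transcript sentence."""
    return [float(n) for n in re.findall(r"\d+(?:\.\d+)?", text)]

def check_claim(transcript: str, reference_value: float,
                tolerance: float = 0.05) -> str:
    """Compare each number in the claim against the reference figure."""
    for value in extract_numbers(transcript):
        # Accept the claim if it lies within the tolerance of the reference.
        if abs(value - reference_value) <= tolerance * reference_value:
            return f"OK: {value} is close to reference {reference_value}"
    return (f"MISMATCH: claim contains {extract_numbers(transcript)}, "
            f"reference is {reference_value}")

print(check_claim("The annual report shows revenue of 104 million", 100.0))
```

In a real system the reference value would come from the scraping stage, and the operator would decide what tolerance counts as "close enough" for spoken figures.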
But when we are talking about things that are quite stable, such as the surface area of a country, that kind of data can be loaded into the program's own database. The problem is that somebody must build those databases, and that takes time.
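Such a hand-built database of stable facts could start as nothing more than a lookup table. This is a sketch under that assumption; the figures below are illustrative approximations, and a real database would need sources and update dates for each entry.

```python
# Hand-entered stable facts, keyed by (entity, attribute).
# The values are approximate and only for illustration.
STABLE_FACTS = {
    ("Finland", "surface_area_km2"): 338_455,
    ("France", "surface_area_km2"): 643_801,
}

def lookup(entity: str, attribute: str):
    """Return the stored value, or None if nobody has entered it yet."""
    return STABLE_FACTS.get((entity, attribute))

print(lookup("Finland", "surface_area_km2"))
```

The `None` case is the point the text makes: until somebody has done the manual work of entering a fact, the program simply cannot check it from its own database.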
The same applies to abbreviations like WLAN and other technical terms. The program can take a text written by a well-known expert and compare it with the text a student has written in a thesis, and the system would then flag possible problems with the technical details. Of course, the inspection process of the thesis itself can also be monitored.
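One simple way to realize the comparison is to treat the abbreviations in the expert text as a reference vocabulary and flag any abbreviation in the thesis that never appears in it. The sketch below assumes deliberately naive tokenization (runs of capital letters); a real system would need proper term extraction, and the sample texts are invented.

```python
import re

def terms(text: str) -> set[str]:
    """Collect upper-case abbreviations such as WLAN from a text."""
    return set(re.findall(r"\b[A-Z]{2,}\b", text))

def flag_unknown_terms(expert_text: str, thesis_text: str) -> set[str]:
    """Abbreviations the student uses that the expert text never does."""
    return terms(thesis_text) - terms(expert_text)

expert = "WLAN and LTE are standard radio technologies."
thesis = "The WLAN module uses the WLLAN protocol."
print(flag_unknown_terms(expert, thesis))
```

A flagged term is not automatically wrong; it only tells the inspector where to look, for example at the misspelled "WLLAN" above.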
This could be a big advantage for Urkund, which is used to detect plagiarism. Urkund could also collect data on what the common line is for grammar and other criteria in a school, and that would reveal whether some mentor applies a different line to certain students.
The key point about artificial intelligence is that there is no way to flatter a computer, which makes it possible to deal only with the data. Exceptions uncover things that somebody might want to keep quiet: deadlines that keep slipping, or nitpicking over every press of the space bar, both of which signal problems in the relationship between mentor and student.
Machine learning here means collecting data about the interesting variables, after which the program can compute medians and averages, such as how long a mentor typically keeps a thesis on the desk, or which kinds of mistakes normally cause a thesis to be rejected. It would be interesting to see whether some student faces a completely different line than the others, and this information could be sent to the supervisors so they can comment on what is going on in the grading process.
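The statistics described above are straightforward once the review times are logged. This is a minimal sketch, assuming the times (in days) are recorded per student; the data and the "more than twice the median" threshold are invented for illustration.

```python
from statistics import mean, median

# Hypothetical log: how many days each student's thesis sat on the desk.
review_days = {"A": 14, "B": 12, "C": 16, "D": 13, "E": 60}

values = list(review_days.values())
threshold = 2 * median(values)  # a simple, illustrative cutoff
print(f"median: {median(values)}, mean: {mean(values):.1f}")

# Flag any student whose thesis sat far longer than the common line.
for student, days in review_days.items():
    if days > threshold:
        print(f"student {student}: {days} days, clearly outside the common line")
```

A report like this is exactly what could be sent to the supervisors: not an accusation, just a note that one student's process looks very different from the others.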