Robots have been piloted in several large-scale exams, exploring automatic scoring of open-ended subjects such as composition

Reporter: Xu Diwei; Intern: Li Yan

[Video: During this year's high school entrance exam, Xiangyang City introduced an intelligent online marking system. Source: Leiyang Radio and Television Station website, 02:51]

For every major exam, grading is a crucial step, yet it consumes a great deal of time and effort. As artificial intelligence evolves, machine grading technology has become increasingly sophisticated in recent years.

Recently, staff from the University of Science and Technology told Xinhua News that, organized by the Ministry of Education's Examination Center, intelligent grading technology has been piloted and verified several times in large-scale exams across many provinces nationwide, including college entrance exams, adult college entrance exams, and academic proficiency tests.

In the 2017 high school entrance exam in Hubei Province, Xiangyang City took the lead in introducing the intelligent evaluation system. Liu Chaozhi, dean of the Municipal Education Examination Institute, told reporters, "Compared with manual grading, intelligent grading has the advantage of speed and can make up for deficiencies in handling identical and blank papers."

Piloted and verified in multiple large-scale exams

In March 2016, the Ministry of Education's Examination Center and the University of Science and Technology jointly established a laboratory to research artificial intelligence technology in areas such as intelligent grading, test creation, and assessment evaluation.

The University of Science and Technology recently told Xinhua News that, under the Examination Center's organization, its intelligent grading technology has undergone several large-scale pilot verifications at different academic levels, including the College English Test Band 4 and Band 6, as well as college entrance exams, adult college entrance exams, and academic proficiency tests in many provinces. The results show that computer scoring has reached the level of on-site examiners and fully meets the needs of large-scale exams.

In the past, analyzing hundreds of thousands or even millions of exam papers required so much manpower as to be impractical. With precise image recognition and massive text retrieval technology, however, all exam papers can now be reviewed quickly, similar texts can be pinpointed, and potentially problematic answers can be extracted and flagged promptly.

According to the Xiangyang Evening News, unlike grading in previous years, the 2017 high school entrance exam in Xiangyang, Hubei Province, was the first to use the intelligent evaluation system. A technician at the marking site said the system can analyze workloads, list the total number of papers from each grading source, and monitor the quality of each teacher's grading.
Liu Chaozhi said that with the data from intelligent grading, the score on each question, the city-wide average, which knowledge points students have mastered well, and which areas of instruction need improvement can all be turned into an educational diagnostic report, which benefits both teachers' teaching and students' learning. "Compared with manual grading, intelligent grading not only speeds up the grading process but also compensates for shortcomings in handling identical and blank papers."

According to Gong Xun of the Xiangyang City Education Examination Institute, the intelligent grading system covers most model essays; with the intelligent system, massive data can be searched to determine accurately whether a model essay has been copied. On July 19, Liu Chaozhi told Xinhua News that it would take more time before further details could be disclosed.

The University of Science and Technology told Xinhua News that intelligent grading relies on text recognition technology based on deep neural network learning, which has reached a practical level in recognizing handwritten Chinese and English characters. Applied in formal exams, the technology can assist manual grading, reduce the number of staff required, mitigate the effects of fatigue and emotion on human graders, and further improve the efficiency, accuracy, and fairness of grading, amounting to a major transformation for the whole industry. In addition, the large volume of accurate analytical data generated once all candidates' papers have been graded electronically provides powerful material for subsequent research on teaching and learning, and opens up new applications for scoring in future exams, such as closer integration of intelligent scoring with real classrooms.

"Some technical achievements from major national projects can be used for college entrance exam grading, but the fundamental goal is to introduce artificial intelligence and push exam grading into the 3.0 era," Wu Xiaoru, president of the University of Science and Technology, told Xinhua News in June. "The 1.0 era of grading was pen-and-paper review. In the 2.0 era, grading was organized online and machines automatically scored some objective questions. In the age of artificial intelligence, even subjective questions can be scored automatically."

Machine grading of subjective questions is no longer a dream

Exams generally consist of two parts: objective questions and subjective questions. With answer sheets and scanners, all objective questions can be machine-scored, which not only greatly speeds up grading but also improves accuracy.

Since the 1960s, many experts and scholars abroad have devoted themselves to research on machine scoring of subjective questions, and various automatic scoring systems have emerged, such as the E-rater system used in American MBA admissions and TOEFL exams. Most of these systems, however, target second-language compositions, that is, compositions not written in the writer's native language. Grading compositions written in students' native language requires judgment at a higher level, such as the literary quality of the writing, the cohesion of the text, and the overall conception.
In November 2015, the University of Science and Technology's intelligent grading technology was successfully piloted in Anqing and Hefei. Analysis of the human-machine grading results showed that the computer reached or surpassed human graders in terms of score agreement rate, average score difference, correlation, and the proportion of scores closer to the arbitration score. Machine grading of subjective questions is no longer a mere fantasy.

So on what principle and basis does a machine grade subjective questions that have no single objective criterion? Wu Xiaoru explained that the essential difference between machine grading and manual grading lies in their working mechanisms: the machine reaches its decisions through statistics, reasoning, and judgment, which differs from the human thought process.

In the grading process, the machine learns intelligently. After a group of experts has graded roughly 500 to 1,000 papers, the machine learns the grading patterns in those papers and forms a model that covers the remaining papers well; it then grades the rest automatically according to that model.

As for the evaluation criteria, a group of highly qualified experts is first selected, and the average score they give a set of exam papers is taken as the reference standard. The machine's final scores, along with the scores of other graders, are then compared with this expert average. The closer and more strongly correlated the machine's scores are to the expert average, the better its grading is judged to be.

"There are some very simple or formulaic answer patterns that are indeed easy to game, but judging from the many applications so far, there is no single trick that reliably deceives the machine," Wu Xiaoru said. "It is like AlphaGo: in Go, you cannot beat it just by finding some objective, standard routine."

Wu Xiaoru added that the machine will single out distinctive, creative papers and hand them over for manual review. Papers that score poorly because of low-level mistakes but that contain fresh ideas also need to be judged by on-site examiners and experts.

Wu Xiaoru said that machine grading of subjective questions has in fact been under verification for a long time. "Many education experts, frontline teachers, and principals initially disagreed with machine grading, but after comparing the results on site, these experts eventually acknowledged that the machine does better than manual grading."
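The learn-then-score workflow and the evaluation criteria described above (experts grade a sample of papers, the machine fits a model to them, and its scores are compared with the experts' average) can be sketched roughly as follows. The synthetic features, the Ridge regressor, and the within-one-point agreement rate are illustrative assumptions; the article does not disclose the laboratory's actual model or features.

```python
# A rough sketch of the learn-then-score workflow and the evaluation against
# an expert average. Features, model, and thresholds are assumptions only.
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
features = rng.normal(size=(2000, 8))           # stand-in essay features
# Synthetic "expert average" scores: a hidden linear trend plus noise.
expert_avg = features @ rng.normal(size=8) + rng.normal(scale=0.5, size=2000)

train = slice(0, 800)        # a sample in the 500-1,000 range graded by experts
rest = slice(800, None)      # papers left for the machine
model = Ridge().fit(features[train], expert_avg[train])
machine_scores = model.predict(features[rest])

# Criteria mentioned in the article: closeness and correlation to the expert average.
corr, _ = pearsonr(machine_scores, expert_avg[rest])
mean_abs_diff = np.mean(np.abs(machine_scores - expert_avg[rest]))
within_one_point = np.mean(np.abs(machine_scores - expert_avg[rest]) <= 1.0)
print(f"correlation={corr:.3f}  mean |diff|={mean_abs_diff:.3f}  "
      f"within 1 point={within_one_point:.2%}")
```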
Exploring automatic composition scoring

In recent years, the most important focus of research on machine scoring of subjective questions has been the Chinese composition scoring technology developed by the HIT-Xinfei Joint Laboratory. Scoring a composition involves strongly subjective judgments about the text: which dimensions should the machine assess, and how can they be quantified?

According to the researchers, just as teachers across the country grade the Chinese composition in the high school and college entrance exams against a uniform, rigorous set of standards, the machine grades compositions by first learning that standard and then applying it. That is, teachers first establish a common framework for judging the quality of a composition across dimensions such as handwriting neatness, vocabulary richness, sentence fluency, literary quality, textual structure, and conception.

Afterwards, the machine uses algorithms to learn from a small number of manually scored samples and derives the composition scoring criteria from them. For example, suppose an exam has 2,000 papers. Starting from the first paper, the machine learns the teachers' grading approach; after learning from about 200 papers, it can take over from the human graders and automatically score the rest.

In the composition scoring system, vocabulary richness and conception are content-related features, while wording, coherence, syntactic correctness, and textual structure are expression-related features. In addition, the technology uses artificial neural networks to build a deep representation of the composition's semantics, so that the article's conception can be grasped from a macro perspective.

Each of these criteria needs sophisticated technology behind it. For example, handwriting recognition is used to judge handwriting neatness: the handwritten characters in the scanned image are automatically converted into text, and the recognition confidence serves as an indicator of neatness. For another example, to determine whether an essay is off topic, keywords are first extracted from the prompt and expanded around its topic, keywords are likewise extracted from the essay, and the similarity between the two sets of keywords is calculated. A topic model can also be trained on large-scale exam data to obtain a global topic distribution, which is then compared with the topic distribution of the essay under review.

The University of Science and Technology, which participates in the national "863 Program" (the National High-Tech Research and Development Program), said that as artificial intelligence develops, not only open-ended compositions but even subjective questions in politics, history, and geography will eventually be scored automatically by machines. Once automatic machine grading becomes a reality, teachers will have more time and energy to devote to research on teaching methods and approaches, providing students with a higher-quality and more comprehensive education.
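The neatness check described in this section converts handwritten characters to text and treats the recognizer's confidence as a neatness indicator. The sketch below illustrates that idea with a generic convolutional classifier in PyTorch; the architecture, input size, and character set (the 3,755 level-1 GB2312 characters) are assumptions for illustration, not details of the laboratory's actual recognizer.

```python
# A generic sketch: average top-1 recognition probability as a neatness proxy.
# The untrained network here only illustrates the shape of the computation.
import torch
import torch.nn as nn

class CharRecognizer(nn.Module):
    def __init__(self, num_classes=3755):      # 3,755 level-1 GB2312 characters (assumption)
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(64 * 16 * 16, num_classes)

    def forward(self, x):                       # x: (batch, 1, 64, 64) character crops
        return self.classifier(self.features(x).flatten(1))

def neatness_score(model, char_crops):
    """Average top-1 recognition probability over a composition's characters."""
    with torch.no_grad():
        probs = torch.softmax(model(char_crops), dim=1)
    return probs.max(dim=1).values.mean().item()

model = CharRecognizer()                        # untrained, for shape illustration only
crops = torch.randn(16, 1, 64, 64)              # 16 segmented character images
print(neatness_score(model, crops))
```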
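The off-topic check described in this section, which compares keywords drawn from the prompt with keywords drawn from the essay, might look roughly like the following. Character n-gram TF-IDF and cosine similarity stand in for the laboratory's own keyword expansion and topic models, which the article does not describe in detail, and the example texts are invented.

```python
# A rough sketch of scoring how close an essay stays to its prompt.
# Representation and example data are assumptions, not the deployed method.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def topic_similarity(prompt, essay):
    """Cosine similarity between character n-gram TF-IDF vectors of prompt and essay."""
    vectors = TfidfVectorizer(analyzer="char", ngram_range=(1, 3)).fit_transform([prompt, essay])
    return cosine_similarity(vectors)[0, 1]

prompt = "My hometown in spring"
on_topic = "In spring my hometown turns green and the river thaws."
off_topic = "The chemistry experiment requires careful measurement of acid."
print(topic_similarity(prompt, on_topic), topic_similarity(prompt, off_topic))
```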
