Automatic Essay Scoring Bahasa Indonesia dengan Kombinasi Multi-Scale Essay Representation Berbasis BERT dan Handcrafted Linguistic Features
Indonesian Automatic Essay Scoring through a Combination of BERT-Based Multi-Scale Essay Representation and Handcrafted Linguistic Features

Date: 2024
Author: Aldeena, Muhammad Iqbal
Advisor(s): Amalia; Tarigan, Jos Timanta
Abstract
An essay is a type of question that requires a freely constructed answer of one or more sentences. Essay tests can be used to measure cognitive abilities: in an essay, a student can describe, explain, demonstrate, or build an understanding of a particular topic, and must think, organize, and write while doing so. All of these are abilities that develop as a student writes essays, and this is where Automated Essay Scoring (AES) comes in. AES refers to the use of computer programs to assess essays against predetermined criteria such as linguistic correctness, syntax, and topic relevance. This study builds an AES model for Indonesian that also incorporates handcrafted linguistic features. The datasets used are the Automated Student Assessment Prize (ASAP) dataset and UKARA. The system processes the output of an Indonesian BERT-based language model in order to assess essays in greater depth. Using Mean Squared Error (MSE) in combination with other loss functions, a Quadratic Weighted Kappa (QWK) of 0.693 and an F1 score of 0.77 for UKARA were obtained. These results indicate that multi-scale representation can be applied to Indonesian texts, for both long and short essays.
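As an illustration of the evaluation metric reported above, the sketch below computes Quadratic Weighted Kappa for integer essay scores. It is a generic, minimal implementation of the standard QWK formula rather than code from the thesis; the variable names, rating range handling, and the small usage example are assumptions made for illustration.

```python
import numpy as np

def quadratic_weighted_kappa(y_true, y_pred, min_rating=None, max_rating=None):
    """Quadratic Weighted Kappa between two vectors of integer ratings."""
    y_true = np.asarray(y_true, dtype=int)
    y_pred = np.asarray(y_pred, dtype=int)
    if min_rating is None:
        min_rating = int(min(y_true.min(), y_pred.min()))
    if max_rating is None:
        max_rating = int(max(y_true.max(), y_pred.max()))
    n_ratings = max_rating - min_rating + 1

    # Observed agreement: confusion matrix over the rating scale.
    observed = np.zeros((n_ratings, n_ratings))
    for t, p in zip(y_true, y_pred):
        observed[t - min_rating, p - min_rating] += 1

    # Expected agreement: outer product of the marginal histograms.
    hist_true = observed.sum(axis=1)
    hist_pred = observed.sum(axis=0)
    expected = np.outer(hist_true, hist_pred) / len(y_true)

    # Quadratic disagreement weights (0 on the diagonal, growing with distance).
    idx = np.arange(n_ratings)
    weights = (idx[:, None] - idx[None, :]) ** 2 / (n_ratings - 1) ** 2

    return 1.0 - (weights * observed).sum() / (weights * expected).sum()

# Perfect agreement yields QWK = 1.0; chance-level agreement yields about 0.
print(quadratic_weighted_kappa([1, 2, 3, 4], [1, 2, 3, 4]))
```

The same value can also be obtained with scikit-learn's cohen_kappa_score(y_true, y_pred, weights="quadratic"), which is a common choice when evaluating AES systems on datasets such as ASAP.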
Collections
- Undergraduate Theses