Show simple item record

dc.contributor.advisor: Suyanto
dc.contributor.advisor: Sihombing, Poltak
dc.contributor.author: Riady, Muhammad Alvin
dc.date.accessioned: 2024-08-28T04:17:20Z
dc.date.available: 2024-08-28T04:17:20Z
dc.date.issued: 2024
dc.identifier.uri: https://repositori.usu.ac.id/handle/123456789/96260
dc.description.abstract: Reinforcement learning is a prominent method for developing artificial intelligence in computer games, but its application to board games, especially traditional Indonesian board games, is still relatively limited. In reinforcement learning, two hyperparameters strongly influence algorithm performance, namely the learning rate and the discount factor, and together they determine the optimal model that produces the best intelligent agent. This research compares two reinforcement learning algorithms, Q-Learning and SARSA, in Javanese chess by measuring each agent's win rate across various combinations of learning rate and discount factor values, and by identifying the combination of these two parameters that best enables an agent to win a match. The test results show that when the Q-Learning algorithm plays against SARSA (or vice versa), SARSA outperforms Q-Learning for agent P1, with a win rate of 58.4% versus 58.1%. For agent P2, Q-Learning outperforms SARSA, with a win rate of 28.9% versus 28.8% of the total win percentage. The optimal parameters for Q-Learning are a learning rate of 0.06 and a discount factor of 0.8 for agent P1, and 0.08 and 0.1 for agent P2. For the SARSA algorithm, the optimal parameters are 0.03 and 0.5 for agent P1, and 0.06 and 0.7 for agent P2. These findings provide valuable insights for designing intelligent agents in board games beyond Javanese chess, thereby contributing to the advancement of artificial intelligence in the context of these games. [en_US]
dc.language.iso: id [en_US]
dc.publisher: Universitas Sumatera Utara [en_US]
dc.subject: Artificial Intelligence [en_US]
dc.subject: Machine Learning [en_US]
dc.subject: Reinforcement Learning [en_US]
dc.subject: Q-Learning [en_US]
dc.subject: SARSA [en_US]
dc.subject: Javanese Chess [en_US]
dc.subject: SDGs [en_US]
dc.title: Komparasi Performa Algoritma Q-Learning dan Algoritma SARSA dalam Perancangan Agen pada Permainan Catur Jawa [en_US]
dc.title.alternative: Comparison of The Performance of The Q-Learning Algorithm and The SARSA Algorithm in Agent Design for Javanese Chess Game [en_US]
dc.type: Thesis [en_US]
dc.identifier.nim: NIM217038019
dc.identifier.nidn: NIDN0013085903
dc.identifier.nidn: NIDN0017036205
dc.identifier.kodeprodi: KODEPRODI55101#Teknik Informatika
dc.description.pages: 146 Pages [en_US]
dc.description.type: Tesis Magister [en_US]
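
As context for the Q-Learning and SARSA comparison described in the abstract above, the sketch below shows the two tabular update rules and the role of the learning rate (alpha) and discount factor (gamma) that the thesis sweeps. This is a minimal, hypothetical illustration: the state/action names, reward values, and epsilon-greedy settings are assumptions for demonstration only and are not taken from the thesis implementation.

    import random
    from collections import defaultdict

    # Hyperparameters of the kind the thesis varies; these particular values
    # are illustrative, not the thesis's reported optima.
    ALPHA = 0.06    # learning rate
    GAMMA = 0.8     # discount factor
    EPSILON = 0.1   # exploration rate for the epsilon-greedy policy

    Q = defaultdict(float)  # Q[(state, action)] -> estimated return

    def epsilon_greedy(state, actions):
        """Pick a random action with probability EPSILON, else the greedy one."""
        if random.random() < EPSILON:
            return random.choice(actions)
        return max(actions, key=lambda a: Q[(state, a)])

    def q_learning_update(state, action, reward, next_state, next_actions):
        """Q-Learning (off-policy): bootstrap from the best next action."""
        best_next = max((Q[(next_state, a)] for a in next_actions), default=0.0)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

    def sarsa_update(state, action, reward, next_state, next_action):
        """SARSA (on-policy): bootstrap from the action actually chosen next."""
        target = reward + GAMMA * Q[(next_state, next_action)]
        Q[(state, action)] += ALPHA * (target - Q[(state, action)])

    # Illustrative single transition; state and move names are placeholders.
    s, actions = "board-start", ["move-a", "move-b"]
    a = epsilon_greedy(s, actions)
    q_learning_update(s, a, reward=1.0, next_state="board-next", next_actions=actions)
    sarsa_update(s, a, reward=1.0, next_state="board-next",
                 next_action=epsilon_greedy("board-next", actions))

The practical difference between the two algorithms is the bootstrap target: Q-Learning uses the maximum value over next actions (off-policy), while SARSA uses the action its own exploratory policy actually takes next (on-policy), which is why the two agents can reach different win rates under the same learning rate and discount factor.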

