Show simple item record

dc.contributor.advisor: Mawengkang, Herman
dc.contributor.advisor: Tulus
dc.contributor.advisor: Sutarman
dc.contributor.author: Nababan, Maxtulus Junedy
dc.date.accessioned: 2024-10-29T06:49:14Z
dc.date.available: 2024-10-29T06:49:14Z
dc.date.issued: 2024
dc.identifier.uri: https://repositori.usu.ac.id/handle/123456789/98417
dc.description.abstract: School communities interact dynamically within a school system and can therefore be regarded as agents in a multi-agent system. Relationships between agents in such a system must be managed in order to achieve coordinated behavior. Coordinated behavior can be achieved by managing the role of agents in developing knowledge, attitudes, and practices, the determinants that shape environmental behavior. Managing these relationships is, however, a complex task because it involves uncertainty. One viable approach is to model and explicitly describe the relationships between agents. Markov models based on influence diagrams are needed to model coordination mechanisms, because they let agents show, and take into account, how their activities influence the activities of other agents in pursuit of the expected environmental behavioral goals. This work extends the framework of partially observable Markov decision processes (POMDPs) to multi-agent settings by incorporating the notion of agent models into the state space. Agents maintain beliefs about the physical state of the environment and about the models of other agents, and they use Bayesian updating to maintain these beliefs over time. Solutions map belief states to actions. Models of other agents may include their belief states and are related to the types of agents considered in games of incomplete information. We express the autonomy of agents by postulating that their models cannot be directly manipulated or observed by other agents. We show that important properties of POMDPs, such as the convergence of value iteration, the rate of convergence, and the piecewise linearity and convexity of value functions, carry over to our framework. Our approach complements the more traditional treatment of interactive settings that uses Nash equilibria as the solution paradigm; equilibria may not be unique and do not capture off-equilibrium behavior. We do this at the expense of having to represent, process, and continually revise the models of other agents. Because agents' beliefs may be arbitrarily nested, the optimal solution to a decision-making problem can only be computed asymptotically. However, approximate belief updates and approximately optimal plans can be computed. We illustrate the framework in a simple application domain and show examples of belief updates and value functions. [en_US]
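The Bayesian belief update the abstract refers to can be sketched for an ordinary discrete single-agent POMDP as follows. This is an illustrative sketch only, not code from the thesis; the model dictionaries, state names, and probabilities are hypothetical.

```python
# Illustrative sketch of the Bayesian belief update described in the
# abstract, for a discrete single-agent POMDP (not code from the thesis;
# all model names and probabilities below are hypothetical).

def belief_update(belief, action, observation, T, O):
    """Posterior belief b'(s') proportional to O(o | s', a) * sum_s T(s' | s, a) * b(s).

    belief: dict state -> probability
    T: dict (state, action) -> dict of next_state -> probability
    O: dict (next_state, action) -> dict of observation -> probability
    """
    next_states = {sn for s in belief for sn in T[(s, action)]}
    posterior = {}
    for s_next in next_states:
        # prediction: propagate the prior belief through the transition model
        pred = sum(belief[s] * T[(s, action)].get(s_next, 0.0) for s in belief)
        # correction: weight by the likelihood of the received observation
        posterior[s_next] = O[(s_next, action)].get(observation, 0.0) * pred
    norm = sum(posterior.values())
    if norm == 0.0:
        raise ValueError("observation has zero probability under the model")
    return {s: p / norm for s, p in posterior.items()}

# Two-state example: a "listen" action leaves the state unchanged and
# yields a noisy observation that is correct 85% of the time.
T = {("good", "listen"): {"good": 1.0}, ("bad", "listen"): {"bad": 1.0}}
O = {("good", "listen"): {"hear_good": 0.85, "hear_bad": 0.15},
     ("bad", "listen"): {"hear_good": 0.15, "hear_bad": 0.85}}
b = belief_update({"good": 0.5, "bad": 0.5}, "listen", "hear_good", T, O)
# b["good"] == 0.85
```

The multi-agent extension described in the abstract enlarges the state space with models of the other agents, so the same predict-and-correct update is applied over that larger space.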
dc.language.iso: id [en_US]
dc.publisher: Universitas Sumatera Utara [en_US]
dc.subject: Markov model [en_US]
dc.subject: Hidden Markov model [en_US]
dc.subject: Multi-agent [en_US]
dc.subject: Influence diagram [en_US]
dc.title: Model Hidden Markov untuk Menyajikan Hubungan Koordinasi Perkembangan Perilaku Pembelajaran di Era Reformasi Industri [en_US]
dc.title.alternative: Hidden Markov Model for Representing Relationships Coordination of Learning Behavior Development in The Era of Industrial Reform [en_US]
dc.type: Thesis [en_US]
dc.identifier.nim: NIM178110003
dc.identifier.nidn: NIDN8859540017
dc.identifier.nidn: NIDN0001096202
dc.identifier.nidn: NIDN0026106305
dc.identifier.kodeprodi: KODEPRODI44002#Ilmu Matematika
dc.description.pages: 83 Pages [en_US]
dc.description.type: Disertasi Doktor [en_US]
dc.subject.sdgs: SDGs 4. Quality Education [en_US]

