dc.description.abstract | School communities interact dynamically within a school system and can therefore be regarded as agents in a multi-agent system. Relationships between agents in a multi-agent system must be managed in order to achieve coordinated behavior. Such coordination can be achieved by managing the role each agent plays in developing knowledge, attitudes, and practices, the determinants that shape environmental behavior. Managing these relationships, however, is a complex task because it involves uncertainty. One feasible approach is to model and explicitly describe the relationships between agents. Markov models based on influence diagrams are well suited to modeling coordination mechanisms, because they allow agents both to express and to take into account how their activities influence the activities of other agents in pursuit of the intended environmental behavioral goals. This work extends the framework of partially observable Markov decision processes (POMDPs) to multi-agent settings by incorporating the notion of agent models into the state space. Agents maintain beliefs about the physical state of the environment and about the models of other agents, and they use Bayesian updating to maintain these beliefs over time. The solution maps belief states to actions.
Models of other agents can include their belief states and are related to the agent types considered in games of incomplete information. We express agents’ autonomy by postulating that their models cannot be manipulated or directly observed by other agents. We show that important properties of POMDPs, such as convergence of value iteration, the rate of convergence, and the piecewise linearity and convexity of the value function, carry over to our framework.
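To illustrate what piecewise linearity and convexity means here (again in assumed standard notation, with \Gamma a finite set of alpha vectors), the value function admits the form

  V(b) = \max_{\alpha \in \Gamma} \sum_{s} \alpha(s) \, b(s),

a maximum over finitely many linear functions of the belief; the claim is that this form is preserved when b ranges over beliefs about interactive states.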
Our approach complements the more traditional approach to interactive settings that uses Nash equilibria as the solution paradigm. A weakness of equilibria is that they may not be unique and that they do not capture off-equilibrium behavior. We avoid these drawbacks at the expense of having to represent, process, and continually revise the models of other agents. Because agents’ beliefs may be arbitrarily nested, the optimal solution to a decision-making problem can only be computed asymptotically. However, approximate belief updates and approximately optimal plans can be computed.
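One plausible route to such approximations (an assumption about technique, not a claim made in this abstract) is to truncate the nesting of beliefs at a finite depth k: a level-k belief ranges over the physical state and the other agents’ level-(k-1) models, bottoming out at level 0 with beliefs over the physical state alone, and the computed solutions approach the asymptotic optimum as k grows.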
We illustrate our framework in a simple application domain, and we show examples of belief updates and value functions. | en_US |