In a distributed system, a number of individually acting agents coexist. To achieve a common goal, coordinated cooperation between the agents is crucial. Many real-world applications lend themselves to formulation in terms of spatially or functionally distributed entities; job-shop scheduling is one such application. Multi-agent reinforcement learning (RL) methods allow cooperative policies to be acquired automatically, based solely on a specification of the desired joint behavior of the whole system. However, decentralizing the control and observation of the system among independent agents has a significant impact on problem complexity. The author, Thomas Gabel, addresses the intricacy of learning and acting in multi-agent systems through two complementary approaches. He identifies a subclass of general decentralized decision-making problems that features provably reduced complexity. Moreover, he presents various novel model-free multi-agent RL algorithms that are capable of quickly obtaining approximate solutions in the vicinity of the optimum. All proposed algorithms are evaluated on various established scheduling benchmark problems.
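To make the core difficulty concrete, the following minimal sketch (not one of the book's algorithms; the payoff matrix and hyperparameters are purely illustrative) shows two independent Q-learners that must coordinate on a joint action in a cooperative matrix game. Each agent observes only its own action and the shared reward, mirroring the decentralized-control setting the text describes: miscoordinated joint actions earn nothing, so the agents must converge on a common choice without communicating.

```python
import random

# Illustrative cooperative matrix game: both agents receive the same
# team reward. Coordinated joint actions (0,0) and (1,1) pay 10,
# miscoordination pays 0.
REWARD = [[10, 0],
          [0, 10]]

def train(episodes=5000, alpha=0.2, eps=0.2, seed=0):
    rng = random.Random(seed)
    # One stateless Q-table per agent; neither agent sees the other's
    # Q-values or chosen action (fully decentralized learning).
    q = [[0.0, 0.0], [0.0, 0.0]]
    for _ in range(episodes):
        acts = []
        for agent in range(2):
            if rng.random() < eps:           # epsilon-greedy exploration
                acts.append(rng.randrange(2))
            else:                            # greedy w.r.t. own Q only
                acts.append(0 if q[agent][0] >= q[agent][1] else 1)
        r = REWARD[acts[0]][acts[1]]         # shared (team) reward
        for agent in range(2):               # independent Q updates
            a = acts[agent]
            q[agent][a] += alpha * (r - q[agent][a])
    return q

q = train()
greedy = [0 if q[i][0] >= q[i][1] else 1 for i in range(2)]
print("joint greedy action:", greedy,
      "reward:", REWARD[greedy[0]][greedy[1]])
```

Because each agent updates its table against a moving target (the other agent's evolving policy), convergence guarantees that hold for single-agent Q-learning no longer apply in general; in this simple game the learners nonetheless settle on one of the two coordination equilibria, which is exactly the kind of approximate, near-optimal joint behavior the blurb refers to.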
Thomas Gabel, Dr. rer. nat., studied Computer Science at the University of Kaiserslautern. Subsequently, he worked as a scientific researcher at the University of Osnabrück, focusing on learning in multi-agent systems, reinforcement learning, as well as knowledge management and case-based reasoning. He received his doctoral degree in 2009.