
InMAT 2016 - Workshop on Interactions with Mixed Agent Types (InMAT)

Date: 2016-07-09 - 2016-07-15

Deadline: 2016-04-18

Venue: New York City, United States


Website: https://ccc.inaoep.mx/inmat

Topics / Call for Papers

Artificial intelligence is becoming ubiquitous. It is increasingly used in video games, smartphones, and even in our appliances and cars. With these advances comes the urgent need to build software and devices that can reliably interact with other artificially intelligent machines. Such settings have long been hypothesized and studied across fields such as game playing and game theory, multiagent systems, robotics, and machine learning. This workshop calls upon researchers from these areas to assemble and share their perspectives on the problem.
When such agents are situated in the real world, they will most likely encounter agents that deviate from optimality or rationality and whose objectives, learning dynamics, and representation of the world are usually unknown. Consequently, one seeks to design agents that can interact with other agents by making principled assumptions or hypotheses about their rationality, objectives, observability, optimality, and possibly their learning dynamics. Agents might even behave randomly (due to faulty sensors and actuators, or by design), and robust techniques should come into play when dealing with these kinds of uncertainty about their types.
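A minimal sketch (illustrative, not taken from the call) of one such robust technique: maintaining a Bayesian belief over a small, hand-picked set of hypothesized opponent types and updating it from observed actions. The type names, the softmax likelihood, and the toy payoffs are all assumptions made for the example.

# Sketch only: Bayesian belief over hypothesized opponent types.
import math

def softmax_likelihood(payoffs, action, temperature):
    """P(action | type) for a noisily-rational type with the given payoffs."""
    weights = {a: math.exp(u / temperature) for a, u in payoffs.items()}
    return weights[action] / sum(weights.values())

# Hypothesized types (assumptions): near-rational, noisy, and uniformly random.
TYPES = {
    "near_rational": lambda payoffs, a: softmax_likelihood(payoffs, a, 0.1),
    "noisy":         lambda payoffs, a: softmax_likelihood(payoffs, a, 1.0),
    "random":        lambda payoffs, a: 1.0 / len(payoffs),
}

def update_belief(belief, payoffs, observed_action):
    """One Bayes update of P(type) after observing the other agent's action."""
    posterior = {t: belief[t] * lik(payoffs, observed_action)
                 for t, lik in TYPES.items()}
    z = sum(posterior.values())
    return {t: p / z for t, p in posterior.items()}

# Usage: uniform prior, toy payoffs over two actions, one observation.
belief = {t: 1.0 / len(TYPES) for t in TYPES}
opponent_payoffs = {"cooperate": 3.0, "defect": 5.0}
belief = update_belief(belief, opponent_payoffs, "defect")
print(belief)  # probability mass shifts toward the more rational types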
Highlights
The core of this workshop will center on whether single-agent techniques can be extended or adapted to the multiagent setting, and if so, how. Questions of special interest include (but are not limited to) the following:
Is it imperative to learn explicit models of the other agents, or can the other agents be marginalized as part of the environment? (A toy contrast of these two stances is sketched after this list.)
If no assumption is made about the type of agents encountered, is one better off assuming rational (game-theoretic) or optimal (decision-theoretic) models to plan the interactions?
Should exploration to learn the models be performed separately and offline, or together with policy computation (online learning)?
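A minimal sketch (again illustrative, not part of the call) contrasting the two stances in the first question: an agent that marginalizes the other agent into the environment and simply runs stateless Q-learning, versus an agent that keeps an explicit empirical model of the other agent (fictitious-play style) and best-responds to it. The payoff matrix and learning parameters are assumptions for the example.

# Sketch only: implicit (model-free) vs. explicit (opponent-modeling) learners.
from collections import defaultdict
import random

ACTIONS = ["cooperate", "defect"]
PAYOFF = {  # row player's payoff in a toy prisoner's-dilemma-like game
    ("cooperate", "cooperate"): 3, ("cooperate", "defect"): 0,
    ("defect", "cooperate"): 5,    ("defect", "defect"): 1,
}

class ImplicitLearner:
    """Treats the other agent as part of a (non-stationary) environment."""
    def __init__(self, alpha=0.1, epsilon=0.1):
        self.q = defaultdict(float)
        self.alpha, self.epsilon = alpha, epsilon

    def act(self):
        if random.random() < self.epsilon:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q[a])

    def learn(self, my_action, reward):
        # Stateless Q-learning: running average of the reward per action.
        self.q[my_action] += self.alpha * (reward - self.q[my_action])

class ExplicitModeler:
    """Keeps an explicit empirical model of the other agent and best-responds."""
    def __init__(self):
        self.counts = {a: 1 for a in ACTIONS}  # Laplace-smoothed action counts

    def act(self):
        total = sum(self.counts.values())
        freq = {a: c / total for a, c in self.counts.items()}
        expected = lambda mine: sum(freq[o] * PAYOFF[(mine, o)] for o in ACTIONS)
        return max(ACTIONS, key=expected)

    def observe(self, other_action):
        self.counts[other_action] += 1

# Usage: play the two learners against each other for a few rounds.
implicit, explicit = ImplicitLearner(), ExplicitModeler()
for _ in range(100):
    a1, a2 = implicit.act(), explicit.act()
    implicit.learn(a1, PAYOFF[(a1, a2)])
    explicit.observe(a1)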
Topics
multiplayer games and smart AI in games
game theory involving incomplete information about player types
multiagent systems
multiagent reinforcement learning
multiagent planning under partial observability (Markovian models such as (partially observable) Markov decision processes ((PO)MDPs) and their extensions: multiagent (PO)MDPs, HMMs, interactive POMDPs, interactive dynamic influence diagrams, and decentralized (PO)MDPs)
other probabilistic models
robotics
dynamical systems
graphical models and networks
knowledge representation involving interactions
