IMS 2017 - AAAI Spring Symposium - Interactive Multi-sensory Object Perception for Embodied Agents
Date: 2017-03-27 - 2017-03-29
Deadline: 2016-11-11
Venue: Stanford University, Palo Alto, California, USA
Topics/Call for Papers
AAAI Spring Symposium - Interactive Multi-sensory Object Perception for Embodied Agents
March 27-29, 2017
Stanford University, Palo Alto, California
Website: http://www.cs.utexas.edu/~jsinapov/AAAI-SSS-2017/
Deadline extended: November 11, 2016 (due to multiple requests)
--- Symposium Summary ---
For a robot to perceive object properties with multiple sensory modalities, it needs to interact with the object through action. This interaction requires that the agent be embodied (i.e., that the robot interact with the environment through a physical body within that environment). A major challenge is getting a robot to interact with a scene quickly and efficiently while using multiple sensory modalities to perceive and reason about objects. Psychology and cognitive science have demonstrated that humans rely on multiple senses (e.g., auditory, haptic, and tactile input) in a broad variety of contexts, ranging from language learning to learning manipulation skills.
How do we collect large datasets from robots exploring the world with multi-sensory inputs, and what algorithms can we use to learn and act on this data? While the community has focused on how to deal with visual information (e.g., deep learning for visual features), there have been far fewer explorations of how to utilize and learn from the very different scales of data collected by very different sensors.
The goal of this symposium is to bring together researchers from the fields of AI, Robotics, and Psychology who share the goal of advancing the state of the art in robotic perception of objects. The symposium will consist of invited speakers, poster and breakout sessions, and panels over two days.
--- Submission/Topics of Interest ---
We welcome abstract submissions describing prior or ongoing work related to multi-sensory perception and embodied agents. Submissions should be 2-4 pages in length, plus an extra page for references, in AAAI format. Topics include (but are not limited to):
Multi-sensory perception
Psychology of sensory inputs
Robot learning using multi-sensory data
Representations of multi-sensory perception and action
Real-time perception and decision making using multi-sensory input
Learning algorithms for auditory, visual, and haptic data
Multi-sensory data collection
Algorithms for embodied agents to interact with the real world
Submission deadline: November 11, 2016
For additional submission details, please visit the symposium website (listed above).
--- Organizing Committee ---
Vivian Chu, Georgia Institute of Technology
Dr. Jivko Sinapov, University of Texas at Austin
Dr. Jeannette Bohg, MPI for Intelligent Systems
Dr. Sonia Chernova, Georgia Institute of Technology
Dr. Andrea L. Thomaz, University of Texas at Austin