Robots 2018 - Workshop on Robots, Morality, and Trust through the Verification Lens
Topics/Call for Papers
Robots are becoming members of our society. Complex algorithms have turned robots into increasingly sophisticated machines with rising levels of autonomy, enabling them to leave behind their traditional workplaces in factories. They have begun to enter our society, which consists of convoluted social rules, relationships, and expectations. Driverless cars, home assistive robots, and unmanned aerial vehicles are just a few examples. As the involvement of such systems in our daily lives increases, their decisions affect us more directly. Therefore, we instinctively expect robots to behave morally and make ethical decisions. For instance, we expect a firefighter robot to follow ethical principles when it is faced with the choice of saving one person’s life over another’s in a rescue mission, and we expect an eldercare robot to take a moral stance when the instructions of its owner conflict with the interests of others (unlike the robot in the movie "Robot & Frank"). Such expectations give rise to the notion of trust in the context of human-robot relationships and to questions such as “how can I trust a driverless car to take my child to school?” and “how can I trust a robot to help my elderly parent?”
Formal methods, specifically computer-aided verification and synthesis, have been receiving a great deal of attention from the robotics community in recent years. These methods are being adopted by this community to provide guarantees on the correctness of robot behaviors. Therefore, they can also potentially provide an avenue for approaching the above questions, which introduces a challenge for the verification community. A simple example of such a challenge is the question of trust in a verification result of “yes”. That is, how can an inexperienced human user trust that the decision suggested by the verification algorithm is correct? In this case, a simple presentation of the end result, e.g., “yes” or “no”, is no longer sufficient. Inexperienced users might require explanations and an understanding of the methods and underlying formalism.
Therefore, in order to verify or to design synthesis algorithms that guarantee morally-aware and ethical decision making, and hence create trustworthy robots, we need to understand the conceptual theory of morality in machine autonomy in addition to understanding, formalizing, and expressing trust itself. This is a tremendously challenging (yet necessary) task because it involves many aspects, including philosophy, sociology, psychology, cognitive reasoning, logic, and computation. In this workshop, we aim to continue the discussions initiated in the workshops on "Social Trust in Autonomous Robots" and "Morality and Social Trust in Autonomous Robots" at the Robotics: Science and Systems (RSS) conferences in 2016 and 2017, respectively, specifically through the lens of computer-aided verification, to shed light on these multifaceted concepts and notions from various perspectives through a series of talks and panel discussions.
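To make the point about bare “yes”/“no” verification answers concrete, the following minimal sketch (not part of the workshop material; the toy transition system and function names are purely illustrative) shows a reachability check that returns a witness trace instead of only a boolean, giving a non-expert user a concrete artifact to inspect.

# A minimal, hypothetical sketch: explicit-state reachability checking over a
# toy transition system. The point is that returning a witness path, rather
# than a bare "yes"/"no", gives a human user something to inspect and thus a
# basis for trust in the verification result.

from collections import deque

def find_witness(initial, transitions, is_goal):
    """Breadth-first search for a state satisfying is_goal.

    Returns a witness path (list of states) if one exists, else None.
    """
    queue = deque([[initial]])
    visited = {initial}
    while queue:
        path = queue.popleft()
        state = path[-1]
        if is_goal(state):
            return path                      # explanation: a concrete trace
        for nxt in transitions.get(state, []):
            if nxt not in visited:
                visited.add(nxt)
                queue.append(path + [nxt])
    return None                              # a bare "no" -- harder to trust

# Hypothetical toy model: can the robot reach an "unsafe" state?
model = {"idle": ["moving"], "moving": ["charging", "unsafe"], "charging": ["idle"]}
witness = find_witness("idle", model, lambda s: s == "unsafe")
print(witness)   # ['idle', 'moving', 'unsafe'] -- a trace the user can inspect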