
WAISE 2018 - First International Workshop on Artificial Intelligence Safety Engineering (WAISE)

Date: 2018-09-18

Deadline: 2018-05-22

Venue: Västerås, Sweden

Website: https://www.waise2018.com/scope-topics

Topics/Call for Papers

Research, engineering, and regulatory frameworks are needed to realize the full potential of Artificial Intelligence (AI): they must guarantee a standard level of safety and settle issues such as compliance with ethical standards and liability for accidents involving, for example, autonomous cars. Designing AI-based systems that operate in proximity to and/or in collaboration with humans implies that current safety engineering and legal mechanisms need to be revisited, to ensure that individuals (and their property) are not harmed and that the desired benefits outweigh the potential unintended consequences.

Approaches to AI safety range from the purely theoretical (moral philosophy, ethics) to the purely practical (engineering). Combining philosophy and theoretical science with applied science and engineering is essential to creating safe machines. This calls for an interdisciplinary approach that covers the technical (engineering) aspects of how to create, test, deploy, operate, and evolve safe AI-based systems, as well as broader strategic, ethical, and policy issues.

Increasing levels of AI in "smart" sensory-motor loops allow intelligent systems to perform in increasingly dynamic, uncertain, and complex environments with increasing degrees of autonomy, with humans progressively removed from the control loop. Adaptation to the environment is achieved by Machine Learning (ML) methods rather than by more traditional engineering approaches, such as system modelling and programming. Certain ML methods have recently proved especially promising, such as deep learning, reinforcement learning, and their combination. However, the inscrutability, or opaqueness, of the statistical models for perception and decision-making built with these methods poses yet another challenge. Moreover, the combination of autonomy and inscrutability in these AI-based systems is particularly challenging in safety-critical applications, such as autonomous vehicles, personal care or assistive robots, and collaborative industrial robots.

The WAISE workshop is intended to explore new ideas on safety engineering for AI-based systems, ethically aligned design, and regulation and standards for AI-based systems. In particular, WAISE will provide a forum for thematic presentations and in-depth discussions of safe AI architectures, bounded morality, ML safety, safe human-machine interaction, and safety considerations in automated decision-making systems, in a way that makes AI-based systems more trustworthy, accountable, and ethically aligned.

WAISE aims to bring together experts, researchers, and practitioners from diverse communities, such as AI, safety engineering, ethics, standardization and certification, robotics, cyber-physical systems, and safety-critical systems, and from application domains such as automotive, healthcare, manufacturing, agriculture, aerospace, critical infrastructures, and retail.
Contributions are sought in (but are not limited to) the following topics:

Avoiding negative side effects
Safety in AI-based system architectures: safety by design
Runtime monitoring and (self-)adaptation of AI safety
Safe machine learning and meta-learning
Safety constraints and rules in decision making systems
Continuous Verification and Validation (V&V) of safety properties
AI-based system predictability
Model-based engineering approaches to AI safety
Ethically aligned design of AI-based systems
Machine-readable representations of ethical principles and rules
The values alignment problem
The goals alignment problem
Accountability, responsibility and liability of AI-based systems
Uncertainty in AI
AI safety risk assessment and reduction
Loss of values and the catastrophic forgetting problem
Confidence, self-esteem and the distributional shift problem
Reward hacking and training corruption
Weaponization of AI-based systems
Self-explanation, self-criticism and the transparency problem
Simulation for safe exploration and training
Human-machine interaction safety
AI applied to safety engineering
Zero-sum and the trolley problem
Regulating AI-based systems: safety standards and certification
Human-in-the-loop and the scalable oversight problem
Algorithmic bias and AI discrimination
AI safety education and awareness
Experiences in AI-based safety-critical systems, including industrial processes, health, automotive systems, robotics, critical infrastructures, among others
