AI 2016 - Symposium on AI AND THE MITIGATION OF HUMAN ERROR: ANOMALIES, TEAM METRICS AND THERMODYNAMICS
Topics/Call for Papers
AI has the potential to mitigate human error by reducing car accidents, airplane accidents, and other mistakes that humans make, whether deliberately or inadvertently. One worry about this bright future is that jobs may be lost; another is the loss of human control. Despite the loss of all aboard several commercial airliners in recent years, airline pilots reject being replaced by AI.
An even greater concern is the existential threat that AI may pose to humanity, a worry raised by physicist Stephen Hawking, entrepreneur Elon Musk, and Microsoft co-founder Bill Gates. Alex Garland, the director of the recent film Ex Machina, has countered their warnings.
Across a wide range of occupations and industries, human error is the primary cause of accidents. In general aviation, the FAA has attributed accidents primarily to stalls and controlled flight into terrain. Exacerbating these sources of human error, safety is an area where organizations often skimp to save money; this diminution of safety, coupled with human error, led to the 2010 explosion that doomed the Deepwater Horizon in the Gulf of Mexico. Human error is also a top safety risk in the management of civilian air traffic control, and it was the cause attributed to the sinking of Taiwan's research vessel Ocean Researcher V last fall. Human behavior is likewise a leading cause of cyber breaches.
Humans cause accidents through a lack of situational awareness, convergence on incomplete beliefs, or emotional decision making (for example, the Iranian Airbus, Iran Air Flight 655, erroneously shot down by the USS Vincennes in 1988). Other factors contributing to human error include poor problem diagnosis; poor planning, communication, and execution; and poor organizational functioning.
We want to explore the human role in causing accidents and the use of AI in mitigating human error; in reducing problems within teams, such as suicide (for example, the Germanwings copilot, Lubitz, who killed all 150 aboard his commercial aircraft); and in reducing mistakes by military commanders (for example, the 2001 sinking of the Japanese training ship Ehime Maru by the USS Greeneville).
In this symposium, we seek a rigorous view of AI and its possible applications: to mitigate human error, to find anomalies in human operations, and to determine, when teams have gone awry, whether and how AI should intercede in the affairs of humans.
Papers should address "AI and the mitigation of human error," specify the relevance of their topic to AI, or state how AI can be used to address their issue.
Organizing Committee
Ranjeev Mittu, ranjeev.mittu-AT-nrl.navy.mil; Gavin Taylor, taylor-AT-usna.edu; Don Sofge, Naval Research Laboratory, don.sofge-AT-nrl.navy.mil; and W. F. Lawless, Technical Consultant, w.lawless-AT-icloud.com
Other CFPs
- Symposium on CHALLENGES AND OPPORTUNITIES IN MULTIAGENT LEARNING FOR THE REAL WORLD
- Symposium on ENABLING COMPUTING RESEARCH IN SOCIALLY INTELLIGENT HUMAN-ROBOT INTERACTION: A COMMUNITY-DRIVEN MODULAR RESEARCH PLATFORM
- Symposium on ETHICAL AND MORAL CONSIDERATIONS IN NONHUMAN AGENTS
- Symposium on INTELLIGENT SYSTEMS FOR SUPPORTING DISTRIBUTED HUMAN TEAMWORK
- Symposium on OBSERVATIONAL STUDIES THROUGH SOCIAL MEDIA AND OTHER HUMAN-GENERATED CONTENT