IAAI'09 2009 - 21st International Conference on Innovative Applications of Artificial Intelligence (IAAI'09)
Topics/Call for Papers
The Twenty-First Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-09) will focus on successful applications of AI technology. The conference will use technical papers, invited talks, and panel discussions to explore issues, methods, and lessons learned in the development and deployment of AI applications; and to promote an interchange of ideas between basic and applied AI.
IAAI-09 will consider papers in two tracks: (1) emerging applications or methodologies and (2) deployed application case studies. Submissions should clearly identify which track they are intended for, as the two tracks are judged on different criteria. Applications are defined as deployed once they are in production use by their final end users (not the people who created the application) for sufficiently long that experience can be reported (usually greater than three months of use by the end-users). All submissions must be original.
Emerging Application or Methodologies Papers
The goal of the emerging application track is to "bridge the gap" between basic AI research and deployed AI applications by discussing efforts to apply AI tools, techniques, or methods to real-world problems. Emerging application papers cover aspects of AI applications that are not appropriate for deployed application case studies, or applications that are not sufficiently deployed to be submitted as case studies. This track is distinguished from reports of scientific AI research appropriate for the IJCAI-09 Conference in that the objective of the efforts reported here should be the engineering of AI applications.
Emerging application papers may include any aspects of the technology, engineering, or deployment of AI applications, including discussions of prototype applications; performance evaluation of AI applications; ongoing efforts to develop large-scale or domain-specific knowledge bases or ontologies; development of domain or task focused tools, techniques, or methods; evaluations of AI tools, techniques or methods for domain suitability; unsuccessful attempts to apply particular tools, techniques or methods to specific domains (which shed insight on the applicability and limitations of the tool, technique or method); system architectures that work; scalability of techniques; integration of AI with other technologies; development methodologies; validation and verification; lessons learned; social and other technology transition issues.
The following questions will appear on the review form for emerging technology papers. Authors are advised to bear these questions in mind while writing their papers. Reviewers will look for papers that meet at least some (although not necessarily all) of the criteria in each category.
Significance: How important is the problem being addressed? Is it a difficult or simple problem? Is it central or peripheral to a category of applications? Is the tool or methodology presented generally applicable or domain specific? Does the tool or methodology offer the potential for new or more powerful applications of AI?
AI Technology: Does the paper identify AI research needed for a particular application or class of applications? Does the paper characterize the needs of application domains for solutions of particular AI problems? Does the paper evaluate the applicability of an AI tool or methodology for an application domain? Does the paper describe AI technology that could enable new or more powerful AI applications?
Innovation: Does the tool, technique, or method advance the state of the art or state of the practice of AI technology? Does the tool, technique, or method address a new or previously reported problem? If it is a previously reported problem, does the tool, technique, or method solve it in a different, new, more effective, or more efficient way? Does the work use reported AI technologies in a new way? Does the work provide a new perspective on an application domain? Does the work apply AI to a new domain?
Content: Does the paper motivate the need for the tool or methodology? Does the paper adequately describe the task it performs or the problem it solves? Does it provide technical details about the design and implementation of the tool or methodology? Does the paper clearly identify the AI research results on which the tool or methodology depends? Does it relate the tool or methodology to the needs of application domains? Does it provide insights about the use of AI technology in general or for a particular application domain? Does it describe the development process and costs? Does it discuss estimated or measured benefits? Does it detail the evaluation method and results?
Evaluation: Has the tool or methodology been tested on real data? Has it been evaluated by end users? Has it been incorporated into a deployed application? Has it been compared to other competing tools or methods?
Technical Quality: Is the paper technically sound? Does it carefully evaluate the strengths and limitations of its contribution? Are the results described and evaluated? Are its claims backed up? Does it identify and describe relevant previous work?
Clarity: Is the paper clearly written? Is it organized logically? Are there sufficient figures and examples to illustrate the key points? Is the paper accessible to those outside the application domain? Is it accessible to those in other technical specialties?
Deployed Application Case Study Papers
Case-study papers must describe deployed applications with measurable benefits that include some aspect of AI technology. The application needs to have been in production use by its final end users (typically for at least three months). The case study may evaluate either a stand-alone application or a component of a complex system. In addition to the criteria listed above for the Emerging Track papers, the deployed applications will also be evaluated on the following:
Task or Problem Description: Describe the task the application performs or the problem it solves. State the objectives of the application and explain why an AI solution was important. If other solutions were tried and failed, outline these solutions and the reasons for their failure.
Application Description: Describe the application, providing key technical details about design and implementation. What are the system components, what are their functions, and how do they interact? What languages and tools are used in the application? How is knowledge represented? What is the hardware and software environment in which the system is deployed? Provide examples to illustrate how the system is used.
Uses of AI Technology: On what AI research results does the application depend? What key aspects of AI technology allowed the application to succeed? How were the techniques modified to fit the needs of the application? If applicable, describe how AI technology is integrated with other technology. If a commercial tool is used, explain the decision criteria used to select it. Describe any insights gained about the application of AI technology. What AI approaches or techniques were tried and did not work? Why not?
Application Use and Payoff: How long has this application been deployed? Explain how widely, how often, and by whom the application is being used. Also describe the application's payoff. What measurable benefits have resulted from its use? What additional benefits do you expect over time? What impacts has it had on the users' business processes?
Application Development and Deployment: Describe the development and deployment process. How long did they take? How many developers were involved? What were the costs? What were the difficulties, and how were they overcome? What are the lessons learned? What, if any, formal development methods were used?
Maintenance: Describe your experience with and plans for maintenance of the application. Who maintains the application? How often are updates needed? Is domain knowledge expected to change over time? How does the design of the application facilitate updates?
Original papers on the deployment issues in AI applications are welcome, even if other papers on the AI technology have been presented at or submitted to other conferences. We encourage updates on applications that have been in use for an extended period of time (i.e., multiple years). Each of the accepted deployed application papers will receive the IAAI "Innovative Application" Award.
Timetable for Authors
December 1, 2008 – January 20, 2009: Authors register on the IAAI web site
January 20, 2009: Electronic papers due
March 31, 2009: Camera-ready copy due at AAAI office
Submission Format
Electronic submissions are required. Papers must be in trouble-free, high-resolution PDF format and formatted for United States Letter (8.5" x 11") paper. Submissions need to be in AAAI two-column format. Deployed papers can be up to eight (8) pages. Emerging papers are limited to six (6) complimentary pages, with two (2) optional additional pages available at $275 each. The title page (described below) does not count as one of these pages.
Papers must have a title page, including the title of the paper; the track to which it is submitted; the names, affiliations, postal and e-mail addresses, and telephone and fax numbers of all authors; a designation of the application domain(s); identification of AI techniques employed or issues addressed; an indication of application status (e.g., feasibility analysis, research prototype, operational prototype, deployed application, etc.); and an abstract of fewer than 200 words.
Authors should register on the IAAI-09 web-based paper submission software at www.aaai.org/Conferences/IAAI/iaai09.php. A login and password, as well as detailed instructions about how to submit an electronic paper, will be sent to the author in a subsequent email message. Authors must then submit a formatted electronic version of their paper through this software no later than Tuesday, January 20, 2009. We cannot accept papers submitted by email or fax.
Submissions received after the deadlines or that do not meet the length or formatting requirements detailed previously and at the IAAI-09 web site will not be accepted for review. Notification of receipt of the electronic paper will be mailed to the first author (or designated author) soon after receipt. If there are problems with the electronic submission, AAAI will contact the primary author by email. Papers will be reviewed by the Program Committee and notification of acceptance or rejection will be mailed to the contact author in early March. PDFs of accepted papers will be due on March 31, 2009. Authors will be required to transfer copyright.
Inquiries
Registration or clarification inquiries may be sent to AAAI at iaai09 [at] aaai [dot] org, 650-328-3123, or 650-321-4457 (fax).
IAAI-09 Program Chairs
Karen Haigh, Conference Chair
BBN Technologies
Nestor Rychtyckyj, Conference Cochair
Ford Motor Company
Last modified: 2010-06-04 19:32:22