Competitive Demo Call
Call for Applications for Support to Attend the Second AAAI Workshop on Reasoning and Learning for Human-Machine Dialogues (DEEP-DIAL 2019)
Partially sponsored by AI Journal
We invite applications from students and early-career researchers for support to attend the workshop. With sponsorship from the AI Journal, we will offer up to two prizes and two travel-support grants.
There are two ways to apply:
- [Preferred] By demonstrating a TAsk-oriented Data-driven conversation bot/agent (called a TADBot, or just a chatbot) that works with open data. A 1-page description must be submitted; see further details about the demonstration below, or
- By submitting a 1-page description of your research interests relevant to conversation systems in general and the demonstration setting in particular.
Prizes and support
- Prize-based support: 1st prize - $700, 2nd prize - $500
- Student support (two) - $500 each
Submission Site: https://easychair.org/conferences/?conf=deepdial19
Deadline: Jan 15, 2019
Demonstration Details: Task-Oriented Chatbot Using Open Data
Motivation
There is a long tradition of giving momentum to new ideas by appealing to a community's competitive spirit. We seek to promote research and best practices in dialog by encouraging the building and sharing of chatbots, and to facilitate the participation of young researchers.
Task-oriented Chatbots
Task-oriented conversation agents, which help a user look for information or complete a task, represent a large segment of the chatbots deployed and used in practice. However, they have been largely ignored by mainstream competitions in the field [1, 2]. We will focus on chatbots that help a person find information recorded in a repository, potentially one that also encodes a tree-like hierarchy. Examples of such information include a university's catalog of courses, a hospital's directory of staff, a product catalog [4], a transportation agency's route network [5], and customer-support FAQs. To make the data source accessible, we will focus on open data [3], i.e., data made available for reuse.
The user may be an ordinary adult, an elderly person, a child, someone unfamiliar with computers, or a person with a disability. The user may use a single language or a mixture of languages, may not know the exact spelling of terms, and may change their intent mid-way. The aim of the chatbot is to retrieve the information the user seeks unambiguously and efficiently. Since the ground-truth answer to the user's request is contained in the repository, dialogs can be evaluated for efficiency ("the ability to answer correctly and quickly") as well as effectiveness ("to the cognitive satisfaction of the user").
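Since the repository holds the ground truth, the efficiency criterion above can be scored directly from dialog logs. The following is a minimal sketch, assuming a hypothetical log format (the field names `answer`, `truth`, and `turns` are illustrative, not a required schema):

```python
# Hypothetical evaluation sketch: score logged dialogs against the
# ground truth stored in the repository. The log schema is assumed.
def score_dialogs(dialogs):
    """Each dialog is a dict: {'answer': str, 'truth': str, 'turns': int}.
    Returns (accuracy, average number of turns): accuracy captures
    "answer correctly", turn count captures "quickly"."""
    correct = sum(d["answer"] == d["truth"] for d in dialogs)
    avg_turns = sum(d["turns"] for d in dialogs) / len(dialogs)
    return correct / len(dialogs), avg_turns

acc, turns = score_dialogs([
    {"answer": "Canal St", "truth": "Canal St", "turns": 2},
    {"answer": "Astor Pl", "truth": "Canal St", "turns": 5},
])
print(acc, turns)  # → 0.5 3.5
```

Effectiveness (user satisfaction) is subjective and would need human judgments rather than a log-based score.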
Preferred / Example Scenario:
Consider a chatbot that helps a user find information about subway stations. We will consider the case of the New York City subway, where information about stations, their facilities, and the train routes serving them is publicly available [5].
Update (March 7, 2019): See the demonstration chatbot created by Madhavan Pallan for this scenario, supported by a workshop grant. [Link]
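To make the scenario concrete, here is a minimal sketch of loading such an open dataset and answering a station query. The column names (`Station Name`, `Line`, `ADA`) and sample rows are assumptions for illustration; the real dataset [5] ships as a CSV whose dictionary should be consulted:

```python
import csv
import io

# Hypothetical excerpt standing in for the NYC open dataset [5];
# the real column names and values may differ.
SAMPLE_CSV = """Station Name,Line,ADA
Astor Pl,4 Avenue,FALSE
Canal St,4 Avenue,TRUE
Times Sq-42 St,Broadway,TRUE
"""

def load_stations(csv_text):
    """Parse the open dataset into a list of station records."""
    return list(csv.DictReader(io.StringIO(csv_text)))

def find_stations(stations, query):
    """Case-insensitive partial match on station name, so users
    who do not know the exact spelling still get answers."""
    q = query.strip().lower()
    return [s for s in stations if q in s["Station Name"].lower()]

stations = load_stations(SAMPLE_CSV)
print([s["Station Name"] for s in find_stations(stations, "canal")])
# → ['Canal St']
```

A real entry would fetch the CSV from the open-data portal instead of an inline string; the retrieval logic stays the same.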
Alternative Scenario:
Participants may alternatively submit any chatbot that uses any open dataset.
Related Research
There is an active area of research on Question Answering (QA) directly from online sources and knowledge bases using learning and reasoning methods [7, 8, 9]. However, such systems model the data but not the users or the iterative, multi-turn nature of their interrogation. This initiative moves in the direction of making open data more accessible to users by automating the generation of conversational interfaces [10].
Review Criteria
- Ability of chatbot to answer queries related to subject matter (e.g., train stations in preferred scenario)
- Ability of chatbot to handle users of different backgrounds leading to dialogs of different lengths (e.g., exact terms, partial matches, switching intents)
- Ability of chatbot to handle multiple turns
- Ability to handle abusive and discriminatory language
- Response time and error handling
- Any special features, e.g., the ability to handle a mixture of languages, or showing multi-modal responses like maps or graphs when appropriate
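Several of the criteria above (partial matches, multiple turns, switching intents) come together in how the bot disambiguates. The following is one hypothetical way to sketch it, not a required design:

```python
# Minimal multi-turn clarification sketch (illustrative only).
# When a query matches several stations, the bot asks a follow-up
# question; a fresh query at any turn simply restarts the search,
# which also covers users who switch intent mid-dialog.
class DemoBot:
    def __init__(self, stations):
        self.stations = stations      # list of station-name strings
        self.candidates = []          # matches awaiting disambiguation

    def reply(self, utterance):
        q = utterance.strip().lower()
        # Follow-up turn: the user narrows down the offered candidates.
        if self.candidates:
            picks = [c for c in self.candidates if q in c.lower()]
            if len(picks) == 1:
                self.candidates = []
                return f"Found: {picks[0]}"
        # First turn, or a switched intent: search the full repository.
        matches = [s for s in self.stations if q in s.lower()]
        if len(matches) == 1:
            self.candidates = []
            return f"Found: {matches[0]}"
        if matches:
            self.candidates = matches
            return "Which one? " + ", ".join(matches)
        return "No match; please rephrase."

bot = DemoBot(["Canal St", "Chambers St", "Times Sq-42 St"])
print(bot.reply("c"))      # ambiguous: bot asks a clarifying question
print(bot.reply("canal"))  # follow-up turn resolves it
```

A competitive entry would of course go well beyond substring matching (intent models, spelling tolerance, abuse filtering), but the turn-taking skeleton is the same.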
Submission: A team may submit a 1-page document with information about
- Team name and members. Identify student members and indicate whether support is needed for a student to attend the DEEP-DIAL 2019 workshop
- Information about open dataset
- A demonstration video of using the chatbot
- Link to source code on GitHub
- URL of actual chatbot that can be tested
- Details of implemented approach. Link to a paper is allowed.
Rules
- Source code should be made available on GitHub
- The chatbot should be publicly available for demonstration for at least 3 months, e.g., hosted on a cloud platform.
- The data used by the chatbot should be open and hence downloadable for free.
References
1. Loebner Prize Challenge, https://en.wikipedia.org/wiki/Loebner_Prize
2. The Conversational Intelligence Challenge, http://convai.io/
3. Open Data, https://en.wikipedia.org/wiki/Open_data
4. United Nations Standard Products and Services Code® (UNSPSC®), http://www.unspsc.org/
5. NYC Transit Subway Entrance and Exit Data. Data: https://data.ny.gov/Transportation/NYC-Transit-Subway-Entrance-And-Exit-Data/i9wp-a4ja ; Data dictionary: https://data.ny.gov/api/assets/6D60FF9E-9143-40CF-8FC9-9D237C6AC864?download=true
6. P. Pasupat and P. Liang. Compositional Semantic Parsing on Semi-Structured Tables. In Proc. of ACL, 2015.
7. Till Haug, Octavian-Eugen Ganea, and Paulina Grnarova. Neural Multi-Step Reasoning for Question Answering on Semi-Structured Tables. arXiv:1702.06589v2, March 2018.
8. Xuchen Yao and Benjamin Van Durme. Information Extraction over Structured Data: Question Answering with Freebase. In Proc. of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 956–966, Baltimore, Maryland, USA, June 2014.
9. Yanchao Hao, Yuanzhe Zhang, Kang Liu, Shizhu He, Zhanyi Liu, Hua Wu, and Jun Zhao. An End-to-End Model for Question Answering over Knowledge Base with Cross-Attention Combining Global Knowledge. In Proc. of ACL, 2017.
10. Biplav Srivastava. Decision-support for the Masses by Enabling Conversations with Open Data. https://arxiv.org/abs/1809.06723, Sep 2018.