# jemdoc: menu{MENU2}{speakql.html}
= ADA Lab @ UCSD
~~~
{}{img_left}{images/speakql.jpg}{}{}{80px}{}
== Project SpeakQL
~~~
=== Overview
Natural language and touch-based interfaces are making data querying significantly easier. But typed SQL remains the gold standard for query sophistication, even though it is painful to use in touch-oriented querying environments (e.g., an iPad or iPhone) and essentially impossible in speech-driven environments (e.g., Amazon Echo). Recent advances in automatic speech recognition (ASR) raise the tantalizing possibility of bridging this gap by enabling /spoken queries/ over structured data.

In this project, we envision and prototype a series of new spoken data querying systems. Going beyond the current capabilities of personal digital assistants such as Alexa, which answer simple natural language queries over well-curated in-house knowledge base schemas, we aim to enable more sophisticated spoken queries over arbitrary application database schemas.

Our first and current focus is on designing and implementing a new speech-driven query interface and system for a useful subset of regular SQL. Our goal is near-perfect accuracy and near-real-time latency in transcribing spoken SQL queries. To achieve this goal, we synthesize and innovate upon ideas from ASR, natural language processing (NLP), information retrieval, database systems, and HCI to devise a modular end-to-end system architecture that combines new automated algorithms with user interaction.
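To give a flavor of why database awareness matters here: a generic ASR engine often garbles schema identifiers that never appear in its training vocabulary. The toy sketch below (not the SpeakQL implementation; the schema and thresholds are made up for illustration) shows the basic idea of snapping mistranscribed tokens to the closest term in the target database's schema by string similarity:

```python
import difflib

# Hypothetical schema vocabulary; a real system would read this
# from the target database's catalog.
SCHEMA_TOKENS = ["employees", "salary", "name", "department"]
SQL_KEYWORDS = {"select", "from", "where", "and", "or", ">", "<", "="}

def correct_token(token: str, cutoff: float = 0.5) -> str:
    """Map one ASR-transcribed token to a keyword or the closest schema token."""
    if token.lower() in SQL_KEYWORDS:
        return token.upper()
    # Pick the most similar schema token, if any clears the similarity cutoff.
    match = difflib.get_close_matches(token.lower(), SCHEMA_TOKENS, n=1, cutoff=cutoff)
    return match[0] if match else token

def correct_transcription(asr_output: str) -> str:
    """Correct a whitespace-tokenized ASR transcription of a spoken SQL query."""
    return " ".join(correct_token(t) for t in asr_output.split())

# ASR garbles identifiers: "employees" -> "employee's", "salary" -> "celery".
print(correct_transcription("select name from employee's where celery > 50000"))
# -> SELECT name FROM employees WHERE salary > 50000
```

Real spoken-SQL transcription is far harder than this per-token heuristic suggests (keywords, literals, and structure interact), which is why the project combines automated correction algorithms with user interaction rather than relying on string similarity alone.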
#~~~
#*Note*: We are looking for volunteers and/or collaborators, especially enterprise users of SQL, say, in consulting or banking or insurance, to try the alpha version of SpeakQL by participating in a simple one-time user study and survey. Please contact us if you are interested in participating or even if you are just interested in trying out SpeakQL.
#~~~
=== Downloads (Paper, Code, Data, etc.)
- Database-Aware ASR Error Correction for Speech-to-SQL Parsing\n
Yutong Shao, Arun Kumar, and Ndapandula Nakashole\n
IEEE ICASSP 2023 | [papers/2023_SpeakQL_ICASSP.pdf Paper PDF]
- Design and Evaluation of an SQL-Based Dialect for Spoken Querying\n
Kyle Luoma and Arun Kumar\n
Under Submission | [papers/TR_2023_SpeakQL_Dialect.pdf TechReport]
- Structured Data Representation in Natural Language Interfaces\n
Yutong Shao, Arun Kumar, and Ndapandula Nakashole\n
IEEE Data Engineering Bulletin 2022 (Invited) | [papers/2022_SpeakQL_DataEngBulletin.pdf Paper PDF]
- SpeakQL: Towards Speech-driven Multimodal Querying of Structured Data\n
Vraj Shah, Side Li, Arun Kumar, and Lawrence Saul\n
ACM SIGMOD 2020 | [papers/2020_SpeakQL_SIGMOD.pdf Paper PDF] and [papers/2020_SpeakQL_SIGMOD.txt BibTeX] |
[papers/TR_2020_SpeakQL.pdf TechReport] |
[https://adalabucsd.github.io/research-blog/research/2020/06/14/speakql.html Blog post] |
[https://drive.google.com/drive/folders/1tSxUTu2A7qy8fPtB81RnwkyakgykZ3iw?usp=sharing Dataset on Drive]
- Demonstration of SpeakQL: Speech-driven Multimodal Querying of Structured Data\n
Vraj Shah, Side Li, Kevin Yang, Arun Kumar, and Lawrence Saul\n
ACM SIGMOD 2019 Demo | [papers/2019_SpeakQL_SIGMOD.pdf Paper PDF] and [papers/2019_SpeakQL_SIGMOD.txt BibTeX] | [https://vimeo.com/295693078 Video]
- SpeakQL: Towards Speech-driven Multi-modal Querying\n
Dharmil Chandarana, Vraj Shah, Arun Kumar, and Lawrence Saul\n
ACM SIGMOD 2017 HILDA Workshop |
[papers/2017_SpeakQL_HILDA.pdf Paper PDF] and [papers/2017_SpeakQL_SIGMOD.txt BibTeX]
=== Student Contact
Kyle Luoma: kluoma \[at\] ucsd \[dot\] edu\n
#Vraj Shah: vps002 \[at\] eng \[dot\] ucsd \[dot\] edu
=== Acknowledgments
This project is funded in part by the NSF under award IIS-1816701.