Meta Learning for Human Language Technology

Special session at INTERSPEECH 2020, Shanghai, China

Description

Deep learning based human language technology (HLT), such as automatic speech recognition, intent and slot recognition, or dialog management, has become the mainstream of research in recent years and significantly outperforms conventional methods. The technology also has widespread industrial applications; famous examples include Siri, Alexa, Google Assistant, and Cortana. However, deep learning models are notorious for being data and computation hungry. These downsides limit the deployment of such models to new languages, domains, or styles, since collecting in-genre data and training models from scratch are costly.

Meta learning, or Learning to Learn, is one way to mitigate the above problems. Meta learning learns better learning algorithms, including better parameter initializations, optimization strategies, network architectures, and distance metrics, from multiple learning tasks. Meta learning has shown the potential to enable faster fine-tuning, convergence to better performance than conventional model pretraining, and even few-shot learning, in areas including computer vision and machine translation.
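
As a concrete illustration of learning a parameter initialization, below is a minimal first-order MAML sketch (after Finn et al., 2017) in PyTorch. The toy sine-regression task family and all hyperparameter values are hypothetical placeholders for illustration, not a recipe from any of the works cited here.

    import copy, math
    import torch
    import torch.nn.functional as F

    def sample_task(k_support=10, k_query=10):
        """Hypothetical task family: regress y = a * sin(x + b), random a and b."""
        a, b = torch.rand(1) * 4.0 + 0.1, torch.rand(1) * math.pi
        def draw(n):
            x = torch.rand(n, 1) * 10.0 - 5.0
            return x, a * torch.sin(x + b)
        return draw(k_support), draw(k_query)

    net = torch.nn.Sequential(
        torch.nn.Linear(1, 40), torch.nn.ReLU(), torch.nn.Linear(40, 1))
    meta_opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    inner_lr, meta_batch = 0.01, 4

    for step in range(1000):
        meta_opt.zero_grad()
        for _ in range(meta_batch):
            (x_s, y_s), (x_q, y_q) = sample_task()
            learner = copy.deepcopy(net)              # adapt a copy of the shared init
            grads = torch.autograd.grad(F.mse_loss(learner(x_s), y_s),
                                        list(learner.parameters()))
            with torch.no_grad():                     # one inner-loop SGD step
                for p, g in zip(learner.parameters(), grads):
                    p -= inner_lr * g
            F.mse_loss(learner(x_q), y_q).backward()  # first-order meta-gradient
            with torch.no_grad():                     # accumulate onto meta-parameters
                for p, lp in zip(net.parameters(), learner.parameters()):
                    p.grad = (p.grad if p.grad is not None
                              else torch.zeros_like(p)) + lp.grad / meta_batch
        meta_opt.step()                               # update the initialization itself

After meta-training, fine-tuning the learned initialization on a new task for a few gradient steps should outperform fine-tuning from a random initialization; this is the sense in which the initialization itself is learned.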

The goal of this special session is to bring together researchers and practitioners working on meta learning across HLT fields to discuss state-of-the-art and emerging approaches, share innovations, insights, and challenges, and shed light on future research directions. We will explore how meta learning can improve data and computation efficiency for HLT tasks. We also aim to align academic efforts with industrial challenges, bridging the gap between research and real-world product deployment.

Call for Papers

The special session on Meta Learning for Human Language Technology invites theoretical and experimental papers on human language technology tasks tackled with meta learning methodologies, and on their applications. The special session is part of the main INTERSPEECH conference in Shanghai, China. Relevant meta learning topics include (but are not limited to):

  • Network architecture search
  • Learning to optimize
  • Learning to initialize
  • Learning metrics or distance measures
  • Learning the training algorithm
  • Few-shot learning

Human language technology topics include (but are not limited to):

  • Automatic speech recognition
  • Speaker adaptation
  • Speaker identification
  • Speech synthesis
  • Voice conversion
  • Noise robustness
  • Spoken language understanding
  • Intent or slot recognition
  • Dialog management

Important Dates

  • Submission portal opens: February 15, 2020
  • Paper Submission: May 8, 2020
  • Notification of Acceptance: July 24, 2020
  • Camera-ready Paper Due: TBD
  • Special Session Date: TBD

Submissions

This special session is part of the main INTERSPEECH conference. It therefore uses the same submission portal and follows the same submission policy, paper format, and review process. More information can be found below:

  • Submission portal: TBD
  • Submission policy: TBD
  • Paper format: TBD
  • Author ethics: TBD

Following the same policy as the INTERSPEECH main conference, double submission is not allowed if the entire work has already been published in another peer-reviewed conference or journal. However, we invite authors to select multiple sessions in addition to this session in the submission portal. The conference and session committees will determine the session assignment after acceptance.

If you have more questions about submission, please feel free to contact us at is.2020.meta.learning@gmail.com.

Reading

Meta learning is one of the fastest growing research areas in deep learning. However, there is no standard definition of meta learning. Usually the main goal is to design models that can learn new tasks rapidly from only a few in-domain training examples, by first learning from many training tasks, related or not, in a way that generalizes easily to new tasks. To better illustrate the scope of meta learning, we provide several online courses and papers describing work in the area. These works are only a showcase; we encourage submissions of research that is not covered here but shares the same goal.
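
To make the episodic setup described above concrete, here is a minimal sketch in Python. The data layout (a pool of labeled examples per class) and all names are hypothetical; it only illustrates how "training tasks" are drawn as N-way, K-shot episodes.

    import random
    from collections import defaultdict

    def make_episode(pool, n_way=5, k_shot=1, k_query=5):
        """pool: dict mapping class label -> list of examples (any modality)."""
        classes = random.sample(sorted(pool), n_way)  # pick N classes for this task
        support, query = [], []
        for episode_label, cls in enumerate(classes):
            examples = random.sample(pool[cls], k_shot + k_query)
            support += [(x, episode_label) for x in examples[:k_shot]]  # K "shots"
            query += [(x, episode_label) for x in examples[k_shot:]]    # evaluation
        return support, query

    # Toy usage with a hypothetical pool of utterances per intent class:
    pool = defaultdict(list)
    for cls in range(20):
        pool[cls] = [f"utterance_{cls}_{i}" for i in range(30)]
    support, query = make_episode(pool)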

Online Courses

Papers

Meta Learning Technology

  • Learning to initialize:
    • Chelsea Finn, Pieter Abbeel, Sergey Levine, Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks, ICML, 2017
    • Sebastian Flennerhag, Pablo G. Moreno, Neil D. Lawrence, Andreas Damianou, Transferring Knowledge across Learning Processes, ICLR, 2019
  • Learning to optimize:
    • Sachin Ravi, Hugo Larochelle, Optimization as a model for few-shot learning, ICLR, 2017
    • Marcin Andrychowicz, Misha Denil, Sergio Gomez, Matthew W. Hoffman, David Pfau, Tom Schaul, Brendan Shillingford, Nando de Freitas, Learning to learn by gradient descent by gradient descent, NIPS, 2016
  • Learning to compare (see the sketch after this list):
    • Jake Snell, Kevin Swersky, Richard S. Zemel, Prototypical Networks for Few-shot Learning, NIPS, 2017
    • Oriol Vinyals, Charles Blundell, Timothy Lillicrap, Koray Kavukcuoglu, Daan Wierstra, Matching Networks for One Shot Learning, NIPS, 2016
    • Flood Sung, Yongxin Yang, Li Zhang, Tao Xiang, Philip H.S. Torr, Timothy M. Hospedales, Learning to Compare: Relation Network for Few-Shot Learning, CVPR, 2018
  • Learning the whole learning algorithm:
    • Adam Santoro, Sergey Bartunov, Matthew Botvinick, Daan Wierstra, Timothy Lillicrap, Meta-Learning with Memory-Augmented Neural Networks, ICML, 2016
    • Nikhil Mishra, Mostafa Rohaninejad, Xi Chen, Pieter Abbeel, A Simple Neural Attentive Meta-Learner, ICLR, 2018
  • Network architecture search:
    • RL based
      • Barret Zoph, Quoc V. Le, Neural Architecture Search with Reinforcement Learning, ICLR, 2017
      • Barret Zoph, Vijay Vasudevan, Jonathon Shlens, Quoc V. Le, Learning Transferable Architectures for Scalable Image Recognition, CVPR, 2018
      • Hieu Pham, Melody Guan, Barret Zoph, Quoc Le, Jeff Dean, Efficient Neural Architecture Search via Parameter Sharing, ICML, 2018
    • Evolution based
      • Esteban Real, Sherry Moore, Andrew Selle, Saurabh Saxena, Yutaka Leon Suematsu, Jie Tan, Quoc Le, Alex Kurakin, Large-Scale Evolution of Image Classifiers, ICML, 2017
      • Esteban Real, Alok Aggarwal, Yanping Huang, Quoc V Le, Regularized Evolution for Image Classifier Architecture Search, AAAI, 2019
      • Hanxiao Liu, Karen Simonyan, Oriol Vinyals, Chrisantha Fernando, Koray Kavukcuoglu, Hierarchical Representations for Efficient Architecture Search, ICLR, 2018
    • Supernetwork based
      • Hanxiao Liu, Karen Simonyan, Yiming Yang, DARTS: Differentiable Architecture Search, ICLR, 2019
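
As referenced in the "learning to compare" item above, here is a minimal Prototypical Networks-style sketch in PyTorch: a query example is classified by its distance to each class prototype, the mean embedding of that class's support examples. The linear embedding and random "features" are hypothetical stand-ins for a real encoder and data.

    import torch
    import torch.nn.functional as F

    def prototypical_loss(embed, support_x, support_y, query_x, query_y, n_way):
        """support_y / query_y hold episode labels in [0, n_way)."""
        z_s, z_q = embed(support_x), embed(query_x)
        # Prototype = mean support embedding per class.
        prototypes = torch.stack([z_s[support_y == c].mean(0) for c in range(n_way)])
        # Negative squared Euclidean distance serves as the class score.
        logits = -torch.cdist(z_q, prototypes) ** 2
        return F.cross_entropy(logits, query_y)

    # Toy usage: 5-way, 3-shot support with 4 queries per class, random features.
    embed = torch.nn.Linear(16, 8)
    sx, qx = torch.randn(5 * 3, 16), torch.randn(5 * 4, 16)
    sy, qy = torch.arange(5).repeat_interleave(3), torch.arange(5).repeat_interleave(4)
    prototypical_loss(embed, sx, sy, qx, qy, n_way=5).backward()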

Applications to Human Language Technology:

  • Speech Recognition
    • Learning to initialize: [Hsu, et al., ICASSP’20], [Klejch, et al., ASRU’19]
    • Learning to optimize: [Klejch, et al., INTERSPEECH’18]
    • Network architecture search: [Baruwa, et al., IJSER’19]
  • Voice Cloning
    • Learning the learning algorithm: [Chen, et al., ICLR’19], [Serrà, et al., NeurIPS’19]
  • Speaker Recognition
    • Learning to compare: [Anand, et al., arXiv’19]
  • Keyword Spotting
    • Learning to compare: [Chen, et al., arXiv’18]
    • Network architecture search: [Mazzawi, et al., INTERSPEECH’19]
  • Sound Event Detection
    • Learning to compare: [Shimada, et al., arXiv’19], [Chou, et al., ICASSP’19], [Zhang, et al., INTERSPEECH’19]
  • Machine Translation
    • Learning to initialize: [Gu, et al., EMNLP’18], [Indurthi, et al., arXiv’19]
  • Dialogue
    • Learning to initialize: [Qian, et al., ACL’19], [Madotto, et al., ACL’19], [Mi, et al., IJCAI’19], [Song, et al., arXiv’19]
    • Learning to optimize: [Chien, et al., INTERSPEECH’19]
  • Relation Classification
    • Learning to initialize: [Obamuyide, et al., ACL’19], [Bose, et al., arXiv’19], [Lv, et al., EMNLP’19], [Wang, et al., EMNLP’19]
    • Learning to compare: [Ye, et al., ACL’19], [Chen, et al., EMNLP’19], [Xiong, et al., EMNLP’18], [Gao, et al., AAAI’19]
  • Word Embedding
    • Learning to initialize: [Hu, et al., ACL’19]
    • Learning to compare: [Sun, et al., EMNLP’18]
  • More NLP Applications
    • Learning to initialize: [Guo, et al., ACL’19], [Wu, et al., AAAI’20], [Zhao, EMNLP’19], [Bansal, et al., arXiv’19], [Dou, et al., EMNLP’19], [Huang, et al., NAACL’18]
    • Learning to compare: [Sun, et al., EMNLP’19], [Geng, et al., EMNLP’19], [Yu, et al., ACL’18], [Tan, et al., EMNLP’19]
    • Learning the learning algorithm: [Wu, et al., EMNLP’19]
  • Multi-modal
    • Learning to compare: [Eloff, et al., ICASSP’19]
    • Learning the learning algorithm: [Surís, et al., arXiv’19]

  • [Anand, et al., arXiv’19] Prashant Anand, Ajeet Kumar Singh, Siddharth Srivastava, Brejesh Lall, Few Shot Speaker Recognition using Deep Neural Networks, arXiv, 2019
  • [Bansal, et al., arXiv’19] Trapit Bansal, Rishikesh Jha, Andrew McCallum, Learning to Few-Shot Learn Across Diverse Natural Language Classification Tasks, arXiv, 2019
  • [Bose, et al., arXiv’19] Avishek Joey Bose, Ankit Jain, Piero Molino, William L. Hamilton, Meta-Graph: Few shot Link Prediction via Meta Learning, arXiv, 2019
  • [Chen, et al., arXiv’18] Yangbin Chen, Tom Ko, Lifeng Shang, Xiao Chen, Xin Jiang, Qing Li, An Investigation of Few-Shot Learning in Spoken Term Classification, arXiv, 2018
  • [Chen, et al., ICLR’19] Yutian Chen, Yannis Assael, Brendan Shillingford, David Budden, Scott Reed, Heiga Zen, Quan Wang, Luis C. Cobo, Andrew Trask, Ben Laurie, Caglar Gulcehre, Aäron van den Oord, Oriol Vinyals, Nando de Freitas, Sample Efficient Adaptive Text-to-Speech, ICLR, 2019
  • [Chen, et al., EMNLP’19] Mingyang Chen, Wen Zhang, Wei Zhang, Qiang Chen, Huajun Chen, Meta Relational Learning for Few-Shot Link Prediction in Knowledge Graphs, EMNLP 2019
  • [Chien, et al., INTERSPEECH’19] Jen-Tzung Chien, Wei Xiang Lieow, Meta Learning for Hyperparameter Optimization in Dialogue System, INTERSPEECH, 2019
  • [Chou, et al., ICASSP’19] Szu-Yu Chou, Kai-Hsiang Cheng, Jyh-Shing Roger Jang, Yi-Hsuan Yang, Learning to match transient sound events using attentional similarity for few-shot sound recognition, ICASSP 2019
  • [Dou, et al., EMNLP’19] Zi-Yi Dou, Keyi Yu, Antonios Anastasopoulos, Investigating Meta-Learning Algorithms for Low-Resource Natural Language Understanding Tasks, EMNLP 2019
  • [Eloff, et al., ICASSP’19] Ryan Eloff, Herman A. Engelbrecht, Herman Kamper, Multimodal One-Shot Learning of Speech and Images, ICASSP 2019
  • [Geng, et al., EMNLP’19] Ruiying Geng, Binhua Li, Yongbin Li, Xiaodan Zhu, Ping Jian, Jian Sun, Induction Networks for Few-Shot Text Classification, EMNLP, 2019
  • [Gao, et al., AAAI’19] Tianyu Gao, Xu Han, Zhiyuan Liu, Maosong Sun, Hybrid Attention-Based Prototypical Networks for Noisy Few-Shot Relation Classification, AAAI 2019
  • [Gu, et al., EMNLP’18] Jiatao Gu, Yong Wang, Yun Chen, Kyunghyun Cho, Victor O.K. Li, Meta-Learning for Low-Resource Neural Machine Translation, EMNLP, 2018
  • [Guo, et al., ACL’19] Daya Guo, Duyu Tang, Nan Duan, Ming Zhou, Jian Yin, Coupling Retrieval and Meta-Learning for Context-Dependent Semantic Parsing, ACL, 2019
  • [Hsu, et al., ICASSP’20] Jui-Yang Hsu, Yuan-Jui Chen, Hung-yi Lee, Meta Learning for End-to-End Low-Resource Speech Recognition, ICASSP 2020
  • [Hu, et al., ACL’19] Ziniu Hu, Ting Chen, Kai-Wei Chang, Yizhou Sun, Few-Shot Representation Learning for Out-Of-Vocabulary Words, ACL 2019
  • [Huang, et al., NAACL’18] Po-Sen Huang, Chenglong Wang, Rishabh Singh, Wen-tau Yih, Xiaodong He, Natural Language to Structured Query Generation via Meta-Learning, NAACL 2018
  • [Indurthi, et al., arXiv’19] Sathish Indurthi, Houjeung Han, Nikhil Kumar Lakumarapu, Beomseok Lee, Insoo Chung, Sangha Kim, Chanwoo Kim, Data Efficient Direct Speech-to-Text Translation with Modality Agnostic Meta-Learning, arXiv, 2019
  • [Klejch, et al., INTERSPEECH’18] Ondřej Klejch, Joachim Fainberg, Peter Bell, Learning to adapt: a meta-learning approach for speaker adaptation, INTERSPEECH 2018
  • [Klejch, et al., ASRU’19] Ondřej Klejch, Joachim Fainberg, Peter Bell, Steve Renals, Speaker Adaptive Training using Model Agnostic Meta-Learning, ASRU 2019
  • [Lv, et al., EMNLP’19] Xin Lv, Yuxian Gu, Xu Han, Lei Hou, Juanzi Li, Zhiyuan Liu, Adapting Meta Knowledge Graph Information for Multi-Hop Reasoning over Few-Shot Relations, EMNLP 2019
  • [Madotto, et al., ACL’19] Andrea Madotto, Zhaojiang Lin, Chien-Sheng Wu, Pascale Fung, Personalizing Dialogue Agents via Meta-Learning, ACL 2019
  • [Mi, et al., IJCAI’19] Fei Mi, Minlie Huang, Jiyong Zhang, Boi Faltings, Meta-Learning for Low-resource Natural Language Generation in Task-oriented Dialogue Systems, IJCAI 2019
  • [Obamuyide, et al., ACL’19] Abiola Obamuyide, Andreas Vlachos, Model-Agnostic Meta-Learning for Relation Classification with Limited Supervision, ACL 2019
  • [Qian, et al., ACL’19] Kun Qian, Zhou Yu, Domain Adaptive Dialog Generation via Meta Learning, ACL 2019
  • [Serrà, et al., NeurIPS’19] Joan Serrà, Santiago Pascual, Carlos Segura, Blow: a single-scale hyperconditioned flow for non-parallel raw-audio voice conversion, NeurIPS 2019
  • [Sun, et al., EMNLP’18] Jingyuan Sun, Shaonan Wang, Chengqing Zong, Memory, Show the Way: Memory Based Few Shot Word Representation Learning, EMNLP 2018
  • [Shimada, et al., arXiv’19] Kazuki Shimada, Yuichiro Koyama, Akira Inoue, Metric Learning with Background Noise Class for Few-shot Detection of Rare Sound Events, arXiv, 2019
  • [Song, et al., arXiv’19] Yiping Song, Zequn Liu, Wei Bi, Rui Yan, Ming Zhang, Learning to Customize Language Model for Personalized Conversation Systems, arXiv, 2019
  • [Sun, et al., EMNLP’19] Shengli Sun, Qingfeng Sun, Kevin Zhou, Tengchao Lv, Hierarchical Attention Prototypical Networks for Few-Shot Text Classification, EMNLP 2019
  • [Surís, et al., arXiv’19] Dídac Surís, Dave Epstein, Heng Ji, Shih-Fu Chang, Carl Vondrick, Learning to Learn Words from Narrated Video, arXiv, 2019
  • [Tan, et al., EMNLP’19] Ming Tan, Yang Yu, Haoyu Wang, Dakuo Wang, Saloni Potdar, Shiyu Chang, Mo Yu, Out-of-Domain Detection for Low-Resource Text Classification Tasks, EMNLP 2019
  • [Wang, et al., EMNLP’19] Zihao Wang, Kwun Ping Lai, Piji Li, Lidong Bing, Wai Lam, Tackling Long-Tailed Relations and Uncommon Entities in Knowledge Graph Completion, EMNLP 2019
  • [Wu, et al., EMNLP’19] Jiawei Wu, Wenhan Xiong, William Yang Wang, Learning to Learn and Predict: A Meta-Learning Approach for Multi-Label Classification, EMNLP 2019
  • [Wu, et al., AAAI’20] Qianhui Wu, Zijia Lin, Guoxin Wang, Hui Chen, Börje F. Karlsson, Biqing Huang, Chin-Yew Lin, Enhanced Meta-Learning for Cross-lingual Named Entity Recognition with Minimal Resources, AAAI 2020
  • [Xiong, et al., EMNLP’18] Wenhan Xiong, Mo Yu, Shiyu Chang, Xiaoxiao Guo, William Yang Wang, One-Shot Relational Learning for Knowledge Graphs, EMNLP 2018
  • [Ye, et al., ACL’19] Zhi-Xiu Ye, Zhen-Hua Ling, Multi-Level Matching and Aggregation Network for Few-Shot Relation Classification, ACL 2019
  • [Yu, et al., ACL’18] Mo Yu, Xiaoxiao Guo, Jinfeng Yi, Shiyu Chang, Saloni Potdar, Yu Cheng, Gerald Tesauro, Haoyu Wang, Bowen Zhou, Diverse Few-Shot Text Classification with Multiple Metrics, ACL 2018
  • [Zhang, et al., INTERSPEECH’19] Shilei Zhang, Yong Qin, Kewei Sun, Yonghua Lin, Few-Shot Audio Classification with Attentional Graph Neural Networks, INTERSPEECH 2019
  • [Zhao, EMNLP’19] Zhenjie Zhao, Xiaojuan Ma, Text Emotion Distribution Learning from Small Sample: A Meta-Learning Approach, EMNLP 2019

Organizers

Hung-Yi Lee

National Taiwan University

Associate Professor

Ngoc Thang Vu

University of Stuttgart

Professor

Shang-Wen Li

Amazon AWS AI

Senior Applied Scientist

Yu Zhang

Google Brain

Research Scientist

Program

TBD

Accepted Papers

TBD

Contact

If you have any questions or feedback, please feel free to contact us at is.2020.meta.learning@gmail.com.