Representation Learning

This project explores the fundamental role of embeddings in neural network architectures, deepening our understanding of how these dense vector representations capture and encode information. The research aims to improve embedding quality, interpretability, and versatility across diverse natural language processing tasks. By analyzing and refining embedding techniques, the project seeks to enhance model performance, transfer learning, and the efficiency with which neural systems capture semantic and contextual information.
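As a minimal illustration of the idea behind this line of work, semantically related words should lie closer together in embedding space than unrelated ones, typically measured by cosine similarity. The sketch below uses tiny hand-made vectors purely for illustration; they are not drawn from any trained model or from this project's code.

```python
import numpy as np

# Toy 4-dimensional embeddings (illustrative values, not from a trained model)
embeddings = {
    "king":  np.array([0.80, 0.65, 0.10, 0.05]),
    "queen": np.array([0.75, 0.70, 0.15, 0.10]),
    "apple": np.array([0.05, 0.10, 0.90, 0.80]),
}

def cosine_similarity(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine of the angle between two vectors; 1.0 means same direction."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Related words score higher than unrelated ones in a good embedding space
sim_royal = cosine_similarity(embeddings["king"], embeddings["queen"])
sim_fruit = cosine_similarity(embeddings["king"], embeddings["apple"])
```

With these toy vectors, `sim_royal` comes out well above `sim_fruit`, mirroring the intuition that "king" and "queen" share more semantic content than "king" and "apple".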


Director

  • Jinho Choi - Associate Professor at Emory University

Publications

  1. NL-Augmenter: A Framework for Task-Sensitive Natural Language Augmentation. Dhole, K. D.; Gangal, V.; Gehrmann, S.; Gupta, A.; Li, Z.; Mahamood, S.; Mahendiran, A.; Mille, S.; Srivastava, A.; Tan, S.; Wu, T.; Sohl-Dickstein, J.; Choi, J. D.; Hovy, E. H.; Dusek, O.; and Ruder, S. Northern European Journal of Language Technology (NEJLT), 2023.
  2. Enhancing Cognitive Models of Emotions with Representation Learning. Guo, Y.; and Choi, J. D. Proceedings of the NAACL Workshop on Cognitive Modeling and Computational Linguistics (CMCL), 2021.
  3. Incremental Sense Weight Training for In-depth Interpretation of Contextualized Word Embeddings. Jiang, X.; Yang, Z.; and Choi, J. D. Proceedings of the AAAI Conference on Artificial Intelligence: Student Abstract and Poster Program (AAAI:SAP), 2020.
  4. The Pupil Has Become the Master: Teacher-Student Model-Based Word Embedding Distillation with Ensemble Learning. Shin, B.; Yang, H.; and Choi, J. D. Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI), 2019.
  5. Intrinsic and Extrinsic Evaluations of Word Embeddings. Zhai, M.; Tan, J.; and Choi, J. D. Proceedings of the AAAI Conference on Artificial Intelligence: Student Abstract and Poster Program (AAAI:SAP), 2016.