Diyi Yang
@Diyi_Yang
Assistant Professor @Stanford CS @StanfordNLP @StanfordAILab. Formerly @GeorgiaTech. Computational Social Science & NLP
http://www.diyiyang.com · Joined 23 December 2016
1.2K Tweets
14.2K Followers
1.8K Following
Don't miss out! 👉 Prof. Christopher Manning is giving a keynote talk on 🌟Academic NLP research in the Age of LLMs: Nothing but blue skies🌟 at 2pm today #EMNLP2023 #NLProc #LLM
LLMs can serve as great data annotators but are not always reliable. Given an instance at hand, should we give it to an LLM for annotation or turn to human annotators?🤔
Excited to present our CoAnnotating poster at 9am today at #EMNLP2023 . Come and say hi!
Today #EMNLP2023 at 10am in West Ballroom 3! ez (in Korea/Singapore/NOLA) and I will be presenting a cool human-centered NLP application to identify & monitor the progress of struggling readers in K-12! w/Diyi Yang JET Jason Yeatman Nick Haber
arxiv.org/abs/2310.06837
📢Existing LMs trained on standard language varieties, such as Standard American English, often fail on dialect variants.😭
🗞️In our #EMNLP2023 paper, we propose a compositional approach to improve (multi-)dialectal robustness from a FINE-GRAINED perspective - linguistic rules!
[1/n]
Kicking off the #emnlp2023 #hai tutorial, with Diyi Yang giving an overview and Sebastin Santy talking about good/bad designs -- Glad to see a full room of people who beat jet lag!
If you'll be at #EMNLP2023 then you're in luck because Ella Minzhi Li, Taiwei Shi, and most of our 'CoAnnotating' team will be around to chat about scalable #CSS + #NLP data annotation through efficient human + LLM collaboration.
Anticipating how audiences may interpret the visual medium is critical for multimodal communication. However, can multimodal language models faithfully simulate human impressions of images?
In our #EMNLP2023 paper, we present Impressions, a dataset of human perceptions of images.