Shuichiro Shimizu / 清水 周一郎 (@cromz22)

tutorial links:

T1: Goal Awareness for Conversational AI: Proactivity, Non-collaborativity, and Beyond dengyang17.github.io/files/ACL2023-…
T2: Complex Reasoning in Natural Language wenting-zhao.github.io/complex-reason…


zhiyang xu (@zhiyangx11)

We introduce the first multimodal instruction tuning dataset: 🌟MultiInstruct🌟 in our 🚀#ACL2023NLP🚀 paper. MultiInstruct consists of 62 diverse multimodal tasks and each task is equipped with 5 expert-written instructions.
🚩arxiv.org/abs/2212.10773🧵[1/3]

Prasann Singhal (@prasann_singhal)

New #ACL2023NLP paper!

Reranking generation sets with transformer-based metrics can be slow. What if we could rerank everything at once? We propose EEL: Efficient Encoding of Lattices for fast reranking!

Paper: arxiv.org/abs/2306.00947 w/ @JiachengNLP @xiye_nlp @gregd_nlp

Genta Winata (@gentaiscool)

Does an LLM forget when it learns a new language?

We systematically study catastrophic forgetting in a massively multilingual continual learning framework in 51 languages.

Preprint: arxiv.org/abs/2305.16252
⬇️🧵
The paper was accepted to #acl2023nlp Findings. #NLProc [1/4]

Zeming Chen (@eric_zemingchen)

📢New paper to appear at #acl2023nlp: arxiv.org/abs/2212.10534

Human-quality counterfactual data with no humans! Introducing DISCO, our novel distillation framework that automatically generates high-quality, diverse, and useful counterfactual data at scale using LLMs.

Afra Amini (@afra_amini)

Are you a big fan of structure?

Have you ever wanted to apply the latest and greatest large language model out-of-the-box to parsing?

Are you a secret connoisseur of linear-time dynamic programs?

If you answered yes, our outstanding #ACL2023NLP paper may be just right for you!

Fanny Jourdan (@Fannyjrd_)

I'm glad to share that our paper 'COCKATIEL: COntinuous Concept ranKed ATtribution with Interpretable ELements for explaining neural net classifiers on NLP' (arxiv.org/abs/2305.06754) was accepted at Findings of #ACL2023! ❤️🦜

#ACL2023NLP #NLProc #XAI 1/6🧵

Brihi Joshi (@BrihiJ)

Super excited to share our #ACL2023NLP paper! 🙌🏽

📢 Are Machine Rationales (Not) Useful to Humans? Measuring and Improving Human Utility of Free-Text Rationales

📑: arxiv.org/abs/2305.07095

🧵👇 [1/n]

#NLProc #XAI

Martin Ziqiao Ma (@ziqiao_ma)

🎉Thrilled to share that our paper 'World-to-Words: Grounded Open Vocabulary Acquisition through Fast Mapping in Vision-Language Models' was selected for the outstanding paper award at #ACL2023NLP! Thanks @aclmeeting :-)
Let's take grounding seriously in VLMs because...
🧵[1/n]

Michael Saxon (@m2saxon)

Grateful and proud to learn I was an outstanding reviewer at #ACL2023NLP!

1% of reviewers are recognized for their effort in giving thoughtful and detailed peer reviews of submissions.

Michael Saxon (@m2saxon)

Inspirational words from @jmhessel in his #ACL2023NLP best paper talk:

'If you're a PhD student working on a long-term project that you're excited about but still isn't coming together, keep it up...I want to read your paper!'

Slide shows the '4 year journey' from idea to award

Caiming Xiong (@CaimingXiong)

🤔Which words in your prompt are most helpful to language models? In our #ACL2023NLP paper, we explore which parts of task instructions are most important for model performance.
🔗 arxiv.org/abs/2306.01150
Code: github.com/fanyin3639/Ret…
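
A rough sketch of that kind of ablation, purely for illustration and not the authors' protocol or code: drop one sentence of a task instruction at a time and record how much the task score falls. The instruction sentences and the scoring function below are hypothetical placeholders.

from typing import Callable, List, Tuple

def ablate_instruction(sentences: List[str], score: Callable[[str], float]) -> List[Tuple[str, float]]:
    """Score drop from removing each instruction sentence, largest drop first."""
    base = score(" ".join(sentences))
    drops = [
        (s, base - score(" ".join(sentences[:i] + sentences[i + 1:])))
        for i, s in enumerate(sentences)
    ]
    return sorted(drops, key=lambda d: d[1], reverse=True)

# Toy stand-in for "run the model on the task and evaluate" (placeholder only).
toy_score = lambda prompt: 0.5 + 0.4 * ("label" in prompt)
instruction = [
    "Classify the sentiment of the sentence.",
    "Answer only with the label positive or negative.",
    "Be concise.",
]
for sentence, drop in ablate_instruction(instruction, toy_score):
    print(f"{drop:+.2f}  {sentence}")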

Ben Tang (@bennyjtang)

Chart captioning is hard, both for humans & AI.

Today, we’re introducing VisText: a benchmark dataset of 12k+ visually-diverse charts w/ rich captions for automatic captioning (w/ @angie_boggust @arvindsatya1)

📄: vis.csail.mit.edu/pubs/vistext.p…
💻: github.com/mitvis/vistext

#ACL2023NLP

Yuval Reif (@YuvalReif)

Is dataset debiasing the right path to robust models?

In our work, “Fighting Bias with Bias”, we argue that in order to promote model robustness, we should in fact amplify biases in training sets.

w/ @royschwartzNLP
In #ACL2023NLP Findings
Paper: arxiv.org/abs/2305.18917
🧵👇

Rosa Zhou (@qiaoyu_rosa)

🔥Paper Alert!! #NLProc

How can we effectively learn from natural language explanations while leveraging LLMs?

Read our #ACL2023NLP paper: 🔥FLamE: Few-shot Learning from Natural Language Explanations

📄: arxiv.org/abs/2306.08042
📽️: youtu.be/rnSIFCeDq_Y

Details in 🧵(1/n)

Mehran Kazemi (@kazemi_sm)

#ACL2023NLP Paper Alert
Large Language Models (#LLMs) still struggle with multi-hop deductive reasoning. We propose LAMBADA, an approach that achieves a massive performance boost by combining LLMs with the classical backward chaining algorithm.
arxiv.org/pdf/2212.13894…
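
For readers unfamiliar with the classical half of that combination, here is a minimal textbook backward-chaining sketch in Python. It is a toy illustration over made-up rules and facts, not the paper's LAMBADA implementation.

# Toy backward chaining over made-up Horn-style rules (illustration only,
# not the LAMBADA system described in the paper).
RULES = {
    "is_animal(X)": [["is_mammal(X)"]],
    "is_mammal(X)": [["has_fur(X)"], ["gives_milk(X)"]],
}
FACTS = {"has_fur(fido)"}

def prove(goal: str, entity: str, depth: int = 5) -> bool:
    """Prove `goal` for `entity` by recursively proving some rule's premises."""
    if depth == 0:
        return False
    if goal.replace("X", entity) in FACTS:
        return True
    return any(
        all(prove(p, entity, depth - 1) for p in premises)
        for premises in RULES.get(goal, [])
    )

print(prove("is_animal(X)", "fido"))  # True: has_fur -> is_mammal -> is_animal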

Zayne Sprague (@ZayneSprague)

LLMs are used for reasoning tasks in NL but lack explicit planning abilities. In arxiv.org/abs/2307.02472, we see if vector spaces can enable planning by choosing statements to combine to reach a conclusion. Joint w/ @alephic2 @swarat & @gregd_nlp. NLRSE workshop at #ACL2023NLP