tsvetshop (@tsvetshop)'s Twitter Profile
tsvetshop

@tsvetshop

Group account for Prof. Yulia Tsvetkov's lab at @uwnlp. We work on low-resource, multilingual, social-oriented NLP. Details on our website:

ID: 1376970544390750214

Website: http://tsvetshop.github.io · Joined: 30-03-2021 18:54:04

88 Tweets

798 Followers

131 Following

tsvetshop (@tsvetshop)'s Twitter Profile Photo

Check out this new work on efficiently quantifying+understanding the (often massive) differences in few-shot performance depending on format used!

Melanie Sclar (@melaniesclar)'s Twitter Profile Photo

Did you know that depending on the format used in few-shot prompting, you may get accuracies ranging 4%-88% for a given task w/LLaMA-2-70B 5-shot? or 47%-85% w/GPT3.5?🤯

We explore this variance in FormatSpread, or: How I learned to start worrying about prompt formatting.

1/n

JHU Computer Science (@JHUCompSci)'s Twitter Profile Photo

Get to know Anjalie Field, who joins Johns Hopkins as an assistant professor of computer science and a member of JHU CLSP. cs.jhu.edu/news/new-facul…

Allen School (@uwcse)'s Twitter Profile Photo

There are many dimensions to bias. University of Washington and Carnegie Mellon Language Technologies Institute researchers led by professor Yulia Tsvetkov earned the Wikimedia Foundation 2023 Research Award of the Year for advancing a novel methodology for analyzing them in Wikipedia biographies: news.cs.washington.edu/2023/08/21/wik…

MIT Technology Review (@techreview)'s Twitter Profile Photo

New: Researchers tested 14 large language models and found that each model exhibited a different kind of political bias. trib.al/4BxEDJf

Chris Dyer (@redpony)'s Twitter Profile Photo

Best part of having spent time in academia: seeing your former students (and their students) doing great things. Congrats @tsvetshop on the best paper award at #ACL2023NLP. Brilliant, careful, and important work!
Anjalie Field (@anjalie_f)'s Twitter Profile Photo

Here's the pre-print for our FAccT'23 paper 'Examining risks of racial biases in NLP tools for child protective services' with Amanda Coston, Nupoor Gandhi, Alexandra Chouldechova, Emily Putnam-Hornstein, David Steier, and Yulia Tsvetkov.

arxiv.org/abs/2305.19409

Shangbin Feng (@shangbinfeng)'s Twitter Profile Photo

Do LLMs have inherent political leanings? How do their political biases impact downstream tasks?

We answer these questions in our #ACL2023 paper: 'From Pretraining Data to Language Models to Downstream Tasks: Tracking the Trails of Political Biases Leading to Unfair NLP Models'
Shangbin Feng (@shangbinfeng)'s Twitter Profile Photo

LLMs are adopted in tasks and contexts with implicit graph structures, but ...

Are LMs graph reasoners?

Can LLMs perform graph-based reasoning in natural language?

Introducing NLGraph, a comprehensive testbed of graph-based reasoning designed for LLMs

arxiv.org/abs/2305.10037

Shangbin Feng (@shangbinfeng)'s Twitter Profile Photo

Looking for a summarization factuality metric? Are existing ones hard to use, reliant on re-training, or incompatible with HuggingFace?

Introducing FactKB, an easy-to-use, shenanigan-free, and state-of-the-art summarization factuality metric!

arxiv.org/abs/2305.08281

Shangbin Feng (@shangbinfeng)'s Twitter Profile Photo

Ever felt hopeless when LLMs make factual mistakes? Always waiting for big companies to release LLMs with improved knowledge abilities?

Introducing CooK, a community-driven initiative to empower black-box LLMs with modular and collaborative knowledge

arxiv.org/abs/2305.09955

Wiki Workshop 2024 (@wikiworkshop)'s Twitter Profile Photo

The first Wikimedia Foundation Research Award of the Year 2023 goes to Anjalie Field, Chan Young Park, Kevin Lin, and tsvetshop for their work 'Controlled Analyses of Social Biases in Wikipedia Bios'

⭐️ arxiv.org/abs/2101.00078
Anjalie Field (@anjalie_f)'s Twitter Profile Photo

We are so honored by this award and so grateful to the amazing Wikimedia community that made our research possible! This was a real team effort with fantastic co-authors Chan Young Park, Kevin Lin, and tsvetshop, and we hope our work contributes to reducing bias on Wikipedia!

tsvetshop (@tsvetshop)'s Twitter Profile Photo

Congratulations to Anjalie Field, Chan Young Park, and Kevin Lin for receiving the “Research Award of the Year” from WikiResearch 🎉🎉🎉

A special bonus: Jimmy Wales (@jimmy_wales), the founder of Wikipedia, joined the ceremony and gave the award!!!
