Sanmi Koyejo (@sanmikoyejo)'s Twitter Profile
Sanmi Koyejo

@sanmikoyejo

I lead @stai_research at Stanford.

ID:2790739088

Joined: 04-09-2014 22:44:16

180 Tweets

1.5K Followers

85 Following

Flower (@flwrlabs):

⏰ 26 days until @iclr_conf. 1️⃣ of the many FL 👩‍🔬 papers there is: 'Principled Federated Domain Adaptation: Gradient Projection ➕ Auto-weighting' -- from @enyij2 @Ybo_Z @sanmikoyejo @UofIllinois @Stanford. Built w/ 🌼 Flower! See you in Austria 🚀

📝 Paper: buff.ly/3vXZVpv
Arthur Gretton (@ArthurGretton):

Proxy methods: not just for causal effect estimation!

Adapt to domain shifts in an unobserved latent variable, with either
-concept variables
-multiple training domains

arxiv.org/abs/2403.07442

Tsai, Stephen Pfohl, Olawale Salaudeen, Nicole Chiou (she/her), Kusner, Alexander D'Amour, Sanmi Koyejo

Curt Langlotz (@curtlanglotz):

A new report from Sanmi Koyejo & @stanfordHAI: The effect of AI on Black Americans, including health: 'AI in medical imaging & diagnostics excels at reducing unnecessary deaths, but employing diverse training datasets will be key to ensure equal performance across racial groups.'

Stanford HAI (@StanfordHAI):

New white paper: @StanfordHAI and @black_in_ai join forces to present considerations for @TheBlackCaucus’s policy initiatives by highlighting where AI can exacerbate racial inequalities and where it can benefit Black communities. Read here: hai.stanford.edu/white-paper-ex…
Stanford HAI (@StanfordHAI):

HAI faculty affiliate @sanmikoyejo presented the white paper to @RepHorsford, @RepYvetteClarke, @RepBarbaraLee, @repcleaver at the inaugural @TheBlackCaucus’s AI Policy Series meeting last week & met with Senator @corybooker in Washington along w/ @black_in_ai CEO Gelyn Watkins.
Sanmi Koyejo (@sanmikoyejo):

Are you at the conference and interested in Trustworthy Large Language Models? Come check out my tutorial with Secure Learning Lab (SLL) in Room 22B, starting at 8:30 AM.

Judy Shen (@judyhshen):

Are you hiring top AI talent?

Here is a list of Ph.D. students affiliated with Stanford AI Lab who are on the industry and academic job markets this year! This list showcases diverse research areas and 41% of these graduates are URMs!

Check it out: ai.stanford.edu/blog/sail-grad…

sijia.liu (@sijialiu17):

[1/3] 🌟 Excited to share our latest work 'Rethinking Machine Unlearning for LLMs' on arXiv! 🚀 Check it out: arxiv.org/abs/2402.08787

📢Our mission? To cleanse LLMs of harmful, sensitive, or illegal data influences and refine their capabilities.

The TWIML AI Podcast (@twimlai):

Today we’re joined by Sanmi Koyejo, assistant professor at Stanford University to discuss his award-winning papers: ‘Are Emergent Abilities of Large Language Models a Mirage?’ & 'Comprehensive Assessment of Trustworthiness in GPT Models'.

🎧/📷: twimlai.com/go/671

🔑 takeaways (1/5)

AK (@_akhaliq):

Scaling Laws for Downstream Task Performance of Large Language Models

paper page: huggingface.co/papers/2402.04…

Scaling laws provide important insights that can guide the design of large language models (LLMs). Existing work has primarily focused on studying scaling laws for…

Arnu Pretorius (@ArnuPretorius):

From Africa to the 'World Cup of AI' = NeurIPS! decisiveagents.com/capetocarthage

Thanks for a beautiful journey, Rihab Gorsane, Omayma Mahjoub, Ruan de Kock, Siddarth Singh, Zohra Slim and Karim Beguir. ❤️ Thanks Sara Hooker, Sanmi Koyejo, Shakir Mohamed for your voice 🙏 I hope it inspires many! 🌍

Berivan Isik (@BerivanISIK):

Very excited to share the paper from my last Google AI internship: Scaling Laws for Downstream Task Performance of LLMs.

arxiv.org/pdf/2402.04177…

w/ Natalia Ponomareva, Hussein Hazimeh, Dimitris Paparas, Sergei Vassilvitskii, and Sanmi Koyejo
1/6

Secure Learning Lab (SLL) (@uiuc_aisecure):

Super excited to set up the LLM safety & trustworthiness leaderboard on Hugging Face, and we will keep adding new safety perspectives. Here we evaluate (open & closed) LLMs and compressed LLMs. Looking forward to more exciting evaluations to assess and enhance LLM safety!!! 🥳

chilconference (@CHILconference):

Exciting News! CHIL 2024 features a stellar lineup of speakers and panelists. Get ready to dive into the latest innovations in computing, healthcare, and AI with Samantha Kleinberg, @sanmikoyejo, @rajiinio, Leo Anthony Celi, @davidomeltzer, Kyra Gan, and @girish_nadkarni! 🌟
Chulin Xie (@ChulinXie):

✨Excited to share at ACM CCS 2024 our work on unraveling the connections between Differential Privacy and Certified Robustness in Federated Learning against poisoning attacks!🛡️🤖
🗓️ Join our talk this afternoon. Happy to discuss if you are around!
Paper: arxiv.org/abs/2209.04030

Chulin Xie (@ChulinXie):

How many adversarial users (instances) can a user-level (instance-level) DPFL mechanism tolerate with certified robustness? 🧐
Check out our paper! arxiv.org/abs/2209.04030

👥 Collaboration with Yunhui Long, Pin-Yu Chen, Qinbin Li, Sanmi Koyejo, Secure Learning Lab (SLL) 🥳

Rylan Schaeffer (@RylanSchaeffer):

Excited to announce new preprint led by @minhaoj_uiuc @kenziyuliu @IllinoisCS @StanfordAILab!!

Q: What happens when language models are pretrained on data contaminated w/ benchmarks, ratcheting up amount of contamination?

A: LMs do better, but curves can be U-shaped!

1/2

Ken Liu (@kenziyuliu):

We trained some GPT-2 models *from scratch* where evaluation data are deliberately added to/removed from pre-training to study the effects of data contamination!
Three takeaways below 🧵:

Paper: arxiv.org/abs/2401.06059
Led by Minhao Jiang, with Rylan Schaeffer and Sanmi Koyejo
