Giang Nguyen (@giangnguyen2412)'s Twitter Profile
Giang Nguyen

@giangnguyen2412

PhD Fellow @AuburnEngineers, Prev. @kaistcsdept
Making AIs understandable & friendly to humans via XAI 🤖🤝👨‍💻

ID:1125655606084243456

Website: https://giangnguyen2412.github.io/ · Joined: 07-05-2019 06:56:24

403 Tweets

186 Followers

359 Following

Haoyi Qiu (@HaoyiQiu):

🔍Hallucination or informativeness? 🤔Our latest research unveils a multi-dimensional benchmark and an LLM-based metric for measuring faithfulness and coverage in LVLMs. Explore our new method for a more reliable understanding of model outputs! 📣arxiv.org/pdf/2404.13874…

Upol Ehsan (@UpolEhsan):

🚨 New pre-print alert! 🚨
Excited to share “The Who in Explainable AI: How AI Background Shapes Perceptions of AI Explanations”

w/ the amazing team: Samir Passi, Vera Liao, Larry Chan, Ethan Lee, Michael Muller, Mark Riedl

🔗arxiv.org/abs/2107.13509

💡Findings at a glance...
1/n

Vlado Boza (@bozavlado):

Kolmogorov-Arnold Network is just an ordinary MLP.
Here is a Colab notebook that explains it:
colab.research.google.com/drive/1v3AHz5J…

The main point is that, if we consider the KAN interaction as a piece-wise linear function, it can be rewritten like this:

1/n
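The equivalence the thread points at can be sketched numerically (a minimal illustration, not the thread's actual Colab): a piece-wise linear function on a knot grid, like a KAN edge with an order-1 spline, is exactly a one-hidden-layer ReLU network with one "hinge" per interior knot.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

# A piece-wise linear function through (knots, values), i.e. a KAN edge
# with a piece-wise linear (order-1 spline) activation.
def piecewise_linear(x, knots, values):
    return np.interp(x, knots, values)

# The same function as a tiny ReLU MLP: a leading linear piece plus one
# ReLU hinge per interior knot, weighted by the slope change there.
def as_relu_mlp(x, knots, values):
    slopes = np.diff(values) / np.diff(knots)
    out = values[0] + slopes[0] * (x - knots[0])
    for k, s_prev, s_next in zip(knots[1:-1], slopes, slopes[1:]):
        out = out + (s_next - s_prev) * relu(x - k)  # slope change at knot k
    return out

knots = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
values = np.array([0.5, -1.0, 0.0, 2.0, 1.0])
x = np.linspace(-2.0, 2.0, 101)
assert np.allclose(piecewise_linear(x, knots, values), as_relu_mlp(x, knots, values))
```

Stacking such edges and summing them is what makes the full KAN layer collapse into an ordinary MLP layer.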

Michael Black (@Michael_J_Black):

The best work I've done has felt like play. I get almost a giddy excitement from new ideas. There is nothing better than working with good people who share your excitement. If you can find a place where work feels like play, you're very lucky. 4/10

Jacob Pfau (@jacob_pfau):

Do models need to reason in words to benefit from chain-of-thought tokens?

In our experiments, the answer is no! Models can perform on par with CoT using repeated '...' filler tokens.
This raises alignment concerns: Using filler, LMs can do hidden reasoning not visible in CoT🧵

Jason Wei (@_jasonwei):

In AI research there is tremendous value in intuitions on what makes things work. In fact, this skill is what makes “yolo runs” successful, and can accelerate your team tremendously.

However, there’s no track record on how good someone’s intuition is. A fun way to do this is

Jason Wei (@_jasonwei):

One thing that I started doing at OpenAI is that I created a policy for myself to be *100% transparent* with my manager about everything. It seems obvious and weird to say aloud, but I bet most people don’t actually do this. But once I started doing it, I realized there are a lot

Anthropic (@AnthropicAI):

New Anthropic research: we find that probing, a simple interpretability technique, can detect when backdoored 'sleeper agent' models are about to behave dangerously, after they pretend to be safe in training.

Check out our first alignment blog post here: anthropic.com/research/probe…
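The probing idea can be sketched generically. This is not Anthropic's actual setup: the "activations," the separating direction, and the training loop below are all illustrative assumptions. A linear probe is just a linear classifier trained on hidden activations to predict a property, here whether the model is in a "safe" or "about-to-defect" state.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for residual-stream activations: the two states
# differ along one hidden direction (purely a toy assumption).
d, n = 64, 500
direction = rng.normal(size=d)
direction /= np.linalg.norm(direction)
safe   = rng.normal(size=(n, d))
danger = rng.normal(size=(n, d)) + 2.0 * direction

X = np.vstack([safe, danger])
y = np.concatenate([np.zeros(n), np.ones(n)])

# The linear probe: logistic regression fit by gradient descent.
w, b = np.zeros(d), 0.0
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * (p - y).mean()

acc = (((X @ w + b) > 0) == y).mean()
print(f"probe accuracy: {acc:.2f}")  # well above the 0.5 chance level
```

The appeal of probes in this setting is exactly their simplicity: a single linear readout on activations, cheap enough to run on every forward pass.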

AK (@_akhaliq):

BLINK

Multimodal Large Language Models Can See but Not Perceive

We introduce Blink, a new benchmark for multimodal large language models (LLMs) that focuses on core visual perception abilities not found in other evaluations. Most of the Blink tasks can be solved by humans

Ethan Mollick (@emollick):

The age at which scientists or inventors achieve their moment of genius is increasing: half of all pioneering contributions in science now happen after age 40; it used to be younger.

Why? There is much more to master before making a contribution to a field. nber.org/papers/w19866

OpenAI (@OpenAI):

Our new GPT-4 Turbo is now available to paid ChatGPT users. We’ve improved capabilities in writing, math, logical reasoning, and coding.
Source: github.com/openai/simple-…

Miltos Kofinas (@MiltosKofinas):

🔍How can we design neural networks that take neural network parameters as input?
🧪Our #ICLR2024 oral on 'Graph Neural Networks for Learning Equivariant Representations of Neural Networks' answers this question!

📜: arxiv.org/abs/2403.12143
💻: github.com/mkofinas/neura…
🧵 [1/9]
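The paper's core input representation, treating a neural network itself as a graph, can be roughly sketched as follows. The toy MLP and the plain weight/bias features here are illustrative assumptions; the paper's actual construction (equivariant node and edge features, the GNN on top) is richer.

```python
import numpy as np

# A hypothetical small MLP whose parameters we want to feed to a GNN.
sizes = [3, 4, 2]
rng = np.random.default_rng(0)
weights = [rng.normal(size=(m, n)) for n, m in zip(sizes, sizes[1:])]
biases  = [rng.normal(size=m) for m in sizes[1:]]

# Graph view: one node per neuron (bias as the node feature, zero for
# input neurons) and one directed edge per weight entry.
offsets = np.cumsum([0] + sizes)  # global node index where each layer starts
node_feat = np.concatenate([np.zeros(sizes[0])] + biases)
edges, edge_feat = [], []
for l, W in enumerate(weights):
    for j in range(W.shape[0]):        # target neuron in layer l+1
        for i in range(W.shape[1]):    # source neuron in layer l
            edges.append((offsets[l] + i, offsets[l + 1] + j))
            edge_feat.append(W[j, i])

print(len(node_feat), len(edges))  # 9 nodes, 3*4 + 4*2 = 20 edges
```

Because permuting the hidden neurons of the MLP only relabels graph nodes, a permutation-equivariant GNN on this graph respects the weight-space symmetries for free, which is the point of the construction.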

Anthropic (@AnthropicAI):

Our experiment found that larger, newer AI models tended to be more persuasive, a finding with important implications as LMs continue to scale.

Read more about our research here: anthropic.com/news/measuring…, and access the data from our experiment here: huggingface.co/datasets/Anthr…
