Nicolas Zucchet (@NicolasZucchet)'s Twitter Profile
Nicolas Zucchet

@NicolasZucchet

PhD student in NeuroAI @CSatETH | prev. @Polytechnique

ID: 936865625233817600

Joined: 02-12-2017 07:52:25

83 Tweets

226 Followers

242 Following

Charlotte Frenkel (@C_Frenkel)

📢 Wondering how the neocortex works, how it is related to modern machine learning algorithms, and how this insight can be used to fuel next-gen neuromorphic hardware?
Have a look at this PhD opening in my team: tudelft.nl/over-tu-delft/…
Position open until filled, apply early!

Mackenzie Mathis, PhD (@TrackingActions)

Interested in Computational Neuroscience in 🇨🇭?

Looking for a PhD or master's thesis project? 👀 Check out the impressive network of labs!

swisscompneuro.org

Antonio Orvieto (@orvieto_antonio)

S4, Mamba, and Hawk/Griffin are great – but do we really understand how they work? We fully characterize the power of gated (selective) SSMs mathematically using powerful tools from Rough Path Theory. All thanks to our math magician Nicola Muça Cirone

arxiv.org/pdf/2402.19047…
🧵

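For context on what a gated (selective) SSM recurrence looks like, here is a minimal JAX sketch of an input-dependent diagonal recurrence in the spirit of Mamba. The sigmoid gate and the projection shapes are illustrative assumptions, not the exact parameterization the paper analyzes.

```python
# Minimal sketch of a gated (selective) diagonal SSM recurrence.
# The gate a(x_t) and write b(x_t) depend on the current input, which is
# what makes the recurrence "selective"; this parameterization is an
# illustrative assumption, not the paper's exact model.
import jax
import jax.numpy as jnp

def selective_ssm(params, xs):
    Wa, Wb = params  # assumed (d_in, d_state) projection matrices

    def step(h, x):
        a = jax.nn.sigmoid(x @ Wa)   # input-dependent forget gate in (0, 1)
        b = x @ Wb                   # input-dependent write to the state
        h = a * h + (1.0 - a) * b    # gated diagonal recurrence
        return h, h                  # carry the state, emit it as output

    h0 = jnp.zeros(Wa.shape[1])
    _, hs = jax.lax.scan(step, h0, xs)
    return hs                        # state trajectory, shape (T, d_state)

key = jax.random.PRNGKey(0)
k1, k2, k3 = jax.random.split(key, 3)
d_in, d_state, T = 4, 8, 16
params = (0.1 * jax.random.normal(k1, (d_in, d_state)),
          0.1 * jax.random.normal(k2, (d_in, d_state)))
xs = jax.random.normal(k3, (T, d_in))
print(selective_ssm(params, xs).shape)  # (16, 8)
```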
AK (@_akhaliq)

Google presents Griffin

Mixing Gated Linear Recurrences with Local Attention for Efficient Language Models

Recurrent neural networks (RNNs) have fast inference and scale efficiently on long sequences, but they are difficult to train and hard to scale. We propose Hawk, an RNN…

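Griffin's hybrid design mixes gated linear recurrences (Hawk's building block, as sketched above) with local attention. Below is a hedged sketch of the local-attention half, where each token attends only to a fixed window of recent tokens; the single-head setup and window size are simplifying assumptions.

```python
# Hedged sketch of the local-attention half of a Griffin-style block:
# each position attends only to a fixed window of recent tokens.
# Single head and window size are simplifying assumptions.
import jax
import jax.numpy as jnp

def local_attention(q, k, v, window=4):
    T, d = q.shape
    scores = (q @ k.T) / jnp.sqrt(d)
    pos = jnp.arange(T)
    # token j is visible from token i iff i - window < j <= i
    mask = (pos[None, :] <= pos[:, None]) & (pos[None, :] > pos[:, None] - window)
    scores = jnp.where(mask, scores, -jnp.inf)
    return jax.nn.softmax(scores, axis=-1) @ v

key = jax.random.PRNGKey(1)
q, k, v = (jax.random.normal(kk, (16, 8)) for kk in jax.random.split(key, 3))
print(local_attention(q, k, v).shape)  # (16, 8)
```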
Mohamady El-Gaby (@GabyMohamady)

We’ve all heard of place cells: neurons that form a “cognitive map” representing the structure of the outside world. But what about our own structured behaviours? We took a deep mechanistic dive, out now on bioRxiv: biorxiv.org/content/10.110… 🧵 below:

François Fleuret (@francoisfleuret)

So (1) RNNs were not used anymore for computational reasons, (2) the recurrent process does not have to be 'smart', (3) parallel scan is the stuff.

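The "parallel scan" point above can be made concrete: the linear recurrence h_t = a_t * h_{t-1} + b_t composes associatively, so the whole sequence can be evaluated in logarithmic depth with jax.lax.associative_scan. A minimal sketch, checked against a sequential scan:

```python
# The recurrence h_t = a_t * h_{t-1} + b_t composes associatively:
# applying (a_l, b_l) then (a_r, b_r) gives (a_r * a_l, a_r * b_l + b_r).
# That is exactly what an associative (parallel) scan exploits.
import jax
import jax.numpy as jnp

def combine(left, right):
    a_l, b_l = left
    a_r, b_r = right
    return a_r * a_l, a_r * b_l + b_r

key = jax.random.PRNGKey(2)
ka, kb = jax.random.split(key)
T = 8
a = jax.random.uniform(ka, (T,))  # decay / gate coefficients
b = jax.random.normal(kb, (T,))   # inputs

# Parallel evaluation (O(log T) depth on parallel hardware).
_, h_parallel = jax.lax.associative_scan(combine, (a, b))

# Sequential reference.
def step(h, ab):
    a_t, b_t = ab
    h_new = a_t * h + b_t
    return h_new, h_new

_, h_sequential = jax.lax.scan(step, 0.0, (a, b))
print(jnp.allclose(h_parallel, h_sequential))  # True
```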
Simon Schug (@ssmonsays)

Curious how modern RNNs/state-space models (à la Mamba) enable online learning of long-range dependencies?

I will be presenting our poster in two hours (#719), come check it out!

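On the online-learning point: for element-wise (diagonal) linear recurrences, forward-mode sensitivities can be carried along with the state, so gradients through long-range dependencies are available online without backpropagation through time. A minimal sketch under that assumption; the loss and parameterization below are illustrative, not the poster's exact method.

```python
# Hedged sketch: for a diagonal linear recurrence h_t = lam * h_{t-1} + x_t,
# the sensitivity dh_t/dlam obeys its own recurrence and can be carried
# forward as an eligibility trace, so long-range gradients are available
# online, with no backprop through time. Loss and parameterization are
# illustrative assumptions.
import jax.numpy as jnp

def online_step(h, trace, lam, x_t):
    h_new = lam * h + x_t
    trace_new = h + lam * trace  # d h_new / d lam, per coordinate
    return h_new, trace_new

lam = jnp.array([0.9, 0.5])      # per-coordinate decay
h = jnp.zeros(2)
trace = jnp.zeros(2)
for t in range(5):
    x_t = jnp.ones(2)
    h, trace = online_step(h, trace, lam, x_t)
    grad_lam = h * trace  # online dL_t/dlam for L_t = 0.5 * ||h_t||^2
print(grad_lam)
```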