Ricky Costa (@Quantum_Stat)'s Twitter Profile

@Quantum_Stat


Repos: https://t.co/3x0DctKuld

ID:970498980403712000

Joined: 05-03-2018 03:19:22

3.6K Tweets

1.9K Followers

138 Following

Raj KB (@rajkbnp)'s Twitter Profile Photo

I love the #ChatGPT Cheat Sheet by Ricky Costa (@Quantum_Stat)

which includes
🔹NLP Tasks
🔹Code
🔹Structured Output Styles
🔹Unstructured Output Styles
🔹Media Types
🔹Meta ChatGPT
🔹Expert Prompting

Get your hands on this amazing resource at: i.mtr.cool/ehyhxpfexx

Ricky Costa (@Quantum_Stat)'s Twitter Profile Photo

Exciting News! 🚀 DeepSparse is now integrated with LangChain, opening up a world of possibilities in Generative AI on CPUs. LangChain, known for its innovative design paradigms for large language model (LLM) applications, was often constrained by expensive APIs or cumbersome…

Ricky Costa (@Quantum_Stat)'s Twitter Profile Photo

🌟First, want to thank everyone for pushing this model past 1,000 downloads in only a few days!! Additionally, I added bge-base models to MTEB.

Most importantly, code snippets were added for running inference in the model cards for everyone to try out!

huggingface.co/zeroshot/bge-s…
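The inference snippets in such model cards typically encode sentences and then compare the resulting vectors. The encoding step requires the actual bge model (e.g. via the `sentence-transformers` library), so as a minimal self-contained sketch, here is the downstream comparison with placeholder vectors standing in for real embeddings:

```python
import numpy as np

def cosine_rank(query_vec, doc_vecs):
    """Rank documents by cosine similarity to the query embedding."""
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    scores = d @ q                      # cosine similarity per document
    order = np.argsort(-scores)         # best match first
    return order, scores[order]

# Placeholder 4-dim "embeddings" -- a real bge model outputs much
# higher-dimensional vectors from input sentences.
query = np.array([1.0, 0.0, 1.0, 0.0])
docs = np.array([
    [1.0, 0.1, 0.9, 0.0],   # similar to the query
    [0.0, 1.0, 0.0, 1.0],   # orthogonal to the query
])
order, scores = cosine_rank(query, docs)
```

Here `order[0]` is 0: the first document is closest to the query, which is the ranking a retrieval pipeline would act on.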

Ricky Costa (@Quantum_Stat)'s Twitter Profile Photo

🚀🚀 Explore Sparsify's One-Shot Experiment Guide!

Discover how to quickly optimize your models with post-training algorithms for a 3-5x speedup. Perfect for when you need to sparsify a model on a tight schedule and still get inference speedups.🔥

**FYI, this is what I used to…

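Sparsify's one-shot mode applies more sophisticated post-training algorithms, but the core idea of sparsifying a trained model without retraining can be illustrated with simple magnitude pruning: zero out the smallest-magnitude weights of an existing layer. A numpy sketch (illustrative only, not Neural Magic's implementation):

```python
import numpy as np

def one_shot_magnitude_prune(weights, sparsity=0.8):
    """Zero the smallest-magnitude weights until `sparsity` fraction is zero."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)       # number of weights to zero
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    return np.where(np.abs(weights) <= threshold, 0.0, weights)

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64))           # stand-in for a trained layer
w_sparse = one_shot_magnitude_prune(w, sparsity=0.8)
achieved = (w_sparse == 0).mean()       # fraction of weights now zero, ~0.8
```

Sparse weight matrices like `w_sparse` are what let runtimes such as DeepSparse skip multiply-by-zero work on CPUs, which is where the claimed speedups come from.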
Ricky Costa (@Quantum_Stat)'s Twitter Profile Photo

🚀🚀 Hey, check out our blog on Hugging Face 🤗 about running LLMs on CPUs!

The blog discusses how researchers at IST Austria & Neural Magic have cracked the code for fine-tuning large language models. The method, combining sparse fine-tuning and distillation-type losses,…

Ricky Costa (@Quantum_Stat)'s Twitter Profile Photo

🚀✨ Run CodeGen on CPUs with this detailed Colab notebook! 📝

Explore how to sparsify and perform Large Language Model (LLM) inference using Neural Magic's stack, featuring Salesforce/codegen-350M-mono as an example.

Dive into these key steps:

1️⃣ **Installation**: Quickly set…
