the tiny corp (@__tinygrad__)'s Twitter Profile
the tiny corp

@__tinygrad__

We make tinygrad. Our mission is to commoditize the petaflop.

ID:1674187388825006082

https://tinygrad.org · Joined 28-06-2023 22:46:08

880 Tweets

33.1K Followers

61 Following

the tiny corp (@__tinygrad__)

tiny corp is now a 5 person company, and we are hiring more. Everyone here was hired by doing bounties.

Tech is an amazingly meritocratic space. If you have the skills, it's easy to prove and get a job!

the tiny corp (@__tinygrad__)

tinygrad now supports NV=1 as a backend. While it still uses the NVRTC compiler, it can now use NVIDIA cards without the CUDA runtime.

This will allow runtime speedups beyond CUDA Graph, and is one step closer to a fully open stack. github.com/tinygrad/tinyg…
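For anyone trying the new backend: tinygrad selects backends through environment variables, so NV=1 has to be set before the library is imported. A minimal sketch, assuming tinygrad is installed and an NVIDIA GPU is available (the fallback branch is only there for machines without either):

```python
import os
os.environ["NV"] = "1"  # backend env vars are read at import time, so set this first

try:
    # Requires tinygrad plus an NVIDIA GPU for the NV backend to actually run.
    from tinygrad import Tensor
    a = Tensor([[1.0, 2.0], [3.0, 4.0]])
    print((a @ a).numpy())  # small matmul dispatched without the CUDA runtime
except ImportError:
    # tinygrad isn't installed here; with it, NV=1 selects the backend on import.
    print("tinygrad not available")
```

The same pattern applies to tinygrad's other backends (e.g. CUDA=1), since device selection is driven by environment variables rather than code changes.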

the tiny corp (@__tinygrad__)

We have more money than we know what to do with.

If AMD wants us to help make their GPU better, they should open source firmware and documentation. No partnership required.

the tiny corp (@__tinygrad__)

A couple of new bounties dropped: mean/arange support with symbolic shapes, removing the slow dlopen from the clang backend, and AMX support in LLVM/Clang.

Also, three MLPerf training bounties are still open. We have tinyboxes available for testing when you're ready.

the tiny corp (@__tinygrad__)

We're starting to write docs/tutorials for tinygrad.

What ML framework do you use? What docs do you use as reference? And what docs/tutorials did you use to learn?

the tiny corp (@__tinygrad__)

We're targeting ResNet-50, U-Net 3D, and BERT for the upcoming MLPerf training round, all in tinygrad and on both tinybox red and green.

Bear with us: our first submission won't be super fast, but step 1 is getting on the board. Find us on Discord if you want to help.

the tiny corp (@__tinygrad__)

If you want to build your own training accelerator, you must have your own NN framework with adoption. Meta happens to have one, so this chip might work.

Andrej Karpathy (@karpathy)

Have you ever wanted to train LLMs in pure C without 245MB of PyTorch and 107MB of cPython? No? Well now you can! With llm.c:
github.com/karpathy/llm.c

To start, it implements GPT-2 training on CPU/fp32 in only ~1,000 lines of clean code. It compiles and runs instantly, and exactly

Brian Roemmele (@BrianRoemmele)

“I don't need to change the world overnight, I am gonna change the world over the next 50 years”—Jensen, NVIDIA CEO, 2003

the tiny corp (@__tinygrad__)

US/Canada preorders placed in 2023 have been contacted with next steps. If you have an international preorder from 2023 and want to use a forwarder, reach out to [email protected]
