It's pretty wild that you can now run a 405b model on your own hardware. Meta is just killing it with these models.
ollama run llama3.1:405b

This is running on TensorWave with AMD's MI300X.

Get started with Ollama on your cluster: https://lnkd.in/ePsqqsUm
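If you'd rather script against it than use the interactive CLI, Ollama also exposes a local REST API. A minimal sketch, assuming a default install listening on localhost:11434; the prompt text is just a placeholder:

# Pull and chat with the model interactively
ollama run llama3.1:405b

# Or send a one-off request to the local Ollama HTTP API
curl http://localhost:11434/api/generate -d '{
  "model": "llama3.1:405b",
  "prompt": "Summarize the Llama 3.1 405B release in one sentence.",
  "stream": false
}'

With "stream": false the API returns one complete JSON object instead of a stream of partial responses.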
Where can someone buy an MI300X?
I’m literally laughing when people seriously discuss private LLMs as if all of this is done on regular consumer-budget hardware… 🤦‍♂️
It is awesome how easily Ollama deploys these huge models, but I wouldn't call cloud computing with AMD's MI300X "your own hardware". :)