Today we are emerging from stealth to launch a new family of LLMs with a novel architecture. Unlike traditional LLMs, Fastino's task-optimized models are engineered to deliver consistent, accurate output with near-instant inference on CPU and flexible deployment options. Our differentiated approach enables enterprises of all sizes to accelerate the adoption and deployment of generative AI. Our models are beating industry benchmarks across a wide variety of enterprise LLM use cases, including structuring textual data, RAG systems, text summarization, agent task planning, and more.

We're proud to be backed by an amazing group of investors, including Insight Partners, M12 (Microsoft's Venture Fund), New Enterprise Associates (NEA), Valor Equity Partners, CRV, GitHub CEO Thomas Dohmke, Deepwater Asset Management, SHACK15 Ventures, and others. A huge thanks to George Mathew, Michael Stewart, Lila Tretikov, Carli Stein, Ryan Cunningham, Jørn Lyseggen, and many others.

Our Palo Alto-based research and engineering teams are hiring at all levels! To learn more about Fastino and be the first to access our models, visit fastino.ai. https://lnkd.in/gKWrkwrW