Stars
A lightweight task engine for building stateful AI agents that prioritizes simplicity and flexibility.
OLMoE: Open Mixture-of-Experts Language Models
Model Context Protocol Servers
Open sourced predictions, execution logs, trajectories, and results from model inference + evaluation runs on the SWE-bench task.
[NeurIPS 2024] Agent Planning with World Knowledge Model
Autonomous Agents (LLMs) research papers. Updated Daily.
List of language agents based on paper "Cognitive Architectures for Language Agents"
AG2 (formerly AutoGen): The Open-Source AgentOS. Join us at: https://discord.gg/pAbnFJrkgZ
A suite of image and video neural tokenizers
The collection of papers about Private Evolution
Data and tools for generating and inspecting OLMo pre-training data.
Freeing data processing from scripting madness by providing a set of platform-agnostic customizable pipeline processing blocks.
Refine high-quality datasets and visual AI models
[NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond.
🤗 The largest hub of ready-to-use datasets for ML models with fast, easy-to-use and efficient data manipulation tools
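A minimal sketch of pulling and filtering a dataset with the 🤗 `datasets` library; the dataset name `imdb` and the length threshold are illustrative choices, not anything specified here:

```python
from datasets import load_dataset

# Load a public dataset from the Hugging Face Hub (name chosen for illustration).
ds = load_dataset("imdb", split="train")

# Inspect the first example.
print(ds[0]["text"][:200])

# Use the built-in manipulation tools, e.g. filtering by example length.
short = ds.filter(lambda ex: len(ex["text"]) < 500)
print(len(short))
```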
Generate transcripts for audio and video content with a user-friendly UI, powered by OpenAI's Whisper, with automatic translations and automatic video downloads via yt-dlp integration
Fast inference engine for Transformer models
Training and serving large-scale neural networks with auto parallelization.
Running large language models on a single GPU for throughput-oriented scenarios.
Holistic Evaluation of Language Models (HELM), a framework to increase the transparency of language models (https://arxiv.org/abs/2211.09110). This framework is also used to evaluate text-to-image …
Code for "Extractive Memorization in Constrained Sequence Generation Tasks"
AITemplate is a Python framework that renders neural networks into high-performance CUDA/HIP C++ code. Specialized for FP16 TensorCore (NVIDIA GPU) and MatrixCore (AMD GPU) inference.
Robust Speech Recognition via Large-Scale Weak Supervision
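A minimal transcription sketch with the openai-whisper package; the model size ("base") and the audio path are placeholder assumptions:

```python
import whisper

# Load a small pretrained checkpoint; larger sizes trade speed for accuracy.
model = whisper.load_model("base")

# Transcribe a local audio file (path is a placeholder).
result = model.transcribe("audio.mp3")
print(result["text"])
```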
Code to reproduce experiments in the paper "Constrained Language Models Yield Few-Shot Semantic Parsers" (EMNLP 2021).
Beyond the Imitation Game collaborative benchmark for measuring and extrapolating the capabilities of language models