(COMING SOON) Official PyTorch implementation of MAtCha Gaussians: Atlas of Charts for High-Quality Geometry and Photorealism From Sparse Views
The world's first roller coaster SLAM dataset
The official github repo for "Test-Time Training with Masked Autoencoders"
An extremely fast Python package and project manager, written in Rust.
[ECCV2024] Official implementation of Crowd-SAM: SAM as a Smart Annotator for Object Detection in Crowded Scenes
LLMA = LLM + Arithmetic coder: uses an LLM to perform aggressive text data compression, achieving extremely high compression ratios.
Authoring tools for scholarly communication. Create interactive web pages or formal research papers from markdown source.
🌟 The Multi-Agent Framework: First AI Software Company, Towards Natural Language Programming
🔥 [ECCV2024] Official Implementation of "Learning Camouflaged Object Detection from Noisy Pseudo Label"
GPU programming related news and material links
The open source Meme Search Engine. Free and built to self-host locally with Python, Ruby, and Docker.
LaTeXML: a TeX and LaTeX to XML/HTML/ePub/MathML translator.
An open source implementation of CLIP.
aider is AI pair programming in your terminal
Code and models for EMNLP 2024 paper "WPO: Enhancing RLHF with Weighted Preference Optimization"
Official implementation of Bootstrapping Language Models via DPO Implicit Rewards
Align Anything: Training All-modality Model with Feedback
[NeurIPS 2024] Official code of $\beta$-DPO: Direct Preference Optimization with Dynamic $\beta$
A curated list of awesome prompt/adapter learning methods for vision-language models like CLIP.
Meta Lingua: a lean, efficient, and easy-to-hack codebase to research LLMs.
Finetune Llama 3.3, Mistral, Phi, Qwen 2.5 & Gemma LLMs 2-5x faster with 80% less memory
Your browser's reference manager: automatic paper detection (Arxiv, OpenReview & more), publication venue matching and code repository discovery! Also enhances ArXiv: BibTex citation, Markdown link…
[CVPR 2024] PriViLege: Pre-trained Vision and Language Transformers Are Few-Shot Incremental Learners