- Toyota Technological Institute at Chicago
- Chicago, IL
- https://www.fangjiading.com/
- @jiading_fang
Stars
[IEEE T-PAMI 2024] All you need for End-to-end Autonomous Driving
[NeurIPS 2023 Track Datasets and Benchmarks] OpenLane-V2: The First Perception and Reasoning Benchmark for Road Driving
An open source lane detection toolbox based on PyTorch, including SCNN, RESA, UFLD, LaneATT, CondLane, etc.
DREAM: Deep Robot-to-Camera Extrinsics for Articulated Manipulators (ICRA 2020)
[3DV'25] 3D Reconstruction with Spatial Memory
Extracts key frames from a video using color histograms, SVD, and a dynamic clustering method; this analysis can be used to identify the frames that make up a shot. The code is well documented.
robomimic: A Modular Framework for Robot Learning from Demonstration
Implementation of Denoising Diffusion Probabilistic Model in PyTorch
A testbed for comparing the learning abilities of newborn animals and autonomous artificial agents.
This repository collects research papers on large Vision Language Models in Autonomous Driving and Intelligent Transportation Systems. The repository will be continuously updated to track the latest…
A curated list of awesome LLM for Autonomous Driving resources (continually updated)
corl-team / CORL (forked from tinkoff-ai/CORL): High-quality single-file implementations of SOTA Offline and Offline-to-Online RL algorithms: AWAC, BC, CQL, DT, EDAC, IQL, SAC-N, TD3+BC, LB-SAC, SPOT, Cal-QL, ReBRAC
High-quality single file implementation of Deep Reinforcement Learning algorithms with research-friendly features (PPO, DQN, C51, DDPG, TD3, SAC, PPG)
CoTracker is a model for tracking any point (pixel) on a video.
OpenEQA: Embodied Question Answering in the Era of Foundation Models
🍽️ Annotations for the public release of the EPIC-KITCHENS-100 dataset
HaMeR: Reconstructing Hands in 3D with Transformers
Curated list of data science interview questions and answers
SplaTAM: Splat, Track & Map 3D Gaussians for Dense RGB-D SLAM (CVPR 2024)
The official PyTorch implementation of Google's Gemma models
[NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond.