Towards (Medical) LLMs’ interpretability and interactivity
- PhD @ The Chinese University of Hong Kong, Shenzhen; BA @ Beijing Institute of Technology
- [email protected]
- https://scholar.google.com/citations?user=WJeSzQMAAAAJ
Pinned
- FreedomIntelligence/Medical_NLP: Medical NLP competitions, datasets, large models, and papers
- FreedomIntelligence/Huatuo-26M: The largest-scale Chinese medical QA dataset, with 26,000,000 question-answer pairs
- FreedomIntelligence/Apollo: Multilingual medicine model, dataset, benchmark, and code
- FreedomIntelligence/LongLLaVA: Scaling Multi-modal LLMs to 1000 Images Efficiently via Hybrid Architecture
- FreedomIntelligence/CMB: A Comprehensive Medical Benchmark in Chinese
- FreedomIntelligence/FastLLM: Fast LLM training codebase with dynamic strategy selection (DeepSpeed + Megatron + FlashAttention + CUDA fusion kernels + compiler)