ZIP Lab
Jianfei Cai
Latest
MVSplat: Efficient 3D Gaussian Splatting from Sparse Multi-view Images
Stitched ViTs are Flexible Vision Backbones
GMAI-MMBench: A Comprehensive Multimodal Evaluation Benchmark Towards General Medical AI
ME-Switch: A Memory-Efficient Expert Switching Framework for Large Language Models
T-Stitch: Accelerating Sampling in Pre-Trained Diffusion Models with Trajectory Stitching
Efficient Stitchable Task Adaptation
MVSplat360: Feed-Forward 360 Scene Synthesis from Sparse Views
Pruning Self-Attentions into Convolutional Layers in Single Path
QLLM: Accurate and Efficient Low-Bitwidth Quantization for Large Language Models
SAM-Med3D-MoE: Towards a Non-Forgetting Segment Anything Model via Mixture of Experts for 3D Medical Image Segmentation
Sharpness-Aware Quantization for Deep Neural Networks
Dynamic Focus-Aware Positional Queries for Semantic Segmentation
End-to-End One-Shot Human Parsing
Sensitivity-Aware Visual Parameter-Efficient Fine-Tuning
Single-Path Bit Sharing for Automatic Loss-Aware Model Compression
Stitchable Neural Networks
FocusFormer: Focusing on What We Need via Architecture Sampler
Mesa: A Memory-saving Training Framework for Transformers
EcoFormer: Energy-Saving Attention with Linear Complexity
Fast Vision Transformers with HiLo Attention
Less is More: Pay Less Attention in Vision Transformers
Scalable Vision Transformers with Hierarchical Pooling