Pyramid Sparse Attention

Pyramid Sparse Attention for Efficient Video Understanding and Generation

Xiaolong Li* Youping Gu* Xi Lin* Weijie Wang Bohan Zhuang

ZIP Lab, Zhejiang University

* Equal Contribution

Abstract

Attention mechanisms are the core of foundation models, but their quadratic complexity remains a critical bottleneck for scaling. This challenge has driven the development of efficient attention mechanisms, with sparsity emerging as the dominant paradigm. Current methods typically retain or discard entire key–value (KV) blocks with binary masks, resulting in substantial information loss under high sparsity.

We present Pyramid Sparse Attention (PSA), a versatile module applicable to both video understanding and generation tasks. Instead of binary masking, PSA introduces multi-level pooled KV representations, enabling finer mask granularity. Each query block dynamically allocates lower pooling levels to critical KV blocks and higher levels to less important ones, creating an informative interpolation between full retention and complete pruning.

This design mitigates information loss while keeping computation within a low compute budget. Across video understanding and generation benchmarks, PSA preserves contextual information and visual fidelity, consistently matching or outperforming existing sparse attention baselines with superior efficiency–quality trade-offs.

Interactive Demos

Experience the speed and quality improvements firsthand

Video Generation

PSA combined with step distillation

30.2× Faster

Side-by-side comparison: Full Attention (50 steps) vs PSA + TDM (4 steps, 85% sparsity)

Video Understanding

Long video comprehension with Qwen2.5-VL

Tom and Jerry, a 26-minute episode, processed at 0.74 sparsity

Who was trying to eat the little duckling, and who was trying to save it?

Answer briefly: just say the names.

Side-by-side comparison: Full Attention vs PSA (Ours, 0.74 sparsity)

Method

Pyramid Sparse Attention Framework

Framework Overview

PSA Framework

Overview of the Pyramid Sparse Attention (PSA) framework. PSA adaptively allocates attention computation across hierarchical KV representations (green; lighter shades denote coarser levels). The multi-level mask (blue) determines which KV level each query block attends to. As illustrated, the attention block currently being computed is assigned to level 4 and therefore uses the coarsest KV representations $K_j^4$ and $V_j^4$.

Attention Mechanism Comparison

Comparison

Comparison of attention mechanisms under an identical compute budget. Despite identical FLOPs (20% of full attention), PSA lets each query block attend to a much larger portion of KV blocks (70% active regions), whereas Block Sparse Attention (BSA) restricts each query to only 20% active regions. PSA closely matches Full Attention with minimal relative error (<3%), while BSA shows noticeable distortions.

Pyramid KV Blocks

We build a hierarchical pyramid of $H$ levels by progressively mean-pooling each KV block along the sequence dimension with kernel size 2 and stride 2:

$K_j^{h+1} = \mathtt{MeanPool}(K_j^{h}, 2, 2), \qquad V_j^{h+1} = \mathtt{MeanPool}(V_j^{h}, 2, 2)$

This creates a smooth continuum between full retention and complete pruning, analogous to Feature Pyramid Networks in computer vision.
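For concreteness, here is a minimal PyTorch sketch of how such a pyramid could be built for one attention head, assuming keys and values are already grouped into blocks of shape [n_blocks, block_size, head_dim]; the function name and blocked layout are illustrative rather than taken from the released implementation.

```python
import torch

def build_kv_pyramid(K, V, num_levels):
    """Illustrative KV pyramid for one attention head (not the official code).

    K, V: [n_blocks, block_size, head_dim] blocked keys/values.
    Level 1 keeps the original blocks; each higher level mean-pools adjacent
    token pairs inside every block (kernel 2, stride 2), so block_size is
    assumed divisible by 2 ** (num_levels - 1).
    """
    K_levels, V_levels = [K], [V]
    for _ in range(1, num_levels):
        K = 0.5 * (K[:, 0::2] + K[:, 1::2])  # MeanPool(K, 2, 2) along the token dim
        V = 0.5 * (V[:, 0::2] + V[:, 1::2])
        K_levels.append(K)
        V_levels.append(V)
    return K_levels, V_levels
```

Level $h$ thus stores blocks that are $2^{h-1}\times$ shorter than the originals, which is what enables the finer-grained compute allocation described next.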

Multi-Level Mask

Instead of binary 0/1 masking, PSA uses a multi-level mask $M \in \{0, 1, \ldots, H\}^{n_q \times n_k}$:

If $M_{ij} = h$ with $h > 0$, query block $i$ attends to block $j$ through its level-$h$ representations $K_j^h$ and $V_j^h$; $M_{ij} = 0$ prunes block $j$ entirely.

This generalizes BSA's 1-bit binary mask into a multi-bit, fixed-point scheme for finer-grained compute allocation.
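To see why this buys coverage, note that a block attended at level $h$ only touches $1/2^{h-1}$ of its original tokens. The short sketch below (with made-up mask values, not the paper's configuration) computes the relative FLOPs implied by a mask and shows how a multi-level mask can keep many more KV blocks active than a binary mask at the same budget:

```python
import torch

def relative_flops(M):
    """Relative attention cost implied by a multi-level mask (illustrative).

    M: integer tensor [n_q, n_k] with values in {0, 1, ..., H}.
    A block attended at level h touches 1 / 2**(h-1) of its original tokens;
    value 0 means the block is skipped entirely.
    """
    cost = torch.where(M > 0, 0.5 ** (M.float() - 1),
                       torch.zeros_like(M, dtype=torch.float))
    return cost.mean().item()  # 1.0 == full attention

# Hypothetical masks for one query block over 5 KV blocks (assumes H >= 4):
bsa = torch.tensor([[1, 0, 0, 0, 0]])  # binary mask: 1 of 5 blocks active
psa = torch.tensor([[2, 3, 4, 4, 0]])  # multi-level mask: 4 of 5 blocks active
print(relative_flops(bsa), relative_flops(psa))  # both ~0.2 of full attention
```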

Algorithm

Core computation procedures

Algorithm 1 Computation of PSA

Input: Query blocks $\{Q_i\}$, pyramid KV blocks $\{K_j^h, V_j^h\}$, mask $M$
Output: Output blocks $\{O_i\}$
for each query block $Q_i$ do
    Initialize $o_i \gets 0$, $m_i \gets -\infty$, $l_i \gets 0$
    for each key block $K_j$ do
        $h \gets M_{ij}$
        if $h = 0$ then continue // Skip pruned
        $S_{ij} \gets Q_i K_j^{h\top} / \sqrt{d} + (h-1)\ln 2$
        $m_{ij} \gets \max(\mathtt{rowmax}(S_{ij}), m_i)$
        $P_{ij} \gets \exp(S_{ij} - m_{ij})$
        $l_{ij} \gets l_i \exp(m_i - m_{ij}) + \mathtt{rowsum}(P_{ij})$
        $o_i \gets o_i \exp(m_i - m_{ij}) + P_{ij} V_j^h$
        $m_i \gets m_{ij}$, $l_i \gets l_{ij}$
    end for
    $O_i \gets o_i / l_i$
end for
return $\{O_i\}$
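The following PyTorch reference loop mirrors Algorithm 1 for a single query block, using the blocked pyramid layout from the earlier sketch; it is a readability aid under those assumptions, whereas the actual PSA kernel fuses this logic FlashAttention-style on the GPU.

```python
import math
import torch

def psa_query_block(Q_i, K_levels, V_levels, mask_row):
    """Online-softmax accumulation for one query block (Algorithm 1, sketch only).

    Q_i:       [bq, d] query block.
    K_levels:  list over levels; K_levels[h-1] has shape [n_k, block_size / 2**(h-1), d].
    V_levels:  same layout as K_levels.
    mask_row:  [n_k] integer levels for this query block (0 = pruned).
    """
    d = Q_i.shape[-1]
    o = torch.zeros_like(Q_i)                         # running (unnormalized) output
    m = torch.full((Q_i.shape[0], 1), float("-inf"))  # running row max
    l = torch.zeros(Q_i.shape[0], 1)                  # running softmax denominator
    for j, h in enumerate(mask_row.tolist()):
        if h == 0:
            continue  # pruned KV block
        K_jh, V_jh = K_levels[h - 1][j], V_levels[h - 1][j]
        # (h - 1) * ln 2 compensates for each level-h key summarizing 2**(h-1) tokens
        S = Q_i @ K_jh.t() / math.sqrt(d) + (h - 1) * math.log(2)
        m_new = torch.maximum(S.max(dim=-1, keepdim=True).values, m)
        P = torch.exp(S - m_new)
        l = l * torch.exp(m - m_new) + P.sum(dim=-1, keepdim=True)
        o = o * torch.exp(m - m_new) + P @ V_jh
        m = m_new
    return o / l  # final normalization, O_i
```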

Algorithm 2 Multi-Level Mask Assignment

Input: Importance map $S$, thresholds $T=\{\tau_1,\dots,\tau_H\}$
Output: Multi-level mask $M$
$M \gets \mathtt{zeros\_like}(S)$
for $i = 1$ to $n_q$ do
    $E_i \gets S_i / \sum_j S_{ij}$ // Normalize row
    $(E_i', \pi_i) \gets \mathtt{sort\_desc}(E_i)$
    $\hat{E}_i \gets \mathtt{cumsum}(E_i')$
    for $j = 1$ to $n_k$ do
        $j' \gets \pi_i(j)$ // Map to sorted index
        if $\hat{E}_{ij'} > \tau_H$ then
            $M_{ij} \gets 0$
        else
            $M_{ij} \gets \min \{ t \mid \hat{E}_{ij'} \le \tau_t \}$
        end if
    end for
end for
return $M$
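A plain PyTorch rendering of Algorithm 2 is given below for reference; it assumes the importance map $S$ is a non-negative [n_q, n_k] tensor and that the thresholds are sorted in increasing order, and it iterates blocks by sorted rank (equivalent to mapping each $j$ to its sorted index) with explicit loops rather than the vectorized form a real kernel would use.

```python
import torch

def assign_multilevel_mask(S, thresholds):
    """Multi-level mask assignment (Algorithm 2, reference sketch).

    S:          [n_q, n_k] non-negative importance map.
    thresholds: increasing cumulative-energy cutoffs tau_1 < ... < tau_H.
    Returns M in {0, ..., H}: the most important blocks (cumulative energy
    within tau_1) get the finest level 1, later cutoffs get coarser levels,
    and blocks beyond tau_H are pruned (level 0).
    """
    n_q, n_k = S.shape
    M = torch.zeros(n_q, n_k, dtype=torch.long)
    E = S / S.sum(dim=-1, keepdim=True)              # normalize each row
    E_sorted, order = E.sort(dim=-1, descending=True)
    E_cum = E_sorted.cumsum(dim=-1)                  # cumulative energy in sorted order
    for i in range(n_q):
        for rank in range(n_k):
            j = order[i, rank].item()                # original block index at this rank
            c = E_cum[i, rank].item()
            # smallest level t with cumulative energy <= tau_t, else 0 (pruned)
            M[i, j] = next((t + 1 for t, tau in enumerate(thresholds) if c <= tau), 0)
    return M
```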

Experimental Results

Comprehensive evaluation on video generation and understanding

Video Generation Results on Wan-series Models

Quantitative comparison on Wan-series models in training-free video generation. We report similarity to the full-attention outputs (PSNR, SSIM, LPIPS), perceptual quality measures from VBench (Aes., Bkg., Img.), sparsity, and latency.

| Model | Method | PSNR↑ | SSIM↑ | LPIPS↓ | Aes.↑ | Bkg.↑ | Img.↑ | Sparsity | Latency (s) |
|---|---|---|---|---|---|---|---|---|---|
| Wan 2.1 1.3B | Full | -- | -- | -- | 0.6489 | 0.9645 | 0.6557 | -- | 327 |
| | SVG2 | 25.21 | 0.801 | 0.126 | 0.6185 | 0.9548 | 0.5545 | 0.91 | 187 |
| | SVG | 17.57 | 0.567 | 0.399 | 0.5039 | 0.9444 | 0.5974 | 0.85 | 165 |
| | Sparge | 22.83 | 0.736 | 0.177 | 0.6232 | 0.9476 | 0.6409 | 0.90 | 165 |
| | STA | 20.56 | 0.677 | 0.197 | 0.6521 | 0.9419 | 0.6501 | 0.83 | 162 |
| | PSA (Ours) | 24.36 | 0.788 | 0.121 | 0.6686 | 0.9612 | 0.6607 | 0.91 | 176 |
| Wan 2.2 5B (1280×704, 121f) | Full | -- | -- | -- | 0.6598 | 0.9564 | 0.6547 | -- | 168 |
| | SVG2 | 24.25 | 0.818 | 0.092 | 0.6495 | 0.9518 | 0.6025 | 0.90 | 149 |
| | SVG | 18.89 | 0.645 | 0.266 | 0.5539 | 0.9386 | 0.5877 | 0.86 | 122 |
| | Sparge | 19.53 | 0.660 | 0.229 | 0.5482 | 0.9289 | 0.5650 | 0.89 | 124 |
| | PSA (Ours) | 23.03 | 0.794 | 0.096 | 0.6588 | 0.9569 | 0.6438 | 0.89 | 131 |
| Wan 2.1 14B | Full | -- | -- | -- | 0.6918 | 0.9639 | 0.6247 | -- | 1548 |
| | SVG2 | 24.79 | 0.807 | 0.085 | 0.6614 | 0.9439 | 0.5555 | 0.87 | 913 |
| | SVG | 19.84 | 0.649 | 0.300 | 0.5337 | 0.9501 | 0.5479 | 0.85 | 830 |
| | Sparge | 22.19 | 0.737 | 0.182 | 0.6083 | 0.8779 | 0.5977 | 0.88 | 855 |
| | STA | 20.83 | 0.694 | 0.185 | 0.6544 | 0.9399 | 0.6489 | 0.83 | 815 |
| | PSA (Ours) | 23.83 | 0.768 | 0.105 | 0.6776 | 0.9261 | 0.6400 | 0.88 | 887 |

PSA + TDM Distillation

Combining PSA with TDM step distillation on CogVideoX-5B achieves a 30.2× speedup.

| Method | Sparsity | Steps | VBench |
|---|---|---|---|
| FullAttn | -- | 50 | 0.819 |
| Distill-only | -- | 4 | 0.818 |
| Ours | 0.85 | 4 | 0.826 |

Video Understanding (Video-MME)

PSA achieves the best overall score while operating at the highest sparsity (0.65).

| Method | Short | Med. | Long | Overall | Sparsity |
|---|---|---|---|---|---|
| Full Attention | 0.752 | 0.663 | 0.537 | 0.651 | -- |
| XAttention | 0.748 | 0.661 | 0.544 | 0.651 | 0.58 |
| SpargeAttn | 0.749 | 0.663 | 0.539 | 0.650 | 0.37 |
| PSA (Ours) | 0.748 | 0.673 | 0.542 | 0.654 | 0.65 |