Pavlo Molchanov

I have been a distinguished research scientist and team manager at NVIDIA Research since 2015. I work in the LPR team led by Jan Kautz, focusing on efficient deep learning and human-centric computer vision. Since 2023, I have led a research team dedicated to efficient deep learning, specifically model compression, NAS-style acceleration, novel architectures, and adaptive/conditional inference.

I earned my PhD in signal processing from Tampere University of Technology, Finland, in 2014, under the supervision of Karen Eguiazarian. Before that, I obtained my master's degree from the National Aerospace University in Kharkiv, Ukraine. My master's research centered on radio systems, with a focus on higher-order spectral techniques for signal processing, mentored by Alexander Totsky.

We are always on the lookout for promising interns and full-time researchers. Feel free to reach out to me for more details. I am also interested in connecting with people who share similar research interests.


Selected publications (a full list with filters is available):

FasterViT: Fast Vision Transformers with Hierarchical Attention. (2023). arXiv preprint.

Heterogeneous Continual Learning. (2023). In CVPR 2023 (Highlight).

Recurrence without Recurrence: Stable Video Landmark Detection with Deep Equilibrium Models. (2023). In CVPR 2023.


Global Context Vision Transformers. (2023). In ICML 2023.

LANA: Latency Aware Network Acceleration. (2022). In ECCV 2022.

Structural Pruning via Latency-Saliency Knapsack. (2022). In NeurIPS 2022.

GradViT: Gradient Inversion of Vision Transformers. (2022). In CVPR 2022.

DRaCoN--Differentiable Rasterization Conditioned Neural Radiance Fields for Articulated Avatars. (2022). arXiv preprint arXiv:2203.15798.


Do Gradient Inversion Attacks Make Federated Learning Unsafe? (2022). arXiv preprint arXiv:2202.06924.

A-ViT: Adaptive Tokens for Efficient Vision Transformer. (2022). In CVPR 2022 (Oral).