Pavlo Molchanov

I am a Distinguished Research Scientist and Team Manager at NVIDIA Research. I work in the LPR team led by Jan Kautz, focusing on efficient deep learning and human-centric computer vision. Since 2023, I have been leading a research team dedicated to efficient deep learning, specifically model compression, NAS-like acceleration, novel architectures, and adaptive/conditional inference. Currently, we are primarily focused on LLMs and vision-language models.

I earned my PhD from Tampere University of Technology, Finland, in 2014, specializing in signal processing under the supervision of Karen Eguiazarian. Prior to that, I obtained my master's degree from the National Aerospace University in Kharkiv, Ukraine. My master's research centered on radio systems, with a focus on higher-order spectral techniques for signal processing, mentored by Alexander Totsky.

We are always on the lookout for promising interns and full-time researchers in the area of LLM and VLM efficiency. Feel free to reach out to me for more details. I am also interested in connecting with individuals who share similar research interests.

Publications

Full list with filter

AM-RADIO: Reduce All Domains Into One. CVPR 2024.

VILA: On Pre-training for Visual Language Models. CVPR 2024.

FasterViT: Fast Vision Transformers with Hierarchical Attention. ICLR 2024.

Heterogeneous Continual Learning. CVPR 2023 (Highlight).

Recurrence without Recurrence: Stable Video Landmark Detection with Deep Equilibrium Models. CVPR 2023.

Global Context Vision Transformers. ICML 2023.

LANA: Latency Aware Network Acceleration. ECCV 2022.

Structural Pruning via Latency-Saliency Knapsack. NeurIPS 2022.

GradViT: Gradient Inversion of Vision Transformers. CVPR 2022.

Do Gradient Inversion Attacks Make Federated Learning Unsafe? arXiv preprint arXiv:2202.06924, 2022.