The main limitation of current artificial intelligence (AI) systems is that they lack a causal generative understanding of our world. We believe this deficiency is the root cause of foundational open problems in AI, such as the need for large-scale human supervision, the lack of interpretability, and the lack of safety and security. Therefore, we integrate aspects of machine learning, visual computing, and computer security to develop generative AI systems that learn in a self-supervised manner while serving as safe, secure, and human-understandable components of our everyday lives.

Dr. Adam Kortylewski
Group Leader


Google Scholar
Twitter
YouTube


News

[Nov 2024] We have one paper accepted to 3DV 2025.
[Nov 2024] We have one paper accepted to WACV 2025.
[Oct 2024] We are organizing the Dagstuhl Seminar on Generative Models for 3D Vision.
[Sep 2024] We have one paper accepted to NeurIPS 2024.
[Aug 2024] Adam will serve as Area Chair for CVPR 2025. Great honor!
[Aug 2024] Our paper on OOD-CV-v2 was accepted to TPAMI.
[Aug 2024] Basavaraj Sunagad joined our lab as a PhD student, welcome!
[Aug 2024] Adam will serve as Area Chair for ICLR 2025. Great honor!
[Jul 2024] We have four papers accepted to ECCV 2024.
[Apr 2024] We will organize the 3rd Workshop for Out-of-Distribution Generalization in Computer Vision and the AI for Visual Arts Workshop and Challenges at ECCV 2024.
[Mar 2024] We have four papers accepted to CVPR 2024.
[Jan 2024] We have three papers accepted to ICLR 2024 (one spotlight).
[Jan 2024] Haoran Wang joined our lab as a PhD student, welcome!
[Jan 2024] We will organize the 2nd edition of the Workshop on Generative Models for Computer Vision at CVPR 2024.