
Hi there 👋

  • 🔭 I am interested in building Robust Intelligent Systems. My research focuses on robust visual-spatial and temporal perception, understanding and explaining AI behavior through adversarial machine learning, representation learning through self-learning (self-supervision, self-distillation, self-critique, self-reflection), and defining the role of large language models (LLMs) in building robust AI systems across applications in the life sciences and security.
  • 🌱 You are welcome to explore my research through the code linked below. Eight of the papers were accepted as Orals/Spotlights at ICLR, NeurIPS, AAAI, CVPR, BMVC, and ACCV.
  • 📫 How to reach me: muz.pak@gmail.com

🌱 Repositories

| Topic | Application | Paper | Repo | Venue |
|---|---|---|---|---|
| Visual-Spatial Perception | Understanding Vision Models' Generalization | ObjectCompose: Evaluating Resilience of Vision-Based Models on Object-to-Background Compositional Changes | ObjectCompose | ACCV'24-Oral |
| Adversarial Machine Learning | Volumetric Medical Segmentation | On Evaluating Adversarial Robustness of Volumetric Medical Segmentation Models | RVMSM | BMVC'24 |
| Vision-Language Model | Histopathology Representation Learning | Hierarchical Text-to-Vision Self Supervised Alignment for Improved Histopathology Representation Learning | HLSS | MICCAI'24 |
| Self-Learning | Volumetric Medical Segmentation | MedContext: Learning Contextual Cues for Efficient Volumetric Medical Segmentation | MedContext | MICCAI'24 |
| Adversarial Machine Learning | Adversarial Attack | BAPLe: Backdoor Attacks on Medical Foundational Models using Prompt Learning | baple | MICCAI'24 |
| Adversarial Machine Learning | Certifiable Adversarial Defense | PromptSmooth: Certifying Robustness of Medical Vision-Language Models via Prompt Learning | promptsmooth | MICCAI'24 |
| Vision-Language Model | Composed Video Retrieval | Composed Video Retrieval via Enriched Context and Discriminative Embeddings | composed-video-retrieval | CVPR'24 |
| Self-Learning | Multi-Spectral Satellite Imagery | Rethinking Transformers Pre-training for Multi-Spectral Satellite Imagery | satmae_pp | CVPR'24 |
| Vision-Language Model | Video Grounding | Video-GroundingDINO: Towards Open-Vocabulary Spatio-Temporal Video Grounding | Video-GroundingDINO | CVPR'24 |
| Multi-modal Large Language Model | VLM for Remote Sensing | GeoChat: Grounded Large Vision-Language Model for Remote Sensing | GeoChat | CVPR'24 |
| Text-to-Image Model | Leveraging LLMs to Generate Complex Scenes (Zero-Shot) | LLM Blueprint: Enabling Text-to-Image Generation with Complex and Detailed Prompts | llmblueprint | ICLR'24 |
| Vision-Language Model | Self-structural Alignment of Foundational Models (Zero-Shot) | Towards Realistic Zero-Shot Classification via Self Structural Semantic Alignment | S3A | AAAI'24-Oral |
| Vision-Language Model | Test-Time Alignment of Foundational Models (Zero-Shot) | Align Your Prompts: Test-Time Prompting with Distribution Alignment for Zero-Shot Generalization | PromptAlign | NeurIPS'23 |
| Vision-Language Model | Regulating Foundational Models | Self-regulating Prompts: Foundational Model Adaptation without Forgetting | PromptSRC | ICCV'23 |
| Visual-Spatial and Temporal Perception | Video Recognition | Video-FocalNets: Spatio-Temporal Focal Modulation for Video Action Recognition | Video-FocalNets | ICCV'23 |
| Vision-Language Model | Face Anti-spoofing | FLIP: Cross-domain Face Anti-spoofing with Language Guidance | FLIP | ICCV'23 |
| Adversarial Machine Learning | Adversarial Training | Frequency Domain Adversarial Training for Robust Volumetric Medical Segmentation | VAFA | MICCAI'23 |
| Adversarial Machine Learning | Facial Privacy | CLIP2Protect: Protecting Facial Privacy Using Text-Guided Makeup via Adversarial Latent Search | Clip2Protect | CVPR'23 |
| Vision-Language Model | Video Recognition (Zero-Shot) | Vita-CLIP: Video and Text Adaptive CLIP via Multimodal Prompting | Vita-CLIP | CVPR'23 |
| Self-Learning | Image Recognition (Category Discovery) | PromptCAL for Generalized Novel Category Discovery | PromptCAL | CVPR'23 |
| Adversarial Machine Learning | Adversarial Attack | Boosting Adversarial Transferability using Dynamic Cues | DCViT-AT | ICLR'23 |
| Self-Learning | Video Recognition | Self-Supervised Video Transformer | SVT | CVPR'22-Oral |
| Adversarial Machine Learning | Adversarial Defense | Stylized Adversarial Training | SAT | IEEE-TPAMI'22 |
| Adversarial Machine Learning | Adversarial Attack | Adversarial Pixel Restoration as a Pretext Task for Transferable Perturbations | ARP | BMVC'22-Oral |
| Visual-Spatial Perception | Image Recognition | How to Train Vision Transformer on Small-scale Datasets? | VSSD | BMVC'22 |
| Self-Learning | Image Recognition (Domain Generalization) | Self-Distilled Vision Transformer for Domain Generalization | SDViT | ACCV'22-Oral |
| Visual-Spatial Perception | Understanding Vision Transformers | Intriguing Properties of Vision Transformers | IPViT | NeurIPS'21-Spotlight |
| Adversarial Machine Learning | Adversarial Attack | On Improving Adversarial Transferability of Vision Transformers | ATViT | ICLR'22-Spotlight |
| Adversarial Machine Learning | Adversarial Attack | On Generating Transferable Targeted Perturbations | TTP | ICCV'21 |
| Visual-Spatial Perception | Image Recognition | Orthogonal Projection Loss | OPL | ICCV'21 |
| Adversarial Machine Learning | Adversarial Defense | A Self-supervised Approach for Adversarial Robustness | NRP | CVPR'20-Oral |
| Adversarial Machine Learning | Adversarial Attack | Cross-Domain Transferability of Adversarial Perturbations | CDA | NeurIPS'19 |
| Adversarial Machine Learning | Adversarial Defense | Local Gradients Smoothing: Defense Against Localized Adversarial Attacks | LGS | WACV'19 |

Pinned

  1. hananshafi/vits-for-small-scale-datasets (Python, 141 stars, 13 forks)

     [BMVC 2022] Official repository for "How to Train Vision Transformer on Small-scale Datasets?"

  2. IPViT (Python, 176 stars, 19 forks)

     Official repository for "Intriguing Properties of Vision Transformers" (NeurIPS 2021, Spotlight)

  3. NRP (Python, 95 stars, 17 forks)

     Official repository for "A Self-supervised Approach for Adversarial Robustness" (CVPR 2020, Oral)

  4. ATViT (Python, 69 stars, 11 forks)

     Official repository for "On Improving Adversarial Transferability of Vision Transformers" (ICLR 2022, Spotlight)

  5. TTP (Python, 59 stars, 9 forks)

     Official repository for "On Generating Transferable Targeted Perturbations" (ICCV 2021)

  6. CDA (Python, 57 stars, 11 forks)

     Official repository for "Cross-Domain Transferability of Adversarial Perturbations" (NeurIPS 2019)