Min Xu (@xumin100)'s Twitter Profile
Min Xu

@xumin100

Assistant Professor at Carnegie Mellon University

ID: 4086842539

Link: https://www.cs.cmu.edu/~mxu1 · Joined: 01-11-2015 04:01:56

92 Tweets

341 Followers

164 Following

Carnegie Mellon University (@carnegiemellon)'s Twitter Profile Photo

Carnegie Mellon University Trustee Ray Lane and his wife, Stephanie, have invested $25 million in support of the university’s boundary-breaking Computational Biology Department. cmu.is/Lane-gift-comp…

Min Xu (@xumin100)'s Twitter Profile Photo

Please check out our latest work at #AAAI2024: MedSegDiff-v2: Diffusion based Medical Image Segmentation with Transformers

Min Xu (@xumin100)'s Twitter Profile Photo

Please take a look at the manuscript detailing our recent research on cryo-ET denoising and simulation. doi.org/10.1101/2024.0…

Min Xu (@xumin100)'s Twitter Profile Photo

The manuscript of our recent research is now available: Enhancing Weakly Supervised 3D Medical Image Segmentation through Probabilistic-aware Learning. Please check it out. arxiv.org/abs/2403.02566

Yashika Jain (@yashikajain201)'s Twitter Profile Photo

Our paper "Cross-Domain Learning for Video Anomaly Detection with Limited Supervision" got accepted at #ECCV2024! My first-ever paper :) Super grateful to Ali DB and Min Xu for the guidance and opportunity!

Min Xu (@xumin100)'s Twitter Profile Photo

We're pleased to introduce CryoSAM, our new approach to Cryo-ET segmentation. This training-free framework leverages foundation models for efficient segmentation, reducing the need for manual annotations. Accepted by #MICCAI arxiv.org/abs/2407.06833 @cryoem_papers #cryoem_papers

Yashika Jain (@yashikajain201)'s Twitter Profile Photo

Our #ECCV2024 paper, "Cross-Domain Learning for Video Anomaly Detection with Limited Supervision", demonstrates the effectiveness of uncertainty-aware integration of external, unlabeled data with weakly-labeled source data to enhance the cross-domain generalization of VAD models.

JundeWu (@jundemorsenwu)'s Twitter Profile Photo

I’m excited to announce that our 1st AI for Medicine and Healthcare (AI4MedHealth) bridge program will be held with AAAI in Philadelphia on February 25-26, 2025. Please consider submitting this year. Our freshly baked website is here😁: sites.google.com/view/aimedheal… For

Jiayuan Zhu (@jiayuanzhu_)'s Twitter Profile Photo

We are excited to share our 1st AAAI Bridge Program - AI for Medicine and Healthcare (AI4MedHealth). This event will take place in Philadelphia on February 25-26, 2025. Feel free to check out our newborn website for details: sites.google.com/view/aimedheal… We are calling for

JundeWu (@jundemorsenwu)'s Twitter Profile Photo

I’m thrilled to announce that our work, MedSegDiff-V2, has been selected as AAAI 2024’s Most Influential Paper! 🥳 paperdigest.org/2024/09/most-i… In this paper, we build on MedSegDiff-V1 by integrating transformers into the diffusion model for medical image segmentation, with exciting

MICCAI (@miccai_society)'s Twitter Profile Photo

📢 We are thrilled to announce that #MICCAI2026 will be held in Abu Dhabi, UAE! 👏 This will be our first #MICCAI conference in the Middle East. 
Save the date!
🗓️  Oct 4-8, 2026
🔗miccai.org/index.php/news…
<a href="/MiccaiStudents/">MICCAI Students</a> <a href="/RMiccai/">RISE MICCAI</a> <a href="/WomenInMICCAI/">Women in MICCAI</a> 
#ai #deeplearning #imaging
Min Xu (@xumin100)'s Twitter Profile Photo

#CVPR2025 Happy to share that our paper “BOE-ViT: Boosting Orientation Estimation with Equivariance in Self-Supervised 3D Subtomogram Alignment” has been accepted at CVPR 2025! In this work, we introduce the first Vision Transformer for cryo-ET subtomogram alignment, addressing

Min Xu (@xumin100)'s Twitter Profile Photo

Excited to share that we have four papers at CVPR 2025!  
If you’ll be in Nashville, don’t miss the chance to stop by and chat with us!

🟡 DiffCAM: Data-Driven Saliency Maps by Capturing Feature Differences (**Highlight**)  
📍 Poster Session 2 · ExHall D · Poster #472  
📅 June