Medical Imaging
Academic Year 2025/2026 - Teacher: FRANCESCO RUNDO
Expected Learning Outcomes
Knowledge and understanding.
The student will acquire the physical and computational foundations of image formation across the main modalities (CT; MRI, including T1/T2, DWI, and fMRI; Ultrasound B-mode/Doppler; PET/SPECT; digital radiography/X-ray), the DICOM standard and PACS pipelines, the basic principles of clinical interpretation, classical image-processing techniques, and key concepts of 2D/3D Deep Learning for segmentation, detection, and classification in medical imaging (CNNs, attention/Transformers, hybrid models, self-supervision, domain adaptation). The student will also gain introductory mastery of Radiomics and outcome prediction; of generative and foundation models (GANs, diffusion models, multimodal image–text models, LLMs, hybrid models); and of continual learning, fairness, explainability, and regularization (Jacobian/Lipschitz).
Applying knowledge and understanding.
Design and implement an end-to-end pipeline: DICOM ingestion and de-identification; pre-processing (denoising, equalization, rigid/non-rigid [elastic] registration, multimodal fusion); medical-specific data augmentation; training/evaluation of 2D/3D encoder–decoder networks with deep supervision and attention; computation of evaluation metrics (Dice, IoU, AUC, etc.); radiomic feature extraction and clinical-omics integration; individual risk estimation/stratification and prediction of therapeutic response; application of domain adaptation and regularization; use of generative/diffusion models for data synthesis and bias mitigation.
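To make the first steps of such a pipeline concrete, the following is a minimal sketch of DICOM ingestion, basic de-identification, and intensity normalization. It assumes the pydicom and numpy packages and a hypothetical file name; it is an illustration, not the reference implementation used in the course.

```python
# Minimal sketch: DICOM ingestion, basic de-identification, intensity normalization.
# Assumes pydicom and numpy are installed; "slice.dcm" is a hypothetical file name.
import numpy as np
import pydicom

ds = pydicom.dcmread("slice.dcm")

# Basic de-identification: blank a few direct identifiers if present
# (a real pipeline would follow the DICOM de-identification profiles, PS3.15).
for attr in ("PatientName", "PatientID", "PatientBirthDate"):
    if attr in ds:
        setattr(ds, attr, "")

# Map stored values to physical units using the rescale tags (e.g. Hounsfield units for CT).
img = ds.pixel_array.astype(np.float32)
img = img * float(getattr(ds, "RescaleSlope", 1.0)) + float(getattr(ds, "RescaleIntercept", 0.0))

# Simple min-max normalization to [0, 1] as a pre-processing step.
img = (img - img.min()) / (img.max() - img.min() + 1e-8)
```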
Making judgements.
Assess data quality; select pre-processing methods and architectures suited to the task and modality; set up reproducible experiments; interpret learning curves and robustness–accuracy trade-offs; analyze demographic biases and the ethical implications of medical-imaging/Radiomics studies.
Communication skills.
Produce clear notebooks and technical reports (including model cards and limitations/risks); present results and visualizations; draft concise scientific interpretation notes for case studies.
Learning skills.
Consult research articles and position papers; evaluate benchmarks; stay current with emerging architectures and standards; transfer competencies to new modalities and clinical domains.
Course Structure
Lectures are delivered in person with the support of slides, which are made available to students. The slides do not replace the reference textbooks; in addition to facilitating comprehension of the lectures, they provide a detailed account of the material covered.
If necessary, and following directives issued by the University, the course may be delivered in hybrid or fully online mode, with any adjustments required relative to the provisions above, in order to adhere to the syllabus outlined here.
Required Prerequisites
Basic Python programming and associated AI development environments;
Basic image processing methodologies;
Fundamentals of calculus, probability/statistics, and linear algebra;
Introductory concepts in machine/deep learning.
Attendance of Lessons
Detailed Course Content
- Medical Image Formation
  Physics and imaging pipelines: CT; Magnetic Resonance Imaging (MRI), including T1/T2 and DWI; functional MRI (fMRI); Ultrasound (B-mode, Doppler); PET/SPECT; digital radiography (X-ray). DICOM and PACS systems; basic principles of image interpretation and anatomy refresher.
- Foundations of Deep Learning (DL) for 2D/3D Imaging
  Convolutional Neural Networks (CNNs); attention mechanisms; Transformer-based architectures; hybrid architectures; medical-specific data augmentation (see the augmentation sketch after this list); supervised pre-training vs self-/auto-supervised learning on radiology datasets; domain adaptation; perceptual/robust/adaptive DL; DL on non-Euclidean geometric spaces, including hyperbolic geometry (introductory notions).
- Segmentation & Detection with Encoder–Decoder Networks
  Segmentation pipeline; 2D/3D U-Net architectures and variants (residual, multi-scale, attention); deep supervision; evaluation metrics (Dice/IoU/AUC and related; see the metrics sketch after this list); clinical case studies (lung CT, brain MRI).
- Advanced Architectures for Medical Imaging
  Hybrid CNN-Transformer models; hierarchical attention and long-range mechanisms in 2D/3D; feature-/input-domain noise compensation; cross-modality models; deformable-attention mechanisms; continual-learning and adversarial-defense approaches for medical imaging; bio-inspired models for medical imaging (overview).
- Radiomics & Outcome Prediction
  Hand-crafted features (shape, texture, first-order; see the feature sketch after this list) and deep features; clinical-omics integration; survival models and imaging-based risk prediction; classification/prediction of treatment response. Case studies.
- Generative & Foundation Models
  GANs, diffusion models (see the diffusion sketch after this list), multimodal pre-training; Large Language Models (LLMs) for medical imaging; data synthesis for augmentation and bias mitigation; zero-/few-shot learning; structured chain-of-thought for advanced medical imaging.
- Continual Learning, Fairness & Regularization in Medical Imaging
  Catastrophic forgetting; replay/regularization; privacy-preserving training; bias and fairness analysis; explainability (saliency/Grad-CAM); Jacobian regularization and Lipschitz-constrained models (see the Jacobian penalty sketch after this list); bio-inspired models and neuromodulation (overview).
- Medical Imaging Applications
  Neuroimaging and oncologic imaging: diagnostic cases and guided mini-projects.
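For the medical-specific data augmentation topic under "Foundations of Deep Learning", the following is a minimal numpy sketch of two slice-level augmentations (random left-right flip and a mild intensity perturbation); the probability and parameter ranges are illustrative assumptions, not values prescribed by the course.

```python
# Minimal sketch: two slice-level augmentations for a normalized 2D image in [0, 1].
# The flip probability and intensity ranges are illustrative assumptions.
import numpy as np

def augment(img: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    out = img.copy()
    if rng.random() < 0.5:                 # random left-right flip
        out = np.fliplr(out)
    gain = rng.uniform(0.9, 1.1)           # mild global intensity scaling
    shift = rng.uniform(-0.05, 0.05)       # mild additive intensity shift
    return np.clip(out * gain + shift, 0.0, 1.0)
```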
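For the evaluation metrics listed under "Segmentation & Detection with Encoder–Decoder Networks", a minimal numpy sketch of the binary Dice coefficient and IoU computed between a predicted and a reference mask:

```python
# Minimal sketch: binary Dice coefficient and IoU for segmentation masks.
import numpy as np

def dice(pred: np.ndarray, target: np.ndarray, eps: float = 1e-8) -> float:
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return float((2.0 * inter + eps) / (pred.sum() + target.sum() + eps))

def iou(pred: np.ndarray, target: np.ndarray, eps: float = 1e-8) -> float:
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return float((inter + eps) / (union + eps))
```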
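For the hand-crafted first-order features mentioned under "Radiomics & Outcome Prediction", a minimal numpy sketch computing a few statistics inside a binary region of interest; in practice a dedicated library such as PyRadiomics would typically be used, so this is only a didactic illustration.

```python
# Minimal sketch: a few first-order radiomic features inside a binary ROI mask.
import numpy as np

def first_order_features(img: np.ndarray, mask: np.ndarray, bins: int = 32) -> dict:
    vals = img[mask.astype(bool)].astype(np.float64)
    hist, _ = np.histogram(vals, bins=bins)
    p = hist / (hist.sum() + 1e-12)                   # discretized intensity probabilities
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
    std = vals.std() + 1e-12
    return {
        "mean": float(vals.mean()),
        "std": float(vals.std()),
        "skewness": float(((vals - vals.mean()) ** 3).mean() / std ** 3),
        "entropy": float(entropy),
    }
```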
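For the diffusion models mentioned under "Generative & Foundation Models", a minimal sketch of the DDPM-style forward (noising) step that the denoising network is trained to invert; the linear beta schedule and its range are illustrative assumptions.

```python
# Minimal sketch: DDPM-style forward (noising) step q(x_t | x_0).
# The linear beta schedule below is an illustrative assumption.
import numpy as np

T = 1000
betas = np.linspace(1e-4, 2e-2, T)       # noise schedule beta_1..beta_T
alpha_bar = np.cumprod(1.0 - betas)      # cumulative products of (1 - beta_t)

def forward_noise(x0: np.ndarray, t: int, rng: np.random.Generator):
    """Return a noised image x_t and the noise eps the model learns to predict."""
    eps = rng.standard_normal(x0.shape).astype(np.float32)
    xt = np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps
    return xt, eps
```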
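For Jacobian regularization under "Continual Learning, Fairness & Regularization in Medical Imaging", a minimal PyTorch sketch of a penalty that stochastically estimates the squared Frobenius norm of the input-output Jacobian via a random projection; it assumes a model returning a (batch, num_outputs) tensor and is a didactic sketch, not the formulation adopted in the course.

```python
# Minimal sketch: stochastic estimate of the squared Frobenius norm of the
# input-output Jacobian, usable as an extra loss term (loss += lambda * penalty).
# Assumes `model` maps a batch of inputs to a (batch, num_outputs) tensor.
import torch

def jacobian_penalty(model: torch.nn.Module, x: torch.Tensor) -> torch.Tensor:
    x = x.clone().requires_grad_(True)
    out = model(x)
    # One random unit projection vector per sample (Hutchinson-style estimator).
    v = torch.randn_like(out)
    v = v / (v.norm(dim=1, keepdim=True) + 1e-12)
    (grad,) = torch.autograd.grad((out * v).sum(), x, create_graph=True)
    # E_v ||J^T v||^2 = ||J||_F^2 / num_outputs, so rescale by num_outputs.
    return out.shape[1] * grad.pow(2).flatten(start_dim=1).sum(dim=1).mean()
```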
Textbook Information
- Instructor’s handouts/slides.
- Prince & Links, Medical Imaging Signals and Systems, 3rd ed.
- Zhou, Greenspan, Shen, Deep Learning for Medical Image Analysis, 2nd ed.
- Research articles and position papers (to be specified during the course).
Course Planning
| | Subjects | Text References |
|---|---|---|
| 1 | Medical Image Formation | |
| 2 | Foundations of Deep Learning (DL) for 2D/3D Imaging | |
| 3 | Segmentation & Detection with Encoder–Decoder Networks | |
| 4 | Advanced Architectures for Medical Imaging | |
| 5 | Radiomics & Outcome Prediction | |
| 6 | Generative & Foundation Models | |
| 7 | Continual Learning, Fairness & Regularization in Medical Imaging | |
| 8 | Medical Imaging Applications | |
Learning Assessment
Learning Assessment Procedures
The final examination is structured as follows:
- Written exam;
- Development of a Python-based project agreed upon with the instructor.
The written exam consists of five open-ended questions. Advance registration for the written exam is mandatory.
Notes:
- The use of any hardware devices (programmable/scientific calculators, tablets, smartphones, smartwatches, mobile phones, Bluetooth earphones, etc.) and personal books or documents is strictly prohibited during the written exam. Any permitted materials will be provided by the examination board during the test.
- During written exams, bags and other containers must not be kept within reach and must be left at a suitable distance. Bringing valuables is discouraged: the examination board will not store personal items and cannot be held responsible for any loss.
- To sit the exam, registration on the SmartEdu portal is required. For technical issues related to registration, please contact the Teaching Secretariat.
- Late registrations by email will not be accepted. Without a valid registration, the exam cannot be taken or officially recorded.
- Students with disabilities and/or SLD (DSA) must contact, well in advance of the exam date, both the examination board and the CInAP contact person at DMI to request appropriate accommodations. Such accommodations must be certified by CInAP.
- Midterm assessments: not provided.
- If necessary, and following specific directives from the University, the assessment may be conducted online, with any adjustments required relative to the provisions above.
Evaluation criteria
The assessments are intended to provide an overall evaluation of the student’s preparation. The final grade is obtained by averaging the evaluations of the project and of the written exam.
Examples of frequently asked questions and/or exercises
Explain the differences between the T1- and T2-weighted MRI contrast mechanisms, and describe the principle underlying diffusion-weighted imaging (DWI).
Describe the DICOM standard and a typical PACS pipeline.
Describe GANs vs diffusion models for data synthesis and bias mitigation.
Describe in detail a continual learning approach that mitigates catastrophic forgetting in medical imaging.