Numerical Methods for Scientific Computing

Academic Year 2025/2026 - Teacher: SEBASTIANO BOSCARINO

Expected Learning Outcomes

The course provides a brief introduction to numerical methods for solving linear systems, function interpolation and approximation, nonlinear equations, and numerical integration. The practical implementation of these methods will be illustrated throughout the course using the software MATLAB.

Knowledge and Understanding:
The main objective of the course is to foster an active and critical understanding of the subject matter, going beyond the mere memorization of techniques, and aiming instead at a deep comprehension of the underlying principles and ideas.

Applying Knowledge and Understanding:
Students will acquire computational tools and will be expected to master and apply them to concrete, real-world problems.

Making Judgments (Autonomy of Judgment):
Students must be able to compare the different numerical methods covered in the course and determine the most suitable one for solving a specific problem, taking into account both the problem's characteristics and the available computational resources (including issues of computational efficiency).

Communication Skills:
Students are expected to clearly and effectively present the topics covered during the oral examination, and to explicitly describe the various steps involved in solving problems.

Learning Skills:
Learning is actively encouraged during lectures through direct questioning by the instructor, promoting continuous engagement and critical thinking.

Course Structure

Lectures will be held in person, following a traditional classroom format. Theoretical content will be presented by the instructor, supported by slides and blackboard. Active student participation will be encouraged through questions and in-class discussions.

Should the course be delivered in a blended or online format, appropriate changes may be introduced to ensure alignment with the planned syllabus and learning objectives.

IMPORTANT NOTE: Information for Students with Disabilities and/or Specific Learning Disorders (SLD)
To ensure equal opportunities and in compliance with current legislation, students with disabilities or SLD are invited to request a personal meeting to discuss and arrange appropriate compensatory measures and/or accommodations, tailored to the course objectives and individual needs.

Students may also contact the CInAP (Center for Active Integration and Participation – Services for Disabilities and/or SLD) coordinator for our Department, Prof. Patrizia Daniele.

Required Prerequisites

A knowledge of elementary analysis concepts is required, including real and complex numbers, limits, derivatives and integrals of functions of one variable, multivariable functions, and series. Additionally, familiarity with linear algebra (vector spaces, matrix algebra) is expected.

Attendance of Lessons

Attendance is not mandatory but is strongly recommended.

Detailed Course Content

Introduction to the use of the MATLAB programming language. Floating-point representation. Machine numbers. Truncation and rounding. Machine operations. Numerical cancellation. Order of accuracy.
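
Numerical cancellation can be illustrated with a small sketch. The course labs use MATLAB, but the effect is identical in any double-precision environment; this Python example and its function names are illustrative, not course material:

```python
# Numerical cancellation: subtracting nearly equal numbers destroys digits.
# Evaluating f(x) = (1 - cos(x)) / x^2 naively fails for small x, while the
# mathematically equivalent form 2*sin(x/2)^2 / x^2 is stable; the true
# limit as x -> 0 is 1/2.
import math

def f_naive(x):
    return (1.0 - math.cos(x)) / x**2   # 1 - cos(x) cancels catastrophically

def f_stable(x):
    return 2.0 * math.sin(x / 2.0)**2 / x**2   # no subtraction of close numbers

x = 1.2e-8
# f_stable(x) stays close to 0.5; f_naive(x) is badly wrong at this scale
```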

Numerical linear algebra. Direct methods for solving linear systems. Iterative methods for linear systems. Stationary and non-stationary Richardson methods. Eigenvalue localization: Gershgorin-Hadamard theorems. Eigenvalue computation: the power method and the inverse power method. Generalization of the power method. QR and SVD methods.
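
As a sketch of an iterative method for linear systems (the course uses MATLAB; this Python/NumPy version with a made-up test matrix is only illustrative), here is the Jacobi method, whose convergence is guaranteed for strictly diagonally dominant matrices:

```python
# Jacobi iteration for Ax = b: split A = D + R (diagonal + off-diagonal)
# and iterate x_{k+1} = D^{-1} (b - R x_k) until the increment is small.
import numpy as np

def jacobi(A, b, x0, tol=1e-10, maxit=500):
    D = np.diag(A)             # diagonal entries of A
    R = A - np.diagflat(D)     # off-diagonal part
    x = x0.astype(float)
    for _ in range(maxit):
        x_new = (b - R @ x) / D
        if np.linalg.norm(x_new - x, np.inf) < tol:   # stopping criterion
            return x_new
        x = x_new
    return x

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 5.0, 2.0],
              [0.0, 2.0, 6.0]])   # strictly diagonally dominant
b = np.array([1.0, 2.0, 3.0])
x = jacobi(A, b, np.zeros(3))
# the residual norm ||Ax - b|| is then close to machine precision
```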

Function and data approximation: Polynomial interpolation. Least squares method and applications. Normal equations and their geometric interpretation. Linear regression problems (curve fitting).
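
The normal equations for least-squares fitting can be sketched as follows (Python/NumPy with invented data points; the course itself works in MATLAB):

```python
# Least-squares line fit via the normal equations (A^T A) c = A^T y,
# where A is the design matrix of the model c0 + c1*x.
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.1, 2.9, 5.2, 6.8])           # roughly y = 2x + 1, with noise

A = np.column_stack([np.ones_like(x), x])     # columns: [1, x]
c = np.linalg.solve(A.T @ A, A.T @ y)         # solve the normal equations
# c[0] is the fitted intercept, c[1] the fitted slope
```

Geometrically, the solution makes the residual y - Ac orthogonal to the column space of A.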

Numerical optimization: Unconstrained optimization, descent methods, gradient descent and conjugate gradient methods, ADAM method. Constrained optimization.
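
A minimal descent-method sketch, using a hypothetical quadratic objective (Python rather than the course's MATLAB): gradient descent with a fixed step on f(x) = ½ xᵀAx - bᵀx, whose minimizer solves Ax = b.

```python
# Fixed-step gradient descent on a convex quadratic. Convergence requires
# step < 2 / lambda_max(A) for symmetric positive definite A.
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 2.0]])   # symmetric positive definite
b = np.array([1.0, 1.0])

x = np.zeros(2)
step = 0.2                   # safely below 2 / lambda_max(A) ~ 0.55
for _ in range(200):
    grad = A @ x - b         # gradient of f(x) = 0.5 x^T A x - b^T x
    x = x - step * grad
# x approximates the exact minimizer A^{-1} b = [0.2, 0.4]
```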

Solution of nonlinear equations. Bisection, secant, and Newton's methods. General theory of iterative methods for nonlinear equations and fixed-point problems. Order of convergence. Stopping criteria.
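
Newton's method and a stopping criterion on the increment can be sketched as follows (illustrative Python, test function x² - 2 chosen by us):

```python
# Newton's method x_{k+1} = x_k - f(x_k)/f'(x_k); near a simple root the
# convergence is quadratic (order 2).
def newton(f, fprime, x0, tol=1e-12, maxit=50):
    x = x0
    for _ in range(maxit):
        dx = f(x) / fprime(x)
        x -= dx                  # Newton update
        if abs(dx) < tol:        # stopping criterion on the increment
            break
    return x

root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0)
# root approximates sqrt(2) to machine precision in a handful of iterations
```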

Numerical quadrature. General form of a quadrature formula. Polynomial degree. Interpolatory quadrature. Convergence theorem. Newton-Cotes formulas. Gaussian quadrature. Composite formulas: trapezoidal and Simpson's rules. Romberg integration. Adaptive quadrature (overview). Monte Carlo methods for integral computation.
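
The composite rules can be compared on a simple integral (a Python sketch with a test integrand of our choosing; the course uses MATLAB). For ∫₀¹ eˣ dx = e - 1, Simpson's rule (order 4) beats the trapezoidal rule (order 2) at the same number of subintervals:

```python
# Composite trapezoidal and Simpson rules on [a, b] with n subintervals.
import math

def trapezoid(f, a, b, n):
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return h * s

def simpson(f, a, b, n):             # n must be even
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + i * h) for i in range(1, n, 2))   # odd nodes
    s += 2 * sum(f(a + i * h) for i in range(2, n, 2))   # even interior nodes
    return h * s / 3.0

exact = math.e - 1.0
# with n = 16, Simpson's error is several orders of magnitude smaller
```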

Numerical methods for ordinary differential equations (ODEs). Introduction to numerical methods for solving ODEs. Runge-Kutta and multistep methods.
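
A one-step Runge-Kutta scheme can be sketched as follows (Python, with the model problem y' = -y chosen for illustration):

```python
# Classical fourth-order Runge-Kutta (RK4) step for y' = f(t, y).
import math

def rk4_step(f, t, y, h):
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h,     y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

f = lambda t, y: -y                  # test problem with exact solution e^{-t}
t, y, h = 0.0, 1.0, 0.1
for _ in range(10):                  # integrate from t = 0 to t = 1
    y = rk4_step(f, t, y, h)
    t += h
# y approximates exp(-1); the global error is O(h^4)
```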

Introduction to Machine Learning for scientific modeling: Feed-forward neural networks, Physics-Informed Neural Networks (PINNs), Encoding physical constraints as loss terms, PINNs for ODEs and PDEs: formulation and implementation.
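
The core PINN idea, encoding a physical constraint as a loss term, can be sketched without any neural-network machinery (an illustrative Python toy of ours, not a course implementation): a candidate function for the ODE y' = -y, y(0) = 1 is scored by the squared ODE residual plus the initial-condition mismatch.

```python
# PINN-style loss for y' = -y, y(0) = 1: physics residual (y' + y)^2,
# sampled by central finite differences, plus an initial-condition penalty.
import math

def pinn_style_loss(y, ts, h):
    phys = sum(((y(t + h) - y(t - h)) / (2 * h) + y(t))**2 for t in ts)
    ic = (y(0.0) - 1.0)**2           # data/boundary term
    return phys / len(ts) + ic

ts = [0.1 * k for k in range(1, 10)]     # collocation points in (0, 1)
h = 1e-4
good = pinn_style_loss(lambda t: math.exp(-t), ts, h)   # exact solution
bad  = pinn_style_loss(lambda t: 1.0 - t, ts, h)        # violates the ODE
# good is essentially zero, bad is not; a PINN minimizes this loss over
# the weights of a neural network playing the role of y
```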

Textbook Information

G. Naldi, L. Pareschi, G. Russo, Introduzione al calcolo scientifico, McGraw-Hill, 2001.

V. Comincioli, Analisi Numerica: metodi, modelli, applicazioni, McGraw-Hill, Milano, 1990.

G. Monegato, Calcolo Numerico, Levrotto e Bella, Torino, 1985.

A. Quarteroni, R. Sacco, F. Saleri, Matematica Numerica, Springer Italia, Milano, 1998.

B. Després, Neural Networks and Numerical Analysis, De Gruyter Series in Applied and Numerical Mathematics, De Gruyter.

Course Planning

Subjects (with text references where applicable):

1. Introduction to MATLAB.
2. Floating-point representation. Machine numbers. Truncation and rounding. Machine operations. Numerical cancellation. Order of accuracy.
3. Numerical linear algebra. Linear algebra review: vectors, matrices, determinants, inverse matrix. Vector norms and matrix norms. Natural (induced) norms and their representation. Eigenvalues. Spectral radius. Generalization of the power method. QR and SVD methods.
4. Direct methods for solving linear systems: triangular systems, Gaussian elimination, pivoting. LU and PA=LU factorizations.
5. Compact methods, Cholesky factorization. Conditioning of linear systems. Condition numbers. Sparse matrices and their representation.
6. Iterative methods for solving linear systems: Jacobi method, Gauss–Seidel method, and Successive Over-Relaxation (SOR) method. Stopping criteria.
7. Eigenvalues and eigenvectors: review. Eigenvalue localization: the Gershgorin–Hadamard theorems. Eigenvalue computation: the power method and the inverse power method.
8. Function and data approximation. Polynomial interpolation. Lagrange form. Linear interpolation operator. Computation of the interpolating polynomial. Newton's divided difference formula.
9. The interpolation remainder in Lagrange and Newton forms.
10. Chebyshev polynomials: recurrence relation, zeros, and minimal norm property.
11. Piecewise polynomial interpolation. Spline functions. Least squares method and applications. Normal equations and their geometric interpretation. Linear regression problems (curve fitting).
12. Unconstrained optimization: descent methods, gradient descent, conjugate gradient method, and the ADAM algorithm. Constrained optimization.
13. Solution of nonlinear equations. General concepts. Bisection, secant, and Newton methods. General theory of iterative methods for nonlinear equations and fixed-point problems. Order of convergence. Stopping criteria.
14. Quadrature formulas. Weighted integrals. General form of a quadrature formula. Polynomial degree of exactness. Interpolatory quadrature rules.
15. Convergence theorem. Newton–Cotes formulas. Gaussian quadrature. Composite rules: trapezoidal and Simpson's rules.
16. Runge-Kutta and multistep methods for solving ODEs: an overview.
17. Feed-forward neural networks. Physics-Informed Neural Networks (PINNs): concept and motivation. Encoding physical constraints as loss terms. PINNs for ODEs and PDEs: formulation and implementation. (Text reference: B. Després, Neural Networks and Numerical Analysis, De Gruyter.)
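
One of the planned topics, the power method for eigenvalue computation (item 7), can be sketched briefly (illustrative Python with a matrix of our choosing; the course works in MATLAB):

```python
# Power method: repeated multiplication by A aligns the iterate with the
# dominant eigenvector; the Rayleigh quotient estimates the eigenvalue.
import numpy as np

def power_method(A, x0, maxit=200):
    x = x0 / np.linalg.norm(x0)
    lam = 0.0
    for _ in range(maxit):
        y = A @ x
        lam = x @ y                  # Rayleigh quotient estimate
        x = y / np.linalg.norm(y)    # renormalize to avoid overflow
    return lam, x

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
lam, v = power_method(A, np.array([1.0, 0.0]))
# lam approximates the largest eigenvalue, (5 + sqrt(5)) / 2
```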

Learning Assessment

Learning Assessment Procedures

The final exam consists of a project presented by the student, focusing on selected course topics and including a meaningful application to computer science problems. Registration for an exam session is mandatory and must be completed exclusively online through the student portal within the designated period.

Grading criteria: The evaluation will consider the clarity of presentation, completeness of knowledge, and the ability to connect different topics. The student must demonstrate a sufficient understanding of the main subjects covered during the course.

Exams may be conducted remotely if circumstances require it.

The exam aims to thoroughly assess the student's preparation, analytical skills, reasoning abilities regarding the course topics, and the appropriate use of technical language.

The following criteria will generally be used for assigning the final grade:

  • Fail: The student has not acquired the fundamental concepts and is unable to solve basic exercises.
  • 18–23: The student shows a minimal understanding of the basic concepts. Presentation and ability to connect topics are limited; the student can solve simple exercises.
  • 24–27: The student demonstrates a solid understanding of the course content, with good presentation skills and the ability to make connections between topics. Exercises are solved with few errors.
  • 28–30 cum laude: The student has fully mastered all course content, is able to present it thoroughly and make critical connections between topics, and solves exercises completely and without errors.

Students with disabilities and/or Specific Learning Disorders (SLD) must contact the instructor, the DMI CInAP coordinator (Prof. Daniele), and the CInAP office well in advance of the exam date to request appropriate compensatory measures.

Examples of frequently asked questions and/or exercises

The exam consists of the presentation and discussion of a project related to a course topic. Therefore, the exam questions will focus on the content of the presented project.

Examples of projects:

1) Numerical Analysis of Linear Regression: Comparative Study of Closed-Form and Iterative Methods

2) Comparison of Iterative Methods and Application to a Machine-Learning Model for Parameter Learning
