Abstract - Enumath 2013

ENUMATH 2013
Hosted by
École Polytechnique Fédérale de Lausanne
Mathematical Institute of Computational Science and Engineering
BOOK OF ABSTRACTS
26 – 30 August 2013
DAILY TIMETABLES
MONDAY 26TH AUGUST
Monday 26th August (Morning)
ENUMATH 2013
07:30 - 08:50
Registration
Rolex Learning Center Auditorium (RLC)
08:50 - 09:00
Opening
Rolex Learning Center Auditorium (RLC)
09:00 - 09:50
09:50 - 10:40
Rolex Learning
Barbara Wohlmuth (Pg. 416)
Center Auditorium
Ernst Hairer (Pg. 150)
Interfaces, corners and point sources
Organizers
Chair
CO017
CO122
MSMA:
Multiscale methods for atomistic and continuum models
SMAP:
Surrogate modeling approaches for PDEs
PARA:
Bridging software design and performance tuning
FEPD:
Finite elements for
PDE-constrained
optimization
ACDA:
Approximation,
compression, and
data analysis
Abdulle, Ortner
Perotto, Smetana,
Veneziani
Engwer, Goeddeke
Rösch, Vexler
Grohs, Fornasier,
Ward
Cancès, Després
Abdulle
Smetana
Goeddeke
Vexler
Grohs
Després
Debrabant (Pg. 92)
Shapeev (Pg. 346)
Engwer (Pg. 114)
Ehler (Pg. 105)
Cancès (Pg. 62)
Monotone approximations
for Hamilton-Jacobi-Bellman
equations
Atomistic-to-Continuum
coupling for crystals:
analysis and construction
Braack (Pg. 53)
Mini-symposium keynote:
Bridging software design and
performance tuning for
parallel numerical codes
Signal reconstruction from
magnitude measurements
via semidefinite
programming
Monotone corrections for
cell-centered Finite Volume
approximations of diffusion
equations
Golbabaee (Pg. 133)
A monotone nonlinear finite
volume method for diffusion
equations and multiphase
flows
CO1
CO2
LRTT:
Low-rank tensor
techniques
NMFN:
Numerical methods
for fully nonlinear
PDE’s
Grasedyck, Huckle,
Khoromskij,
Kressner
Khoromskij
Jensen, Lakkis,
Pryer
Pryer
Lim (Pg. 242)
11:10 - 11:40
Symmetric tensors with
positive decompositions
CO3
Kalise (Pg. 191)
Entanglement via algebraic
geometry
An accelerated
semi-Lagrangian/policy
iteration scheme for the
solution of dynamic
programming equations
Uschmajew (Pg. 388)
Lakkis (Pg. 232)
Skowera (Pg. 351)
11:40 - 12:10
12:10 - 12:40
On asymptotic complexity of
hierarchical Tucker
approximation in L2
Sobolev classes
Khoromskaia (Pg. 200)
12:40 - 13:10
Rolf Rannacher
Coffee Break
10:40 - 11:10
Minisymposia
Chair:
Long-term analysis of numerical and analytical oscillations
Hartree-Fock and MP2
calculations by grid-based
tensor numerical methods
Review of Recent Advances in
Galerkin Methods for Fully
Nonlinear Elliptic Equations
CO016
Model- and mesh adaptivity
for transient problems
Deparis (Pg. 98)
Ortner (Pg. 289)
Optimising Multiscale Defect
Simulations
On the continuity of flow
rates, stresses and total
stresses in geometrical
multiscale cardiovascular
models
Cecka (Pg. 68)
Fast Multipole Method
Framework and Repository
Elfverson (Pg. 110)
Cances (Pg. 61)
Multiscale eigenvalue
problems
Smears (Pg. 352)
Discontinuous Galerkin
finite element approximation
of HJB equations with
Cordès coefficients
CO015
Discontinuous Galerkin
method for
convection-diffusion-reaction
problems
Rupp (Pg. 325)
ViennaCL - Portable High
Performance at High
Convenience
Stamm (Pg. 356)
Cervone (Pg. 69)
Recent developments of Hierarchical Model (HiMod) reduction for boundary value problems
Parallel assembly on overlapping meshes using the LifeV library
Lunch
13:10 - 14:30
Nikitin (Pg. 280)
Model Selection with
Piecewise Regular Gauges
Chrysafinos (Pg. 78)
Schnass (Pg. 342)
Discontinuous time-stepping
schemes for the velocity
tracking problem under low
regularity assumptions
Non-Asymptotic Dictionary
Identification Results for the
K-SVD Minimisation
Principle
Sheng (Pg. 348)
The nonlinear finite volume
scheme preserving maximum
principle for diffusion
equations on polygonal
meshes
Kirchner (Pg. 205)
Perotto (Pg. 298)
Domain decomposition for
implicit solvation models
Pieper (Pg. 303)
Finite element error analysis
for optimal control problems
with sparsity functional
CO123
SDIFF:
New trends in
nonlinear methods
for solving diffusion
equation
Efficient computation of a
Tikhonov regularization
parameter for nonlinear
inverse problems with
adaptive discretization
methods
Krahmer (Pg. 216)
Burman (Pg. 55)
The restricted isometry
property for random
convolutions
Computability of filtered
quantities for the Burgers’
equation
Monday 26th August (Afternoon)
ENUMATH 2013
CO1
Minisymposia
Organizers
Chair
ANMF:
Advanced numerical
methods for fluid
mechanics
Burman, Ern,
Fernandez
Burman
15:00 - 15:30
16:00 - 16:30
16:30 - 17:00
Abdulle, Ortner
Olshanskii
Ortner
Vassilevski (Pg. 394)
CO016
CO017
CO122
SMAP:
Surrogate modeling
approaches for PDEs
PARA:
Bridging software
design and
performance tuning
FEPD:
Finite elements for
PDE-constrained
optimization
ACDA:
Approximation,
compression, and data
analysis
Engwer, Goeddeke
Rösch, Vexler
Grohs, Fornasier, Ward
Rösch
Ward
Perotto, Smetana,
Veneziani
Perotto
A numerical approach to
Newtonian and viscoplastic free
surface flows using dynamic
octree meshes
Bai (Pg. 38)
Falcó (Pg. 117)
Reduced basis finite element
heterogeneous multiscale method
for quasilinear problems
Proper Generalized
Decomposition for Dynamical
Systems
Vilmart (Pg. 399)
Kestler (Pg. 199)
Tews (Pg. 374)
Lee (Pg. 236)
Optimal control of
incompressible two-phase flows
Numerical simulation of Kaye
effects
Numerical homogenization
methods for multiscale nonlinear
elliptic problems of nonmonotone
type
On the adaptive tensor product
wavelet Galerkin method in view
of recent quantitative
improvements
Ehrlacher (Pg. 107)
Schieweck (Pg. 340)
15:30 - 16:00
Olshanskii, Vassilevski
CO015
On stability properties of
different variants of local
projection type stabilizations
Tobiska (Pg. 378)
14:30 - 15:00
CO2
CO3
FREE:
Numerical methods for fluid flows with free boundaries and interfaces
MSMA:
Multiscale methods for atomistic and continuum models
Aizinger (Pg. 18)
Engwer
Pfefferer (Pg. 301)
Jolivet (Pg. 188)
How to easily solve PDE with
FreeFem++ ?
de la Cruz (Pg. 90)
Wollner (Pg. 418)
Weinmann (Pg. 408)
Adjoint Consistent Gradient
Computation with the Damped
Crank-Nicolson Method
Jump-sparse reconstruction by
the minimization of Potts
functionals
Van der Zee (Pg. 389)
Reguly (Pg. 314)
Discontinuous Galerkin method
for 3D free surface flows and
wetting/drying
Optimization of a structurally
graded microstructured material
Adaptive Modeling for
Partitioned-Domain Concurrent
Continuum Models
OP2: A library for unstructured
grid applications on
heterogeneous architectures
Sangalli (Pg. 337)
Danilov (Pg. 88)
Makridakis (Pg. 257)
Smetana (Pg. 353)
Wells (Pg. 411)
Isogeometric elements for the
Stokes problem
Numerical simulation of
large-scale hydrodynamic events
Consistent Atomistic /
Continuum approximations to
atomistic models.
The Hierarchical Model
Reduction-Reduced Basis
approach for nonlinear PDEs
Domain-specific languages and
code generation for solving PDEs
using specialised hardware
Davenport (Pg. 89)
One-Bit Matrix Completion
Unveiling WARIS code, a parallel
and multi-purpose FDM
framework
An efficient dG-method for
transport dominated problems
based on composite finite
elements
Coffee Break
On properties of discretized
optimal control problems with
semilinear elliptic equations and
pointwise state constraints
Aßmann (Pg. 34)
Regularization in Sobolev spaces
with fractional order
Steinig (Pg. 357)
Convergence Analysis and A
Posteriori Error Estimation for
State-Constrained Optimal
Control Problems
Peter (Pg. 300)
Damping Noise-Folding and
Enhanced Support Recovery in
Compressed Sensing
Vandergheynst (Pg. 390)
Compressive Source Separation:
an efficient model for large scale
multichannel data processing
Monday 26th August (Late Afternoon)
ENUMATH 2013
CO1
CO2
CO3
Contributed
Talks
CT1.1:
Treatment of
large number of
random variables
CT1.2:
Interpolation,
quadrature and
PDEs
CT1.3:
Buoyancy driven
flows and
integration
schemes
Chair
Ishizuka
Caboussat
Lukin
Macedo (Pg. 253)
17:00 - 17:30
A low-rank tensor
method for large-scale
Markov Chains
Berrut (Pg. 46)
Pekmen (Pg. 295)
The linear barycentric
rational quadrature
method for Volterra
integral equations
Steady Mixed Convection
in a Heated Lid-Driven
Square Cavity Filled with
a Fluid-Saturated Porous
Medium
Luh (Pg. 249)
Tezer-Sezgin
(Pg. 375)
CO015
CO016
CT1.5:
CT1.4:
A posteriori error
Hamiltonian
estimates and
systems and their
adaptive methods
integration
I
Janssen
Greff
Grandchamp
(Pg. 138)
Multi-scale DNA
Modelling and Birod
Mechanics
Kleiss (Pg. 207)
Guaranteed and Sharp a
Posteriori Error Estimates
in Isogeometric Analysis
CO017
CT1.6:
Domain
decomposition
and parallel
methods
CO122
CT1.7:
Modeling and
simulation of
vascular and
respiratory
systems
Sumitomo
Cattaneo
Christophe (Pg. 76)
Mortar FEs on
overlapping subdomains
for eddy current non
destructive testing
Prokop (Pg. 309)
Numerical Simulation of
Generalized Oldroyd-B
Fluid Flows in Bypass
Konshin (Pg. 211)
Migliorati (Pg. 269)
17:30 - 18:00
Adaptive polynomial
approximation by
random projection of
multivariate aleatory
functions
The Criteria of Choosing
the Shape Parameter for
Radial Basis Function
Interpolations
DRBEM Solution of Full
MHD and Temperature
Equations in a Lid-driven
Cavity
Papez (Pg. 293)
D’Ambrosio (Pg. 85)
Numerical solution of
Hamiltonian systems by
multi-value methods
Distribution of the
algebraic, discretization
and total errors in
numerical PDE model
problems
Continuous parallel
algorithm of the second
order incomplete
triangular factorization
with dynamic
decomposition and
reordering
Augustin (Pg. 27)
Parallel solvers for the
numerical simulation of
cardiovascular tissues
CO123
CO124
CT1.8:
Boundary
element and
pseudospectral
methods
CT1.9:
Numerical
treatment of
boundaries,
interfaces and
block materials
Weißer
Varygina
af Klinteberg
(Pg. 17)
Fast simulation of
particle suspensions
using double layer
boundary integrals and
spectral Ewald
summation
Saffar Shamshirgar
(Pg. 333)
The Spectrally Fast Ewald
method and a
comparison with SPME
and P3M methods in
Electrostatics
Berger (Pg. 45)
Ishizuka (Pg. 178)
18:00 - 18:30
Simulating information
propagation by near-field
P2P wireless
communication
Heine (Pg. 155)
Lukin (Pg. 251)
Mean-Curvature
Reconstruction with
Linear Finite Elements
Mathematical modelling
of radiatively accelerated
canalized magnetic jets
Caboussat (Pg. 59)
18:30 - 19:00
Numerical solution of a
partial differential
equation involving the
Jacobian determinant
Greff (Pg. 140)
Conservation of
Lagrangian and
Hamiltonian structure
for discrete schemes
Pousin (Pg. 306)
A posteriori estimate and
adaptive partial domain
decomposition
Lejon (Pg. 239)
Janssen (Pg. 182)
Higher order projective
integration schemes for
multiscale kinetic
equations in the diffusive
limit
The hp-adaptive
Galerkin time stepping
method for nonlinear
differential equations
with finite time blow-up
Ruprecht (Pg. 327)
Convergence of Parareal
for the Navier-Stokes
equations depending on
the Reynolds number
Sumitomo (Pg. 364)
GPU accelerated
Symplectic Integrator in
FEA for solid continuum
Solving the Generalised
Large Deformation
Poroelastic Equations for
Modelling Tissue
Deformation and
Ventilation in the Lung
Cattaneo (Pg. 66)
Computational models
for coupling tissue
perfusion and
microcirculation
Börm (Pg. 52)
Fast evaluation of
boundary element
matrices by quadrature
techniques
Weißer (Pg. 409)
Challenges in BEM-based
Finite Element Methods
on general meshes
Kreiss (Pg. 222)
Imposing Neumann and
Robin boundary
conditions with added
penalty term
Juntunen (Pg. 189)
A posteriori estimate of
Nitsche’s method for
discontinuous material
parameters
Varygina (Pg. 392)
Numerical Modeling of
Elastic Waves
Propagation in Block
Media with Thin
Interlayers
TUESDAY 27TH AUGUST
Tuesday 27th August (Morning)
ENUMATH 2013
08:20 - 09:10
Rolex Learning
Ruth Baker (Pg. 39)
09:10 - 10:00
Center Auditorium
Eric Cancès (Pg. 63)
Developing multiscale models for exploring biological phenomena
CO1
CO2
ANMF:
Advanced numerical methods for fluid mechanics
FREE:
Numerical methods for fluid flows with free boundaries and interfaces
Organizers
Burman, Ern,
Fernandez
Olshanskii,
Vassilevski
Chair
Ern
Vassilevski
10:30 - 11:00
Miloslav Feistauer
Coffee Break
10:00 - 10:30
Minisymposia
Chair:
Electronic structure calculation
CO3
CO015
CO016
LRTT:
Low-rank tensor
techniques
CTNL:
Current trends in
numerical linear
algebra
ADFE:
Adaptive finite
elements
Simoncini
Micheletti, Perotto,
Picasso
Grasedyck, Huckle,
Khoromskij,
Kressner
Huckle
Simoncini
CO017
MANT:
Modelling, Analysis
and Numerical
Techniques for
Viscoelastic Fluids
Bonito, Nochetto
Chartier, Lemou
Bonito
Chartier
Picasso
Kroll & Turek
Sahin (Pg. 334)
Parallel Large-Scale
Numerical Simulations of
Purely-Elastic Instabilities
with a Template-Based Mesh
Refinement Algorithm
Gross (Pg. 142)
Ehrlacher (Pg. 106)
Powell (Pg. 308)
Fictitious Domain
Formulation for Immersed
Boundary Method
XFEM for pressure and
velocity singularities in 3D
two-phase flows
Greedy algorithms for
high-dimensional eigenvalue
problems
Fast solvers for stochastic
FEM discretizations of PDEs
with uncertainty
Unified variational
multiscale method for
compressible and
incompressible flows using
anisotropic adaptive mesh
Jiranek (Pg. 186)
Henning (Pg. 157)
A general framework for
algebraic multigrid methods
Error control for a Multiscale
Finite Element Method
GEOP:
Geometric Partial
Differential
Equations
CO123
ASHO:
Asymptotic preserving schemes for
highly-oscillatory
PDEs
Kroll, Turek
Hachem (Pg. 147)
Gastaldi (Pg. 125)
CO122
Heine (Pg. 156)
Vilmart (Pg. 400)
Mean-Curvature
Reconstruction with Linear
Finite Elements
Multi-revolution composition
methods for highly
oscillatory problems
Caiazzo (Pg. 60)
11:00 - 11:30
An explicit stabilized
projection scheme for
incompressible NSE: analysis
and application to POD
based reduced order
modeling
Burman (Pg. 56)
11:30 - 12:00
Projection methods for the
transient Navier–Stokes
equations discretized by
finite element methods with
symmetric stabilization
Bonelle (Pg. 49)
12:00 - 12:30
Compatible Discrete
Operator Schemes on
Polyhedral Meshes for Stokes
Flows
Basting (Pg. 42)
A hybrid level set / front
tracking approach for fluid
flows with free boundaries
and interfaces
Turek (Pg. 386)
3D Level Set FEM techniques
for (non-Newtonian)
multiphase flow problems
with application to
pneumatic extension nozzles
and micro-encapsulation
Kramer (Pg. 219)
Converting Interface
Conditions due to Excluded
Volume Interactions into
Boundary Conditions by
FEM-BEM Methods
Schneider (Pg. 343)
Convergence of dynamical
low rank approximation in
hierarchical tensor formats
Badia (Pg. 37)
Adaptive finite element simulation of incompressible flows by hybrid continuous-discontinuous Galerkin formulations
Zulehner (Pg. 427)
Operator Preconditioning for a Mixed Method of Biharmonic Problems on Polygonal Domains
Tyrtyshnikov (Pg. 387)
Tensor decompositions in the drug design optimization problems
Ballani (Pg. 40)
Black box approximation strategies in the hierarchical tensor format
Stoll (Pg. 361)
Artina (Pg. 26)
Fast solvers for Allen-Cahn
and Cahn-Hilliard problems
Anisotropic mesh adaptation
for brittle fractures
Wünsch (Pg. 419)
Caboussat (Pg. 58)
Thalhammer (Pg. 377)
Numerical simulation of
viscoelastic fluid flow in
confined geometries
Numerical Approximation of
Fully Nonlinear Elliptic
Equations
Multi-revolution composition
methods for time-dependent
Schrödinger equations
Keslerova (Pg. 195)
Hintermueller
(Pg. 166)
Numerical Simulation of
Steady and Unsteady Flows
for Viscous and Viscoelastic
Fluids
Damanik (Pg. 86)
Lunch
12:30 - 14:00
A multigrid LCR-FEM solver
for viscoelastic fluids with
application to problems with
free surface
Optimal shape design subject
to elliptic variational
inequalities
Bartels (Pg. 41)
Projection-free
approximation of geometric
evolution problems
Possanner (Pg. 305)
Numerical integration of the
MHD equations on the
resistive timescale
Tuesday 27th August (Afternoon)
ENUMATH 2013
CO1
CO2
Contributed
Talks
CT2.1:
Accurate and
reliable matrix
computations
CT2.2:
Multiscale wave
equation
Chair
Pena
Stohrer
Miyajima (Pg. 270)
14:00 - 14:30
Fast verified computation
for solutions of
generalized least squares
problems
Ogita (Pg. 282)
14:30 - 15:00
Backward error bounds
on factorizations of
symmetric indefinite
matrices
CO3
CO015
CT2.3:
CT2.4:
Advanced
A posteriori error
methods for fluid
estimates and
and transport
adaptive methods
problems
II
Matthies
Arjmand (Pg. 24)
Linke (Pg. 244)
Analysis of
Heterogeneous Multiscale
Methods for Long Time
Multiscale Wave
Propagation Problems
Stabilizing Mixed
Methods for
Incompressible Flows by
a New Kind of
Variational Crime
On a posteriori error
analyses for generalized
Stokes problem using an
augmented
velocity-pseudostress
formulation
Muslu (Pg. 275)
Ojala (Pg. 286)
New Numerical Results
on Some Boussinesq-type
Wave Equations
Accurate bubble and drop
simulations in 2D Stokes
flow
Ozaki (Pg. 291)
Nguyen (Pg. 278)
Homogenization of the
one-dimensional wave
equation
Stohrer (Pg. 359)
15:30 - 16:00
16:30 - 17:30
17:30 - 18:30
Accurate computations
for some classes of
matrices
Rolex Learning
Center Auditorium
Hadrava (Pg. 148)
Space-time
Discontinuous Galerkin
Method for the Problem
of Linear Elasticity
Micro-Scales and
Long-Time Effects: FE
Heterogeneous Multiscale
Method for the Wave
Equation
CO122
CT2.6:
Finite volume and finite difference methods
CT2.7:
Regression and statistical inverse problems
Touma
Icardi
Sepúlveda (Pg. 345)
ten Thije
Boonkkamp
(Pg. 371)
Azijli (Pg. 30)
Gorkem (Pg. 136)
Kucera (Pg. 229)
Jannelli (Pg. 180)
On the use of
reconstruction operators
in discontinuous
Galerkin schemes
Quasi-uniform Grids and
ad hoc Finite Difference
Schemes for BVPs on
Infinite Intervals
Verani (Pg. 395)
Touma (Pg. 380)
Icardi (Pg. 174)
Mimetic finite differences
for quasi-linear elliptic
equations
Central finite volume
schemes on nonuniform
grids and applications
Bayesian parameter
estimation of a porous
media flow model
Harmonic complete flux
schemes for conservation
laws with discontinuous
coefficients
Tryoen (Pg. 384)
Error Estimation for The
Convective Cahn –
Hilliard Equation
Matthies (Pg. 259)
Frolov (Pg. 123)
A two-level local
projection stabilisation
on uniformly refined
triangular meshes
Reliable a posteriori error
estimation for plane
problems in Cosserat
elasticity
Apéro hosted by MathWorks
CT2.9:
Flow problems in
heterogeneous
media
Khoromskij
Yücel
Savostyanov
(Pg. 338)
Budac (Pg. 54)
An adaptive numerical
homogenization method
for a Stokes problem in
heterogeneous media
Ouazzi (Pg. 290)
Newton-Multigrid
Least-Squares FEM for
V-V-P and S-V-P
Formulations of the
Navier-Stokes Equations
Dolgov (Pg. 103)
Alternating minimal energy methods for linear systems in higher dimensions. Part II: implementation hints and application to nonsymmetric systems
A semi-intrusive stochastic inverse method for uncertainty characterization and propagation in hyperbolic problems
Inverse Problems Regularized by Sparsity
Public Lecture: Martin Vetterli (Pg. 397)
CT2.8:
Low-rank tensor techniques
Alternating minimal energy methods for linear systems in higher dimensions. Part I: the framework and theory for SPD systems
Physics-based interpolation of incompressible flow fields obtained from experimental data: a Bayesian perspective
Madhavan (Pg. 254)
Finite element methods
for transient
convection-diffusion
equations with small
diffusion
CO124
Billaud Friess
(Pg. 47)
A new a posteriori error
estimator of low
computational cost for
an augmented mixed
FEM in linear elasticity
On a Discontinuous
Galerkin Method for
Surface PDEs
CO123
Convergent Finite Volume Schemes for Nonlocal and Cross Diffusion Reaction Equations. Applications to biology
Azzimonti (Pg. 32)
Mixed Finite Elements for spatial regression with PDE penalization
A Tensor-Based Algorithm for the Optimal Model Reduction of High Dimensional Problems
Gonzalez (Pg. 134)
Fast Interval Matrix
Multiplication by
Blockwise Computations
Pena (Pg. 297)
CO017
Verani
Frolov
Bustinza (Pg. 57)
Nadir (Pg. 276)
15:00 - 15:30
CO016
CT2.5:
Discontinuous
Galerkin and
mimetic finite
difference
methods
Khoromskij
(Pg. 201)
Quantized tensor
approximation methods
for multi-dimensional
PDEs
den Ouden (Pg. 96)
Application of the
level-set method to a
multi-component Stefan
problem
Yücel (Pg. 422)
Distributed Optimal
Control Problems
Governed by Coupled
Convection Dominated
PDEs with Control
Constraints
WEDNESDAY 28TH AUGUST
Wednesday 28th August
ENUMATH 2013
08:20 - 09:10
Rolex Learning
Rolf Stenberg (Pg. 358)
09:10 - 10:00
Center Auditorium
Ilaria Perugia (Pg. 299)
Mixed Finite Element Methods for Elasticity
Chair:
Trefftz-discontinuous Galerkin methods for time-harmonic wave problems
Yuri Kuznetsov
CO1
CO2
CO3
Coffee Break
CO015
CO016
Minisymposia
UQPD:
Uncertainty
Quantification for PDE
models
ASHO:
Asymptotic preserving
schemes for
highly-oscillatory PDEs
LRTT:
Low-rank tensor
techniques
CTNL:
Current trends in
numerical linear
algebra
ADFE:
Adaptive finite
elements
Organizers
Nobile, Schwab
Chartier, Lemou
Grasedyck, Huckle,
Khoromskij, Kressner
Simoncini
Micheletti, Perotto,
Picasso
Kroll, Turek
Bonito, Nochetto
Chair
Schwab
Lemou
Kressner
Simoncini
Micheletti
Turek & Kroll
Bartels
10:00 - 10:30
Cohen (Pg. 80)
10:30 - 11:00
Breaking the curse of
dimensionality in sparse
polynomial approximation of
parametric PDEs
Despres (Pg. 99)
Uniform convergence of
Asymptotic Preserving schemes
on general meshes
Crouseilles (Pg. 84)
Asymptotic preserving schemes
for highly oscillatory
Vlasov-Poisson equations
11:00 - 11:30
Scheichl (Pg. 339)
11:30 - 12:00
Hierarchical Multilevel Markov
Chain Monte Carlo Methods and
Applications to Uncertainty
Quantification in Subsurface
Flow
Giraud (Pg. 132)
Recovery policies for Krylov
solver resiliency
Donatelli (Pg. 104)
Kazeev (Pg. 194)
Tensor-structured approach to
the Chemical Master Equation
Lafitte (Pg. 231)
Bachmayr (Pg. 35)
Projective integration schemes
for kinetic equations in the
hydrodynamic limit
Adaptive methods based on
low-rank tensor representations
of coefficient sequences
CO122
GEOP:
Geometric Partial
Differential Equations
Hoffman (Pg. 170)
Hegland (Pg. 154)
Solving the chemical master
equations for biological
signalling cascades using tensor
factorisation
CO017
MANT:
Modelling, Analysis
and Numerical
Techniques for
Viscoelastic Fluids
Multigrid preconditioning for
nonlinear (degenerate) parabolic
equations with application to
monument degradation
Simoncini (Pg. 350)
Solving Ill-posed Linear Systems
with GMRES
Adaptive finite element methods
for turbulent flow and
fluid-structure interaction:
theory, implementation and
applications
Luce (Pg. 248)
Robust local flux reconstruction
for various finite element
methods
Kroll (Pg. 226)
An alternative description of the
visco-elastic flow behavior of
highly elastic polymer melts
Capatina (Pg. 65)
Robust discretization of the
Giesekus model
Olshanskii (Pg. 287)
Chen (Pg. 70)
An adaptive finite element
method for PDEs based on
surfaces
A numerical study of viscoelastic
fluid-structure interaction and its
application in a micropump
Tobiska (Pg. 379)
Influence of surfactants on the
dynamics of droplets
Walker (Pg. 405)
A New Mixed Formulation For a
Sharp Interface Model of Stokes
Flow and Moving Contact Lines
Antil (Pg. 23)
A Stokes Free Boundary Problem
with Surface Tension Effects
Hintermueller (Pg. 165)
Harbrecht (Pg. 153)
12:00 - 12:30
12:30 - 14:00
14:00 - 18:30
On multilevel quadrature for
elliptic stochastic partial
differential equations
Crestetto (Pg. 83)
Coupling of an
Asymptotic-Preserving scheme
with the Limit model for highly
anisotropic-elliptic problems
Huckle (Pg. 173)
Tensor representations of sparse
or structured vectors and
matrices
Strakos (Pg. 362)
Remarks on algebraic
computations within numerical
solution of partial differential
equations
An adaptive finite element
method for variational
inequalities of second kind with
applications in L2-TV-based
image denoising and Bingham
fluids
Lunch
Excursion: Lavaux, Gruyères / Free Afternoon
Picasso (Pg. 302)
Numerical simulation of
extrusion with viscoelastic flows
Dede (Pg. 93)
Numerical approximation of
Partial Differential Equations on
surfaces by Isogeometric Analysis
THURSDAY 29TH AUGUST
Thursday 29th August (Morning)
ENUMATH 2013
08:20 - 09:10
CO1
Stochastic Newton MCMC Methods for Bayesian Inverse Problems, with Application
to Ice Sheet Dynamics
Omar Ghattas (Pg. 131)
High-order accurate reduced basis multiscale finite element methods
Jan Hesthaven (Pg. 161)
09:10 - 10:00
10:00 - 10:30
CO1
CO2
CO3
Minisymposia
UQPD:
Uncertainty
Quantification for PDE
models
NEIG:
Numerical methods for
linear and nonlinear
eigenvalue problems
PSPP:
Preconditioners for
saddle point problems
Organizers
Nobile, Schwab
Benner, Guglielmi
Chair
Nobile
Guglielmi
Deparis, Klawonn,
Pavarino
Deparis
10:30 - 11:00
Voss (Pg. 403)
Variational Principles for
Nonlinear Eigenvalue Problems
Widlund (Pg. 413)
Two-level overlapping Schwarz
methods for some saddle point
problems
Ohlberger (Pg. 284)
11:00 - 11:30
11:30 - 12:00
TIME:
Time integration of
partial differential
equations
CO016
ROMY:
Reduced order
modelling for the
simulation of complex
systems
Ostermann
Quarteroni, Rozza
Ostermann
Rozza
Hochbruck (Pg. 167)
Tempone (Pg. 370)
Numerical Approximation of the
Acoustic and Elastic Wave
Equations with Stochastic
Coefficients
Coffee Break
CO015
Model reduction for nonlinear
parametrized evolution problems
Le Maitre (Pg. 235)
Jarlebring (Pg. 185)
Galerkin Method for Stochastic
Ordinary Differential Equations
with Uncertain Parameters
An iterative block algorithm for
eigenvalue problems with
eigenvector nonlinearities
Schillings (Pg. 341)
Kressner (Pg. 225)
Sparsity in Bayesian Inverse
Problems
Interpolation based methods for
nonlinear eigenvalue problems
Wiesner (Pg. 414)
Algebraic multigrid (AMG)
methods for saddle-point
problems arising from
mortar-based finite element
discretizations
DEIM-based Non-Linear PGD
Grote (Pg. 144)
Amsallem (Pg. 22)
Einkemmer (Pg. 108)
12:00 - 12:30
Isogeometric Schwarz
preconditioners for mixed
elasticity and Stokes systems
CO017
CO122
MMHD:
MAXWELL and MHD
NFSI:
Numerics of
Fluid-Structure
Interaction
Bonito, Guermond
Richter, Rannacher
Bonito
Rannacher
Chinesta (Pg. 73)
Error Estimates for
Element-Based Hyper-Reduction
of Nonlinear Dynamic Finite
Element Models
Analysis of an unconditionally
convergent stabilized finite
element formulation for
incompressible
magnetohydrodynamics
Lin (Pg. 243)
L2 projected finite element
methods for Maxwell’s equations
with low regularity solution
Mehl (Pg. 260)
Towards massively parallel
fluid-structure simulations – two
new parallel coupling schemes
Puscas (Pg. 310)
3d conservative coupling method
between a compressible fluid flow
and a deformable structure
Mula (Pg. 272)
The Generalized Empirical
Interpolation Method: Analysis
of the convergence and
application to the Stokes problem
Kolev (Pg. 209)
Richter (Pg. 320)
Parallel Algebraic Multigrid for
Electromagnetic Diffusion
A Fully Eulerian Formulation for
Fluid-Structure Interactions
Lassila (Pg. 234)
Space-time model reduction for nonlinear time-periodic problems using the harmonic balance reduced basis method
The Crank-Nicolson scheme with splitting and discrete transparent boundary conditions for the Schrödinger equation on an infinite strip
A local projection stabilization
method for finite element
approximation of a
magnetohydrodynamic model
A discontinuous Galerkin
approximation for Vlasov
equations
Zlotnik (Pg. 425)
Pavarino (Pg. 294)
Franco Brezzi
Codina (Pg. 79)
Error analysis of implicit
Runge-Kutta methods for
discontinuous Galerkin
discretizations of linear
Maxwell’s equations
Runge-Kutta based explicit local
time-stepping methods for wave
propagation
Chair:
Lunch
12:30 - 14:00
Wacker (Pg. 404)
Wick (Pg. 412)
A fluid-structure interaction
framework for reactive flows in
thin channels
Thursday 29th August (Afternoon)
ENUMATH 2013
CO017
CO122
MMHD:
MAXWELL and MHD
NFSI:
Numerics of
Fluid-Structure
Interaction
Quarteroni, Rozza
Bonito, Guermond
Richter, Rannacher
Quarteroni
Codina
Richter
CO1
CO2
CO3
CO015
Minisymposia
STOP:
Adaptive stopping
criteria
NEIG:
Numerical methods for
linear and nonlinear
eigenvalue problems
PSPP:
Preconditioners for
saddle point problems
TIME:
Time integration of
partial differential
equations
CO016
ROMY:
Reduced order
modelling for the
simulation of complex
systems
Organizers
Ern, Strakos, Vohralik
Benner, Guglielmi
Ostermann
Strakos
Benner
Deparis, Klawonn,
Pavarino
Pavarino
Ostermann
Chair
Ern (Pg. 116)
14:00 - 14:30
Adaptive inexact Newton
methods with a posteriori
stopping criteria for nonlinear
diffusion PDEs
Smirnova (Pg. 354)
14:30 - 15:00
Michiels (Pg. 266)
Freitag (Pg. 122)
A Novel Stopping Criterion for
Iterative Regularization with
Undetermined Reverse
Connection
Computing Jordan blocks in
parameter-dependent
eigenproblems
Capatina (Pg. 64)
Multiscale adaptive finite
element method for PDE
eigenvalue/eigenvector
approximations
Miedlar (Pg. 268)
15:00 - 15:30
Stopping criteria based on locally
reconstructed fluxes
Vohralik (Pg. 401)
15:30 - 16:00
16:00 - 16:30
Adaptive regularization,
linearization, and algebraic
solution in unsteady nonlinear
problems
Olshanskii (Pg. 288)
Preconditioners for the linearized Navier-Stokes equations based on the augmented Lagrangian
Projection based methods for nonlinear eigenvalue problems and associated distance problems
Guglielmi (Pg. 146)
Computing the distance to
defectivity
Qingguo Hong (Pg. 311)
A multigrid method for
discontinuous Galerkin
discretizations of Stokes
equations
Schratz (Pg. 344)
Himpe (Pg. 164)
Efficient numerical time
integration of the Klein-Gordon
equation in the non-relativistic
limit regime
Combined State and Parameter
Reduction of Large-Scale
Hierarchical Systems
Gauckler (Pg. 127)
Veroy-Grepl (Pg. 396)
Plane wave stability of the
split-step Fourier method for the
nonlinear Schrödinger equation
On Synergies between the
Reduced Basis Method, Proper
Orthogonal Decomposition, and
Balanced Truncation
Grandperrin (Pg. 139)
Lang (Pg. 233)
Colciago (Pg. 81)
Multiphysics Preconditioners for
Fluid–Structure Interaction
Problems
Anisotropic Finite Element
Meshes for Linear Parabolic
Equations
Reduced Order Models for
Fluid-Structure Interaction
Problems in Haemodynamics
Koskela (Pg. 215)
A reduced computational and
geometrical framework for
viscous optimal flow control in
parametrized systems
Klawonn (Pg. 206)
A deflation based coarse space in
dual-primal FETI methods for
almost incompressible elasticity
Nore (Pg. 281)
Dynamo action in finite cylinders
Yang (Pg. 421)
Flueck (Pg. 121)
Domain decomposition for
computing ferromagnetic effects
Coffee Break
Numerical Methods for
Fluid-Structure Interaction
Problems with a Mixed Elasticity
Form in Hemodynamics
Sharma (Pg. 347)
Tricerri (Pg. 382)
Convergence Analysis of an
Adaptive Interior Penalty
Discontinuous Galerkin Method
for the Helmholtz Problem
Fluid-Structure Interaction
simulation of cerebral aneurysm
using anisotropic model for the
arterial wall
Heumann (Pg. 163)
The Geometric Conservation law
in Astrophysics: Discontinuous
Galerkin Methods on Moving
Meshes for the non-ideal Gas
Dynamics in Wolf-Rayet Stars
Kramer (Pg. 218)
Rozza (Pg. 323)
A moment-matching Arnoldi
method for phi-functions
Gerbeau (Pg. 128)
Luenberger observers for
fluid-structure problems
Stabilized Galerkin for Linear
Advection of Differential Forms
Thursday 29th August (Late Afternoon)
ENUMATH 2013
CT3.5:
Applications of
reduced order
models
CO017
CT3.6:
Elasticity, plasticity
and hysteresis in
solid and particles
systems
CT3.7:
Deterministic
methods for
uncertainty
quantification
CO123
CT3.8:
Convergence
analysis, minimal
energy asymptotics,
and data analysis
Hess
Sadovskii
Tamellini
Kirby
CO1
CO2
CO3
CO015
CO016
Contributed
Talks
CT3.1:
Inverse problems
and optimal
experimental design
CT3.2:
Stochastic
simulation:
Chemistry and
finance
CT3.3:
Iterative methods
and inexactness
CT3.4:
Continuous and
discontinuous
Galerkin methods
for complex flow
Chair
Kray
Engblom
Vannieuwenhoven
Feistauer
CO122
Avalishvili (Pg. 28)
Rozgic (Pg. 321)
16:30 - 17:00
Mathematical optimization
methods for process and
material parameter
identification in forming
technology
Dementyeva (Pg. 94)
17:00 - 17:30
The Inverse Problem of a
Boundary Function Recovery
by Observation Data for the
Shallow Water Model
Reduced Order Optimal Control of Diffusion-Convection-Reaction Equation Using Proper Orthogonal Decomposition
On spectral method of
approximation of dynamical
dual-phase-lag
three-dimensional model for
thermoelastic shells by
two-dimensional
initial-boundary value
problems
Dimitriu (Pg. 102)
Sadovskaya (Pg. 329)
Bonizzoni (Pg. 50)
POD-DEIM Approach on
Dimension Reduction of a
Multi-Species
Host-Parasitoid System
Parallel Software for the
Analysis of Dynamic
Processes in Elastic-Plastic
and Granular Materials
Low-rank techniques applied
to moment equations for the
stochastic Darcy problem
with lognormal permeability
Akman (Pg. 19)
Gergelits (Pg. 129)
Bause (Pg. 43)
Composite polynomial
convergence bounds in finite
precision CG computations
Space-time Galerkin
discretizations of the wave
equation
Karasozen (Pg. 193)
Meinecke (Pg. 262)
Krukier (Pg. 227)
Stochastic simulation of
diffusion on unstructured
meshes via first exit times
Symmetric - skew-symmetric
splitting and iterative
methods
Vilanova (Pg. 398)
Idema (Pg. 176)
Chernoff-based Hybrid
Tau-leap
On the Convergence of
Inexact Newton Methods
Engblom (Pg. 112)
Vannieuwenhoven
(Pg. 391)
Adaptive Discontinuous Galerkin Methods for nonlinear Diffusion-Convection-Reaction Models
Di Pietro (Pg. 100)
Long (Pg. 245)
17:30 - 18:00
A Projection Method for
Under Determined Optimal
Experimental Designs
A generalization of the
Crouzeix–Raviart and
Raviart–Thomas spaces with
applications in subsoil
modeling
Herrero (Pg. 158)
The reduced basis
approximation applied to a
Rayleigh-Bénard problem
Murata (Pg. 274)
Analysis on distribution of
magnetic particles with
hysteresis characteristics and
field fluctuations
Chkifa (Pg. 75)
Fishelov (Pg. 119)
High-dimensional adaptive
sparse polynomial
interpolation and
application for parametric
and stochastic elliptic PDE’s
Convergence analysis of a
high-order compact scheme
for time-dependent
fourth-order differential
equations
18:00 - 18:30
19:00
A new approach to solve the
inverse scattering problem
for the wave equation
Sensitivity estimation and inverse problems in spatial stochastic models of chemical kinetics
Parallel tensor-vector multiplication using blocking
Space-time DGFEM for the
solution of nonstationary
nonlinear
convection-diffusion
problems and compressible
flow
Hess (Pg. 160)
Reduced Basis Methods for
Maxwell’s Equations with
Stochastic Coefficients
Social Dinner
4th floor of building BC
On the asymptotics of
discrete Riesz energy with
external fields
Mali (Pg. 258)
Estimates of Effects Caused
by Incompletely Known Data
in Elliptic Problems
Generated by Quadratic
Energy Functionals
Feistauer (Pg. 118)
Kray (Pg. 220)
Jaraczewski (Pg. 183)
Sadovskii (Pg. 331)
Tamellini (Pg. 367)
Hyperbolic Variational
Inequalities in
Elasto-Plasticity and Their
Numerical Implementation
Quasi-optimal polynomial
approximations for elliptic
PDEs with stochastic
coefficients
Kirby (Pg. 204)
Flag manifolds for
characterizing information
in video sequences
FRIDAY 30TH AUGUST
Friday 30th August
ENUMATH 2013
CO1
CO2
CO3
CO015
CO016
Contributed
Talks
CT4.1:
Time integration
of stiff/multiscale
dynamical
systems
CT4.2:
Preconditioning
CT4.3:
Compressible
flows, turbulence
and flow stability
CT4.4:
New approaches
in model order
reduction
CT4.5:
Numerical
computation of
external flows
Chair
Samaey
Reimer
Reigstad
Chen
Kosík
Algarni (Pg. 20)
John (Pg. 187)
Maier (Pg. 255)
Numerical Simulation of
the Atmospheric
Boundary Layer Flow
over coal mine in North
Bohemia
08:20 - 08:50
Numerical Evolution
Methods of Rational
Form for Reaction
Diffusion Equations
A multilevel
preconditioner for the
biharmonic equation
Louda (Pg. 246)
Numerical simulations of
laminar and turbulent
3D flow over backward
facing step
08:50 - 09:20
Samaey (Pg. 335)
09:20 - 09:50
A micro-macro parareal
algorithm: application to
singularly perturbed
ordinary differential
equations
Krendl (Pg. 224)
Efficient preconditioning
for time-harmonic
control problems
Pořízková (Pg. 304)
Zhang (Pg. 423)
Furmanek (Pg. 124)
Compressible and
incompressible unsteady
flows in convergent
channel
Reduced-order modeling
and ROM-based
optimization of batch
chromatography
Numerical Simulation of
Flow Induced Vibrations
with Two Degrees of
Freedom
Reigstad (Pg. 316)
Negri (Pg. 277)
Numerical investigation
of network models for
Isothermal junction flow
Reduced basis methods
for PDE-constrained
optimization
Numerical Simulation of
Compressible Turbulent
Flows Using Modified
Earsm Model
Chen (Pg. 72)
Kosík (Pg. 213)
A Weighted Reduced
Basis Method for Elliptic
Partial Differential
Equations with Random
Input Data
The Interaction of
Compressible Flow and
an Elastic Structure
Using Discontinuous
Galerkin Method
Tani (Pg. 369)
CG methods in
non-standard inner
product for saddle-point
algebraic linear systems
with indefinite
preconditioning
Melis (Pg. 264)
09:50 - 10:20
A relaxation method with
projective integration for
solving nonlinear systems
of hyperbolic
conservation laws
Reimer (Pg. 318)
H2 -matrix arithmetic
and preconditioning
10:20 - 10:50
10:50 - 11:40
11:40 - 12:30
12:30 - 12:50
12:50 - 14:20
CO1
Keslerova (Pg. 197)
CO122
CO123
CT4.6:
Conforming and nonconforming methods for PDEs
CT4.7:
A posteriori error estimates and adaptive methods III
CT4.8:
Monte Carlo and multi level Monte Carlo methods
Simian
Mali
Lee (Pg. 238)
Rademacher
(Pg. 312)
Hodge Laplacian
problems with Robin
boundary conditions
Model and mesh
adaptivity for frictional
contact problems
Blumenthal
Tesei (Pg. 372)
Multi Level Monte Carlo
methods with Control
Variate for elliptic SPDEs
CO124
CT4.9:
Molecular
dynamics and
quantum
mechanics
simulations
Kieri
Szepessy (Pg. 366)
How accurate is
molecular dynamics for
crossings of potential
surfaces?
Part I: Error estimates
Repin (Pg. 319)
Kamijo (Pg. 192)
Numerical Method for
Fractal Analysis on
Discrete Dynamical Orbit
in n-Dimensional Space
Using Local Fractal
Dimension
A reduced basis method
for domain
decomposition problems
CO017
Petr Knobloch (Pg. 208)
Dmitri Kuzmin (Pg. 230)
Holman (Pg. 171)
Lilienthal (Pg. 241)
Non-Dissipative Space
Time Hp-Discontinuous
Galerkin Method for the
Time-Dependent Maxwell
Equations
Simian (Pg. 349)
Conforming and
Nonconforming Intrinsic
Discretization for Elliptic
Partial Differential
Equations
On Poincaré Type
Inequalities for Functions
With Zero Mean
Boundary Traces and
Applications to A
Posteriori Analysis of
Boundary Value Problems
Kadir (Pg. 190)
Haji-Ali (Pg. 151)
Optimization of mesh
hierarchies for Multilevel
Monte Carlo
Walloth (Pg. 406)
Hoel (Pg. 168)
Kieri (Pg. 202)
An efficient and reliable
residual-type a posteriori
error estimator for the
Signorini problem
On non-asymptotic
optimal stopping criteria
in Monte Carlo
simulations
Accelerated convergence
for Schrödinger
equations with
non-smooth potentials
Sandberg (Pg. 336)
Blumenthal (Pg. 48)
An Adaptive Algorithm
for Optimal Control
Problems
Stabilized Multilevel
Monte Carlo Method for
Stiff Stochastic Problems
Coffee Break
Finite element methods for convection dominated problems
Vertex-based limiters for continuous and discontinuous Galerkin methods
Closing Remarks
CO1
Lunch
How accurate is
molecular dynamics for
crossings of potential
surfaces?
Part II: numerical tests
Chair:
Assyr Abdulle
Ludvig af Klinteberg
Numerical Analysis, KTH Royal Institute of Technology, SE
Fast simulation of particle suspensions using double layer boundary integrals and
spectral Ewald summation
Contributed Session CT1.8: Monday, 17:00 - 17:30, CO123
We present a method for simulating periodic suspensions of sedimenting rigid particles, based on a boundary integral solution of the Stokes flow equations. The
purpose of our work is to improve the understanding of the large scale properties of suspensions by looking at the microscale interactions between individual
particles. Boundary integral methods are attractive for this problem type due to
high attainable accuracy, depending on the underlying quadrature method, and a
reduction of the problem dimensionality from three to two. However, the resulting discrete systems have full matrices, and require the use of fast algorithms for
efficient solution.
Our method is based on a periodic version of the completed double layer boundary integral formulation for Stokes flow, which yields a well-conditioned system
that converges rapidly when solved iteratively using GMRES. The discrete system is formulated using the Nyström method, and the singular integrals of the
formulation are treated using singularity subtraction.
The method is accelerated by a spectrally accurate fast Ewald summation method,
which allows us to compute the single and double layer potentials of the formulation in O(N log N ) time. By developing accurate estimates for the truncation
errors of the Ewald summation, we are able to choose the parameters of the fast
method such that the computation time is optimal for a given error tolerance.
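As an illustration only (not the authors' code), the sketch below sets up a dense second-kind system of the kind a Nyström discretization produces and solves it iteratively with GMRES through a matrix-free operator; in the actual method that matrix-vector product would be carried out by the O(N log N) spectral Ewald summation. The matrix K and the right-hand side are random stand-ins.
```python
# Hypothetical sketch: iterative solution of a dense, well-conditioned
# boundary-integral system with GMRES and a matrix-free operator.
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

rng = np.random.default_rng(0)
n = 500                                    # number of quadrature nodes (toy size)
K = 0.1 * rng.standard_normal((n, n)) / n  # stand-in for the discretized double-layer kernel
rhs = rng.standard_normal(n)               # stand-in for the boundary data

def matvec(x):
    # In the real method this dense product would be replaced by an
    # O(N log N) spectral Ewald evaluation of the layer potentials.
    return x + K @ x                       # second-kind structure: identity plus compact part

A = LinearOperator((n, n), matvec=matvec)
density, info = gmres(A, rhs)
print("GMRES converged:", info == 0)
```
The identity-plus-compact structure of the completed double layer formulation is what keeps the system well conditioned and the GMRES iteration count essentially independent of the problem size.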
Joint work with Anna-Karin Tornberg.
17
Vadym Aizinger
University of Erlangen-Nuernberg, DE
Discontinuous Galerkin method for 3D free surface flows and wetting/drying
Minisymposium Session FREE: Monday, 15:30 - 16:00, CO2
The local discontinuous Galerkin method is applied to the numerical solution of
the three-dimensional hydrostatic equations of coastal ocean circulation. A wetting/drying algorithm in combination with dynamically varying vertical mesh resolution in the vicinity of the free surface is presented in the talk.
18
Tuğba Akman
Middle East Technical University, TR
Reduced Order Optimal Control of Diffusion-Convection-Reaction Equation Using
Proper Orthogonal Decomposition
Contributed Session CT3.5: Thursday, 16:30 - 17:00, CO016
We consider the reduced optimal control of the time-dependent convection dominated
diffusion-convection-reaction equation by proper orthogonal decomposition. The
optimal control problem is discretized by space-time discontinuous Galerkin finite
elements. Time discretization is performed by discontinuous Galerkin method with
piecewise constant and linear polynomials in time, while symmetric interior penalty
Galerkin (SIPG) with upwinding is used for space discretization. It is known that
discontinuous Galerkin discretization with weak treatment of the boundary conditions results in an accurate solution in the presence of boundary layers. In the case
of nonhomogeneous boundary conditions, this property provides a reduced order
solution satisfying the boundary conditions without any additional treatment. On
the other hand, discontinuous Galerkin time discretization is a strongly A-stable
method and is therefore well suited to long-time integration.
The quality of the reduced order approximation is affected by the number of POD
basis functions, the number and the location of the snapshots. To obtain an accurate reduced approximation, we increase the number of POD basis functions by
measuring the value of the cost functional or the total energy contained in the
system. The POD basis is constructed by using not only the state equation, but
also the adjoint equation or a combination of them. We present numerical results
with different convection terms to illustrate the efficiency of the method.
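For orientation, here is a minimal, generic sketch (not the authors' implementation) of how a POD basis is typically extracted from snapshots via the SVD; the snapshot matrix, the energy tolerance, and the function name pod_basis are illustrative assumptions.
```python
# Minimal POD-via-SVD sketch: retain modes until the snapshot energy is captured.
import numpy as np

def pod_basis(snapshots, energy_tol=1e-4):
    """Return POD modes capturing all but `energy_tol` of the snapshot energy.

    snapshots: array of shape (n_dofs, n_snapshots); columns are state (and,
    optionally, adjoint) snapshots collected from the full-order model.
    """
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    energy = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(energy, 1.0 - energy_tol)) + 1
    return U[:, :r]

# toy usage: 200 spatial dofs, 40 snapshots
Y = np.random.default_rng(1).standard_normal((200, 40))
V = pod_basis(Y)
print("number of POD modes retained:", V.shape[1])
```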
Joint work with Bülent Karasözen.
19
Said Algarni
King Fahd University of Petroleum and Minerals, SA
Numerical Evolution Methods of Rational Form for Reaction Diffusion Equations
Contributed Session CT4.1: Friday, 08:20 - 08:50, CO1
The purpose of this study is to investigate select numerical methods that demonstrate good performance in solving PDEs that couple diffusion and reaction terms.
The simple form of a reaction diffusion equation is the following
u_t(t, x) = α u_xx(t, x) + f(u),
where u is an order-parameter field, e.g., population density which depends on
space x and time t. The order-parameter may be either scalar or vector, depending
on the number of variables that describe the physical system. The order-parameter
evolves in time due to a local reaction, described by the nonlinear term f (u), in
conjunction with spatial diffusion. The coefficient α can be a scalar or it could
be dependent on time and space α(t, x). These types of equations have numerous
fields of application such as environmental studies, biology, chemistry, medicine,
and ecology.
Our aim is to investigate and develop accurate and efficient approaches which
compare favourably to other applicable methods. In particular, we investigate and
adapt a relatively new class of methods based on rational polynomials, namely
Padé time stepping (PTS), which is highly stable for the purposes of the present
application and is associated with lower computational costs. Furthermore, PTS
is optimized for our study to focus on reaction diffusion equations. Due to the
rational form of the PTS method, as shown in Fig. 1, a local error control threshold
(LECT) is proposed. Numerical runs are conducted to obtain the optimal LECT.
Fig. 2 illustrates the use of LECT.
Based on the results, we find that PTS, alone and combined via splitting with other approaches, provides favourable performance in certain and wide-ranging parameter
regimes.
20
Figure 1: Singularities of each component in φ, θ-space when α/dx² = 0.5 of the first iteration of the PTS[1/1] approach.
Figure 2: The first iteration of the PTS[1/1] approach in φ and error space, h = 0.9, LECT = 1.
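For reference, the following is a minimal explicit finite-difference sketch of the model equation u_t = α u_xx + f(u); it is not the Padé time stepping (PTS) scheme discussed above, and the grid, time step, reaction term, and boundary treatment are illustrative assumptions.
```python
# Minimal explicit time stepper for u_t = alpha*u_xx + f(u), for orientation only.
import numpy as np

def step_reaction_diffusion(u, alpha, dx, dt, f):
    # second-order central difference for u_xx with crude homogeneous Neumann ends
    uxx = np.empty_like(u)
    uxx[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
    uxx[0], uxx[-1] = uxx[1], uxx[-2]
    return u + dt * (alpha * uxx + f(u))

# toy usage: Fisher-KPP type reaction f(u) = u(1 - u)
x = np.linspace(0.0, 1.0, 201)
u = np.exp(-100 * (x - 0.5) ** 2)
dx, dt, alpha = x[1] - x[0], 1e-5, 0.5
for _ in range(1000):
    u = step_reaction_diffusion(u, alpha, dx, dt, lambda v: v * (1 - v))
print("max of u after 1000 steps:", float(u.max()))
```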
21
David Amsallem
Stanford University, US
Error Estimates for Element-Based Hyper-Reduction of Nonlinear Dynamic Finite
Element Models
Minisymposium Session ROMY: Thursday, 11:00 - 11:30, CO016
Error estimates for a recently developed model reduction approach for the efficient
and fast solution of finite-element based dynamical systems are presented. The
model reduction approach relies on two main ingredients: (1) the Galerkin projection of the high-dimensional dynamical system onto a set of Proper Orthogonal
Decomposition-based modes and (2) the hyper-reduction of that projected system
by the evaluation of the nonlinear internal forces onto a subset of the finite elements. This hyper-reduction step enables an online evaluation of the reduced-order
model that does not scale with the dimension of the underlying high-dimensional
model.
The contribution of the discarded elements is accounted for by weighting the elementary contributions of the retained elements using appropriate weights. These
weights are determined in an offline phase through the solution of a non-negative
least-squares problem minimizing the discrepancy between the exact internal forces
and their approximation using a subset of the elements, up to a given tolerance.
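As a rough illustration of that offline step (not the authors' implementation), the sketch below computes non-negative element weights with a standard NNLS solver; the matrix G of per-element force contributions and the right-hand side b are synthetic stand-ins.
```python
# Hypothetical sketch: non-negative least squares for element weights,
# min ||G w - b|| subject to w >= 0.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
n_train, n_elems = 60, 25          # training force samples, candidate elements
G = np.abs(rng.standard_normal((n_train, n_elems)))   # columns: per-element contributions
b = G @ np.full(n_elems, 1.0)      # exact assembled forces (here: unit weights)

weights, residual = nnls(G, b)
print("nonzero weights:", int(np.count_nonzero(weights)), "residual:", residual)
```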
A-posteriori error estimates will show that, when linear dynamical systems are
considered, the error between the hyper-reduced and non-hyper-reduced models can
be bounded by a quantity proportional to the tolerance used in the minimization
problem. This suggests that the tolerance can be a criterion for determining a
hyper-reduced model that satisfies a certain accuracy. An appropriate choice of
training vectors is also suggested by the error bound derivation. In the case of nonlinear dynamical systems, an error bound proportional to the discrepancy between
the exact and approximated internal force term can also be derived.
Numerical experiments will illustrate the proposed approach as well as the error
estimates derived in this work.
Joint work with David Amsallem, and Charbel Farhat.
22
Harbir Antil
George Mason University, US
A Stokes Free Boundary Problem with Surface Tension Effects
Minisymposium Session GEOP: Wednesday, 11:30 - 12:00, CO122
We consider a Stokes free boundary problem with surface tension effects in variational form. This model is an extension of the coupled system proposed by P.
Saavedra and L. R. Scott, where they consider a Laplace equation in the bulk
with Young-Laplace equation on the free boundary to account for surface tension.
The two main difficulties for the Stokes free boundary problem are: the vector
curvature on the interface, which makes it difficult to write a variational form of
the free boundary problem, and the existence of a solution to the Stokes equations with
Navier-slip boundary conditions for $W_p^{1+1/p'}$ domains (minimal regularity). We
will demonstrate the existence of a solution to the Stokes equations with Navier-slip
boundary conditions using a perturbation argument for the bent half space followed
by a standard localization technique. The $W_p^{1+1/p'}$ regularity of the interface
allows us to write the variational form for the entire free boundary problem; we
conclude with the well-posedness of this system using a fixed point iteration.
Joint work with Ricardo H. Nochetto, and Patrick Sodre.
23
Doghonay Arjmand
PhD Student, Applied and Computational Mathematics, KTH, SE
Analysis of Heterogeneous Multiscale Methods for Long Time Multiscale Wave
Propagation Problems
Contributed Session CT2.2: Tuesday, 14:00 - 14:30, CO2
We investigate the properties of a heterogeneous multi-scale method (HMM) type
multi-scale algorithm for approximating the solution of the following initial boundary value problem modelling long time wave propagation
∂tt uε (t, x) − ∇ · (A(x/ε)∇uε (t, x)) = 0, in [0, T ε ] × Ω
uε (0, x) = q(x), ∂t uε (0, x) = z(x), on {t = 0} × Ω,
(1)
where A(y) is 1-periodic, symmetric and uniformly positive definite, Ω ⊂ Rd ,
and T ε ≈ O(ε−2 ). We assume that the above equation is equipped with suitable
boundary data. As ε → 0, the solution of (1) tends to a solution û which has no
dependence on the small scale parameter ε. For short time scales T ε = T ≈ O(1) ,
the classical homogenization theory reveals the limiting behavior of the multi-scale
solution and the equation. In this setting the solution û satisfies
∂tt û(t, x) − ∇ · Â∇û(t, x) = 0, in [0, T ] × Ω
(2)
û(0, x) = q(x), ∂t û(0, x) = z(x), on {t = 0} × Ω,
where the homogenized coefficient  is a constant matrix, computation of which
involves solving another set of non-oscillatory periodic elliptic problems called cell
problems. On the other hand, for time scales T ε = O(ε−2 ), the solution uε (t, x)
starts to exhibit O(1) dispersive effects which are not present in the short time
homogenized solution. In this setting, Symes and Santosa [4] derive an effective
equation for the long time wave propagation. In one dimension the equation has
the form
∂tt û(t, x) − ∂x Â∂x û(t, x) + ε2 β∂xxx û(t, x) = 0, in [0, T ε ] × Ω
(3)
û(0, x) = q(x), ∂t û(0, x) = z(x), on {t = 0} × Ω,
where β is given to be a complicated functional of A.
HMM is a general framework for treating multi-scale problems. HMM is often
useful when the full microscopic model is available but too expensive to use
throughout the entire domain. The main idea is that one starts with assuming
a macro model in which some missing data are upscaled from local microscopic
simulations, where the micro model is then restricted by the coarse scale data.
The multi-scale problem (1) is within the spectrum of application areas of HMM.
A typical HMM algorithm for problem (1) starts with assuming a macro model
u_tt = ∇ · F where F stands for the missing flux in the model. This quantity is then
computed by F = (K A(x/ε) u^ε_x), where K is an averaging operator in time and
space, and u^ε solves the full microscopic problem (1) in a domain of size η ≈ O(ε),
with initial data given by linear interpolation of the current macroscopic state u.
For further details of such type of HMM based methods for short time multi-scale
wave problems we refer the reader to [5] (for finite element HMM) and [3] (for
finite difference HMM).
In [2], the short time FD-HMM algorithm from [3] was extended to approximate
the solution of long time wave propagation problems. The extension involved only
24
modifying the initial data for the micro problem to a third-order polynomial as
well as using a high order averaging kernel K in the upscaling procedure. Numerical evidence was presented to illustrate that the numerical solution captures the
dispersive effects, represented by β, seen in (3).
In this talk, we give a theoretical foundation of the previous results in [2], by
proving that HMM indeed computes the correct flux also for the long time multiscale wave problem (1). With suitable macroscale discretization parameters, it will
therefore capture the O(1) dispersive effects in (3). More precisely, let F_HMM be
the flux computed by HMM when the micro problem is given initial data consistent
with a third order polynomial û(x), i.e. (K u^ε)(0, x) = û(x); then
F_HMM = Â û_x + ε² β û_xxx + O(η^p + (ε/η)^q),
where η is the size of the micro domain and q and p are parameters representing
the smoothness and number of vanishing moments of the kernel, which in principle
can be chosen arbitrarily large. Moreover, we give a surprisingly simple expression
for the parameter β which was known before to equal a very complicated functional
of A.
In our proof we use two new ideas; the first idea is to look at the solutions of
hyperbolic PDEs with a special form of data known as quasi polynomials, where
the polynomial coefficients are replaced by periodic functions. This is useful in
unfolding the spatial structure of the solution as well as expressing the locally periodic solutions in terms of combination of much simpler purely periodic functions.
Next, we look at the time averages of solution of hyperbolic PDEs and provide
general statements which might potentially be applicable to much broader areas.
With the help of these two ideas we are able to fully understand the crucial role
consistency plays in HMM type algorithms. Finally we present numerical results
to support our theoretical statements.
References
[1] E. Weinan, B. Engquist, The Heterogeneous Multi-Scale Methods, Comm.
Math. Sci., 1(1):87-133, 2003.
[2] B. Engquist, H. Holst, and O. Runborg, Multi-Scale Methods for Wave Propagation in Heterogeneous Media Over Long Time, in Lect. Notes Comput. Sci.
Eng., Springer Verlag, 82:167-186, 2011.
[3] B. Engquist, H. Holst and O. Runborg, Multi-Scale methods for Wave Propagation in Heterogeneous Media, Commun. Math. Sci., 9(1):33-56, 2011.
[4] F. Santosa, W. W. Symes, A Dispersive Effective Medium For Wave Propagation in Periodic Composites , Siam J. Appl. Math., 51 (4):984-1005, 1991.
[5] A. Abdulle and M. J. Grote, Finite Element Heterogeneous Multi-Scale Method
for the Wave Equation, SIAM J. Multiscale Model. and Simul., 9(2):766-792,
2011.
Joint work with Prof. Olof Runborg.
25
Marco Artina
Technische Universität München, DE
Anisotropic mesh adaptation for brittle fractures
Minisymposium Session ADFE: Tuesday, 12:00 - 12:30, CO016
The study of the evolution of brittle fractures is a very challenging continuum
mechanics problem, which requires an interdisciplinary study, from physics to
mathematical analysis, and computations. The propagation of a fracture can be
mathematically described as a rate independent evolution, where nonconvex and
nonsmooth energies are instantaneously minimized under forcing constraints.
One of the most advocated models for describing fractures is the Francfort-Marigo
model. It is particularly interesting because it is well defined in a rather general
setting and does not require any predefined path for the crack. To numerically
approximate the problem one needs first to Gamma-approximate the nonsmooth
energy, which depends on the displacement and its discontinuity set, by using a
smoother version as proposed by Ambrosio and Tortorelli where a smooth indicator function is used to identify the discontinuity set. Then, we resort to an
adaptive Finite Element approach based on P1 elements. However, similarly to
early work by Chambolle et al., but differently from recent approaches by Süli
et al. where isotropic meshes are used, in this work we wish to investigate how
designing anisotropic meshes can lead to dramatic improvements in terms of the
balance between accuracy and complexity. Indeed the main advantage which can
be achieved is the significant reduction of the number of elements required to
capture with good confidence the expected fracture path. The employment of
anisotropic grids allows us to follow very closely the propagation of the fracture,
refining the mesh only in a very thin neighborhood of the crack.
In this talk, we first present the derivation of a novel a posteriori error estimator
driving the mesh adaptation. Then, we provide several numerical results which assess both the accuracy and the computational saving associated with an anisotropic
adapted grid.
Joint work with Massimo Fornasier, Stefano Micheletti, and Simona Perotto.
26
Christoph Augustin
Institut für Biophysik, Medizinische Universität Graz, AT
Parallel solvers for the numerical simulation of cardiovascular tissues
Contributed Session CT1.7: Monday, 17:30 - 18:00, CO122
For the numerical simulation of the elastic behavior of biological tissues, such as
the artery or the myocardium, we consider the stationary equilibrium equations
div σ(u, x) + f(x) = 0   for x ∈ Ω.   (1)
In addition we have to incorporate boundary conditions to describe the displacements or the boundary stresses on Γ = ∂Ω.
For the derivation of the constitutive equation of the stress tensor σ, we introduce
the strain energy function Ψ(C). From this we obtain the constitutive equation
σ = J^{-1} F (∂Ψ(C)/∂C) F^T,   (2)
where J = det F is the Jacobian of the deformation gradient F = ∇x ϕ(x), and
C = F^T F is the right Cauchy-Green tensor. The specific form of the strain energy
function now varies from material to material, e.g. for the artery we use the well
known Holzapfel model
Ψ(C) = (κ/2)(J − 1)² + (c/2)(J^{−2/3} I_1 − 3) + (k_1/(2k_2)) ∑_{i=4,6} { exp[k_1 (J^{−2/3} I_i − 1)²] − 1 },
where κ, c, k1 and k2 are positive parameters, I1 = tr(C) and I4 and I6 are
invariants representing the stretch in fiber direction.
Due to preferential orientations of fibers, such as collagen, the modeling of biological tissues leads to an anisotropic and highly nonlinear material model. In
order to obtain a numerical solution of eq. (1) we use variational and finite element techniques. For the linearization of the resulting system Newton’s method
is applied.
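As a minimal sketch of this linearization step (not the authors' implementation; assemble_residual and assemble_tangent are hypothetical callbacks supplied by the finite element code, and the direct solve stands in for the parallel solvers discussed below), a damped Newton loop might read:

import numpy as np

def newton_solve(assemble_residual, assemble_tangent, u0, tol=1e-8, maxit=25, damping=1.0):
    # Solve R(u) = 0 by damped Newton iterations on the discretized elasticity system.
    u = u0.copy()
    for _ in range(maxit):
        R = assemble_residual(u)            # nonlinear residual vector
        if np.linalg.norm(R) < tol:
            break
        K = assemble_tangent(u)             # consistent tangent (Jacobian) matrix
        du = np.linalg.solve(K, -R)         # placeholder for a scalable FETI/AMG solve
        u = u + damping * du
    return u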
However, such detailed multiphysics simulations are computationally vastly demanding. While current trends in high performance computing (HPC) hardware
promise to alleviate this problem, exploiting the potential of such architectures remains challenging for various reasons. On the one hand, strongly scalable algorithms are needed to achieve a sufficient reduction in execution time by engaging a large number of cores; on the other hand, alternative acceleration technologies such as graphics processing units (GPUs) are playing an increasingly important role, which imposes further constraints on the design and implementation of solver codes.
We discuss two different parallel approaches to solve the nonlinear elasticity problems arising from the simulation of the mechanical behavior of cardiovascular tissues: the finite element tearing and interconnecting (FETI) methods, in particular classical and all-floating FETI, and a domain decomposition based algebraic multigrid method. Scalability results for these mechanical simulations will be presented and we discuss advantages and limitations of the particular numerical methods. We will also show first results of weakly and strongly coupled electro-mechanical problems and discuss challenges that need to be addressed with regard to highly scalable parallel implementations.
Joint work with Gernot Plank.
27
Gia Avalishvili
Iv. Javakhishvili Tbilisi State University, GE
On spectral method of approximation of dynamical dual-phase-lag three-dimensional model for thermoelastic shells by two-dimensional initial-boundary value
problems
Contributed Session CT3.6: Thursday, 16:30 - 17:00, CO017
The present paper is devoted to the construction and investigation of a spectral algorithm for the approximation of the three-dimensional initial-boundary value problem corresponding to a dynamical model for thermoelastic shells by two-dimensional ones
in the context of Chandrasekharaiah-Tzou nonclassical theory of thermoelasticity.
We study dynamical problems for thermoelastic shells within the framework of
the nonclassical theory of thermoelasticity with two phase-lags, which was proposed to eliminate shortcomings of the classical thermoelasticity, such as infinite
velocity of thermoelastic disturbances, unsatisfactory thermoelastic response of
a solid to short laser pulses, and poor description of thermoelastic behavior at
low temperatures. In the paper [1] Tzou proposed a dual-phase-lag heat conduction model, where the phase-lag corresponding to temperature gradient is caused
by microstructural interactions such as phonon scattering or phonon-electron interactions, while the second phase-lag is interpreted as the relaxation time due
to fast-transient effects of thermal inertia. Further, Chandrasekharaiah [2] constructed nonclassical model for thermoelastic bodies, where the classical Fourier’s
law of heat conduction was replaced with its generalization proposed by Tzou. In
this model the equation describing the temperature field involves the third order
derivative with respect to the time variable of the temperature and divergence of
the third order derivative with respect to the time variable of the displacement.
Note that the Chandrasekharaiah-Tzou model is an extension of the Lord-Shulman
[2] nonclassical model for thermoelastic bodies, which depends on one phase-lag.
Spatial behavior of solutions of the dual-phase-lag heat conduction equation and
problems of stability of dual-phase-lag heat conduction models have been investigated and particular one-dimensional initial-boundary value problems have been
analysed in the Chandrasekharaiah-Tzou theory [3-5].
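For context, the dual-phase-lag heat conduction law of Tzou [1] can be written schematically as

q(x, t + τ_q) = −k ∇θ(x, t + τ_θ),

where q is the heat flux, θ the temperature, and τ_q, τ_θ the two phase-lags; Taylor expansion of both sides with respect to the small lags is what introduces the higher-order time derivatives mentioned above (the precise form of the model studied here is that of the cited works).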
In this paper we investigate general three-dimensional initial-boundary value problem with mixed boundary conditions corresponding to Chandrasekharaiah-Tzou
nonclassical model. Applying variational approach and suitable a priori estimates,
we obtain the existence and uniqueness of solution in corresponding Sobolev spaces.
In order to simplify algorithms of numerical solution of three-dimensional problem
we construct a sequence of two-dimensional initial-boundary value problems applying spectral approximation method, which is a generalization and extension of the
dimensional reduction method suggested by I. Vekua [6] in the classical theory of
elasticity for plates with variable thickness. Note that the classical Kirchhoff-Love
and Reissner-Mindlin models can be incorporated into the hierarchy obtained by
I. Vekua. Static two-dimensional models constructed by I. Vekua for general shells
first were investigated in [7] and in the case of plate the rate of approximation
of the exact solution of the three-dimensional problem by the vector-functions of
three space variables restored from the solutions of the reduced two-dimensional
problems in the spaces of classical smooth functions was estimated in [8]. Later on,
various two-dimensional and one-dimensional models were constructed and investigated for problems of the theory of elasticity and mathematical physics applying
I. Vekua’s reduction method and similar spectral methods (see [9] and references
28
given therein).
We construct an algorithm for the approximation of the three-dimensional initial-boundary value problem corresponding to the Chandrasekharaiah-Tzou dynamical model
by a sequence of two-dimensional problems for thermoelastic shells with variable
thickness, which may vanish on a part of the boundary. Applying semidiscretization of the three-dimensional problem in the transverse direction of the shell,
we construct a hierarchy of two-dimensional initial-boundary value problems and
investigate them in suitable weighted Sobolev spaces. Moreover, we study the
relationship between the constructed problems and the original three-dimensional
one. We prove convergence in suitable spaces of the sequence of approximate solutions of three space variables, constructed by means of the solutions of the reduced
problems, to the exact solution of the original three-dimensional problem and under additional regularity conditions we estimate the rate of convergence. Note that
the constructed algorithm of approximation can be used not only for simplification of algorithms for numerical solution of three-dimensional problems, but also
the first approximations of the constructed hierarchy of two-dimensional initialboundary value problems can be considered as independent nonclassical models
for thermoelastic shells and can be used for mathematical modeling of engineering
structures.
Acknowledgment. The work of Gia Avalishvili has been supported by the Presidential Grant for Young Scientists (Contract No. 12/62).
References
[1] D.Y. Tzou, A unified approach for heat conduction from macro to micro-scales,
J. Heat Transfer, 117 (1995), 8-16.
[2] D.S. Chandrasekharaiah, Hyperbolic thermoelasticity: a review of recent literature, Appl. Mech. Review, 51 (1998), 705-729.
[3] D.S. Chandrasekharaiah, One-dimensional wave propagation in the linear theory of thermoelasticity without energy dissipation, J. Thermal Stresses, 19 (1996),
695-710.
[4] R. Quintanilla, A condition on the delay parameters in the one-dimensional
dual-phase-lag thermoelastic theory, J. Thermal Stresses, 28 (2005), 43-57.
[5] R. Quintanilla, R. Racke, Qualitative aspects in dual-phase-lag heat conduction,
Proc. Royal Society A, 463 (2007), 659-674.
[6] I.N. Vekua, Shell theory: General methods of construction, Pitman Adv. Publ.
Program, Boston, 1985.
[7] D.G. Gordeziani, On the solvability of some boundary value problems for a
variant of the theory of thin shells, Dokl. Akad. Nauk SSSR, 215 (1974), 1289-1292.
[8] D.G. Gordeziani, To the exactness of one variant of the theory of thin shells,
Dokl. Akad. Nauk SSSR, 216 (1974), 751-754.
[9] G. Avalishvili, M. Avalishvili, D. Gordeziani, B. Miara, Hierarchical modeling
of thermoelastic plates with variable thickness, Anal. Appl., 8 (2010), 125-159.
Joint work with Mariam Avalishvili.
29
Iliass Azijli
Delft University of Technology, NL
Physics-based interpolation of incompressible flow fields obtained from experimental data: a Bayesian perspective
Contributed Session CT2.7: Tuesday, 14:30 - 15:00, CO122
We introduce a statistical interpolation method that uses a physical model in the
reconstruction of velocity fields obtained from experiments. It is applicable to
incompressible flows. Due to the inclusion of physical knowledge, the spatial resolution of the data can be increased. We formulate the method within the Bayesian
framework because it allows for a natural inclusion of measurement uncertainty
[1]. Our method is therefore a generalization of the works of Narcowich and Ward
[2] and Handscomb [3], who followed a deterministic derivation path.
The method is applied to a two-dimensional synthetic test case and to real experimental data of a circular jet in water [4]. Tomographic particle image velocimetry
was used to extract the three components of velocity in a three-dimensional space
[5].
Bayesian inference consists of three steps. First, one states a prior belief for a
state f . Assuming Gaussian processes, f ∼ N (µ, P ). µ is the prior mean and P
is the prior covariance matrix. Then, data (y) is collected: y | f ∼ N (Hf, R),
where H is the observation operator that converts the unobserved state into the
observed set of data points. R is the observation error covariance matrix. Finally,
Bayes’ Theorem is used to obtain the posterior distribution, which is also normally
distributed due to the assumption of Gaussian processes.
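As an illustration of these three steps (a generic linear-Gaussian update written here for reference, not necessarily the authors' implementation; mu, P, H, R and y are as in the text), the posterior mean and covariance are obtained by:

import numpy as np

def gaussian_posterior(mu, P, H, R, y):
    # Posterior of f given y for the model f ~ N(mu, P), y | f ~ N(H f, R).
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # gain; prefer a linear solve over inv in practice
    mean = mu + K @ (y - H @ mu)
    cov = P - K @ H @ P
    return mean, cov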
From the continuity equation it follows that the velocity field of an incompressible
flow has zero divergence. From vector calculus it is known that such a field can be
obtained by taking the curl of a vector potential. The state vector f is therefore
defined to consist of the components of the potential and their first partial derivatives. The correlation between a vector potential component and its derivatives,
and the correlation between these derivatives, is incorporated in the prior covariance matrix [6]. However, it is assumed that the different potential components
are uncorrelated, because there is no prior physical knowledge to assume that they
are. The observation operator is constructed such that it returns the curl of the
potential, being the observed velocity field.
Table 1 compares physics-based interpolation (pb) with standard interpolation (st)
for the synthetic test case. By standard interpolation, we mean the assumption of
uncorrelated velocity components [7]. The results show that physics-based interpolation indeed increases the spatial resolution of the data, even in the presence
of measurement uncertainty.
References
[1] Wikle, C.K., and Berliner, L.M., A Bayesian tutorial for data assimilation,
Physica D: Nonlinear Phenomena, Vol.230, pp. 1-16, 2007.
[2] Narcowich, F.J., and Ward, J.D., Generalized Hermite interpolation via
matrix-valued conditionally positive definite functions, Mathematics of Computation, Vol. 63, No. 208, pp. 661-687, 1994.
30
N^{1/2}    NE (st)    NE (pb)    GE (st)    GE (pb)    LE (st)    LE (pb)
3          2.77e-1    2.08e-1    2.77e-1    2.08e-1    2.77e-1    2.08e-1
6          6.62e-3    6.64e-4    7.71e-3    5.69e-3    1.07e-2    5.24e-3
9          1.77e-4    3.08e-5    3.57e-3    2.29e-3    4.85e-3    1.23e-3
12         4.74e-5    1.04e-5    1.85e-3    1.27e-3    1.79e-3    4.60e-4
15         1.76e-5    4.54e-6    1.03e-3    7.94e-4    1.02e-3    3.01e-4

Table 1: RMSE as a function of the number of sample points (N) for the synthetic test case. NE: perfect measurement, no measurement uncertainty. GE: spatially varying measurement uncertainty, but constant uncertainty assumed. LE: spatially varying measurement uncertainty, and correct measurement uncertainty used.
[3] Handscomb, D., Local recovery of a solenoidal vector field by an extension
of the thin-plate spline technique, Numerical Algorithms, Vol. 5, pp. 121-129,
1993.
[4] Violato, D., and Scarano, F., Three-dimensional evolution of flow structures
in transitional circular and chevron jets, Physics of Fluid, Vol. 23, 2011.
[5] Elsinga, G.E., Scarano, F., Wieneke, B., and van Oudheusden, B.W., Tomographic particle image velocimetry, Exp Fluids, Vol. 41, pp. 933-947, 2006.
[6] Rasmussen, C., and Williams, C., Gaussian processes for machine learning,
MIT Press, 2006.
[7] de Baar, J.H.S., Percin, M., Dwight, R.P., van Oudheusden, B.W., and Bijl,
H., Kriging regression of PIV data using a local error estimate, submitted 2012.
Joint work with Richard Dwight, and Hester Bijl.
31
Laura Azzimonti
MOX - Department of Mathematics, Politecnico di Milano, IT
Mixed Finite Elements for spatial regression with PDE penalization
Contributed Session CT2.7: Tuesday, 14:00 - 14:30, CO122
In this work we study the properties of a non-parametric regression technique for
the estimation of bidimensional or three dimensional fields on bounded domains
from some pointwise noisy evaluations. We focus on applications in physics, engineering, biomedicine, etc. where a prior knowledge on the field might be available
from physical principles and should be taken into account in the smoothing process. We consider in particular phenomena where the field is described by a partial
differential equation (PDE) and has to satisfy some known boundary conditions.
Spatial regression with PDE penalization (SR-PDE) has been developed in [1] for
the estimation of the blood velocity field on the section of an artery from EchoDoppler data. This technique has very broad applicability. Many applications of
particular interest can be named: the estimation of the concentration of pollutant
released in water or in the air transported by the stream or by the wind, the
estimation of temperature or pressure fields from electronic control units or sensors
in environmental sciences and many other phenomena in physics, biology and
engineering. In this work we focus on phenomena that are well described by linear
second order elliptic PDEs, typically transport-reaction-diffusion problems.
The field is estimated minimizing a penalized least squares functional that generalizes classical smoothing techniques such as thin-plate splines. Thin-plate splines
and, more recently, the spatial spline regression models described in [3] estimate
bidimensional surfaces penalizing a measure of the local curvature. We propose
instead to minimize the functional
J(f) = (1/n) ∑_{i=1}^{n} (f(p_i) − z_i)² + λ ∫_Ω (Lf − u)²   (1)
where pi are the observation sites, zi the observations and f the field to be estimated. The penalty term involves the misfit of a second order PDE, Lf = u,
modeling the phenomenon under study. This, in turn, corresponds to assuming
that the forcing term in the PDE is not exactly known. On the other hand, we
assume here that all the parameters appearing in the PDE (except for the forcing
term) and the boundary conditions are completely determined. This approach is
similar to the one used in control theory when a distributed control is considered.
The main difference from classical results in control theory is that the observations
are pointwise and affected by noise. For this reason it is necessary to require higher
regularity of the field to ensure that the functional J(f) is well defined.
In [2] we prove the existence and the uniqueness of the estimator in the Sobolev
space H 2 , in the described pointwise framework. In particular, minimizing the
functional J(f ) is equivalent to solving a fourth order problem; we resort to a
mixed approach for fourth order problems in order to prove the existence and the
uniqueness of the estimator. Accordingly, a mixed equal order Finite Element
method is used for discretizing the estimation problem; the proposed method is
similar to some classical methods used for the discretization of fourth order problems. The well-posedness of the discrete problem is also proved.
Both the continuous and the discrete estimators have a bias induced by the presence of the penalty term in the minimized functional. We obtain a bound for the
bias of the continuous estimator and we study the convergence of the bias of the
32
discrete estimator when the mesh size goes to zero. The proposed mixed equal
order Finite Element discretization is known to have sub-optimal convergence rate
when applied to fourth order problems with arbitrary boundary conditions and, in
particular, the first order approximation might not converge to the exact solution.
However we are able to prove optimal convergence of the proposed discretization
method for the specific set of boundary conditions that are naturally associated
to the smoothing problem (1), whenever the true underlying field satisfies exactly
those conditions.
Finally the smoothing technique is extended to the case of data distributed on
some subdomains, particularly interesting in many applications. For instance in
the case of the driving problem considered in [1] concerning the velocity field
estimation, the Echo-Doppler data represent the mean velocity of blood on some
subdomains on the section of an artery. The properties of the estimator in the
areal setting are obtained following the approach used in the pointwise framework.
References
[1] Azzimonti L. Blood flow velocity field estimation via spatial regression with
PDE penalization. PhD Thesis, Politecnico di Milano, PhD School “Mathematical models and methods in engineering”, 2013.
[2] Azzimonti L., Nobile F., Sangalli L.M., Secchi P. Mixed Finite Elements for
spatial regression with PDE penalization. In preparation.
[3] Sangalli L.M., Ramsay J.O., Ramsay T. Spatial spline regression models. J.
R. Stat. Soc. Ser. B Stat. Methodol., 75(4):1-23, 2013.
Joint work with Fabio Nobile, Laura Maria Sangalli, and Piercesare Secchi.
33
Ute Aßmann
Universität Duisburg-Essen, DE
Regularization in Sobolev spaces with fractional order
Minisymposium Session FEPD: Monday, 15:30 - 16:00, CO017
We study the minimization of a quadratic functional subject to a nonlinear elliptic PDE where the Tikhonov regularization term is given in H^s with a fractional
parameter s > 0. Moreover, pointwise control constraints are given. In order to
allow a numerical treatment of this problem we introduce a multilevel approach
as an equivalent norm concept. Furthermore, the existence of regular Lagrange
multipliers can be shown. At the end of the talk we will present some numerical
calculations.
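Schematically, and with notation introduced here only for illustration, the problem under consideration is of the type

min over u in U_ad of (1/2) ‖y(u) − y_d‖²_{L²(Ω)} + (α/2) ‖u‖²_{H^s(Ω)},   subject to a nonlinear elliptic state equation for y(u),

with U_ad = {u : a ≤ u ≤ b a.e.} encoding the pointwise control constraints; the multilevel approach replaces the H^s-norm by an equivalent, computable multilevel expression.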
Joint work with Arnd Rösch.
34
Markus Bachmayr
RWTH Aachen, DE
Adaptive methods based on low-rank tensor representations of coefficient sequences
Minisymposium Session LRTT: Wednesday, 11:30 - 12:00, CO3
We consider a framework for the construction of iterative schemes for high-dimensional operator equations that combine adaptive approximation in a basis and
low-rank approximation in tensor formats.
Our starting point is an operator equation Au = f , where A is a bounded and
elliptic linear operator mapping a separable Hilbert space H – for instance, a
function space on a high-dimensional product domain – to its dual H′. Assuming
that a Riesz basis of H is available, the original problem can be rewritten equivalently as a bi-infinite linear system on ℓ2, where the system matrix is bounded and continuously invertible.
Under the given assumptions, a simple Richardson iteration on the infinite-dimensional problem converges, but of course cannot be realized in practice. This is the
starting point for adaptive wavelet methods as introduced by Cohen, Dahmen and
DeVore, which dynamically approximate such an ideal iteration by finite quantities.
Such methods exploit the approximate sparsity of coefficient sequences.
The new aspect here is that, in order to significantly reduce computational complexity in a high dimensional context, we make use of an additional tensor product
structure of the problem. For this discussion, we assume H = H1 ⊗ · · · ⊗ Hd , i.e.,
that H is a tensor product Hilbert space, and that we have a tensor product Riesz
basis of H. We now use a structured tensor format for the corresponding sequence
of basis coefficients. Examples of suitable tensor structures are the Tucker format
or the Hierarchical Tucker format, where the latter can also be used for problems
in very high dimensions. A crucial common feature of both formats is that quasi-best approximations by lower-rank tensors, with controlled error in ℓ2-norm, can
be computed by procedures implementable by standard linear algebra routines.
We are thus considering a highly nonlinear type of approximation: besides the
multiplicative nonlinearity in the tensor representation, we aim to adaptively determine simultaneously suitable finite approximation ranks, the active indices for
the basis expansions in the lower-dimensional spaces Hi , and corresponding coefficients. We accomplish this by a perturbed Richardson iteration, where approximation ranks and active basis indices are adjusted implicitly in a sufficiently accurate
approximation of the residual. The resulting growth in the complexity of iterates is
kept in check by combining a tensor recompression operation, which yields an approximation with lower ranks up to a specified error, with a coarsening operation
that eliminates negligible coefficients in the lower-dimensional basis expansions.
In the efficient realization of the latter, the special orthogonality properties of the
considered tensor formats play a central role.
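A toy sketch of such a perturbed iteration for a matrix-sized problem (d = 2, with plain SVD truncation standing in for the Tucker/hierarchical Tucker recompression and without the coarsening of basis indices) could look as follows; omega must be a valid Richardson damping parameter for A:

import numpy as np

def truncate(x, shape, tol):
    # Recompress the iterate, viewed as a matrix of the given shape, to lower rank by an SVD.
    U, s, Vt = np.linalg.svd(x.reshape(shape), full_matrices=False)
    r = max(1, int(np.sum(s > tol * s[0])))
    return ((U[:, :r] * s[:r]) @ Vt[:r, :]).ravel()

def truncated_richardson(A, f, shape, omega, tol=1e-8, maxit=500):
    # Perturbed Richardson iteration: ideal update followed by low-rank recompression.
    x = np.zeros_like(f)
    for _ in range(maxit):
        x = truncate(x + omega * (f - A @ x), shape, tol)
        if np.linalg.norm(f - A @ x) <= tol * np.linalg.norm(f):
            break
    return x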
Under the present quite general assumptions, we can then identify a choice of
parameters for the resulting iterative scheme that ensures its convergence and
produces approximations with near-minimal ranks. To our knowledge this is the
first convergence result of this type. Under suitable further approximability conditions on the problem, we also obtain estimates for the total number of operations
required for reaching an approximate solution with a certain target accuracy. Furthermore, we discuss the additional difficulties related to the preconditioning of
problems posed on Sobolev spaces in this setting. We consider some possible applications and illustrate our theory by numerical experiments.
35
Joint work with Wolfgang Dahmen.
36
Santiago Badia
CIMNE and UPC, ES
Adaptive finite element simulation of incompressible flows by hybrid continuousdiscontinuous Galerkin formulations
Minisymposium Session ADFE: Tuesday, 11:30 - 12:00, CO016
Conforming cG formulations are preferred over dG formulations when we focus on
CPU cost (at the same convergence order). For simplicial meshes, dG formulations
involve around 14 times more degrees of freedom than cG ones in dimension three
(6 in two dimensions); those are the values obtained for a structured mesh with periodic boundary conditions. For hexahedral meshes this ratio is around 8 and 4 for
quadrilateral meshes. Certainly, these numbers cannot be ignored when simulating complex and realistic phenomena. However, the solution of many problems of
interest often exhibit sharp layers or strong singularities. The use of locally refined
meshes in these regions is required in order to get good results, since uniformly
refined meshes can be prohibitive. dG formulations are better suited to adaptive
refinement, because they can easily deal with non-conforming meshes with hanging
nodes, e.g. using local mesh refinement, compared to cG formulations. One remedy is the red-green mesh refinement strategy for cG formulations, which keeps the conformity of the mesh but not the aspect ratio. Alternatively, non-conforming refined meshes
can be used together with cG formulations, by constraining the hanging nodes in
order to keep continuity. This approach is certainly involved in terms of implementation and it is usually restricted to 1-irregular meshes, i.e. two neighboring
elements can only differ in at most one level of refinement.
The motivation of this work is a hybrid method that combines the low CPU
cost of cG formulations with the capabilities of dG formulations when dealing
with adaptive refinement, naturally denoted as continuous-discontinuous Galerkin
(cdG) formulation. In particular, we design an equal-order cdG numerical method
for the approximation of incompressible flows, due to the superior efficiency and
simplicity both in the cG and dG case. The cdG formulation is designed in such a
way that the method is stable and optimally convergent for this particular type of
FE spaces. The resulting method is a suitable combination of the cG variational
multiscale (VMS) formulation and an equal-order symmetric interior penalty dG
formulation with upwind for the convective term.
Optimal stability and convergence results are obtained. For the adaptive setting,
we use a standard error estimator and marking strategy. Numerical experiments
show the optimal accuracy of the hybrid algorithm both for uniformly and adaptively refined non-conforming meshes. The outcome of this work is a finite element
formulation that can naturally be used on non-conforming meshes, as discontinuous Galerkin formulations, while keeping the much lower CPU cost of continuous
Galerkin formulations.
Joint work with Joan Baiges.
37
Yun Bai
ANMC MATHICSE EPFL, CH
Reduced basis finite element heterogeneous multiscale method for quasilinear problems
Minisymposium Session MSMA: Monday, 14:30 - 15:00, CO3
In this talk, we introduce the reduced basis finite element heterogeneous multiscale method (RB-FE-HMM) based on an offline-online strategy for quasilinear
problems [2]. In this approach, a small number of representative micro problems
selected by a greedy algorithm are computed in an offline stage. Missing data in
the homogenized equation are efficiently recovered by the linear combinations of
those precomputed micro solutions in an online stage. Thanks to a new a posteriori error estimator, the result of [2] can be extended to quasilinear problems. A
priori error estimates and convergence of the Newton method can be established.
Numerical experiments show that the RB-FE-HMM considerably reduces the cost
of the classical FE-HMM for quasilinear multiscale problems [3] originating from
a large number of micro FE problems in each iteration of the Newton method.
References
[1] A. Abdulle, Y. Bai and G. Vilmart, Reduced basis finite element heterogeneous
multiscale method for quasilinear elliptic homogenization problems. Preprint,
submitted for publication, 2013.
[2] A. Abdulle and Y. Bai, Reduced basis finite element heterogeneous multiscale
method for high-order discretizations of elliptic homogenization problems. J.
Comput. Phys., 231(21) (2012), 7014-7036.
[3] A. Abdulle and G. Vilmart, Analysis of the finite element heterogeneous multiscale method for nonmonotone elliptic homogenization problems. To appear
in Math. Comp., 2013.
Joint work with Assyr Abdulle, and Gilles Vilmart.
38
Ruth Baker
University of Oxford, Mathematical Institute, England
Developing multiscale models for exploring biological phenomena
Plenary Session: Tuesday, 08:20 - 09:10, Rolex Learning Center Auditorium
Epithelial tissues consist of one or more layers of closely packed cells and their
dynamical behaviour plays a central role during development of the embryo. Epithelial cell sheets line the surfaces and cavities of organs throughout the body
where they act as a protective layer, regulating the passage of chemicals to and
from underlying tissues and restricting the invasion of pathogens and harmful
substances.
The highly organised nature of epithelial sheets means they can achieve complex
morphogenetic processes involving folding or bending, through the coordinated
movement and rearrangement of individual cells. Mechanics plays a key role in
driving epithelial morphogenesis and various forms of mechanical feedback, including mechanotransduction, play a role in regulating and ‘fine tuning’ growth during
development. A second key player is the dynamics of signalling networks which,
for example, regulate cell death and cell size, and control of cell proliferation.
Recent advances in our understanding have been facilitated by new imaging tools
and fluorescent probes to measure tissue deformation and the dynamics of key signalling proteins within cells and tissues. Mathematical and computational modelling offer a complementary tool with which to study these processes. Models can
be used to develop abstract representations of biological systems, test competing
hypotheses and generate new predictions that can then be validated experimentally.
In this talk a comprehensive computational framework will be presented within
which the effects of chemical signalling factors on growing epithelial tissues can
be studied. The method incorporates a vertex-based cell model, in which cells
are represented as polygons whose edges are shared with other cells in the tissue. Node movements are determined by simple force laws, and cell proliferation
and junctional rearrangements can be incorporated by changes in the shared-edge
configuration. The evolution of chemical signalling dynamics is modelled using a
system of nonlinear partial differential equations (PDEs). The vertex model provides a natural mesh for the finite element method which is used to solve the PDEs
governing the chemical signalling. As the tissue evolves and the cells rearrange an
arbitrary Lagrangian-Eulerian framework is used.
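As a schematic illustration of the kind of force-law update used in vertex models (a generic overdamped step, not the specific law of the talk):

import numpy as np

def advance_vertices(positions, force, dt, drag=1.0):
    # Overdamped explicit update x_i <- x_i + (dt / drag) * F_i(x) for all vertex positions.
    return positions + (dt / drag) * force(positions)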
The method we describe may be adapted to a range of potential application areas,
and even to other cell-based models with designated node movements, to accurately probe the role of chemical signalling in epithelial tissues. We demonstrate
the potential uses of our model framework by showing its application to a number
of areas in development.
39
Jonas Ballani
RWTH Aachen, DE
Black box approximation strategies in the hierarchical tensor format
Minisymposium Session LRTT: Tuesday, 12:00 - 12:30, CO3
The hierarchical tensor format allows for the low-parametric representation of tensors even in high dimensions d. The efficiency of this representation strongly relies
on an appropriate hierarchical splitting of the different directions 1, . . . , d such that
the associated ranks remain sufficiently small. This splitting can be represented
by a binary tree which is usually assumed to be given. In this talk, we address
the question of finding an appropriate tree from a subset of tensor entries without
any a priori knowledge on the tree structure. The proposed strategy can be combined with rank-adaptive cross approximation techniques such that tensors can be
approximated in the hierarchical format in an entirely black box way. Numerical
examples illustrate the potential and the limitations of our approach.
Joint work with Lars Grasedyck.
40
Soeren Bartels
University of Freiburg, DE
Projection-free approximation of geometric evolution problems
Minisymposium Session GEOP: Tuesday, 12:00 - 12:30, CO122
Geometric evolution problems are nonlinear parabolic or hyperbolic partial differential equations that involve a pointwise constraint described by a smooth submanifold. Typical examples include the harmonic map heat flow and wave maps.
Closely related are problems in nonlinear elasticity that result from a dimension
reduction and then involve a pointwise constraint on the gradient that may model
the inextensibility of an elastic rod. Numerical methods based on a linearization
of the constraint and a subsequent projection of the update to satisfy the constraint require restrictive conditions on the discretizations to guarantee numerical
stability and convergence. We will demonstrate that in many evolution problems
the projection step can be omitted leading to a violation of the constraint that is
controlled by the step size and is independent of the number of iterations.
41
Steffen Basting
University Erlangen-Nuremberg, DE
A hybrid level set / front tracking approach for fluid flows with free boundaries and
interfaces
Minisymposium Session FREE: Tuesday, 11:00 - 11:30, CO2
We present a hybrid level set / front tracking approach for the representation of
sharp interfaces in finite element discretizations of two-phase flow models. The
hybrid approach makes use of an implicit representation of the interface by means
of a level set function. The computational mesh is obtained by deforming a simplicial reference mesh such that the mesh is aligned to the implicitly described
geometry. The resulting meshes provide an additional explicit representation of
the interface while guaranteeing optimality of the mesh quality. The proposed
method is based on a variational approach to optimal meshes and leads to a fully
automated mesh optimization procedure. Because mesh connectivity is retained,
the proposed approach can be easily integrated into existing finite element codes.
Due to the hybrid interface representation, the geometrical flexibility of conventional front tracking / moving mesh approaches is enhanced.
We present and evaluate the proposed framework in the context of particulate
flows and two-phase flow applications with free interfaces.
Joint work with M. Weismann, and R. Prignitz.
42
Markus Bause
Helmut Schmidt University, University of the Federal Armed Forces Hamburg, DE
Space-time Galerkin discretizations of the wave equation
Contributed Session CT3.4: Thursday, 16:30 - 17:00, CO015
1. Motivation
The accurate and reliable numerical approximation of the hyperbolic wave equation is of fundamental importance to the simulation of acoustic, electromagnetic
and elastic wave propagation phenomena. Our interest in the numerical simulation of wave propagation phenomena comes from material inspection of lightweight materials (e.g. carbon fibre reinforced plastics) by piezoelectrically induced ultrasonic waves. This is a relatively new and intelligent technique to monitor the health of a structure, for damage detection and non-destructive evaluation.
For this it is strictly necessary to understand wave propagation in such materials
and the influence of the geometrical and mechanical properties of the system; cf.
Fig. 1.
2. Variational time discretization
Galerkin-type discretization schemes for the temporal variable were recently proposed and studied for the parabolic heat equation and the Stokes system. In
this contribution we will present continuous and discontinuous variational time
discretization schemes for the hyperbolic wave equation. For the discretization
in space a symmetric interior penalty discontinuous Galerkin (SIPG) method is
used. In the field of numerical wave propagation the spatial discretization by the
discontinuous Galerkin finite element method (DGM) has attracted the interest of
researchers. Advantages of the DGM are the flexibility with which it can accommodate discontinuities in the model, material parameter and boundary conditions
and the ability to approximate the wavefield with high degree polynomials. The
DGM has the further advantage that it can be energy conservative, and it is suitable for parallel implementation. The mass matrix of the DGM is block-diagonal,
with block size equal to the number of degrees of freedom per element, such that
its inverse is available at low computational cost.
We show that the resulting block matrix system can be condensed algebraically by
eliminating internal degrees of freedom. Using further the block diagonal structure of the mass matrix of the discontinuous Galerkin discretization in space, then
allows us to solve the algebraic system of equations efficiently. The performance
properties of the scheme are illustrated by numerical convergence studies. Moreover, the schemes are applied to wave propagation phenomena in heterogeneous
media admitting multiple sharp wave fronts; cf. Fig. 1.
Here, we briefly present our family of continuous variational time discretization
schemes for the acoustic wave equation, as a prototype model,
∂t v(x, t) − ∇ · (c(x)∇u(x, t)) = f (x, t) ,
∂t u(x, t) = v(x, t) ,
written as a first order system of equations and equipped with the initial conditions
u(0) = u_0, v(0) = v_0 and homogeneous Dirichlet boundary conditions. Its counterpart for discontinuous variational time discretizations is not given here, but it
will also be addressed in the presentation. We decompose I = [0, T] into N subintervals I_n = (t_{n−1}, t_n]. For some Hilbert space H, let X_τ^r(H) = {u ∈ C(I, H) | u|_{I_n} ∈ P_r(I_n, H)} and Y_τ^r(H) = {w ∈ L^2(I, H) | w|_{I_n} ∈ P_{r−1}(I_n, H)}, where P_r(I_n, H) = {u : I_n → H | u(t) = ∑_{j=0}^{r} ξ_n^j t^j, ξ_n^j ∈ H}. Our continuous variational approximation of the first-order system above then reads: Find u_τ ∈ X_τ^r(H_0^1(Ω)), v_τ ∈ X_τ^r(L^2(Ω)) such that u_τ(0) = u_0, v_τ(0) = v_0 and

∫_0^T { ⟨∂_t v_τ, ϕ_τ⟩ + ⟨c ∇u_τ, ∇ϕ_τ⟩ } dt = ∫_0^T ⟨f, ϕ_τ⟩ dt,
∫_0^T { ⟨∂_t u_τ, ψ_τ⟩ − ⟨v_τ, ψ_τ⟩ } dt = 0

for all ϕ_τ ∈ Y_τ^r(H_0^1(Ω)) and ψ_τ ∈ Y_τ^r(L^2(Ω)).
Here, ⟨·, ·⟩ denotes the inner product in L^2(Ω). Precisely, we have a Petrov-Galerkin method, since the discrete time trial space X_τ^r(H) and the discrete time test space Y_τ^r(H) for the unknown u_τ differ. We call this approach a cGP(r) method. Since Y_τ^r(H) imposes no continuity constraint on its elements, the variational problem
can be rewritten as a time marching scheme. We choose Lagrange basis functions
to represent uτ , vτ and apply the Gauß-Lobatto quadrature rule to compute the
integrals. Finally, the resulting semidiscrete approximation scheme is combined
with an interior penalty discontinuous Galerkin method for the spatial discretization. For the cGP(2) method we observe numerically superconvergence of fourth
order at the end points of the time intervals.
3. Future prospects
By using the Galerkin method for the time discretization of the wave equation
we have a uniform variational approach in space and time which may be advantageous for the future analysis of the fully discrete problem and the construction of
simultaneous space-time adaptive methods. Further, it is very natural to construct
methods of higher order and the well-known finite element stability concepts of
the Galerkin-Petrov or discontinuous Galerkin methods can be applied. For future
developments, the well-known adaptive finite element techniques can be applied
for changing the polynomial degree and the length of the time steps.
Figure 1: Structural health monitoring by ultrasonic waves (left) and complex wave propagation phenomena in heterogeneous media (right).
Joint work with Uwe Köcher.
44
Lorenz Berger
University of Oxford, GB
Solving the Generalised Large Deformation Poroelastic Equations for Modelling
Tissue Deformation and Ventilation in the Lung
Contributed Session CT1.7: Monday, 18:00 - 18:30, CO122
Gas exchange in the lungs is optimised by ensuring efficient matching between
ventilation and blood flow, the distributions of which are largely governed by
tissue deformation, gravity and branching structure of the airway and vascular
trees. In this work, we aim to develop a 3D organ scale lung model that tightly
couples the deformation of the tissue with the ventilation. Such a fully coupled
model is needed to accurately model tissue deformation and ventilation in the lung,
especially in the diseased case, where there is a strong interplay between both these
components. To achieve this tight coupling we propose a novel multiscale model
that approximates lung parenchyma by a biphasic (air and tissue) poroelastic finite
deformation model. Briefly, the poroelastic equations are given by,
ρ^s (1 − φ) ∂²u/∂t² = ∇ · (σ^e(u) − p I) + ρ^s (1 − φ) b + ρ^f φ b   in Ω_t,
w/φ = (1/ρ^f) k_f · (−∇p + ρ^f b + ∇ · (φ σ^vis))   in Ω_t,
∇ · w + ∇ · (ρ^f ∂u/∂t) = 0   in Ω_t,
J(u) = (1 − φ_0)/(1 − φ)   in Ω_t.
The primary variables of this system of equations are the deformation u, the fluid
pressure p, the fluid flux w and the porosity φ. Among the other terms, σ^e is the nonlinear effective stress of the solid skeleton valid for large deformations, b is an external body force and ρ^f and ρ^s are the densities of the fluid and solid, respectively. The permeability tensor is given by k_f, the viscous stress of the fluid is given by σ^vis and the determinant of the deformation gradient is denoted by J.
The above model extends the classic linear poroelastic equations commonly used
within the geomechanics community to uses in biology, specifically to model ventilation in the lung. Nonlinear elasticity theory that includes the effect of inertia is
used to model the large deformations during breathing. Due to the high porosity
and relative high fluid velocities in the lung we use a generalised Darcy flow model
also known as the Brinkman model to allow for viscous effects in the fluid. Due
to the size and nonlinearity of the model we propose an operator splitting scheme
to solve the equations using the finite element method. To the best of our knowledge this is the first method that solves the fully incompressible large deformation
poroelastic equations using an operator splitting approach. By solving the solid
(deformation) and fluid equations separately, well developed preconditioners can
be applied to each system. We present simulations that highlight the importance
of including inertia forces and viscous stresses in the model for particular choices
of parameters and show numerical experiments that demonstrate the convergence
and stability of the algorithm. Finally, we present results on coupling a 1D fluid
airway network to the 3D poroelastic medium.
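A minimal sketch of the splitting idea within one time step (an illustrative fixed-point iteration with hypothetical subproblem solvers solve_solid and solve_fluid, not the authors' exact scheme):

import numpy as np

def split_step(solve_solid, solve_fluid, u, p, tol=1e-6, maxit=50):
    # Alternate between the solid (deformation) and fluid (pressure/flux) subproblems
    # until the change in the iterates drops below tol.
    w = None
    for _ in range(maxit):
        u_new = solve_solid(p)             # momentum balance with frozen pressure
        p_new, w = solve_fluid(u_new)      # flow and mass balance with frozen deformation
        err = np.linalg.norm(u_new - u) + np.linalg.norm(p_new - p)
        u, p = u_new, p_new
        if err < tol:
            break
    return u, p, w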
Joint work with Dr David Kay, and Dr Rafel Bordas.
45
Jean-Paul Berrut
Université de Fribourg, CH
The linear barycentric rational quadrature method for Volterra integral equations
Contributed Session CT1.2: Monday, 17:00 - 17:30, CO2
We shall first introduce linear barycentric rational interpolation to the unaware
audience : it can be viewed as a small modification of the classical interpolating polynomial. Then we present two direct quadrature methods based on linear
rational interpolation for solving general Volterra integral equations of the second kind. The first, deduced by a direct application of linear barycentric rational
quadrature given in former work, is shown to converge at the same rate, but is
costly on long integration intervals. The second, based on a composite version of
the rational quadrature rule, loses one order of convergence, but is much cheaper.
Both require only a sample of the involved functions at equispaced nodes and yield
a stable, infinitely smooth solution of most classical examples with machine precision.
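For readers unfamiliar with the interpolant, its evaluation is straightforward once the nodes x_i, data f_i and weights w_i are fixed (for instance Berrut's or Floater-Hormann weights at equispaced nodes); the following sketch assumes the evaluation points do not coincide with a node:

import numpy as np

def barycentric_eval(x, nodes, values, weights):
    # r(x) = (sum_i w_i f_i / (x - x_i)) / (sum_i w_i / (x - x_i))
    diff = np.subtract.outer(np.atleast_1d(x), nodes)
    terms = weights / diff
    return terms @ values / terms.sum(axis=1)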
Joint work with Georges Klein, and Seyyed Ahmad Hosseini.
46
Marie Billaud Friess
Ecole Centrale de Nantes - GeM, FR
A Tensor-Based Algorithm for the Optimal Model Reduction of High Dimensional
Problems
Contributed Session CT2.8: Tuesday, 14:00 - 14:30, CO123
Due to the need for more realistic numerical simulations, models presenting uncertainties or numerous parameters are receiving growing interest. To solve such high dimensional problems, one has to circumvent the so-called curse of dimensionality that arises with classical numerical approaches. To overcome this issue, model reduction approaches have become popular in recent years.
This presentation is concerned with the resolution of high dimensional linear equations by means of approximations in low-rank tensor subset. To compute the
optimal approximation of the solution in this tensor subset, an ideal best approximation problem which consists in minimizing the distance to the exact solution
for a given norm || · || is introduced. Since the exact solution is not available, such
a problem cannot be directly solved. However, it can be replaced by computing
a low rank tensor approximation of the unknown that minimizes the equation
residual, which is computable, measured with another norm || · ||∗ . Nevertheless,
if || · ||∗ is chosen in usual way, the resulting approximation may be far from the
one expected by solving the initial best approximation problem with respect to ||·||.
Here, we present an ideal minimal residual method that relies on an ideal choice
for || · ||∗ and that can be applied to high dimensional weakly coercive problems. In particular, || · ||∗ is chosen to ensure the equivalence between the best approximation
problem for || · || and the residual minimization problem with || · ||∗ . Yet, the
computation of the residual norm with || · ||∗ is not affordable in practice. Here,
the residual norm is not exactly computed but estimated with a controlled precision δ. We thus propose a perturbed minimization algorithm of gradient type
that provides an approximation of the optimal approximation of the solution with
an error depending on δ. A progressive construction of the low-rank approximate
solution of the initial problem is also introduced by means of greedy corrections
computed with the proposed iterative algorithm. The resulting weak greedy algorithm is proved to be convergent under some assumptions on δ and is successfully
validated to numerically solve stochastic partial differential equations.
Joint work with Anthony Nouy, and Olivier Zahm.
47
Adrian Blumenthal
ANMC MATHICSE EPF Lausanne, CH
Stabilized Multilevel Monte Carlo Method for Stiff Stochastic Problems
Contributed Session CT4.8: Friday, 09:50 - 10:20, CO123
A new stabilized multilevel Monte Carlo (MLMC) method is presented which
can be used for mean square stable stochastic differential equations with multiple
scales. For problems where such stiffness occurs the performance of the standard MLMC approach based on classical explicit numerical integrators degrades.
In fact due to the time step restriction on the fastest scales not all levels of the
MLMC method can be exploited. In this talk we introduce a new stabilized MLMC
approach based on explicit stabilized numerical schemes [1]. It is shown that balancing the stabilization procedure simultaneously with the hierarchical sampling
strategy of MLMC methods reduces the computational cost for stiff systems significantly. Due to the explicit time stepping in our algorithm the simplicity of the
MLMC implementation is preserved [2].
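For orientation, the plain (non-stabilized) MLMC estimator combines coupled level differences in a telescoping sum; a minimal sketch with a hypothetical sampler sample_diff(level, n) returning n coupled samples of P_level − P_{level−1} (with P_{−1} := 0) is:

import numpy as np

def mlmc_estimate(sample_diff, samples_per_level):
    # E[P_L] estimated as the sum over levels of the sample means of the coupled differences.
    return sum(np.mean(sample_diff(level, n)) for level, n in enumerate(samples_per_level))

In the stabilized variant, the explicit integrator generating the samples on each level is replaced by an explicit stabilized (S-ROCK type) scheme so that the coarse levels remain usable for stiff problems.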
References
[1] A.Abdulle and T.Li, S-ROCK methods for stiff Itô SDEs, Commun. Math.
Sci., 6(4):845–868, 2008.
[2] A.Abdulle and A.Blumenthal, Stabilized Multilevel Monte Carlo Method for
Stiff Stochastic Differential Equations, accepted in J. Comput. Phys., 2013.
Joint work with A. Abdulle.
48
Jerome Bonelle
EDF R&D - Univ. Paris-Est, CERMICS, FR
Compatible Discrete Operator Schemes on Polyhedral Meshes for Stokes Flows
Minisymposium Session ANMF: Tuesday, 12:00 - 12:30, CO1
Compatible Discrete Operator (CDO) schemes belong to the class of compatible (or mimetic, or structure-preserving) schemes. Their aim is to preserve key
structural properties of the underlying PDE. This is achieved by distinguishing
topological laws and constitutive relations. CDO schemes are formulated using
discrete differential operators for the topological laws and discrete Hodge operators for the constitutive relations. CDO schemes have been recently analyzed
in [1] for elliptic problems. We first review the main results in this case. Then, we
derive CDO schemes for Stokes flows that are closely related to the recent work
of Kreeft and Gerritsma [2]. We discuss analytical results and present numerical
tests.
References
[1] J. Bonelle and A. Ern, Analysis of Compatible Discrete Operator
scheme for Elliptic Problems on Polyhedral Meshes, Available from:
http://hal.archives-ouvertes.fr/hal-00751284, 2012.
[2] J. Kreeft and M. Gerritsma, A priori error estimates for compatible spectral
discretization of the Stokes problem for all admissible boundary conditions,
arxiv:1206.2812 [cs.NA].
Joint work with Alexandre Ern.
49
Francesca Bonizzoni
CSQI - MATHICSE, EPF Lausanne, MOX - Dip. di Matematica, Politecnico di
Milano, CH
Low-rank techniques applied to moment equations for the stochastic Darcy problem
with lognormal permeability
Contributed Session CT3.7: Thursday, 17:00 - 17:30, CO122
Many natural phenomena and engineering applications are modeled by deterministic boundary value problems for partial differential equations where all the input
data are assumed to be perfectly known. Thanks to the recent developments in
scientific computing, it is now possible to efficiently and accurately compute the
numerical solution of these problems. However, in reality, the problem data are
either incompletely known or contain a certain level of uncertainty due to the
material properties, boundary conditions, loading terms, domain geometry, etc.
One way to overcome this is to describe the problem data as random variables or
random fields, so that the deterministic problem turns into a stochastic differential
equation. Stochastic models are employed in many areas such as financial mathematics, seismology and bioengineering. The solution of a stochastic differential
equation is itself a random field u(ω) with values in a suitable function space V .
The description of this random field requires the knowledge of its k-points correlation E[u⊗k]. The simplest approach is the Monte Carlo method. Generally, its
convergence rate is slow and this method is very costly. An alternative approach
consists in deriving the so called moment equations, that is the deterministic equations solved by the probabilistic moments of the stochastic solution.
We are interested in studying the fluid flow in a heterogeneous porous domain,
with randomly varying permeability. We model this phenomenon using the Darcy
law with lognormal permeability coefficient:
− div_x (a(ω, x) ∇_x u(ω, x)) = f(x)   a.s.   (1)
where the forcing term is deterministic and a(ω, x) = e^{Y(ω,x)}, with Y(ω, x) a Gaussian
random field with standard deviation σ. The aim of the work is the computation
of the statistical moments of u.
Under the assumption of a small standard deviation σ, we expand the random solution u(Y, x) in a Taylor series in a neighborhood of E[Y], and approximate u using its Taylor polynomial T^K u. This approach is known as the perturbation technique. We predict the divergence of the Taylor series, and the existence of an optimal order K_opt^σ such that adding further terms to the Taylor polynomial will deteriorate the accuracy instead of improving it.
The Taylor polynomial is directly computable only in the finite-dimensional setting, that is when Y (ω, x) is parametrized by a finite number of random variables.
In the infinite-dimensional setting the Taylor polynomial involves uk , the k-th
Gateaux derivative of u with respect to Y , for k = 0, . . . , K, which is not computable. However, it is possible to derive the deterministic equations solved by
E[uk ].
Starting from the stochastic problem (1) we derive the problem solved by E[uk ] for
k = 0, . . . , K, and state its well-posedness. The solution of this k-th order correction problem requires the solution of a recursion on the (l + 1)-points correlations
E[uk−l ⊗ Y ⊗l ], for l = 1, . . . , k. Each correlation E[uk−l ⊗ Y ⊗l ] is defined on the
tensorized domain D×(l+1) , and solves a high dimensional problem.
50
In the discrete setting, each correlation E[uk−l ⊗ Y ⊗l ] is represented by a tensor of
order l+1. The curse of dimensionality affects the recursion we are studying, since
the number of entries of a tensor grows exponentially in its order. To overcome
this problem, we propose to store and make computations between tensors in a
data-sparse or low-rank format. Of particular interest is the Tensor Train format.
A tensor in TT-format is represented as a sequence of order three tensors whose
dimensions are called TT-ranks. We represent all correlations E[uk−l ⊗ Y ⊗l ] in
TT-format and show that the number of entries grows almost linearly on the order
l + 1, and the curse of dimensionality is greatly reduced.
We develop an algorithm in TT-format able to compute E[T K u], the K-th order
approximation of E[u]. In the simple one-dimensional case D = [0, 1] we perform
some numerical tests both to study the complexity of the algorithm and the accuracy of the TT-solution. We compare the TT-solution with a collocation or Monte
Carlo solution.
Joint work with Kumar R., Nobile F., and Tobler C..
51
Steffen Börm
Christian-Albrechts-Universität zu Kiel, DE
Fast evaluation of boundary element matrices by quadrature techniques
Contributed Session CT1.8: Monday, 18:00 - 18:30, CO123
The boundary integral method is frequently used to treat partial differential equations, e.g., to handle unbounded domains. Discretizing the corresponding integral
operators by a boundary element method leads to matrices G of the form
g_ij = ∫_{∂Ω} ∫_{∂Ω} ϕ_i(x) γ(x, y) ψ_j(y) dy dx,
where γ is related to the differential operator’s fundamental solution and ϕi and
ψj are suitable basis functions.
If a standard discretization scheme is used, most of the entries of G can be expected to be non-zero, therefore standard sparse matrix formats cannot be used to
represent the matrix efficiently. A viable approach is to use a data-sparse matrix approximation, i.e., to replace the n × n matrix G by an approximation G̃ that requires only ∼ n log^α n units of storage.
We propose a new approximation scheme that relies on the same representation
formula as the boundary integral method, e.g., Green’s identity in the case of
Laplace’s equation: restricted to subdomains τ + × σ with dist(τ + , σ) > 0, the
fundamental solution is itself a solution of the homogeneous partial differential
equation and can therefore be represented by a boundary integral
γ(x, y) = ∫_{∂τ⁺} γ(x, z) (∂γ/∂n_z)(z, y) dz − ∫_{∂τ⁺} (∂γ/∂n_z)(x, z) γ(z, y) dz,   x ∈ τ⁺, y ∈ σ.
If we choose τ ⊆ τ + such that dist(τ, ∂τ + ) > 0, the integrands are smooth enough
to allow us to approximate the integrals by quadrature and obtain
γ(x, y) ≈ ∑_{ν=1}^{q} w_ν γ(x, z_ν) (∂γ/∂n_z)(z_ν, y) − ∑_{ν=1}^{q} w_ν (∂γ/∂n_z)(x, z_ν) γ(z_ν, y),   x ∈ τ, y ∈ σ,
with quadrature points zν ∈ ∂τ + and quadrature weights wν , i.e., we can approximate the fundamental solution by a sum of tensor products.
This approximation translates directly into an approximation of the matrix G by
low-rank blocks that can be handled efficiently. For a d-dimensional problem,
the boundary is a (d − 1)-dimensional manifold, therefore q ∼ m^{d−1} quadrature
points are sufficient for an m-th order quadrature rule. We can prove that the
approximation converges exponentially if m is increased.
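Concretely, inserting the quadrature approximation into the entry formula for a block associated with a cluster pair (τ, σ) yields, schematically (with factors a, b, c, d introduced here only for illustration),

g_ij ≈ ∑_{ν=1}^{q} w_ν ( a_{iν} b_{νj} − c_{iν} d_{νj} ),   a_{iν} = ∫ ϕ_i(x) γ(x, z_ν) dx,   b_{νj} = ∫ (∂γ/∂n_z)(z_ν, y) ψ_j(y) dy,

and analogously for c and d, i.e. a factorized block of rank at most 2q.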
In order to improve the compression ratio, we combine our approach with a cross
approximation scheme. The resulting hybrid method starts by constructing local
interpolation-type operators for all subdomains τ and then only has to compute
a small number of matrix coefficients for each block to obtain the final approximation. Since only a small number of operations are required for each block, the
hybrid algorithm is very efficient, and since we can afford to use cross approximation with full pivoting, it is also very robust.
Joint work with Jessica Gördes, and Sven Christophersen.
52
Malte Braack
University of Kiel, DE
Model- and mesh adaptivity for transient problems
Minisymposium Session SMAP: Monday, 11:10 - 11:40, CO015
We propose a dual weighted error estimator with respect to modeling and discretization error based on time-averages for evolutionary partial differential equations. This goal-oriented estimator measures the error of linear functionals averaged in time. Taking advantage of time averages circumvents the solution of a transient adjoint problem. We use the proposed estimator to solve convection-diffusion-reaction equations containing e.g. atmospheric chemistry models as commonly used in meteorology.
Joint work with Nico Taschenberger.
53
Ondrej Budac
EPFL, CH
An adaptive numerical homogenization method for a Stokes problem in heterogeneous media
Contributed Session CT2.9: Tuesday, 14:00 - 14:30, CO124
A finite element heterogeneous multiscale method is proposed for solving the Stokes
problem in porous media. The method is based on the coupling of an effective
Darcy equation on a macroscopic mesh, whose a priori unknown permeability is
recovered from microscopic finite element approximations of Stokes problems on
sampling domains. The computational work is independent of the smallness of
the pore structure. A priori estimates are obtained and fully resolved for a locally periodic pore structure. Realistic micro structures lead to non-convex micro
domains which significantly decrease convergence rates when uniform microscopic
refinement is used. For complicated macroscopic domains, uniform macroscopic
refinement also yields poor convergence rates. We therefore propose an adaptive
multiscale strategy on both micro and macro scale based on a posteriori error indicators and derive an a posteriori error analysis of the coupled problem. Two and
three-dimensional numerical experiments confirm the derived convergence rates.
Joint work with Assyr Abdulle.
54
Erik Burman
University College London, GB
Computability of filtered quantities for the Burgers’ equation
Minisymposium Session SDIFF: Monday, 12:40 - 13:10, CO123
In this talk we will discuss finite element discretizations of the viscous Burgers’
equation. Stability will be ensured by a nonlinear stabilization term that switches
on automatically where the solution exhibits oscillations at under-resolved layers.
For this method we consider estimates in weak norms, that can be interpreted
as measuring the error in filtered quantities or local averages. Both a posteriori
and a priori error estimates will be discussed, where the latter are derived from
the former using the stability properties of the nonlinear scheme. An important
property of these estimates is that the error constant is independent both of the
Reynolds number and the Sobolev regularity of the exact solution, but depends
on the initial data only. We will give a detailed exposition on the results on the
Burgers’ equation and then discuss possible extensions to higher dimension. In
particular we will discuss the Navier-Stokes’ equation in two space dimensions and
how the present theory can be applied to the numerical analysis of large eddy
simulation in a model situation.
55
Erik Burman
University College London, GB
Projection methods for the transient Navier–Stokes equations discretized by finite
element methods with symmetric stabilization
Minisymposium Session ANMF: Tuesday, 11:30 - 12:00, CO1
We consider the transient Navier–Stokes equations discretized in space by finite
elements with symmetric stabilization and in time by a projection method. We
focus on the implicit Euler scheme for the time derivative of the velocity and a
semi-implicit treatment of the convective term. The stabilization of velocities and
pressures can be treated explicitly or implicitly. The analysis is performed for the
Oseen equations. Stability estimates are derived under a CFL condition, leading to
quasi-optimal error estimates for smooth solutions. The estimates do not depend
explicitly on the viscosity, but, as usual, on the regularity of the exact solution.
The analysis is illustrated by some numerical experiments.
Joint work with Erik Burman, Alexandre Ern, and Miguel A. Fernandez.
56
Rommel Bustinza
Universidad de Concepcion, CL
On a posteriori error analyses for generalized Stokes problem using an augmented
velocity-pseudostress formulation
Contributed Session CT2.4: Tuesday, 14:00 - 14:30, CO015
We develop two a posteriori error analyses for an augmented mixed method for the
generalized Stokes problem. The stabilized scheme is obtained by adding suitable
least squares terms to the velocity-pseudostress formulation of the generalized
Stokes problem. Then, in order to approximate its solution applying an adaptive
mesh refinement technique, we derive two reliable a posteriori error estimators of
residual type, and study their efficiency. To this aim, we include two different
analyses: the standard residual based approach, and an unusual one, based on the
Ritz projection of the error. The main difference between the two approaches lies in the
way we treat the nonhomogeneous boundary condition. Finally, we present some
numerical examples that confirm the theoretical properties of our approach and
estimators.
References
[1] T.P. B ARRIOS , R. B USTINZA , G. C. G ARCÍA , AND E. H ERNÁNDEZ, On stabilized
mixed methods for generalized Stokes problem based on the velocity-pseudostress
formulation: A priori error estimates. Computer Methods in Applied Mechanics and Engineering, vol. 237-240, pp. 78-87, (2012).
[2] G.N. G ATICA , L. F. G ATICA AND A. M ARQUEZ, Analysis of a pseudostress based
mixed finite element method for Brinkman model of porous media flow. Preprint
2012-02, Centro de investigación en Ingeniería Matemática, Universidad de
Concepción, (2012).
[3] G.N. G ATICA , A. M ÁRQUEZ AND M.A. S ÁNCHEZ, Analysis of a velocity-pressurepseudostress formulation for the stationary Stokes equations. Computer Methods in Applied Mechanics and Engineering, vol. 199, 17-20, pp. 1064-1079,
(2010).
[4] S. R EPIN AND R. S TENBERG, A posteriori error estimates for the generalized
Stokes problem. Journal of Mathematical Sciences, vol. 142, 1, pp. 1828-1843,
(2007).
Joint work with Tomás P. Barrios, Galina C. García, and María González.
57
Alexandre Caboussat
Geneva School of Business Administration, CH
Numerical Approximation of Fully Nonlinear Elliptic Equations
Minisymposium Session GEOP: Tuesday, 11:00 - 11:30, CO122
Fully nonlinear elliptic equations have many applications in geometry, finance,
mechanics or physics. Among them, the Monge-Ampère equation is the most
well-known and the one that has gathered most of the attention for several years
already.
In this talk, we present some numerical methods for the solution of the Dirichlet problem for fully nonlinear elliptic equations. We focus in particular on the
cases when no classical solutions exist or when solutions exhibit some non-smooth
properties.
We focus first on the Monge-Ampère equation in two dimensions of space, and
then on the (sigma-2) equation in three dimensions of space. Both problems correspond to finding a function defined by some kind of given curvature. We detail
a relaxation method, using a least squares approach, well-suited to the particular structure of these problems. This iterative method decouples the differential operators from point-wise nonlinear problems, and provides a flexible
computational framework. Classical variational PDE techniques and mixed finite
element approximations are used to solve the differential operators. Mathematical
programming techniques are used to solve the nonlinear optimization problems.
Numerical experiments are presented for various examples in two and three dimensions of space, in particular when non-smooth solutions are expected.
This is a joint work with Roland Glowinski (Univ. of Houston) and Danny C.
Sorensen (Rice University).
58
Alexandre Caboussat
Geneva School of Business Administration, CH
Numerical solution of a partial differential equation involving the Jacobian determinant
Contributed Session CT1.2: Monday, 18:30 - 19:00, CO2
We address the numerical approximation of a fully nonlinear partial differential
equation that involves the Jacobian determinant and that reads as follows: Find
u : Ω → R² satisfying
det ∇u = f in Ω,
u = g on ∂Ω,
where Ω ⊂ R² is a two-dimensional domain, and f, g are given, sufficiently regular,
data.
This example of fully nonlinear equation has been studied from the theoretical
viewpoint, starting in, e.g., [Dacorogna, Moser (1990)]. We present here a numerical framework relying on variational arguments together with an adequate
high-order regularization. Based on previous works on fully nonlinear equations,
we advocate an augmented Lagrangian method to provide an approximation of the
solution of this problem. An iterative, Uzawa-type, algorithm is used to solve the
corresponding saddle-point problem, and decouples the local nonlinearities from
the differential operators arising in the variational framework.
Piecewise linear finite elements are used for the space discretization. The discrete iterative algorithm consists in alternately solving a boundary-value elliptic problem involving a biharmonic operator and a sequence of local constrained optimization problems that arise on each grid element.
Numerical experiments show the efficiency and robustness of the algorithm, as well as its convergence properties when the problem admits a classical solution.
Finally, we numerically investigate the cases when the problem is not necessarily
well-posed.
This is a joint work with Prof. Roland Glowinski (University of Houston) and
Prof. Bernard Dacorogna (Ecole Polytechnique Fédérale de Lausanne).
Keywords: Fully nonlinear equation, Jacobian determinant, Volume preserving
mapping, Augmented Lagrangian, Finite element approximation.
59
Alfonso Caiazzo
WIAS Berlin, DE
An explicit stabilized projection scheme for incompressible NSE: analysis and application to POD based reduced order modeling
Minisymposium Session ANMF: Tuesday, 11:00 - 11:30, CO1
In this talk we propose a splitting scheme with a full explicit treatment of the
convection for the numerical resolution of incompressible Navier-Stokes equations.
The scheme is based on a Chorin-Temam projection method, combined with a
recently proposed explicit stabilized treatment of advection equations [Burman,
Ern, Fernandez, 2010]. The analysis of the method shows that the explicit stabilized advection is stable under a superlinear CFL condition. The method is tested on several problems, comparing the accuracy against the standard Chorin-Temam scheme with semi-implicit advection. Furthermore, we show applications in the context of model order reduction based on Proper Orthogonal Decomposition (POD), where the explicit nature of the scheme allows one to pre-compute the
reduced matrix.
Joint work with Miguel A. Fernandez, and Jean-Frederic Gerbeau.
60
Eric Cances
Ecole des Ponts and INRIA, FR
Multiscale eigenvalue problems
Minisymposium Session MSMA: Monday, 12:10 - 12:40, CO3
The numerical computation of the eigenvalues of a self-adjoint operator on an
infinite dimensional separable Hilbert space, is a standard problem of numerical
analysis and scientific computing, with a wide range of applications in science and
engineering. Such problems are encountered in particular in mechanics (vibrations
of elastic structures), electromagnetism and acoustics (resonant modes of cavities),
and quantum mechanics (bound states of quantum systems). Galerkin methods
provide an efficient way to compute the discrete eigenvalues of a bounded-from-below self-adjoint operator A lying below the bottom of the essential spectrum of A. On the other hand, Galerkin methods may fail to approximate discrete eigenvalues located in spectral gaps, that is, between two points of the essential spectrum.
Such situations are encountered in multiscale eigenvalue problems when localized
bound states are trapped by local defects in infinite periodic media (quantum
dots in semi-conductors, defects in photonic crystals, atoms in the QED vacuum,
...). In some cases, the Galerkin method cannot find some of the eigenvalues of
A located in spectral gaps (lack of approximation); in other cases, the limit set of
the spectrum of the Galerkin approximations of A contains points which do not
belong to the spectrum of A (spectral pollution). I will present recent results on
the numerical analysis of these problems.
Joint work with Virginie Ehrlacher, and Yvon Maday.
61
Clément Cancès
LJLL - UPMC Paris 6, FR
Monotone corrections for cell-centered Finite Volume approximations of diffusion
equations
Minisymposium Session SDIFF: Monday, 11:10 - 11:40, CO123
We present a nonlinear technique to correct a general Finite Volume scheme for
anisotropic diffusion problems, which provides a discrete maximum principle. We
point out general properties satisfied by many Finite Volume schemes and prove
the proposed corrections also preserve these properties. We then study two specific
corrections proving, under numerical assumptions, that the corresponding approximate solutions converge to the continuous one as the size of the mesh tends to 0.
Finally we present numerical results showing that these corrections suppress local
minima produced by the original Finite Volume scheme.
This work results from a collaboration with M. Cathala and Ch. Le Potier.
62
Eric Cancès
Ecole des Ponts and INRIA, FR
Electronic structure calculation
Plenary Session: Tuesday, 09:10 - 10:00, Rolex Learning Center Auditorium
Electronic structure calculation is one of the main fields of application of quantum mechanics. It has become an essential tool in physics, chemistry, molecular
biology, materials science, and nanosciences.
In this talk, I will review the main numerical methods to solve the electronic
Schrödinger equation and the Kohn-Sham formulation of the Density Functional
Theory (DFT). The electronic Schrödinger equation is a high-dimensional linear elliptic eigenvalue problem, whose solutions can be approximated numerically by stochastic methods; sparse tensor product techniques can also be considered.
Kohn-Sham models are constrained optimization problems, whose Euler-Lagrange
equations have the form of nonlinear elliptic eigenvalue problems. Recent progress
has been made in the analysis of these mathematical models and of the associated
numerical methods, which paves the road to certified numerical simulations (with
a posteriori error bounds) of the electronic structure of large molecular systems.
63
Daniela Capatina
University of Pau, FR
Stopping criteria based on locally reconstructed fluxes
Minisymposium Session STOP: Thursday, 15:00 - 15:30, CO1
A posteriori error estimators based on locally reconstructed H(div)-fluxes are
nowadays well-established. Since they provide sharp upper bounds, it seems appropriate to use them to define stopping criteria for iterative solution algorithms.
We consider a unified framework for local flux reconstruction, covering conforming, nonconforming and discontinuous Galerkin finite element methods. For this
reconstruction, it is supposed that the discrete equations are satisfied. However,
in the context of stopping criteria, this assumption is no longer verified. In this
talk, we propose a generalization, where the local conservation property of the
H(div)-fluxes is not fulfilled. It leads to a simple stopping criterion, balancing the
discretization and the iteration errors.
Joint work with Roland Becker, and Robert Luce.
64
Daniela Capatina
University of Pau, FR
Robust discretization of the Giesekus model
Minisymposium Session MANT: Wednesday, 11:00 - 11:30, CO017
We consider a discontinuous Galerkin discretization of a matrix-valued nonlinear transport equation, which arises in the modeling of viscoelastic fluids. More
precisely, it describes the constitutive law of the conformation tensor for certain
polymeric liquids. A challenging question from a numerical point of view is the
positivity of the solution. We prove existence and uniqueness of the discrete maximal solution, as well as convergence of a modified Newton method and positive
definiteness of the discrete solution. Applications to Giesekus and Oldroyd-B models for polymer flows are discussed. The positivity of the conformation tensor is
crucial for the derivation of energy estimates and for the robustness of numerical
schemes at large Weissenberg numbers. Numerical simulations will be presented.
Joint work with Roland Becker, and Didier Graebling.
65
Laura Cattaneo
Politecnico di Milano, IT
Computational models for coupling tissue perfusion and microcirculation
Contributed Session CT1.7: Monday, 18:30 - 19:00, CO122
Reduced models of fluid flows and mass transport in heterogeneous media are often adopted in the computational approach when the geometrical configuration
of the system at hand is too complex. A paradigmatic example in this respect
is blood flow through a network of capillaries surrounded by a porous interstitium. We numerically address this biological system by a computational model
based on the Immersed Boundary (IB) method, a technique originally proposed for
the solution of complex fluid-structure interaction problems [Liu et al., Comput.
Methods Appl. Mech. Engrg. 195 (2006)]. Exploiting the large aspect ratio of
the system, we avoid resolving the complex 3D geometry of the submerged vessels
by representing them with a 1D geometrical description of their centerline and
the resulting network. The IB method then gives rise to an asymptotic problem,
obtained applying a suitable rescaling and replacing the immersed interface and
the related conditions by means of an equivalent concentrated source term. The
advantage of such an approach lies in its efficiency: it does not need a full description of the real geometry, which allows for a large economy of memory and CPU time, and it facilitates handling fully realistic networks.
The analysis of perfusion and drug release in vascularized tumors is a relevant
application of such techniques. Delivery of diagnostic and therapeutic agents differs dramatically between tumor and normal tissues. Blood vessels in tumors are
substantially leakier than in healthy tissue and they are tortuous. These vascular
abnormalities lead to an impaired blood supply and abnormal tumor microenvironment characterized by hypoxia and elevated interstitial fluid pressure that
reduces the distribution of macromolecules through advection [Chapman, S. et al.,
Bulletin of Mathematical Biology, 2008].
The aforementioned multiscale approach enables us to develop a simple computational model that retains the fluid dynamics characteristics of the microvasculature
at the macroscale and describe the transport of macromolecules in the vascular
structure and in the tumor interstitium.
Fluid and mass transport within a tumor mass is governed by a subtle interplay of sinks and sources, such as the leakage of the capillary bed, the lymphatic
drainage, the exchange of fluid with the exterior volume and the interstitial fluid
pressure. To better characterize the microenvironment, we develop a resistance
model for lymphatic drainage [Baxter, L.T. and Jain, R. K., Microvascular Research, 1990]. Regarding the boundary conditions on the outer surface of the tissue
region, they are frequently not determined by available experimental information
and additional assumptions must be made so that the problem is completely specified [Secomb, T.W. et al., Annals of Biomedical Engineering, 2004]. For interstitial
perfusion, the flow conditions enforced at the boundary of the domain significantly
determine how the model interacts with the exterior. We aim to model the in-vivo
configuration, where the tumor, or a sample of it, is embedded into a similar environment. To represent this case, we believe that the most flexible option is to
use Robin-type boundary conditions for the interstitial pressure.
Tissue perfusion is particularly relevant because it directly affects how efficiently
the microcirculation can bring nutrients and drugs to the cells permeating the
interstitial tissue and simultaneously remove metabolic wastes. To study these
effects we introduce two key indicators: the fluid flux from the capillary network to
the interstitial volume, f_t, and the equivalent conductivity of the tissue construct, ‖K‖_F. These indicators are affected by both the capillary conductivity and the
interstitial fluid pressure. Understanding which of these two last factors dominates
is the key point to determine what is the effect of enhanced permeability and
retention over tissue perfusion.
Finally we discuss the application of the model to the delivery of nanoparticles. In particular, transport of nanoparticles in the vessel network, their adhesion to the vessel wall and the drug release in the surrounding tissue will be addressed.
Joint work with D. Ambrosi, L.Cattaneo, R. Penta, A. Quarteroni, and P. Zunino.
67
Cris Cecka
IACS, Harvard University, US
Fast Multipole Method Framework and Repository
Minisymposium Session PARA: Monday, 11:40 - 12:10, CO016
Fast multipole methods (FMM) are a general strategy for accelerating dense
matrix-vector products of the form
φ_i = Σ_j K(x_i, y_j) σ_j,
where K is the interaction kernel, x_i and y_j are source and target values (usually points or functions in R^d), and σ_j and φ_i are the source charges and target fields. The kernel may be a Green’s function from an N-body interaction, an integral operator from a boundary element method (BEM), or a radial basis function for weighting or interpolation. The FMM accelerates the O(N²) matrix-vector product to O(N log^α N) and finds a wide range of applications in mechanics, fluid
dynamics, acoustics, electromagnetics, N-body problems, machine learning, computer vision, and interpolation.
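As a point of reference, the dense product that an FMM accelerates can be written against a fully pluggable kernel; the sketch below shows only this O(N·M) baseline and the kernel-agnostic interface (with a hypothetical Laplace kernel), not the tree-based FMM machinery itself.

    import numpy as np

    def direct_product(kernel, targets, sources, charges):
        # Reference dense evaluation of phi_i = sum_j K(x_i, y_j) sigma_j.
        # An FMM replaces this O(N*M) loop by a hierarchical O(N log^a N) algorithm
        # that reuses exactly the same kernel object.
        return np.array([sum(kernel(x, y) * s for y, s in zip(sources, charges))
                         for x in targets])

    def laplace_kernel(x, y):
        # Hypothetical example kernel: 3D Laplace Green's function
        r = np.linalg.norm(x - y)
        return 0.0 if r == 0.0 else 1.0 / (4.0 * np.pi * r)

    rng = np.random.default_rng(1)
    sources, targets = rng.random((200, 3)), rng.random((100, 3))
    charges = rng.standard_normal(200)
    phi = direct_product(laplace_kernel, targets, sources, charges)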
FMMs require multiple, carefully optimized steps and numerical analysis in order
to achieve the improved asymptotic performance and required accuracy. These
research areas span tree generation, tree traversal, numerical and functional analysis, and the complex heterogeneous parallel computing strategies for each stage.
Unfortunately, many FMM codes are written with a particular application (an
interaction kernel and/or compute environment) in mind and optimized around
it. It is often difficult to extract out advances from one research area and apply
them to another code or application.
I will present recent work on a new framework and repository for the generalized
matrix-vector product above which attempts to abstract each stage of the FMM for
independent development. This allows us to develop the code at a high level and
collect a repository of interaction kernels for rapid application development in any
of the above domains. We will present a number of use cases and test applications
including a simple Poisson problem, more advanced BEM solvers for molecular
dynamics and/or electromagnetic PDEs, and the use of FMM as a preconditioner
for related PDEs.
In addition, I will present recent results for general optimization strategies for the
FMM and the abstracted kernels. This includes using a runtime system for parallel
scheduling and resolution of the complex dependencies within the tree structure,
use of GPU computing for accelerating the more structured operations, and aggregating like-transformations to improve data locality.
This is joint work with Lorena Barba, Simon Layton, Aparna Chandramowlishwaran, Rio Yokota, and Louis Ryan.
68
Antonio Cervone
ENEA - UTFISSM/SICSIS, IT
Parallel assembly on overlapping meshes using the LifeV library
Minisymposium Session PARA: Monday, 12:40 - 13:10, CO016
Matrices are required for the solution of PDEs when some implicit term is involved. Typically, the system of (non-linear) differential equations is discretized
into a linear system, that couples all the degrees of freedom associated to the problem. In the Finite Element Method (FEM) this matrix is filled using a procedure
usually called "assembly", where the mesh that discretizes the domain of interest
is traveled element by element and its contribution is added to the matrix.
This step in the solution of the PDE can be a large part in terms of CPU time,
especially when solving time dependent simulations or non-linear systems where
the matrix terms must be assembled at every iteration. Unstructured meshes
are very common when dealing with the simulation of large scale and realistic
domains. This kind of mesh increases the computational time since the topology
of the elements cannot be computed a priori, but every element must know which
elements are its neighbors.
In parallel codes that rely on domain decomposition, the assembly is performed
on each subdomain separately. However, there are degrees of freedom that lie
on the separation line between subdomains, and the corresponding set of support
elements is split between processes. A widely used strategy to fill this contribution
from different processes requires a communication between them. It is well known
that communication is one of the principal bottlenecks in such codes. We introduce a novel approach that avoids this communication by using overlapping subdivisions of the mesh.
This means that, when the mesh is cut in subdomains that are assigned to each
process, each degree of freedom that is associated with any process will have all
the elements of the mesh that are in its support in the local mesh. This clearly
generates overlapping meshes, that must be constructed accurately and efficiently
when dealing with unstructured grids. The assembly procedure on this kind of
meshes requires a larger computational cost, as the duplicated element contributions are computed more than once, but removes the need for any communication between processes.
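A minimal 1D sketch of this row-wise, communication-free assembly is given below; the P1 stiffness matrix, the two simulated processes and the ownership sets are purely illustrative assumptions, unrelated to the actual LifeV data structures.

    import numpy as np

    def assemble_owned_rows(local_elements, nodes, owned, n_dofs):
        # Assemble the P1 stiffness rows of the DOFs owned by this (simulated) process.
        # The local element list already contains the full support of the owned DOFs
        # (overlapping partition), so the rows are complete without communication.
        A = np.zeros((n_dofs, n_dofs))
        for a, b in local_elements:                         # element with end nodes a, b
            h = nodes[b] - nodes[a]
            ke = np.array([[1.0, -1.0], [-1.0, 1.0]]) / h   # local stiffness matrix
            for li, gi in enumerate((a, b)):
                if gi in owned:                             # fill only owned rows
                    for lj, gj in enumerate((a, b)):
                        A[gi, gj] += ke[li, lj]
        return A

    nodes = np.linspace(0.0, 1.0, 7)                        # 6 elements, 7 DOFs
    elements = [(i, i + 1) for i in range(6)]
    # Overlapping subdomains: the shared element (3, 4) belongs to both local meshes,
    # so its contribution to the interface rows is recomputed instead of communicated.
    A0 = assemble_owned_rows(elements[:4], nodes, {0, 1, 2, 3}, 7)
    A1 = assemble_owned_rows(elements[3:], nodes, {4, 5, 6}, 7)
    A_serial = assemble_owned_rows(elements, nodes, set(range(7)), 7)
    assert np.allclose(A0 + A1, A_serial)                   # matches the global assembly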
This work will describe the techniques implemented in the LifeV library to perform this type of assembly procedure. An analysis is also shown, in order to assess the performance of this approach in comparison with the traditional ones.
Joint work with Nur A. Fadel, and Luca Formaggia.
69
Xingyuan Chen
Center of Smart Interfaces, Technische Universität Darmstadt, DE
A numerical study of viscoelastic fluid-structure interaction and its application in
a micropump
Minisymposium Session MANT: Wednesday, 11:30 - 12:00, CO017
Micropumps play an important role in recent years in biomedical applications,
such as sampling and drug delivery. In these applications fluids normally have non-Newtonian, in particular viscoelastic, features. They flow in deformable domains
along with the interaction with elastic solids. Micropumps delivering Newtonian
fluids have been thoroughly studied in literature, but not much work is associated
with viscoelastic fluids. In the present work we use the recent techniques for simulation of viscoelastic fluid flow and fluid-structure interaction (FSI) to investigate
this complex problem.
We study the interaction between an Oldroyd-B fluid and an elastic solid using the partitioned implicit coupling approach. Our in-house code FASTEST
serves as the flow solver, which is based on the block-structured collocated finite-volume method. To cope with the high Weissenberg number problem (HWNP) in simulation of viscoelastic fluid flow, we apply two stabilization approaches based on the so-called Log-Conformation Representation (LCR) [1] and Square Root-Conformation Representation (SRCR) [2]. We do a comprehensive comparison study of these two approaches and the standard approach, i.e. devoid of any stabilization. To this end we examine the test cases lid-driven cavity, 4:1
contraction flow and flow past a cylinder. We find that LCR and SRCR are not
only more stable, that is they can predict flows with much higher Weissenberg
number (Wi) than the standard approach, but they also allow the use of a larger
grid spacing without loss of accuracy. The latter can be seen e.g. in the mesh study
of the normal stress along the lid in lid-driven cavity (Figure 1). The solid part is
solved by the finite-element method program FEAP developed by U.C. Berkeley.
The MpCCI Coupling Environment is used as an interface for code coupling.
A two-dimensional collapsible channel, i.e. the middle part of the upper wall
is replaced by an elastic solid (Figure 2 (a)), is chosen as the first test case to
investigate the difference between Newtonian and viscoelastic FSI. We find that
the growing elastic effect of fluid increases the pressure on the membrane (Figure 2
(b)), which results in pushing the membrane upwards (Figure 2 (d)). Interestingly,
we also find that the pressure drop along the channel decreases with growing Wi
(Figure 2 (c)). As reported in [3], the HWNP occurs in this case. This problem is
not observed when LCR and SRCR are applied.
In the second case, we study a valveless micropump. The fluid flow in this type
of micropump is driven by a vibrating membrane. The effect of fluid flow on
membrane is not negligible in actual applications. We are investigating the different effects between Newtonian and viscoelastic fluids on the performance and
efficiency of the micropump. This work is in progress.
[1] R. Fattal et al., J. Non-Newton. Fluid Mech. 126 (2005) 23-37
[2] N. Balci et al., J. Non-Newton. Fluid Mech. 166 (2011) 546-553.
[3] D. Chakraboty et al., J. Non-Newton. Fluid Mech. 165 (2010) 1204-1218
70
Figure 1: A mesh study of the normal stress along the lid. ErM 1 is the relative
error of values of the coarsest mesh M1 compared with the extrapolated values.
Figure 2: (a) Geometry of the 2D collapsible channel; (b) Pressure on the membrane at different Wi; (c) Pressure drop along the channel at different Wi; (d)
Displacement of the middle point of the membrane at different Wi.
Joint work with Holger Marschall, Michael Schäfer, and Dieter Bothe.
71
Peng Chen
EPFL, CH
A Weighted Reduced Basis Method for Elliptic Partial Differential Equations with
Random Input Data
Contributed Session CT4.4: Friday, 09:50 - 10:20, CO015
In this work we propose and analyze a weighted reduced basis method to solve
elliptic partial differential equation (PDE) with random input data. The PDE
is first transformed into a weighted parametric elliptic problem depending on a
finite number of parameters. The distinctive importance of different values of the parameters is taken into account by assigning different weights to the samples in the greedy sampling procedure. A priori convergence analysis is carried out by
constructive approximation of the exact solution with respect to the weighted parameters. Numerical examples are provided for the assessment of the advantages
of the proposed method over the reduced basis method and stochastic collocation
method in both univariate and multivariate stochastic problems.
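The weighting idea in the greedy loop can be sketched as follows; the error indicator, the Gaussian weight and the parameter grid below are hypothetical stand-ins used only to illustrate how likely parameter values are prioritised, not the actual reduced basis machinery.

    import numpy as np

    def weighted_greedy(training_params, error_indicator, weight, n_basis):
        # Greedy selection where the error indicator of each candidate parameter is
        # scaled by a weight (e.g. the probability density of the random input), so
        # that likely parameter values are prioritised in the sampling.
        selected = []
        for _ in range(n_basis):
            scores = [weight(mu) * error_indicator(mu, selected) for mu in training_params]
            selected.append(training_params[int(np.argmax(scores))])
        return selected

    # Hypothetical stand-ins: Gaussian weight, indicator = distance to selected samples.
    params = list(np.linspace(-3.0, 3.0, 61))
    pdf = lambda mu: np.exp(-0.5 * mu**2)
    err = lambda mu, S: 1.0 if not S else min(abs(mu - s) for s in S)
    print(weighted_greedy(params, err, pdf, n_basis=5))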
Joint work with Alfio Quarteroni, and Gianluigi Rozza.
72
Francisco Chinesta
ECN - IUF, FR
DEIM-based Non-Linear PGD
Minisymposium Session ROMY: Thursday, 10:30 - 11:00, CO016
The efficient resolution of complex models (in the dimensionality sense) is probably
the essential objective of any model reduction method. This objective has been
clearly reached for many linear models encountered in physics and engineering [2,4].
However, model order reduction of nonlinear models, and specially, of parametric
nonlinear models, remains an open issue. Using classic linearization techniques such as the Newton method, both the nonlinear term and its Jacobian must be evaluated at a cost that still depends on the dimension of the non-reduced model [1]. The Discrete Empirical Interpolation Method (DEIM), which is the discrete version of the Empirical Interpolation Method (EIM) [3], proposes to overcome this difficulty
by using the reduced basis to interpolate the nonlinear term. The DEIM has
been used with Proper Orthogonal Decomposition (POD) [1,4] where the reduced
basis is a priori known as it comes from several pre-computed snapshots. In
this work, we propose to use the DEIM in the Proper Generalized Decomposition
(PGD) framework [2], which is an a priori model reduction technique, and thus
the nonlinear term is interpolated using the reduced basis that is being constructed
during the resolution.
Consider a certain model in the general form:
L(u) + F_NL(u) = 0,   (1)
where L is a linear differential operator and F_NL is a nonlinear function, both applying over the unknown u(x), x ∈ Ω = Ω_1 × ... × Ω_d ⊂ R^d, which belongs to the appropriate functional space and respects some boundary and/or initial conditions. Using the PGD method implies constructing a basis B = {φ_1, ..., φ_N} such that the solution can be written as:
u(x) ≈ Σ_{i=1}^{N} α_i · φ_i(x),   (2)
where α_i are coefficients, and
φ_i(x) = P_i^1(x_1) · ... · P_i^d(x_d),   i = 1, ..., N,   (3)
being P_i^j(x_j), j = 1, ..., d, functions of a certain coordinate x_j ∈ Ω_j. In the linear
case, the basis B can be constructed sequentially by solving a nonlinear problem
at each step in order to find the functions P_i^j. In the nonlinear case a linearization scheme for Eq. 1 is compulsory, but evaluating the nonlinear term is still as costly as in the non-reduced model. The DEIM method proposes to circumvent this inconvenience by performing an interpolation of the nonlinear term using the
basis functions. In a POD framework, these functions come from the precomputed
snapshots, but in a PGD framework these functions are constructed by using the
PGD algorithm. Here we propose to proceed as follows:
I - Solve the linear problem: find u^0 such that L(u^0) = 0 → B^0 = {φ^0_1, ..., φ^0_{N_0}}.
II - Select a set of points X^0 = {x^0_1, ..., x^0_{N_0}}. Later on we explain how to make an appropriate choice.
III - Interpolate the nonlinear term F_NL using the functions B^0 at the points X^0. In other words, find the coefficients ϕ^0_i such that:
F_NL(u^0_m) ≡ F_NL(u^0(x^0_m)) = Σ_{i=1}^{N_0} ϕ^0_i · φ^0_i(x^0_m),   m = 1, ..., N_0.   (4)
IV - Once we have computed {ϕ^0_1, ..., ϕ^0_{N_0}}, the interpolation of the nonlinear term reads:
F_NL(u) ≈ b^0 = − Σ_{i=1}^{N_0} ϕ^0_i · φ^0_i,   (5)
and therefore, the linearized problem writes:
L(u) = b^0.   (6)
V - At this point, three options can be considered: (i) restart the separated representation, i.e., find u^1; (ii) reuse the solution u^0, i.e. u^1 = u^0 + ũ; and (iii) reuse by projecting.
VI - From this point we repeat the preceding steps: let us assume that we have already computed u^k. Then select a set of points X^k = {x^k_1, ..., x^k_{N_k}}, interpolate the nonlinear term using B^k, and find u^{k+1}, until a certain convergence criterion is reached.
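For reference, the generic DEIM ingredients used in steps II–IV above, greedy selection of interpolation points and interpolation of the nonlinear term from a given basis, can be sketched as follows; the random basis and the stand-in nonlinear term are illustrative assumptions, and the coupling with the PGD construction of the basis is not shown.

    import numpy as np

    def deim_points(Phi):
        # Greedy DEIM selection of interpolation indices from the basis columns of Phi
        # (Chaturantabut & Sorensen [1]); returns the list of selected row indices.
        p = [int(np.argmax(np.abs(Phi[:, 0])))]
        for k in range(1, Phi.shape[1]):
            c = np.linalg.solve(Phi[np.ix_(p, range(k))], Phi[p, k])
            r = Phi[:, k] - Phi[:, :k] @ c          # residual of the k-th basis vector
            p.append(int(np.argmax(np.abs(r))))
        return p

    def deim_interpolate(Phi, p, f_at_points):
        # Find coefficients so that the expansion matches the sampled values of the
        # nonlinear term at the selected points, then evaluate it everywhere.
        return Phi @ np.linalg.solve(Phi[p, :], f_at_points)

    # Hypothetical smoke test with a random orthonormal basis and a stand-in nonlinearity.
    rng = np.random.default_rng(0)
    Phi, _ = np.linalg.qr(rng.standard_normal((200, 8)))
    p = deim_points(Phi)
    f_full = np.sin(np.linspace(0.0, 3.0, 200)) ** 3
    f_deim = deim_interpolate(Phi, p, f_full[p])    # only the 8 sampled entries are needed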
[1] Chaturantabut, S. and Sorensen, D.C. SIAM J. Sci. Comput. (2010) 32 2737-2764.
[2] Chinesta, F., Leygue, A., Bordeu, F., Aguado, J.V., Cueto, E., Gonzalez, D., Alfaro, I., Ammar, A. and Huerta, A. Arch. Comput. Methods Eng. (2013) 20 31-59.
[3] Barrault, M., Maday, Y., Nguyen, N.C. and Patera, A.T. Comptes Rendus Mathematique (2004) 339/9 667-672.
[4] Chinesta, F., Ladevèze, P. and Cueto, E. Archives of Computational Methods in Engineering (2011) 18 395-404.
Joint work with J.V. Aguado, A. Leygue, E. Cueto, and A. Huerta.
74
Moulay Abdellah Chkifa
Laboratoire Jacques Louis Lions, FR
High-dimensional adaptive sparse polynomial interpolation and application for parametric and stochastic elliptic PDE’s
Contributed Session CT3.7: Thursday, 16:30 - 17:00, CO122
The numerical approximation of parametric partial differential equations is a computational challenge, in particular when the number of involved parameters is large. We consider a model class of second order, linear, parametric, elliptic PDEs on a
bounded domain D with diffusion coefficients depending on the parameters in an
affine manner. For such models, it was shown in [1] that under very weak assumptions on the diffusion coefficients, the entire family of solutions to such equations
can be simultaneously approximated in the Hilbert space V = H^1_0(D) by multivariate sparse polynomials in the parameter vector y with a controlled number N
of terms. The convergence rate in terms of N does not depend on the number
of parameters in V , which may be arbitrarily large or countably infinite, thereby
breaking the curse of dimensionality. However, these approximation results do
not describe the concrete construction of these polynomial expansions, and should
therefore rather be viewed as a benchmark for the convergence analysis of numerical methods. We present the polynomial interpolation process in high dimension proposed in [4]. We explain how this process allows an easy Newton-like interpolation formula for constructing polynomial interpolants and that it has a provably moderate Lebesgue constant for well-located interpolation points, based on the results in
[2]. As for the application to parametric PDEs, we show that sequences of sparse
polynomials constructed by the interpolation process are proved to converge toward the solution with the optimal benchmark rate. Numerical experiments are
presented in large parameter dimension, which confirm the effectiveness of the
process.
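As a concrete illustration of nested interpolation points of this kind, a (discrete) Leja sequence on [−1, 1] can be generated greedily as below; this toy construction only illustrates the hierarchical nature of such point sets and is not claimed to be the exact construction analysed in the references.

    import numpy as np

    def leja_sequence(n, grid=None):
        # Discrete Leja sequence on [-1, 1]: each new point maximises the product of
        # distances to the points already chosen, so the sets are nested (hierarchical).
        if grid is None:
            grid = np.cos(np.linspace(0.0, np.pi, 4001))   # fine candidate set
        pts = [1.0]
        for _ in range(n - 1):
            dist_prod = np.ones_like(grid)
            for p in pts:
                dist_prod *= np.abs(grid - p)
            pts.append(float(grid[np.argmax(dist_prod)]))
        return np.array(pts)

    print(leja_sequence(6))   # starts 1, -1, ~0, ... and can be extended without recomputation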
References
[1] A. Cohen, R. DeVore, and C. Schwab, Analytic regularity and polynomial approximation of parametric and stochastic PDE’s, Analysis and Applications (Singapore) 9, 1-37 (2011).
[2] A. Chkifa, A. Cohen, R. DeVore, and C. Schwab, Sparse Adaptive Taylor Approximation Algorithms for Parametric and Stochastic Elliptic PDEs, M2AN, volume 47-1, pages 253-280, 2013.
[3] A. Chkifa, On the Lebesgue constant of Leja sequences for the complex unit disk and of their real projection, Journal of Approximation Theory, volume 166, pages 176-200, 2013.
[4] A. Chkifa, A. Cohen, and C. Schwab, High-dimensional adaptive sparse polynomial interpolation and applications to parametric PDEs, to appear in FoCM, 2013.
[5] A. Chkifa, A. Cohen, Breaking the curse of dimensionality in sparse polynomial approximation of parametric PDEs, submitted.
Joint work with Albert Cohen, and Christoph Schwab.
75
Alexandra Christophe
Laboratoire de Génie Electrique de Paris (LGEP) and Univ. Nice, FR
Mortar FEs on overlapping subdomains for eddy current non destructive testing
Contributed Session CT1.6: Monday, 17:00 - 17:30, CO017
Modelling in eddy current (EC) non destructive testing (NDT) aims at reproducing the interaction between a sensor and a conductor in order to localize possible defects in the latter without damaging it. The finite element (FE) method is frequently used in this context as it is well suited to treat problems with complex geometries while keeping a simplicity in the implementation. However, in NDT, the modelling has to be realized for different positions of the sensor, thus
requiring a global remeshing of the problem domain. Different techniques to take
into account the movement of a sensor avoiding remeshing have been studied (see
e.g., [1]-[4]). The mortar element method (MEM), a variational non-conforming
domain decomposition approach [5, 6], offers attractive advantages in terms of
flexibility and accuracy. In its original version for non-overlapping subdomains,
the information is transferred through the skeleton of the decomposition by means
of a suitable L2 -projection of the field trace from the master to the slave subdomains. At the occasion of Enumath 2001, a MEM with overlapping subdomains
has been proposed to couple a global scalar potential defined everywhere in the
considered domain and a local vector potential defined only in (possibly moving)
conductors [7], and later applied to study electromagnetic brakes [8]. In this paper, a new FE-MEM able to deal with moving non-matching overlapping grids is
introduced, which realizes the bidirectional transfer of information between the
fixed subdomain (including the conductor and the air) and the moving one (represented by the sensor). The field source is in the moving part. This is indeed what
occurs in EC-NDT, as the inductive coils, fed with alternating current, move over
the conductors to detect possible defects on them (visible as a perturbation of the
EC distribution). Two numerical examples are presented to support the theory.
The first, an electrostatic problem with a known solution, states the optimality of the method. The second, an EC-NDT application, underlines the flexibility and efficiency of the proposed approach. This work has the financial support of
CEA-LIST.
References
[1] C.R.I. Emson, C.P. Riley, D.A. Walsh, K. Ueda, T. Kumano, “Modelling eddy
currents induced by rotating systems,” IEEE Trans. Mag., vol.34, No.5, pp.
2593-2596, 1998.
[2] S. Kurz, J. Fetzer, G. Lehner, W.M. Ricker, “A novel formulation for 3D
eddy current problems with moving bodies using a Lagrangian description
and BEM-FEM coupling,” IEEE Trans. Mag., vol.34, No.5, pp. 3068-3073,
1998.
[3] D. Rodger, H.C. Lai, P.J. Leonard, “Coupled elements for problems involving
movement,” IEEE Trans. Mag., vol.26, No.2, pp. 548-550, 1990.
76
[4] H. Zaidi, L. Santandrea, G. Krebs, Y. Le Bihan, E. Demaldent, “Use of Overlapping Finite Elements for Connecting Arbitrary Surfaces With Dual Formulations”, IEEE Trans. Mag., vol. 48, No. 2, pp. 583-586, 2012.
[5] C. Bernardi, Y. Maday, A. Patera, “A new non-Conforming approach to domain decomposition: the mortar element method”, Seminaire XI du College
de France, Brezis & Lions eds., in Nonlinear partial differential equations and
their applications, Pitman, pp. 13-51, 1994.
[6] B.I. Wohlmuth, “Discretization methods and iterative solvers based on domain
decomposition, Lecture Notes in Computational Science and Engineering, vol.
17, Springer, 2001.
[7] Y. Maday, F. Rapetti, B. I. Wohlmuth, “Mortar element coupling between
global scalar and local vector potentials to solve eddy current problems”, dans
“Numerical mathematics and advanced applications”, Enumath 2001 proc.,
Brezzi F. et al. eds., Springer-Verlag Italy (Milan) pp. 847–865, 2003.
[8] B. Flemisch, Y. Maday, F. Rapetti, B. I. Wohlmuth, “Scalar and vector potentials’ coupling on nonmatching grids for the simulation of an electromagnetic
brake”, COMPEL (Int. J. for Comp. and Math. in Electric and Electronic
Eng.), vol. 24, No. 3, pp. 1061-1070, 2005.
Joint work with F. Rapetti, L. Santandrea, G. Krebs, and Y. Le Bihan.
77
Konstantinos Chrysafinos
National Technical University of Athens, Greece, GR
Discontinuous time-stepping schemes for the velocity tracking problem under low
regularity assumptions
Minisymposium Session FEPD: Monday, 12:10 - 12:40, CO017
The velocity tracking problem for the evolutionary Stokes and Navier-Stokes flows
is examined. The scope of the optimal control problem under consideration is to
match the velocity vector field to a given target, using distributed controls. In
this talk, we present some results related to the analysis of suitable fully-discrete
schemes under low regularity assumptions on the given data of the prescribed flows.
The schemes are based on a discontinuous (in time) Galerkin approach combined with standard conforming (in space) finite elements. Error estimates for the state,
adjoint and control variables are presented in case of the evolutionary Stokes flows.
In addition, stability estimates and related convergence results are discussed for the
tracking problem related to Navier-Stokes flows under low regularity assumptions.
78
Ramon Codina
Universitat Politècnica de Catalunya, ES
Analysis of an unconditionally convergent stabilized finite element formulation for
incompressible magnetohydrodynamics
Minisymposium Session MMHD: Thursday, 10:30 - 11:00, CO017
In this work, we analyze a numerical formulation for the approximation of the incompressible visco-resistive magnetohydrodynamics (MHD) system, which models
incompressible viscous and electrically conducting fluids under electromagnetic
fields. Many conforming numerical approximations to this problem have been
proposed so far. There are different equivalent formulations of the continuous
magnetic sub-problem, namely saddle-point and (weighted) exact penalty formulations. The first one leads to a double-saddle-point formulation for the MHD
system. It is well-known that saddle-point formulations require choosing particular mixed FE spaces satisfying discrete versions of the so-called inf-sup conditions. Instead, a weighted exact penalty formulation simplifies implementation issues but introduces a new complication, the definition of the weight function.
Alternative formulations have been proposed for a regularized version of the system, based on an exact penalty formulation. These methods must be used with
caution, since they converge to spurious solutions when the exact magnetic field
is not smooth. Non-conforming approximations of discontinuous Galerkin type
have also been designed. They have good numerical properties, but the increase
in CPU cost (degrees of freedom) of these formulations (with respect to conforming formulations) is severe for realistic large-scale applications. Since the resistive
MHD system loses coercivity as the Reynolds and magnetic Reynolds numbers
increase, i.e. convection-type terms become dominant, the previous formulations
are unstable unless the mesh size is sufficiently refined, which is impractical.
In order to treat the problems described, some stabilized FE formulations have
been proposed for resistive MHD. These formulations are appealing in terms of
implementation issues, since arbitrary order Lagrangian FE spaces can be used
for all the unknowns and include convection-type stabilization. However, these
formulations are based on the regularized functional setting of the problem, and
so, restricted to smooth or convex domains. They are accurate for regular magnetic solutions but tend to spurious (unphysical) solutions otherwise. A further
improvement is the formulation we propose, which always converges to the exact
(physical) solution, even when it is singular. In this work, we carry out a numerical analysis of this formulation in order to prove stability and unconditional
convergence in the correct norms while keeping optimal a priori error estimates
for smooth solutions.
We first describe the MHD problem of interest and then recall the stabilized FE
formulation. We will then present a detailed stability and convergence analysis for
the stationary and linearized problem. The possible extension of these results to
the nonlinear problem will also be discussed.
Joint work with Santiago Badia, and Ramon Planas.
79
Albert Cohen
Université Pierre et Marie Curie, FR
Breaking the curse of dimensionality in sparse polynomial approximation of parametric PDEs
Minisymposium Session UQPD: Wednesday, 10:30 - 11:30, CO1
The numerical approximation of parametric partial differential equations D(u, y) =
0 is a computational challenge when the dimension d of the parameter vector y
is large, due to the so-called curse of dimensionality. It was recently shown that,
for a certain class of elliptic PDEs with diffusion coefficients depending on the
parameters in an affine manner, there exist polynomial approximations to the
solution map y 7→ u(y) with an algebraic convergence rate that is immune to the
growth in the parametric dimension d, in the sense that it holds in the case d = ∞.
This analysis is however heavily tied to the linear nature of the considered diffusion PDE and to the affine parameter dependence of the operator. The present
talk proposes a general strategy in order to establish similar results for parametric PDEs that do not necessarily fall in this category. Our approach is based on
building an analytic extension z 7→ u(z) of the solution map on certain tensor
product of ellipses in the complex domain, and using this extension to estimate
the Legendre coefficients of u. The varying radii of the ellipses in each coordinate
zj reflect the anisotropy of the solution map with respect to the corresponding
parametric variables yj . This allows us to derive algebraic convergence rates for
tensorized Legendre expansions in the case d = ∞. We also show that such rates
are preserved when using certain interpolation procedures, which is an instance
of a non-intrusive method. As examples of parametric PDE’s that are covered
by this approach, we consider (i) diffusion equations with uniformly elliptic coefficients that depend on y in a non-affine manner, (ii) nonlinear monotone elliptic
PDE’s with coefficients parametrized by y, and (iii) elliptic equations set on a
domain that is parametrized by the vector y. While for the first example (i) the
validity of the analytic extension follows by straightforward arguments, we give
general strategies that allow us to derive it in a simple abstract way for examples
(ii) and (iii), in particular based on the holomorphic version of the implicit function theorem in Banach spaces. We expect that this approach can be applied to
a large variety of parametric PDEs, showing that the curse of dimensionality can
be overcome under mild assumptions.
Joint work with Abdellah Chkifa, and Christoph Schwab.
80
Claudia Colciago
EPFL, CH
Reduced Order Models for Fluid-Structure Interaction Problems in Haemodynamics
Minisymposium Session ROMY: Thursday, 15:00 - 15:30, CO016
The modelling of the haemodynamics in an arterial vessel requires the coupling of
the equation of the blood flow and the one for the vessel wall movement through
suitable conditions. This type of problems has high computational costs in terms of
time and memory storage. Our aim is to provide a reduced order Fluid-Structure
Interaction model (FSI-ROM) which allows us to speed up the computations and, at
the same time, to lower the memory storage costs.
The FSI-ROM is based on two levels of reduction: we firstly perform a model
reduction and then a discretization one. In many cases, we are interested in the
blood flow dynamics in compliant vessels, whereas the displacement of the domain
is small and the structure dynamics is less relevant. In these situations, techniques
to reduce the complexity of the model can be used. In particular we focus our
attention on two sources of complexity that arise in a FSI problem. The first one is
represented by the time dependent fluid domain. A possible solution to overcome
this difficulty is using transpiration condition for the fluid model as surrogate for
the wall displacement, thus allowing to keep the domain fixed [2]. The second
source of complexity is the coupling between two different physical systems. We
choose to model the arterial wall as a thin membrane under specific assumptions
and, using suitable coupling conditions, we express the structural equation in terms
of the blood velocity. This strategy allows us to integrate the dynamics of the vessel motion in the fluid equations [1]. The resulting model is a Navier-Stokes system in
a fixed domain where the embedding of structural equation yields specific stiffness
integrals on the boundaries of the fluid domain.
The second level of reduction is achieved through the implementation of a Proper
Orthogonal Decomposition (POD) technique. Using a Galerkin projection, the
finite element discretization space is reduced to a low dimensional space that can
be solved in real time [5, 4].
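For illustration, the generic POD–Galerkin mechanics behind this second level of reduction can be sketched as follows, here on a linear stationary system rather than the nonlinear FSI model of the talk; the snapshot set, tolerance and test system are hypothetical placeholders.

    import numpy as np

    def pod_basis(snapshots, tol=1e-4):
        # POD basis from a snapshot matrix (columns = FE solutions at different times or
        # parameters) via a truncated SVD; keep enough modes to capture the prescribed
        # fraction of the snapshot energy.
        U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
        energy = np.cumsum(s**2) / np.sum(s**2)
        r = int(np.searchsorted(energy, 1.0 - tol)) + 1
        return U[:, :r]

    def pod_galerkin_solve(A, f, V):
        # Galerkin projection of a linear FE system A u = f onto the POD space V,
        # reduced solve, and lift back to the full FE space.
        return V @ np.linalg.solve(V.T @ A @ V, V.T @ f)

    # Hypothetical smoke test: snapshots of a parametrised linear system.
    rng = np.random.default_rng(0)
    A = np.eye(500) + 0.01 * rng.standard_normal((500, 500))
    rhs = lambda mu: np.sin(mu * np.linspace(0.0, 1.0, 500))
    S = np.column_stack([np.linalg.solve(A, rhs(mu)) for mu in np.linspace(1.0, 5.0, 20)])
    V = pod_basis(S)
    u_rb = pod_galerkin_solve(A, rhs(3.3), V)       # reduced-order approximation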
We apply the FSI-ROM on a realistic case of a femoropopliteal bypass where
patient-specific boundary conditions are imposed at the inlet and outlet sections.
We first compare the finite element solution of the reduced FSI model with the one
of a 3D-3D FSI model [3]. The reference finite element solution has 106 degrees of
freedom, while the thanks to the POD we end up with a reduced system of about
30 degrees of freedom.
References
[1] C. A. Figueroa et al., Comput. Methods in Applied Mechanics and Engineering
195 (2006).
[2] S. Deparis et al., ESAIM:Mathematical Modelling and Numerical Analysis 37
(2003).
[3] C. M. Colciago et al., submitted, 33 (2011).
[4] A.-L. Gerner et al., arXiv:1208.5010, (2012).
81
[5] L. Grinberg et al., Annals of Biomedical Engineering, 37 (2009).
Joint work with Simone Deparis, and Alfio Quarteroni.
82
Anais Crestetto
University Paul Sabatier Toulouse 3, FR
Coupling of an Asymptotic-Preserving scheme with the Limit model for highly
anisotropic-elliptic problems
Minisymposium Session ASHO: Wednesday, 12:00 - 12:30, CO2
We are interested in the numerical simulation of 2D highly anisotropic-elliptic
problems, like the ones encountered in strongly magnetized ionospheric plasmas.
The anisotropy is parameterized by ε and leads to a multiscale problem, called
Singular-Perturbation (SP) problem. In previous works of Degond et al.1 and
Besse et al.2 , an Asymptotic-Preserving (AP) reformulation was used in order to
obtain an accurate scheme, whatever the value of ε. This formulation is based
on the decomposition of the unknown u into its mean part along the anisotropy direction (corresponding to the z-axis) and a perturbation.
For the applications we consider, ε ≪ 1 in a large range of the computational
domain. In this part of the domain, we can assume that the solution does not
depend on the z-coordinate. That is why we propose a strategy for the spatial
coupling of the AP reformulation and its limit (L) model. The obtained AP-L
scheme, based on finite elements discretization, is practically available and accurate
in the whole domain. Moreover, its cost is reduced in the region where ε ≪ 1,
which increases the performance of the scheme.
We will present some numerical results, for which ε depends on z and presents a
high gradient. Our coupling will be compared (accuracy, cost) to the AP reformulation and the SP problem.
Joint work with Fabrice Deluzet, Jacek Narski, and Claudia Negulescu.
1 P. Degond, F. Deluzet, C. Negulescu, An Asymptotic Preserving scheme for strongly anisotropic elliptic
problem, SIAM-MMS (Multiscale Modeling and Simulation) (2010).
2 C. Besse, F. Deluzet, C. Negulescu, C. Yang, Efficient numerical methods for strongly anisotropic
elliptic equations, Journal of Scientific Computing (2012).
83
Nicolas Crouseilles
inria, FR
Asymptotic preserving schemes for highly oscillatory Vlasov-Poisson equations
Minisymposium Session ASHO: Wednesday, 11:00 - 11:30, CO2
This work is devoted to the numerical simulation of a Vlasov-Poisson model describing a charged particle beam under the action of a rapidly oscillating external
field. We construct an Asymptotic Preserving numerical scheme for this kinetic
equation in the highly oscillatory limit. This scheme enables us to simulate the problem without using any time step refinement technique. Moreover, since our numerical method is not based on the derivation and simulation of asymptotic
models, it works in the regime where the solution does not oscillate rapidly, and
in the highly oscillatory regime as well. Our method is based on a "two scale" reformulation of the initial equation, with the introduction of an additional periodic
variable.
Joint work with Mohammed Lemou, and Florian Méhats.
84
Raffaele D’Ambrosio
Department of Mathematics, University of Salerno, IT
Numerical solution of Hamiltonian systems by multi-value methods
Contributed Session CT1.4: Monday, 17:30 - 18:00, CO015
The recent literature regarding geometric numerical integration of ordinary differential equations has given special emphasis to the use of multi-value methods:
in particular, some efforts have been addressed to the construction of general linear methods with the aim of achieving an excellent long-time behavior for the
integration of Hamiltonian systems.
In this talk we present the analysis and derivation of G-symplectic and symmetric multi-value methods with zero growth parameter for the parasitic components
and test their effectiveness on a selection of Hamiltonian problems. A backward
error analysis is also presented, which permits to get sharp estimates for the parasitic solution components and for the error in the Hamiltonian. For carefully
constructed methods (symmetric and zero growth parameters) the error in the
parasitic components typically grows like h^(p+4) exp(h^2 L t), where p is the order of
the method, and L depends on the problem and on the coefficients of the method.
This is confirmed by numerical experiments.
References
[1] J. C. Butcher, R. D’Ambrosio, Partitioned general linear methods for separable Hamiltonian problems, submitted.
[2] R. D’Ambrosio, G. De Martino, B. Paternoster, Numerical integration of
Hamiltonian problems by G-symplectic integrators, submitted.
[3] R. D’Ambrosio, B. Paternoster, Long-term stability of multi-value methods
for ordinary differential equations, submitted.
85
Hogenrich Damanik
TU Dortmund, DE
A multigrid LCR-FEM solver for viscoelastic fluids with application to problems
with free surface
Minisymposium Session MANT: Tuesday, 12:00 - 12:30, CO017
In this talk, we shall discuss discretization and solution approaches for viscoelastic
fluid flow problems. We consider viscoelastic models which are based on upper-convective differential forms. The proposed numerical method is a combination
between good remodeling of the upper-convected viscoelastic models, strong discretization techniques and efficient solvers.
More specific, we use the Log-Conformation Reformulation (LCR) to remodel the
upper-convected viscoelastic materials, which is able to capture high stress gradients with exponential growth at critical Weissenberg numbers. The LCR technique
separates the velocity gradient into translational and rotational matrices. This allows taking the logarithm of the conformation stress in the original equation.
Thus, the LCR technique preserves the positivity of the conformation tensor inside, and the conformation tensor can be easily obtained by taking the exponential
of the LCR components.
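The change of variable at the heart of the LCR can be illustrated pointwise on a symmetric positive definite conformation tensor: one works with Ψ = log C and recovers C = exp Ψ, which is positive definite by construction. The sketch below shows only this log/exp pair via an eigendecomposition, not the reformulated transport equation or its FEM discretization.

    import numpy as np

    def log_conformation(C):
        # Matrix logarithm of the SPD conformation tensor via its eigendecomposition
        w, Q = np.linalg.eigh(C)
        return Q @ np.diag(np.log(w)) @ Q.T

    def conformation_from_log(Psi):
        # Recover C = exp(Psi); the result is SPD for any symmetric Psi
        w, Q = np.linalg.eigh(Psi)
        return Q @ np.diag(np.exp(w)) @ Q.T

    # Hypothetical check on a 2x2 SPD tensor
    C = np.array([[2.0, 0.5], [0.5, 1.0]])
    assert np.allclose(conformation_from_log(log_conformation(C)), C)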
We apply high order biquadratic conforming FEM (Q2) to discretize the viscoelastic models given in their LCR form. This Q2 element with discontinuous pressure
element (P1) is, in our experience, one of the best choices for velocity-pressure
spaces and satisfies the LBB condition. Unfortunately, we face a similar stability condition for the velocity-stress approximation. As a remedy, we penalize the discrete system with a jump term over the element edges, which is known as edge-oriented FEM. This provides stable discrete systems.
Next, we solve the full discrete system in a monolithically coupled way. Here, all unknowns are solved for simultaneously by a Newton iteration. Inside one Newton
step, a geometric multigrid solver takes care of the linearized discrete systems.
In our case, the multigrid solver uses a full Vanka-smoother together with the
Q2P1 canonical grid transfer routines. The local systems inside the smoother are solved by a direct solver for small matrices. This preserves the fully coupled character of viscoelastic problems.
Finally, we present mesh convergence studies for a well-known flow around cylinder benchmark configuration to validate our methodology, and present interesting
viscoelastic flow applications with multiphysics character, including multiphase
flow problems and film casting, see figures.
Keywords: Viscoelastic, LCR, FEM, Multigrid, Film casting, rising bubble.
86
Figure 1: Film casting configuration
Figure 2: Rising bubbles surrounded by different fluids
Joint work with Dr. Otto Mierka, Dr. Abderrahim Ouazzi, and Prof. Dr. Stefan
Turek.
87
Alexander Danilov
Institute of Numerical Mathematics of Russian Academy of Sciences, RU
Numerical simulation of large-scale hydrodynamic events
Minisymposium Session FREE: Monday, 16:00 - 16:30, CO2
We present basic components of the computational technology for the simulation of
complex hydrodynamic events, such as a break of a dam, a wave pileup and run-up,
a landslide, or a mud flow. The mathematical model is based on the Navier-Stokes equations and a transport equation for the level-set function. The relation between the stress tensor and the rate-of-strain tensor may be nonlinear, which results in
non-Newtonian flows. The numerical method uses adaptively refined octree meshes
and the finite volume discretization of the differential equations. The efficiency of
the technology is illustrated by simulations of hydrodynamic events in areas with
realistic 3D topography.
Yu. Vassilevski, K. Nikitin, M. Olshanskii, K. Terekhov. CFD technology for 3D simulation of large-scale hydrodynamic events and disasters. Russ. J. Numer. Anal. Math. Model. 27(4) (2012), 399-412.
Joint work with K. Nikitin, and K. Terekhov.
88
Mark Davenport
Georgia Institute of Technology, US
One-Bit Matrix Completion
Minisymposium Session ACDA: Monday, 14:30 - 15:00, CO122
In this talk I will describe a theory of matrix completion for the extreme case of
noisy 1-bit observations. Instead of observing a subset of the real-valued entries of
a matrix M , we obtain a small number of binary (1-bit) measurements generated
according to a probability distribution determined by the real-valued entries of
M . The central question I will discuss is whether or not it is possible to obtain an
accurate estimate of M from this data. In general this would seem impossible, but
we show that the maximum likelihood estimate under a suitable constraint returns
an accurate estimate of M under certain natural conditions. If the log-likelihood
is a concave function (e.g., the logistic or probit observation models), then we can
obtain this estimate by optimizing a convex program.
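A minimal sketch of the observation model and of a constrained maximum-likelihood recovery follows; the logistic link, the nuclear-norm ball enforced by projected gradient steps, and all sizes and step lengths are illustrative assumptions, not necessarily the exact setting of the talk.

```python
# Hedged sketch: 1-bit observations Y_ij generated through a logistic link of
# the entries of a low-rank matrix M, followed by projected-gradient maximum
# likelihood over a nuclear-norm ball.  All parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
d, r = 60, 3
M = rng.standard_normal((d, r)) @ rng.standard_normal((r, d)) / np.sqrt(r)

sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))
mask = rng.random((d, d)) < 0.5                              # observed entries
Y = np.where(rng.random((d, d)) < sigmoid(M), 1.0, -1.0)     # noisy 1-bit data

def project_nuclear_ball(X, radius):
    # Euclidean projection onto {X : ||X||_* <= radius} via an l1-ball
    # projection of the singular values.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    if s.sum() <= radius:
        return X
    u = np.sort(s)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(u) + 1) > css - radius)[0][-1]
    theta = (css[rho] - radius) / (rho + 1)
    return U @ np.diag(np.maximum(s - theta, 0.0)) @ Vt

X = np.zeros((d, d))
for _ in range(300):
    # gradient of the negative log-likelihood over the observed 1-bit entries
    grad = -mask * Y * sigmoid(-Y * X)
    X = project_nuclear_ball(X - 0.5 * grad, radius=np.sqrt(r) * d)
print("relative recovery error:", np.linalg.norm(X - M) / np.linalg.norm(M))
```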
Joint work with Yaniv Plan, Ewout van den Berg, and Mary Wootters.
89
Raúl de la Cruz
Barcelona Supercomputing Center, ES
Unveiling WARIS code, a parallel and multi-purpose FDM framework
Minisymposium Session PARA: Monday, 15:00 - 15:30, CO016
WARIS is an in-house multi-purpose framework focused on solving scientific problems using Finite Difference Methods (FDM) as the numerical scheme. The framework was designed from scratch to solve Earth Science and Computational Fluid Dynamics problems in a parallel and efficient way across a wide variety of architectures. Structured meshes are employed to represent the problem domains, since they are better suited to optimization on accelerator-based architectures. To succeed in this challenge, the WARIS framework was designed from the outset to be modular in order to ease development cycles, portability, reusability and future extensions of the framework.
Our framework is composed of two primary systems, the Physical Simulator Kernel
(PSK) and the Workflow Manager (WM). The PSK system is in charge of providing the spatial and temporal discretization scheme code for the simulated physics.
Its aim is also to provide a base for the specialization of physical problems (e.g. Advection-Diffusion-Reaction, Navier-Stokes governing equations) on any forthcoming architecture (e.g. general purpose processors, GPGPUs, Intel Xeon Phi).
So, this module is basically a template that provides the appropriate framework
for implementing a specific simulator. As a consequence, flexibility in design must
be attained to let the specialization accommodate any kind of physics by reusing
as much code as possible. This approach will minimize the development cycle by
reducing the code size and the debugging efforts.
In order to provide such a system, the PSK is divided into two components: the host and the device. The former is the part of the framework responsible for the issues common to any simulator kernel, such as domain decomposition, neighbor
communications and I/O operations (PSK framework subsystem). The latter is
composed of a set of specializations that are used to configure the framework in
order to have a functional simulator (PSK Specialization subsystem). The specialization framework may depend on many aspects, such as the physical problem
to simulate, the target hardware platform and the numerical method (explicit,
implicit, low-order or high-order accuracy schemes).
Host and device components are interrelated through the computational architecture model of the PSK. Figure 1 shows the Computational Node (CN) concept considered in this model and the relation between its components. A CN is composed of host and device
components (computational elements), which are attached to a common address
space (CAS) memory. The host component may be assigned to a general purpose
processor that runs the PSK framework subsystem. On the other hand, the device
component can be assigned either to a general purpose processor or to an accelerator device (e.g. GPGPU, Intel Xeon Phi, FPGA or any other specific processor) running the specialized functions of the PSK Specialization subsystem. The communication among all the computational elements in the same CN is conducted through the CAS memory, whereas the MPI standard is used for communication between CNs over the network.
In addition, the WM system is in charge of providing a framework that allows several physical problems to be processed in parallel. This framework includes all the necessary components to provide a distributed application, in the sense that it is capable of processing several independent problems using different computational nodes. This
90
approach makes it possible to tackle a statistical study or an optimal parameter search in a massively parallel way for a given physical problem. The designed system architecture implements a Master-Worker pattern. The Master node manages, schedules and commands the sets of Workers, which are allocated on several CNs, by assigning them new tasks; fault-tolerance features are also included. Each set of Workers then runs a specific PSK instance with a given input parameter configuration for a physical problem. The Workers' executions can include a kernel computation or a data post-processing step. Finally, at the end of the execution, the Master collects all the information left by the Workers as the result of the computation.
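A minimal sketch of such a Master-Worker dispatch loop, written with mpi4py, is given below; the task contents, the run_simulation placeholder and the message tags are hypothetical stand-ins for the actual WARIS interfaces.

```python
# Hedged sketch of a Master-Worker dispatch loop with mpi4py.
from mpi4py import MPI

TASK, STOP = 1, 2
comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

def run_simulation(params):
    # placeholder for one PSK kernel run with a parameter configuration
    return {"params": params, "result": sum(params)}

if rank == 0:                                    # Master
    tasks = [[i, i + 1] for i in range(10)]      # parameter configurations
    results, active = [], 0
    for w in range(1, size):                     # seed every worker
        if tasks:
            comm.send(tasks.pop(), dest=w, tag=TASK); active += 1
        else:
            comm.send(None, dest=w, tag=STOP)
    while active:
        status = MPI.Status()
        results.append(comm.recv(source=MPI.ANY_SOURCE, tag=MPI.ANY_TAG,
                                 status=status))
        src = status.Get_source()
        if tasks:
            comm.send(tasks.pop(), dest=src, tag=TASK)
        else:
            comm.send(None, dest=src, tag=STOP); active -= 1
    print(len(results), "tasks completed")
else:                                            # Worker
    while True:
        status = MPI.Status()
        task = comm.recv(source=0, tag=MPI.ANY_TAG, status=status)
        if status.Get_tag() == STOP:
            break
        comm.send(run_simulation(task), dest=0, tag=TASK)
```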
To summarize, the WARIS framework has shown appealing capabilities by providing successful support for scientific problems using FDM. In the foreseeable future, as the amount of computational resources increases, more sophisticated physics may be simulated. Furthermore, the framework supports a wide range of hardware platforms. Therefore, as the computational race keeps the hardware changing every day, support for the specific platforms that give the best performance results will be supplied for the different simulated physics.
Figure 1: Architecture Model that interrelates the components of Computational
Nodes.
Joint work with Mauricio Hanzich, Arnau Folch, Guillaume Houzeaux, and José
Maria Cela.
91
Kristian Debrabant
University of Southern Denmark, Department of Mathematics and Computer Science, DK
Monotone approximations for Hamilton-Jacobi-Bellman equations
Minisymposium Session NMFN: Monday, 11:10 - 11:40, CO2
In this talk we consider the numerical solution of diffusion equations of Hamilton-Jacobi-Bellman type
$$u_t - \inf_{\alpha\in\mathcal{A}}\left\{ L^\alpha[u](t,x) + c^\alpha(t,x)\,u + f^\alpha(t,x) \right\} = 0 \quad \text{in } (0,T]\times\mathbb{R}^N,$$
$$u(0,x) = g(x) \quad \text{in } \mathbb{R}^N,$$
where
$$L^\alpha[u](t,x) = \mathrm{tr}\!\left[a^\alpha(t,x)\,D^2 u(t,x)\right] + b^\alpha(t,x)\,D u(t,x).$$
The solution of such problems can be interpreted as value function of a stochastic
control problem. We introduce a class of monotone approximation schemes relying
on monotone interpolation. Besides providing a unifying framework for several
known first order accurate schemes, the presented class of schemes includes new
first and higher order approximation methods. Some stability and convergence
results are given, as well as numerical examples.
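As a concrete illustration, the following minimal one-dimensional sketch follows the semi-Lagrangian construction of [1] under assumed coefficients; the control set, the data and the simplified boundary treatment (constant extrapolation through np.interp) are illustrative, and the linear interpolation is what keeps the step monotone.

```python
# Hedged sketch of a first-order monotone semi-Lagrangian step for a 1D
# instance of the HJB equation above; all problem data are illustrative.
import numpy as np

x = np.linspace(-2.0, 2.0, 201)
dt, T = 0.002, 0.5
g = lambda x: np.minimum(x**2, 1.0)                 # initial datum
controls = [  # (a, b, c, f) for L^a u = a u_xx + b u_x, plus c u + f
    (0.5,  1.0, 0.0, lambda x: 0.0 * x),
    (0.2, -1.0, 0.0, lambda x: 0.1 + 0.0 * x),
]

def sl_generator(u, a, b):
    # Two-point quadrature of E[u(x + b dt + sqrt(2 a dt) Z)] with linear
    # interpolation approximates u + dt (a u_xx + b u_x); divide by dt.
    up = np.interp(x + b * dt + np.sqrt(2 * a * dt), x, u)
    dn = np.interp(x + b * dt - np.sqrt(2 * a * dt), x, u)
    return (0.5 * (up + dn) - u) / dt

u = g(x)
for _ in range(int(T / dt)):
    candidates = [u + dt * (sl_generator(u, a, b) + c * u + f(x))
                  for (a, b, c, f) in controls]
    u = np.min(candidates, axis=0)                  # inf over the control set
print("value at x = 0:", u[np.argmin(np.abs(x))])
```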
References
[1] Kristian Debrabant and Espen Robstad Jakobsen. Semi-Lagrangian schemes
for linear and fully non-linear diffusion equations. Math. Comp., 82(283):1433–
1462, 2013.
Joint work with Espen Robstad Jakobsen.
92
Luca Dede
CMCS-MATHICSE-Ecole Polytechnique Federale de Lausanne, CH
Numerical approximation of Partial Differential Equations on surfaces by Isogeometric Analysis
Minisymposium Session GEOP: Wednesday, 12:00 - 12:30, CO122
We consider the numerical approximation of Partial Differential Equations (PDEs)
on surfaces by means of Isogeometric Analysis, an approximation method based
on the isoparametric concept for which the basis functions used to represent the
computational domain are then used for the approximation of the unknown solutions of the PDEs (Hughes, Cottrell, Bazilevs, Comput. Methods Appl. Mech.
Eng. 2005). The method facilitates the encapsulation of the exact geometrical
description of lower dimensional manifolds in the analysis when these are represented by B–splines or NURBS. In particular, since NURBS can represent a wide range of geometries, including conic sections, we consider the approximation
of the PDEs on surfaces by means of NURBS–based Isogeometric Analysis.
In this work, we solve linear, nonlinear, time dependent, and geometric PDEs involving the second order Laplace-Beltrami and high-order operators on surfaces.
Moreover, we propose a priori error estimates under h–refinement which confirm
the accuracy properties of the NURBS–based Isogeometric Analysis.
Joint work with Alfio Quarteroni.
93
Ekaterina Dementyeva
Institute of Computational Modeling SB RAS, RU
The Inverse Problem of a Boundary Function Recovery by Observation Data for
the Shallow Water Model
Contributed Session CT3.1: Thursday, 17:00 - 17:30, CO1
Shallow water models adequately describe a large class of natural phenomena
such as large-scale free surface waves arising in seas and oceans, tsunamis, flood
currents, surface and channel run-offs, and gravitational oscillations of the ocean surface.
In this paper the problem of long-wave propagation in a large water area is considered. The mathematical model of the shallow water equations on a spherical
surface is used. The boundary of the numerical domain consists of a coastline (“hard”) part and an open-water (“liquid”) part. In the general case the influence of the ocean through the open-water part of the boundary is uncertain. Therefore, at the “liquid” part of the boundary the boundary conditions contain a special unknown function d which should be determined together with the velocity and the free surface level. Thus, the
ill-posed inverse problem of reconstruction of the boundary function is considered.
To solve this problem we use additional information, e.g. observations of the free surface level on a part of the “liquid” boundary. We investigate three different approaches to regularization of our ill-posed problem using adjoint operators and optimal control theory. The advantages and disadvantages of each regularizer, each of which uses a different norm for the search space of d, are investigated. As a result, the numerical solution of the inverse problem is an iterative process alternating solutions of the direct and adjoint equations with a refinement equation for d. The differential problems are reduced to
algebraic ones by the finite element method.
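To make the role of the regularizing norm concrete, the sketch below fits an unknown boundary-type function to data through a generic linear forward operator and compares an L² penalty with an H¹ penalty; the forward operator, data, noise level and regularization weight are illustrative assumptions, not the shallow-water model itself.

```python
# Hedged sketch: regularized recovery of a 1D function d from smoothed,
# noisy observations; only the choice of the penalty norm is illustrated.
import numpy as np

n = 100
s = np.linspace(0, 1, n)
d_true = np.sin(2 * np.pi * s) * np.exp(-2 * s)

# illustrative smoothing forward operator and noisy observations
A = np.exp(-80.0 * (s[:, None] - s[None, :])**2)
A /= A.sum(axis=1, keepdims=True)
xi_obs = A @ d_true + 0.02 * np.random.default_rng(1).standard_normal(n)

h = s[1] - s[0]
D = (np.eye(n, k=1) - np.eye(n)) / h     # forward difference, for the H1 term

def recover(lam, use_h1):
    # minimize ||A d - xi||^2 + lam * (||d||^2  [+ ||d'||^2 if H1])
    R = np.eye(n) + (D.T @ D if use_h1 else 0.0)
    return np.linalg.solve(A.T @ A + lam * R, A.T @ xi_obs)

for use_h1 in (False, True):
    d_rec = recover(1e-3, use_h1)
    print("H1" if use_h1 else "L2", "relative error:",
          np.linalg.norm(d_rec - d_true) / np.linalg.norm(d_true))
```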
Numerical experiments on data recovery are carried out for the Sea of Okhotsk region. We use model observation data of different smoothness: smooth, with white noise (Fig. 1 a), and with gaps (Fig. 2 a). Some results of the numerical recovery of the unknown boundary function d are shown in Figs. 1 and 2.
Parallel software using MPI and OpenMP technologies is developed. Considerable
attention is paid to the description of effective parallel numerical algorithms based
on MPI.
The work was supported by Russian Foundation of Fundamental Researches (grant
11-01-00224-a) and by SB RAS (Project 130).
94
Figure 1: Recovery of the function d along one of the boundaries of the numerical domain from noisy observation data. Fig. a) – model observation data of the free surface level ξ: 1 – smooth, 2 – “noisy”. Iteration processes of the recovery of d: b) $d \in L^2(\Gamma)$; c) $d \in H^1(\Gamma)$; d) $d \in H^{1/2}(\Gamma)$. “0” is the exact d.
Figure 2: Recovery of the function d along one of the boundaries of the numerical domain from observation data with gaps. Fig. a) – model observation data of the free surface level ξ: 1 – smooth, 2 – “noisy”. Iteration processes of the recovery of d: b) $d \in L^2(\Gamma)$; c) $d \in H^1(\Gamma)$; d) $d \in H^{1/2}(\Gamma)$. “0” is the exact d.
Joint work with Karepova Evgeniya, and Shaidurov Vladimir.
95
Dennis den Ouden
Materials innovation institute, NL
Application of the level-set method to a multi-component Stefan problem
Contributed Session CT2.9: Tuesday, 15:00 - 15:30, CO124
This study focuses on the dissolution and growth of small particles within a matrix phase. The interface between the particle and the matrix phase can have a
non-smooth shape. The dissolution or growth of the particle is assumed to be
affected by concentration gradients of several chemical elements within the matrix
phase at the particle/matrix boundary and by an interface reaction, resulting in
a mixed-mode formulation. The mathematical formulation of the dissolution is
described by a Stefan problem, in which the location of the interface changes in
time. At the interface two conditions are present for each chemical element, one
governs the mass balance at the interface and results in an equation of motion, and another condition describes the reaction at the interface, which results in a
Robin boundary condition. Within the matrix phase we assume that the standard diffusion equation applies to the concentration of the considered chemical
elements.
The formulated Stefan problem is solved using a level-set method by introducing a
time-dependent signed-distance function for which the zero-level contour describes
the particle/matrix interface. The evolution of this signed distance function is
described by a standard convection equation in which the convection speed is
derived from the interface velocity. To ensure the signed-distance property of the
level-set function we employ a novel PDE-free technique for reinitialization of the
level-set function.
The equilibrium concentrations of all chemical elements are influenced by each
other and the solutions to the moving boundary problem using local equilibria.
This leads to a highly non-linear root-finding problem which is solved using an
adapted form of Broyden's Method, which for our problem minimises the number of function calls. Furthermore, the number of function calls in the first iteration, which are used in the estimation of the Jacobian matrix, is independent of the mesh size.
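A minimal sketch of a Broyden iteration of the kind referred to above is given below; the rank-one Jacobian update means that only one residual evaluation is needed per iteration after the initial Jacobian estimate. The two-equation test system is a hypothetical stand-in for the actual local-equilibrium relations.

```python
# Hedged sketch of Broyden's ("good") method with a rank-one Jacobian update.
import numpy as np

def broyden(F, x0, J0, tol=1e-10, maxit=50):
    x, J = x0.astype(float), J0.astype(float)
    Fx = F(x)
    for _ in range(maxit):
        dx = np.linalg.solve(J, -Fx)
        x_new = x + dx
        F_new = F(x_new)
        # rank-one correction so that the updated J maps dx to F_new - Fx
        J += np.outer(F_new - Fx - J @ dx, dx) / (dx @ dx)
        x, Fx = x_new, F_new
        if np.linalg.norm(Fx) < tol:
            break
    return x

F = lambda c: np.array([c[0]**2 * c[1] - 1.0, c[0] + c[1] - 2.5])  # toy system
x = broyden(F, np.array([1.0, 1.0]), np.eye(2))
print(x, F(x))
```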
Both the convection equation for the signed-distance function and the diffusion
equations are discretised by the use of finite-element techniques. The convection
equations for evolution of the signed-distance function are solved on a pre-defined
grid using a Streamline Upwind Petrov Galerkin finite-element method. The diffusion equations are solved on a part of the pre-defined grid, which is determined by
the negative value of the signed-distance function. The convection equation for the
computation of the convection speed is solved on the pre-defined grid which is enriched with extra nodes, which are located on the zero-level of the signed-distance
function.
Simulations with the implemented methods for the dissolution and growth of various particle shapes show that the methods employed in this study correctly capture
the evolution of the particle/matrix interface, especially for non-smooth interfaces
and breaking and merging of particles. In the early stages of dissolution and growth
the results show that our numerical methods are in good agreement with analytical
similarity solutions, whereas at later stages of growth and dissolution physical equilibrium is attained. We have also seen that our solutions show mass conservation
when we let the time-step and mesh-coarseness tend to zero. The most important
innovation is the extension of the existing method [1] to the simulation of growth
96
and dissolution of particles in multi-component alloys.
References
[1] D. den Ouden, A. Segal, F.J. Vermolen, L. Zhao, C. Vuik & J. Sietsma,
Application of the level-set method to a mixed-mode driven Stefan problem
in 2D and 3D, Computing, (2012), DOI:10.1007/s00607-012-0247-3
Joint work with D. den Ouden, A. Segal, F.J. Vermolen, L. Zhao, C. Vuik, and J.
Sietsma.
97
Simone Deparis
CMCS - MATHICSE - EPFL, CH
On the continuity of flow rates, stresses and total stresses in geometrical multiscale
cardiovascular models
Minisymposium Session SMAP: Monday, 11:40 - 12:10, CO015
After a short review of geometric multiscale modeling for the cardiovascular system, we present an algorithm for the implicit coupling of averaged or mean quantities over the interface between models.
The most common ones are the flow rate and the average stress. Recently it has been pointed out that the latter should be replaced by the average total stress. Indeed,
conservation of mean total normal stress in the coupling of heterogeneous models
is mandatory to satisfy energetic consistency between them. Existing methodologies are based on modifications of the Navier–Stokes variational formulation,
which are undesired when dealing with fluid-structure interaction or black box
codes. The presented methodology makes it possible to couple one-dimensional and
three-dimensional fluid-structure interaction models, enforcing the continuity of
mean total normal stress while just imposing flow rate data or even the classical
Neumann boundary data to the models. This is accomplished by modifying an
existing iterative algorithm, which is also able to account for the continuity of the
vessel area, whenever required. Comparisons are performed to assess differences
in the convergence properties of the algorithms when considering the continuity
of mean normal stress and the continuity of mean total normal stress for a wide
range of flow regimes. Finally, examples in the physiological regime are shown
to evaluate the importance, or not, of considering the continuity of mean total
normal stress in hemodynamics simulations.
98
Bruno Despres
UPMC-LJLL, FR
Uniform convergence of Asymptotic Preserving schemes on general meshes
Minisymposium Session ASHO: Wednesday, 10:30 - 11:00, CO2
Diffusion Asymptotic Preserving schemes on general multi-dimensional meshes display many difficulties, some of which are technical and some of which are fundamental. The salient one is the structure of the mesh, which generates a distortion of the A.P. properties of the 1D scheme: in practice the multi-dimensional scheme can become non A.P. This topic
has been investigated recently in a joint work with Buet and Franck. In this
context uniform estimates of convergence are mandatory to assess the A.P. feature:
uniform means uniform with respect to the stiffness parameter and the mesh size.
I will explain the problem for diffusion A.P. schemes for the hyperbolic heat equation in 2D, and show a new error estimate. This estimate is finally incorporated
in the standard strategy of proof.
Joint work with C. Buet (CEA), E. Franck (Max-Planck), and T. Leroy (CEA, PhD student).
99
Daniele Di Pietro
Université de Montpellier 2, FR
A generalization of the Crouzeix–Raviart and Raviart–Thomas spaces with applications in subsoil modeling
Contributed Session CT3.4: Thursday, 17:30 - 18:00, CO015
In the context of industrial simulators, lowest-order methods capable of handling
general polygonal or polyhedral meshes have received increasing attention over
the last few years. The use of general elements may ease the discretization of
complex domains, allow the use of nonconforming h-adaptivity, and is mandatory
whenever the user cannot adapt the mesh to the needs of their numerical scheme.
This is the case, e.g., in computational geosciences, where the discretization of
the subsoil aims at integrating data from the seismic analysis. As a result, fairly
general meshes can be encountered, possibly featuring nonmatching interfaces or
degenerated elements in eroded layers. Polyhedral elements may also be used in
near wellbore regions to exploit (qualitative) a priori knowledge of the solution.
Among the methods that have appeared in recent years, we recall the Mimetic
Finite Difference method of [Brezzi et al.(2005)], the Mixed/Hybrid Finite Volume
(HFV) methods of [Eymard et al. (2010)], and the cell centered Galerkin (ccG)
method of [Di Pietro (2012)].
The main result of the present work is to show how ideas from HFV and ccG
methods can be combined to construct a discrete space of piecewise affine functions
which extends two key properties of the classical Crouzeix–Raviart space to general
meshes, namely,
1. the continuity of mean values at interfaces;
2. the existence of an interpolator which preserves the mean value of the gradient inside each element and ensures optimal approximation properties.
For H a set of positive mesh sizes having 0 as its unique accumulation point, let (Kh)h∈H denote an admissible mesh family in the sense of Di Pietro (2012). In the
spirit of Cell Centered Galerkin (ccG) methods, the discrete space is constructed
in three steps:
1. we fix the vector space Vh of face- and cell-centered degrees of freedom
(DOFs) on Kh ;
2. we define a discrete gradient reconstruction operator Gh acting on Vh . The
reconstructed gradient results from the sum of two terms: a consistent part
depending on face unknowns only and a subgrid correction involving both
face- and cell-centered DOFs. The continuity of mean values at interfaces is
ensured by finely tuning the latter contribution;
3. we define an affine reconstruction operator Rh acting on Vh which maps every vector of DOFs to a broken affine function obtained by an affine perturbation of face unknowns based on the discrete gradient. The discrete
space is then defined as
CRg(Kh ) = Rh (Vh ).
An important point is that all the relevant geometric information is computed
on the mesh Kh , which is therefore the only one that needs to be described and
100
manipulated by the end-user. Similar ideas can be used to construct an H(div; Ω)-conforming discrete space on general meshes which mimics two key properties of
the lowest-order Raviart–Thomas space on matching simplicial meshes, namely
the (full) continuity of normal values at interfaces and the optimal approximation
of vector-valued fields.
The generalized Crouzeix–Raviart space is used to construct a locking-free primal
discretization of the linear elasticity equations inspired by the method of [Brenner and Sung (1992)].
In the context of linear elasticity, locking refers to the loss of accuracy of the lowest-order Lagrange finite elements when dealing with quasi-incompressible materials
for which Poisson’s ratio tends to 1/2. Numerical examples showing the robustness
of the proposed method with respect to numerical locking are provided based on
different mesh sequences including highly distorted and general polygonal ones.
References
[Brenner and Sung (1992)] S. C. Brenner and L.-Y. Sung. Linear finite element
methods for planar linear elasticity. Math. Comp., 59(200):321–338, 1992.
[Brezzi et al.(2005)] F. Brezzi, K. Lipnikov, and M. Shashkov. Convergence of the
mimetic finite difference method for diffusion problems on polyhedral meshes.
SIAM J. Numer. Anal., 43(5):1872–1896, 2005.
[Di Pietro (2012)] D. A. Di Pietro. Cell centered Galerkin methods for diffusive
problems. M2AN Math. Model. Numer. Anal., 46(1):111–144, 2012.
[Eymard et al. (2010)] R. Eymard, T. Gallouët, and R. Herbin. Discretization of
heterogeneous and anisotropic diffusion problems on general nonconforming
meshes. SUSHI: a scheme using stabilization and hybrid interfaces. IMA J.
Numer. Anal., 30(4):1009–1043, 2010.
Joint work with Simon Lemaire.
101
Gabriel Dimitriu
"Grigore T. Popa" University of Medicine and Pharmacy, RO
POD-DEIM Approach on Dimension Reduction of a Multi-Species Host-Parasitoid
System
Contributed Session CT3.5: Thursday, 17:00 - 17:30, CO016
The reduced-order approach is based on projecting the dynamical system onto subspaces consisting of basis elements that contain characteristics of the expected solution. Currently, Proper Orthogonal Decomposition (POD) is probably the most widely used and most successful model reduction technique, where the basis functions
contain information from the solutions of the dynamical system at pre-specified
time-instances, so-called snapshots. Due to a possible linear dependence or almost linear dependence, the snapshots themselves are not appropriate as a basis.
Hence a singular value decomposition is carried out and the leading generalized
eigenfunctions are chosen as a basis, referred to as the POD basis.
Unfortunately, for nonlinear PDEs, the efficiency in solving the reduced-order
systems constructed from standard Galerkin projection with any reduced globally
supported basis set, including the one from POD, is limited to the linear or bilinear
part, both for finite element and finite difference schemes, since nonlinear terms still
require calculation on the full dimensional model.
A considerable reduction in complexity is achieved by DEIM – a discrete variant of the Empirical Interpolation Method (EIM), proposed by Barrault, Maday, Nguyen and Patera in: An “empirical interpolation” method: Application to efficient reduced-basis discretization of partial differential equations, C. R. Math. Acad. Sci. Paris,
339 (2004), 667–672. According to this method, the evaluation of the approximate nonlinear term does not require a prolongation of the reduced state variables
back to the original high dimensional state approximation required to evaluate the
nonlinearity in the POD approximation.
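Both ingredients can be sketched in a few lines: a POD basis extracted from a snapshot matrix by an SVD, and the greedy DEIM index selection applied to that basis; the random snapshot data and the 99% energy criterion are illustrative assumptions, not the host-parasitoid model itself.

```python
# Hedged sketch: POD basis by SVD of a snapshot matrix, and greedy DEIM
# index selection.  The snapshot data are random stand-ins.
import numpy as np

rng = np.random.default_rng(0)
S = rng.standard_normal((500, 40))           # snapshots: state dim x n_snapshots

# POD: leading left singular vectors of the snapshot matrix
U, sing, _ = np.linalg.svd(S, full_matrices=False)
k = np.searchsorted(np.cumsum(sing**2) / np.sum(sing**2), 0.99) + 1
Phi = U[:, :k]                               # POD basis capturing 99% energy

def deim_indices(Ubasis):
    # greedy DEIM point selection: pick the location of the largest residual
    # of each new basis vector against the previously selected points
    m = Ubasis.shape[1]
    p = [int(np.argmax(np.abs(Ubasis[:, 0])))]
    for l in range(1, m):
        c = np.linalg.solve(Ubasis[p, :l], Ubasis[p, l])
        r = Ubasis[:, l] - Ubasis[:, :l] @ c
        p.append(int(np.argmax(np.abs(r))))
    return np.array(p)

idx = deim_indices(Phi)
print("reduced dimension:", k, "first DEIM points:", idx[:5])
```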
In this study we carry out an application of DEIM combined with POD to provide dimension reduction of a model that focuses on the aggregative response of
parasitoids to hosts in a coupled multi-species system comprising two parasitoid
species, two host species and a chemoattractant. The model defined by a system of
five reaction-diffusion-chemotaxis equations was introduced by I.G. Pearce, M.A.J.
Chaplain, P.G. Schofield, A.R.A. Anderson and S.F. Hubbard in: Chemotaxis-induced spatio-temporal heterogeneity in multi-species host-parasitoid systems, J.
Math. Biol., 55 (2007), 365–388. We show DEIM improves the efficiency of the
POD approximation and achieves a complexity reduction of the nonlinear term.
Numerical results are presented.
Joint work with Ionel Michael Navon, and Razvan Stefanescu.
102
Sergey Dolgov
Max Planck Institute for Mathematics in the Sciences, DE
Alternating minimal energy methods for linear systems in higher dimensions. Part
II: implementation hints and application to nonsymmetric systems
Contributed Session CT2.8: Tuesday, 15:00 - 15:30, CO123
In this talk we further develop and investigate rank-adaptive alternating methods for high-dimensional tensor-structured linear systems. The ALS method is reformulated as a recurrent variant, which performs a subsequent reduction of the linear system, and the basis enrichment is derived in terms of the reduced system. This
algorithm appears to be more robust than the method based on a global steepest
descent correction, and additional heuristics allow the computations to be sped up.
Furthermore, the very same method is applied to nonsymmetric systems as well.
Though its theoretical justification is based on the FOM method, and is more
difficult than in the SPD case, the practical performance is still very satisfactory,
which is demonstrated on several examples of the Fokker-Planck and chemical
master equations.
Joint work with Dmitry Savostyanov.
103
Marco Donatelli
Department of Science and High Technology - University of Insubria, IT
Multigrid preconditioning for nonlinear (degenerate) parabolic equations with application to monument degradation
Minisymposium Session CTNL: Wednesday, 11:00 - 11:30, CO015
We consider linear systems of large dimensions, (locally) structured, resulting from
the linearization of systems of nonlinear equations obtained by discretizing nonlinear parabolic equations (possibly degenerate) by means of finite differences in
space and implicit schemes in time.
In the first part, we consider a uniform discretization in space. Using the theory of
sequences of (locally) Toeplitz matrices for studying the spectrum of the Jacobian
matrix of Newton’s method, we prove the convergence and we derive optimal
preconditioners based on multigrid techniques. The numerical tests are conducted
on the porous medium equation and on a particular nonlinear parabolic equation
that models the sulfation of marble by polluting agents [1,2].
Subsequently, driven by the presence of a boundary layer in the model of sulfation,
we extend the previous results to the case of non-uniform grids in space, using
preconditioners based on algebraic multigrid [3].
References:
[1] M. Donatelli, M. Semplice, S. Serra-Capizzano - Analysis of multigrid preconditioning for implicit PDE solvers for degenerate parabolic equations - SIAM J.
Matrix Anal. Appl. 32 (2011) 1125–1148.
[2] M. Semplice - Preconditioned implicit solvers for nonlinear PDEs in monument
conservation - SIAM J. Sci. Comp. 32 (2010) 3071-3091.
[3] M. Donatelli, M. Semplice, S. Serra-Capizzano - AMG preconditioning for
nonlinear degenerate parabolic equations on nonuniform grids with application to
monument degradation - Appl. Numer. Math., in press.
Joint work with M. Semplice, and S. Serra-Capizzano.
104
Martin Ehler
Helmholtz Zentrum Muenchen, DE
Signal reconstruction from magnitude measurements via semidefinite programming
Minisymposium Session ACDA: Monday, 11:10 - 11:40, CO122
Inspired by high-dimensional data analysis and multi-spectral imaging, we aim to
reconstruct a finite dimensional vector from a set of magnitudes of its subspace
components. First, we develop closed formulas for signal reconstruction. Second,
we use semi-definite programming and random subspaces to reduce the number of
required subspace components. We also address the optimal choice of the subspace
dimension.
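A minimal sketch of the semidefinite-programming route: the magnitudes of the subspace components are linear in the lifted matrix X = x xᵀ, so a consistent positive semidefinite matrix can be sought with a convex solver and the signal read off its leading eigenvector. The subspace dimension, the number of random subspaces, the trace objective and the use of cvxpy are illustrative assumptions, not the exact formulation of the talk.

```python
# Hedged sketch: recovery from subspace-component magnitudes via a
# semidefinite relaxation over X = x x^T.  All sizes are illustrative.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, k, m = 12, 3, 40                         # ambient dim, subspace dim, #subspaces
x_true = rng.standard_normal(n)

projs = []
for _ in range(m):
    Q, _ = np.linalg.qr(rng.standard_normal((n, k)))
    projs.append(Q @ Q.T)                   # random rank-k orthogonal projection
b = [float(x_true @ P @ x_true) for P in projs]   # measured magnitudes

X = cp.Variable((n, n), symmetric=True)
constraints = [X >> 0] + [cp.trace(P @ X) == bi for P, bi in zip(projs, b)]
cp.Problem(cp.Minimize(cp.trace(X)), constraints).solve()

w, V = np.linalg.eigh(X.value)
x_rec = np.sqrt(max(w[-1], 0.0)) * V[:, -1]       # recovered up to a global sign
err = min(np.linalg.norm(x_rec - x_true), np.linalg.norm(x_rec + x_true))
print("relative error:", err / np.linalg.norm(x_true))
```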
Motivated by applications in physics, we also discuss the reconstruction of a finite-dimensional signal from the absolute values of its Fourier coefficients. In many
optical experiments the signal magnitude in time is also available. We combine
time and frequency magnitude measurements to obtain closed reconstruction formulas. A hybrid scheme of random Fourier and deterministic time measurements is discussed to reduce the number of required frequency magnitudes.
Joint work with Christine Bachoc.
105
Virginie Ehrlacher
Ecole des Ponts Paristech/INRIA, FR
Greedy algorithms for high-dimensional eigenvalue problems
Minisymposium Session LRTT: Tuesday, 10:30 - 11:00, CO3
In this talk, some new greedy algorithms to compute the lowest eigenvalue and an associated eigenvector of a high-dimensional eigenvalue problem will be presented. The principle of these numerical methods consists in expanding a tentative eigenvector associated with this eigenvalue as a sum of so-called tensor product functions, and computing each of these tensor product functions iteratively as the best possible, in a sense which will be made clear in the talk. The advantage of this family of methods lies in the fact that the resolution of the original high-dimensional problem is replaced with the resolution of several low-dimensional problems, which are more easily implementable. The convergence
results we proved for our algorithms will be detailed, along with some convergence
rates in finite dimension.
Joint work with Eric Cancès, and Tony Lelièvre.
106
Virginie Ehrlacher
Ecole des Ponts Paristech/INRIA, FR
Optimization of a structurally graded microstructured material
Minisymposium Session MSMA: Monday, 15:30 - 16:00, CO3
An approach for the optimization of non-periodic microstructured material through
the homogenization method will be presented. The central idea, similar to the one
used by Pantz and Trabelsi [1], consists in modeling the material as a macroscopic
deformation of an initially periodic material. Following the path of Bensoussan,
Lions and Papanicolaou [2], homogenization formulas can be derived to obtain the
expression of the effective stiffness elasticity tensor in the limit when the size of
the microcells composing the material tends to zero. Using reduced-order models
obtained via greedy algorithms, the optimization procedure is then performed using the homogenization method. Numerical results obtained on a two-dimensional
material will be presented.
Joint work with Claude Le Bris, Frederic Legoll, Günter Leugering, Michael Stingl,
and Fabian Wein.
107
Lukas Einkemmer
University of Innsbruck, Austria
A discontinuous Galerkin approximation for Vlasov equations
Minisymposium Session TIME: Thursday, 11:30 - 12:00, CO015
In astro- and plasma physics the behavior of a collisionless plasma is modeled by
the Vlasov equation
$$\partial_t f(t,x,v) + v\cdot\nabla_x f(t,x,v) + F\cdot\nabla_v f(t,x,v) = 0,$$
a kinetic model that in certain applications is also called the collisionless Boltzmann equation. It is posed in a 3 + 3 dimensional phase space, where x denotes
the position and v the velocity. The density function f is the sought-after particle probability distribution, and the (force) term F describes the interaction of the
plasma with the electromagnetic field.
Discontinuous Galerkin methods have received considerable attention in recent
years; they have been used and analyzed for various kinds of applications. In this
talk we will consider a discontinuous Galerkin based Strang splitting method for
solving the Vlasov equation coupled to an appropriate model of the electromagnetic
field. Due to the Strang splitting scheme, the problem is essentially reduced to
solving two (Vlasov–Poisson) or three (Vlasov–Maxwell) advection equations per
step. High order approximations in space, which are easy to achieve in this context,
can provide a significant advantage due to the up to six dimensional phase space
employed in such simulations.
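The splitting structure itself can be sketched in a few lines. In the minimal example below, the two advections of a 1+1 dimensional Vlasov–Poisson step are advanced exactly in Fourier space instead of with a discontinuous Galerkin discretization; that substitution, the initial condition and the sign conventions for the field are assumptions made purely to keep the sketch short.

```python
# Hedged sketch: Strang splitting (half step in x, field solve, full step in v,
# half step in x) for 1+1D Vlasov-Poisson on a periodic domain.
import numpy as np

nx, nv, L, vmax = 64, 128, 4 * np.pi, 6.0
x = np.linspace(0, L, nx, endpoint=False)
v = np.linspace(-vmax, vmax, nv, endpoint=False)
X, V = np.meshgrid(x, v, indexing="ij")
kx = 2 * np.pi * np.fft.fftfreq(nx, d=L / nx)
kv = 2 * np.pi * np.fft.fftfreq(nv, d=2 * vmax / nv)

# Landau-damping-like initial condition (illustrative choice)
f = (1 + 0.01 * np.cos(0.5 * X)) * np.exp(-V**2 / 2) / np.sqrt(2 * np.pi)

def advect_x(f, dt):
    # exact solution of f_t + v f_x = 0 via a Fourier phase shift in x
    return np.real(np.fft.ifft(np.fft.fft(f, axis=0)
                               * np.exp(-1j * np.outer(kx, v) * dt), axis=0))

def efield(f):
    # solve -phi'' = rho - 1 (neutralizing background), E = -phi'
    rho_hat = np.fft.fft(f.sum(axis=1) * (2 * vmax / nv) - 1.0)
    E_hat = np.zeros_like(rho_hat)
    E_hat[1:] = rho_hat[1:] / (1j * kx[1:])
    return np.real(np.fft.ifft(E_hat))

def advect_v(f, E, dt):
    # exact solution of f_t + E(x) f_v = 0 via a Fourier phase shift in v
    return np.real(np.fft.ifft(np.fft.fft(f, axis=1)
                               * np.exp(-1j * np.outer(E, kv) * dt), axis=1))

dt, nsteps = 0.1, 200
for _ in range(nsteps):
    f = advect_x(f, dt / 2)            # half step in x
    f = advect_v(f, efield(f), dt)     # full step in v with the frozen field
    f = advect_x(f, dt / 2)            # half step in x
```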
A rigorous convergence analysis of this Strang splitting algorithm with a discontinuous Galerkin approximation in space can be conducted, for example, for the 1+1
dimensional Vlasov–Poisson equations. It is shown that for $f_0 \in C^{\max\{\ell+1,3\}}$, i.e. if the initial value is sufficiently regular, the error is of order $\mathcal{O}\!\left(\tau^2 + h^\ell + h^\ell/\tau\right)$, where $\tau$ is the size of a time step, $h$ is the cell size, and $\ell$ the order of the discontinuous Galerkin approximation.
It is well known that piecewise constant approximations in velocity space lead to a
recurrence phenomenon that is purely numerical in origin. We will present a number of numerical simulations which show that a recurrence-like effect, originating
from the finite cell size, is still visible even for higher order approximations.
To confirm the stability properties of the method investigated, a numerical simulation of the Molenkamp–Crowley test has been conducted. It is shown that the
scheme investigated does not suffer from the instabilities described in [3].
This talk is based on [1, 2].
References
[1] L. Einkemmer and A. Ostermann, Convergence analysis of Strang splitting for Vlasov-type equations. Preprint (arXiv:1207.2090), 2012.
[2] L. Einkemmer and A. Ostermann, Convergence analysis of a discontinuous Galerkin/Strang splitting approximation for the Vlasov–Poisson equations. Preprint (arXiv:1211.2353), 2012.
[3] K.W. Morton, A. Priestley, and E. Süli, Stability of the Lagrange-Galerkin method with non-exact integration, Modél. Math. Anal. Numér., 22 (1988), pp. 625–653.
108
Joint work with Alexander Ostermann.
109
Daniel Elfverson
Uppsala University, SE
Discontinuous Galerkin method for convection-diffusion-reaction problems
Minisymposium Session SMAP: Monday, 12:10 - 12:40, CO015
In this work we study the solution of second order convection-diffusion-reaction problems. The coefficients are heterogeneous and highly varying, without any assumptions on scale separation or periodicity. This type of problem arises in many branches of scientific computing and is often impossible to simulate using standard (one-scale) methods, since the variations in the coefficients need to be resolved to reach an acceptable tolerance. Instead, we use a different approach with a corrected basis which takes the variations into account without resolving them globally
on a single mesh. We are interested in finding a solution $u \in H^1_0(\Omega)$ such that
$$a(u,v) = (A\nabla u, \nabla v)_{L^2(\Omega)} + (\beta\cdot\nabla u + \gamma u,\, v)_{L^2(\Omega)} = (f,v)_{L^2(\Omega)} =: F(v)$$
for all $v \in H^1_0(\Omega)$. Here, $\Omega \subset \mathbb{R}^d$, $d = 2,3$, is a Lipschitz domain with polygonal boundary, $A \in L^\infty(\Omega,\mathbb{R}^{d\times d}_{\mathrm{sym}})$ is uniformly elliptic, $\beta \in [L^\infty(\Omega)]^d$ is divergence free, $0 \le \gamma \in L^\infty(\Omega)$, and $f \in L^2(\Omega)$. This is approximated using the discontinuous Galerkin multiscale method [1, 2]. To this end let us first introduce a fine and a coarse shape-regular mesh, $\mathcal{T}_h$ and $\mathcal{T}_H$ with mesh functions $h < H$, and their respective discontinuous Galerkin spaces $V_h$ and $V_H \subset V_h$ for tetrahedral and quadrilateral elements. Also, let $\lambda_{T,j}$ denote the coarse basis functions that span $V_H$, i.e., $V_H = \mathrm{span}\{\lambda_{T,j} \mid T \in \mathcal{T}_H,\ j = 1,\dots,r\}$, where $r$ is the number of degrees of freedom on element $T$. The multiscale splitting is defined by $V_H := \Pi_H V_h$ and $V^f := (1 - \Pi_H) V_h$, where $\Pi_H$ is the (orthogonal) $L^2$-projection onto the coarse space $V_H$. The multiscale method uses a space spanned by corrected basis functions, $V_H^{ms,L} = \mathrm{span}\{\lambda_{T,j} - \phi^L_{T,j} \mid T \in \mathcal{T}_H,\ j = 1,\dots,r\}$, where each $\phi^L_{T,j}$ is calculated on patches/subgrids and has local support, and $L$ indicates the size of the patches. That is, for all $T \in \mathcal{T}_H$ and $j = 1,\dots,r$, we seek $\phi^L_{T,j} \in V^f(\omega^L_T) = \{v \in V^f \mid v|_{\Omega\setminus\omega^L_T} = 0\}$ such that
$$a_h(\phi^L_{T,j}, v) = a_h(\lambda_{T,j}, v) \quad \text{for all } v \in V^f(\omega^L_T),$$
where the bilinear form $a_h(\cdot,\cdot)$ is associated with the fine mesh $\mathcal{T}_h$. The discontinuous Galerkin multiscale method reads: find $u^{ms,L}_H \in V_H^{ms,L}$ such that
$$a_h(u^{ms,L}_H, v) = F(v) \quad \text{for all } v \in V_H^{ms,L}.$$
Note that $\dim(V_H^{ms,L}) = \dim(V_H)$. The following result holds under moderate assumptions on the magnitude of $\beta$,
$$|||u - u^{ms,L}_H||| \le |||u - u_h||| + C\,\|H(f - \Pi_H f)\|_{L^2},$$
choosing the size of the patches proportional to $H \log(H^{-1})$. The constant $C$ is independent of the variation in the coefficients but may depend on the ratio of their minimum and maximum bounds, and $u_h \in V_h$ is the (one-scale) discontinuous Galerkin solution on the fine scale. This result holds independently of the regularity of the solution. For the method to make sense we assume that $u_h$ resolves the variation in the coefficients. However, note that $u_h$ is not computed in practice.
[1] D. Elfverson, G. H. Georgoulis and A. Målqvist, An Adaptive Discontinuous
Galerkin Multiscale Method for Elliptic Problems, Submitted for publication, 2011.
110
[2] D. Elfverson, G. H. Georgoulis, A. Målqvist and D. Peterseim, Convergence of
a discontinuous Galerkin multiscale method, Submitted for publication, available
as preprint arXiv:1211.5524, 2012.
Joint work with Axel Målqvist.
111
Stefan Engblom
Uppsala University, SE
Sensitivity estimation and inverse problems in spatial stochastic models of chemical
kinetics
Contributed Session CT3.2: Thursday, 18:00 - 18:30, CO2
In this talk I will consider computational modeling of diffusion-controlled reactions
with applications mainly in molecular cell biology. I will give a brief overview of
the modeling involved, in the non-spatial as well as in the fully spatial setting, and
I will consider practical means by which perturbations can be propagated through
the simulations. This is relevant as experimental data is often not known with a
high degree of accuracy, but also because inverse formulations generally rely on
being able to effectively and accurately estimate the effects of small perturbations.
For this purpose I will present our implementation of an “all events method” and
give two concrete examples of its use.
Spatial stochastic chemical kinetics
In the classical case of non-spatial stochastic modeling of chemical kinetics, the
reaction rates are understood as transition intensities in a Markov chain. When
spatial considerations are important, space is discretized into voxels. Between voxels, diffusion rates become transition intensities in a Markov chain which now takes place in a much larger state space. As before, reactions take place within each voxel, but with rates scaled appropriately to take the voxel volume into account.
Such large stochastic reaction-diffusion models can be simulated by resolving the
geometry using two- or three-dimensional unstructured meshes as in our modular
software framework URDME (www.urdme.org). Thanks to its flexible structure
and well-defined interfaces, new solvers may be developed in an independent manner and connected directly to the underlying layers of the simulation environment.
A viable “All Events Method”-implementation
Within the URDME framework we have developed a solver for stochastic sensitivity analysis which allows for path-wise control of all discrete events occurring
during the simulation of kinetic networks. This allows us to compare single trajectories under arbitrary perturbations of input data and opens the way for accurate estimation of model parameters as well as for optimizing models under different configurations. Refer to Figure 1 for a very intuitive usage.
To demonstrate the use of this method, in the first setup we consider a spatial
model of the following enzymatic law,
$$C + E \;\xrightarrow{\;k\,c\,e\;}\; E, \tag{1}$$
where we think of E as an enzyme and C an intermediate complex which matures
into a product not explicitly modeled here. The model is completed by adding the
laws
$$\emptyset \;\underset{\beta_C\, c}{\overset{\alpha_C}{\rightleftharpoons}}\; C, \qquad \emptyset \;\underset{\beta_E\, e}{\overset{\alpha_E}{\rightleftharpoons}}\; E, \tag{2}$$
where a certain part of these rates actually covers transport events. The effects of the perturbation $\alpha_E \to \alpha_E(1-\delta)$ are studied, where $\delta$ depends on the position.
See Figure 2 for results on this simulation.
112
In a second setup (not detailed here) a simple concept of ‘optimality’ will be defined
for a certain biochemical network, and then tentative solutions for the control
signal which achieves this optimality will be presented. Here one is particularly
interested in the differences between the stochastic regime and the deterministic
one.
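The flavour of the path-wise comparison in Figure 1 can be sketched with a plain Gillespie simulation of a birth-death model in which the nominal and the perturbed run share the same random-number stream; the rates, the perturbation and the common seed (a crude stand-in for the all-events bookkeeping described above) are illustrative assumptions.

```python
# Hedged sketch: two SSA trajectories of a birth-death model driven by the
# same random-number stream, one with a perturbed birth rate.
import numpy as np

def ssa_birth_death(alpha, beta, T, rng_seed):
    rng = np.random.default_rng(rng_seed)   # common seed = common randomness
    t, x, ts, xs = 0.0, 0, [0.0], [0]
    while t < T:
        rates = np.array([alpha, beta * x])  # birth and death propensities
        total = rates.sum()
        t += rng.exponential(1.0 / total)
        x += 1 if rng.random() < rates[0] / total else -1
        ts.append(t); xs.append(x)
    return np.array(ts), np.array(xs)

t0, x0 = ssa_birth_death(alpha=5.0,       beta=0.1, T=100.0, rng_seed=42)
t1, x1 = ssa_birth_death(alpha=5.0 * 0.9, beta=0.1, T=100.0, rng_seed=42)
print("final states (nominal, perturbed):", x0[-1], x1[-1])
```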
Figure 1: Comparing single trajectories from a birth-death model.
Figure 2: Spatial stochastic focusing. Here the perturbation depends on the spatial
coordinate and is increasing towards the center of the circle. We study here a
measure of the effect of increasing the secretion of the enzyme E and discover
an almost fourfold increase of the production rate in the critical region where the
perturbation is the highest. This is a stochastic nonlinear phenomenon and can be explained neither by linear analysis nor by deterministic approximations.
113
Christian Engwer
University of Münster, DE
Mini-symposium keynote: Bridging software design and performance tuning for
parallel numerical codes
Minisymposium Session PARA: Monday, 11:10 - 11:40, CO016
Motivation As applications grow in complexity, the need for sustainable development of software for partial differential equations (PDEs) is increasing rapidly.
Modern numerical ingredients such as unstructured grids, adaptivity, high-order discretisations and fast and robust multilevel solvers are required to achieve high numerical efficiency, and several physical models must be combined in challenging applications. Reusable software components are crucial to maintain flexibility in this situation; this applies particularly to well-designed interfaces between different components. Numerical and implementational details should ideally be hidden
from users of the software, especially application scientists, and at the same time
the interfaces must be designed so that efficiency is not lost when different building
blocks are combined, e.g., discretisations and solvers.
At the same time a dramatic change in the underlying hardware can be observed:
The memory and power wall problems are becoming hard limitations, and further
performance improvements are only achieved by locality, multiple levels of parallelism, heterogeneity and specialisation. It is no longer feasible to neglect tuning
techniques designed for these aspects of the hardware; rather, current increasingly
parallel and heterogeneous systems require uniform tuning and code specialisation of all components. Typical workstations and cluster nodes now comprise at
least two multicore CPUs capable of executing tens of concurrent threads simultaneously. Accelerator technologies such as GPUs or Xeon Phi are included as
coprocessors in workstations and bigger machines; their performance stems from a
much more fine-grained execution of hundreds to thousands of hardware threads.
Furthermore, carefully arranged data structures and memory access patterns are
crucial to extract reasonable performance.
Challenges It is important to note that this change in hardware is not just a
momentary aspect, but a definite trend that will not be reverted, since it stems
from underlying physical principles in electrical engineering, mostly from energy
considerations and associated issues like leaking voltage and heat dissipation. Consequently, these changes result in substantial challenges for designers and implementers of numerical software packages:
• Modularity, maintainability, reusability and flexibility of software packages
must be maintained.
• Hardware details must be hidden as much as possible from application scientists, and to a certain degree also from numerical analysts.
• Generic implementations with maximum flexibility in mind must be balanced
with specialisations for certain hardware architectures.
• Careful compromises must be made when choosing the level of specialisation, honouring generic trends of hardware architectures (e.g., implications of coarse- and fine-grained parallelism) rather than utmost performance extraction for one particular processor instance.
114
• Most importantly, the numerical methodology must be revisited to substantially improve its locality, fine-grained parallelism, communication vs. computation ratios, arithmetic intensities etc. (hardware-oriented numerics).
Summary of this talk The goal of the mini-symposium is to bridge these gaps,
by bringing together experts from all involved areas. The mini-symposium has a
very practical focus, preferring algorithmic and implementation aspects over advances in application domains. This introductory talk will summarise the state of
the art in terms of hardware and software engineering, survey generic design and
implementation techniques, present tips and tricks associated with higher-level abstractions, and set the stage for fruitful discussions based on the contributed talks.
Joint work with Dominik Goeddeke.
115
Alexandre Ern
University Paris-Est, CERMICS, FR
Adaptive inexact Newton methods with a posteriori stopping criteria for nonlinear
diffusion PDEs
Minisymposium Session STOP: Thursday, 14:00 - 14:30, CO1
We consider nonlinear algebraic systems resulting from numerical discretizations
of nonlinear partial differential equations of diffusion type. To solve these systems,
some iterative nonlinear solver, and, on each step of this solver, some iterative linear solver are used. We derive adaptive stopping criteria for both iterative solvers.
Our criteria are based on an a posteriori error estimate which distinguishes the
different error components, namely the discretization error, the linearization error, and the algebraic error. We stop the iterations whenever the corresponding
error no longer affects the overall error significantly. Our estimates also yield
a guaranteed upper bound on the overall error at each step of the nonlinear and
linear solvers. We prove the (local) efficiency and robustness of the estimates with
respect to the size of the nonlinearity owing, in particular, to the error measure
involving the dual norm of the residual. Our developments hinge on equilibrated
flux reconstructions and yield a general framework. We show how to apply this
framework to various discretization schemes like finite elements, nonconforming
finite elements, discontinuous Galerkin, finite volumes, and mixed finite elements;
to different linearizations like fixed point and Newton; and to arbitrary iterative
linear solvers. Numerical experiments for the p-Laplacian illustrate the tight overall error control and important computational savings achieved in our approach.
More details on the overall approach, analysis, and results can be found in [1].
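The balancing logic behind such stopping criteria can be sketched as follows; plain residual norms are used as crude stand-ins for the equilibrated-flux estimators of [1], a Richardson iteration stands in for an arbitrary linear solver, and the toy nonlinear system and the threshold factors are illustrative assumptions.

```python
# Hedged sketch: inexact Newton with adaptive stopping tests.  The inner
# solver stops once the algebraic residual is a small fraction of the
# linearization residual; the Newton loop stops once the linearization
# residual is a small fraction of a given discretization-error estimate.
import numpy as np

def adaptive_inexact_newton(F, J, u0, eta_disc, gamma_lin=0.1, gamma_alg=0.1,
                            maxit=50):
    u = u0.copy()
    for _ in range(maxit):
        r = F(u)
        eta_lin = np.linalg.norm(r)                 # "linearization error"
        if eta_lin <= gamma_lin * eta_disc:         # nonlinear stopping test
            break
        A = J(u)
        du = np.zeros_like(u)
        omega = 1.0 / np.linalg.norm(A, 2)
        # inner Richardson iteration, stopped by the algebraic criterion
        while np.linalg.norm(A @ du + r) > gamma_alg * eta_lin:
            du -= omega * (A @ du + r)
        u = u + du
    return u

# toy nonlinear system: u_i + u_i^3 = 1
F = lambda u: u + u**3 - 1.0
J = lambda u: np.eye(len(u)) + np.diag(3 * u**2)
print(adaptive_inexact_newton(F, J, np.zeros(5), eta_disc=1e-8))
```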
References
[1] A. Ern and M. Vohralík, Adaptive inexact Newton methods with a posteriori stopping criteria for nonlinear diffusion PDEs. SIAM J. Sci. Comput., to appear (2013), HAL Preprint 00681422 v2, 2012.
Joint work with Martin Vohralik.
116
Antonio Falcó
Unversidad CEU Cardenal Herrera, ES
Proper Generalized Decomposition for Dynamical Systems
Minisymposium Session SMAP: Monday, 14:30 - 15:00, CO015
Tensor-based methods are receiving a growing interest in scientific computing for
the numerical solution of problems defined in high dimensional tensor product
spaces, such as partial differential equations arising from stochastic calculus (e.g.
Fokker-Planck equations) or quantum mechanics (e.g. Schrödinger equation),
stochastic parametric partial differential equations in uncertainty quantification
with functional approaches, and many mechanical or physical models involving
extra parameters (for parametric analyses) among others. For such problems,
classical approximation methods based on the a priori selection of approximation
bases suffer from the so called “curse of dimensionality” associated with the exponential (or factorial) increase in the dimension of approximation spaces. In [1]
the authors give a mathematical analysis of a family of progressive and updated
Proper Generalized Decompositions for a particular class of problems associated
with the minimization of a convex functional over a reflexive tensor Banach space.
In this talk we discuss the approach for continuous-time dynamical systems. To this end we revisit the Dirac-Frenkel variational principle to justify the Proper Generalized Decomposition in this time-dependent framework.
References
[1] A. Falcó and A. Nouy: Proper generalized decomposition for nonlinear convex
problems in tensor Banach spaces. Numer. Math. 121 (2012), 503–530.
117
Miloslav Feistauer
Charles University Prague, Faculty of Mathematics and Physics, CZ
Space-time DGFEM for the solution of nonstationary nonlinear convection-diffusion
problems and compressible flow
Contributed Session CT3.4: Thursday, 18:00 - 18:30, CO015
The paper will be concerned with the numerical solution of nonstationary problems
with nonlinear convection and diffusion by the space-time discontinuous Galerkin
finite element method (DGFEM) and with applications to the simulation of compressible flow.
The first part will be devoted to some theoretical aspects of the space-time DGFEM.
The time interval is split into subintervals and on each time level a different space
mesh with hanging nodes may be used in general. In the discontinuous Galerkin
formulation we use the nonsymmetric, symmetric or incomplete version of the discretization of the diffusion terms and interior and boundary penalty (i.e., NIPG,
SIPG or IIPG versions). For the space and time discretization, piecewise polynomial approximations of different degrees p and q, respectively, are used. The
question of optimal error estimates will be treated under various assumptions on
the boundary conditions and nonlinearities in the convection and diffusion. Special attention will be paid to the question of the stability of the method. It is
an important question, because in works [1], [2] and [3], in the case of a general
form of the boundary condition (when the boundary data do not behave in time
as a polynomial of degree ≤ q), the error estimates were derived under the CFL-like condition τ ≤ Ch. The goal is to prove the stability without this condition.
Theoretical results will be demonstrated by numerical experiments.
In the second part, the space-time DGFEM will be applied to the solution of the
compressible Navier-Stokes equations. Our goal is to develop sufficiently accurate,
efficient and robust numerical schemes allowing the solution of compressible flow
for a wide range of Reynolds and Mach numbers. The main attention will be paid
to the analysis of the low Mach number flows close to incompressible limit.
References
[1] M. Feistauer, V. Kučera, K. Najzar and J. Prokopová, Analysis of space-time
discontinuous Galerkin method for nonlinear convection-diffusion problems,
Numer. Math. 117 (2011), pp. 251–288.
[2] J. Česenek, M. Feistauer: Theory of the space-time discontinuous Galerkin
method for nonstationary parabolic problems with nonlinear convection and
diffusion. SIAM J. Numer. Anal. 30 (No.3) (2012) 1181–1206.
[3] M. Vlasák, V. Dolejší, J. Hájek: A priori error estimates of an extrapolated
space-time discontinuous Galerkin method for nonlinear convection-diffusion
problems. Numer. Methods Partial Differential Eq. 27 (2011), 1456–1482.
Joint work with M. Balazsova, M. Hadrava, and A. Kosik.
118
Dalia Fishelov
Afeka Tel-Aviv Academic College of Engineering, IL
Convergence analysis of a high-order compact scheme for time-dependent fourth-order differential equations
Contributed Session CT3.8: Thursday, 16:30 - 17:00, CO123
In [1] we established the convergence of a fourth-order compact scheme for the time-independent one-dimensional biharmonic problem
$$u^{(4)}(x) = f(x), \quad 0 < x < 1, \qquad u(0) = 0,\ u(1) = 0,\ u'(0) = 0,\ u'(1) = 0. \tag{1}$$
Its approximate solution satisfies
$$\text{(a)}\ \ \delta_x^4 v_j = f(x_j), \quad 1 \le j \le N-1, \qquad \text{(b)}\ \ \tfrac{1}{6} v_{x,j-1} + \tfrac{2}{3} v_{x,j} + \tfrac{1}{6} v_{x,j+1} = \delta_x v_j, \quad 1 \le j \le N-1, \qquad \text{(c)}\ \ v_0 = 0,\ v_N = 0,\ v_{x,0} = 0,\ v_{x,N} = 0. \tag{2}$$
Here, $\delta_x^4$ is the three-point compact operator defined by
$$\delta_x^4 v_j = \frac{12}{h^2}\left( \frac{v_{x,j+1} - v_{x,j-1}}{2h} - \frac{v_{j+1} + v_{j-1} - 2 v_j}{h^2} \right), \quad 1 \le j \le N-1, \tag{3}$$
and $v_{x,j}$ is the Padé approximation of the derivative of $v$ at the point $x_j$,
$$\tfrac{1}{6} v_{x,j-1} + \tfrac{2}{3} v_{x,j} + \tfrac{1}{6} v_{x,j+1} = \frac{v_{j+1} - v_{j-1}}{2h}, \quad 1 \le j \le N-1. \tag{4}$$
This scheme invokes values of the unknown function as well as Padé approximations of its first-order derivative. The truncation error of the scheme is of fourth order at interior points but drops to first order at near-boundary points. Nonetheless, we prove that the scheme retains its fourth-order (optimal) accuracy. We have proved the following theorem.

Theorem 1. Let $u$ be the exact solution of (1) and assume that $u$ has continuous derivatives up to order eight on $[0,1]$. Let $v$ be the approximation to $u$ given by (2). Then the error $e = v - u$ satisfies
$$\max_{1 \le j \le N-1} |e_j| \le C(f)\, h^4, \tag{5}$$
where $C$ depends only on $f$.
A number of numerical examples corroborate this effect.
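For concreteness, the following is a minimal numerical sketch (not taken from [1]; the test function and all names are illustrative) that builds the Hermitian derivative from the Padé system (4) with the homogeneous boundary values of (2c) and applies the discrete biharmonic operator (3) to a smooth function satisfying the boundary conditions of (1):

import numpy as np

# Illustrative sketch only: apply the three-point discrete biharmonic operator (3),
# with v_x obtained from the Pade system (4) and v_{x,0} = v_{x,N} = 0 as in (2c),
# to a smooth test function. Names and values are not taken from [1].

def hermitian_derivative(v, h):
    """Solve (1/6) v_{x,j-1} + (2/3) v_{x,j} + (1/6) v_{x,j+1} = (v_{j+1} - v_{j-1})/(2h)."""
    N = len(v) - 1
    rhs = (v[2:] - v[:-2]) / (2.0 * h)
    A = (np.diag(np.full(N - 1, 2.0 / 3.0))
         + np.diag(np.full(N - 2, 1.0 / 6.0), 1)
         + np.diag(np.full(N - 2, 1.0 / 6.0), -1))
    vx = np.zeros_like(v)
    vx[1:-1] = np.linalg.solve(A, rhs)   # homogeneous boundary values kept at zero
    return vx

def discrete_biharmonic(v, vx, h):
    """Operator (3): (12/h^2) * [ (v_{x,j+1} - v_{x,j-1})/(2h) - (v_{j+1} + v_{j-1} - 2 v_j)/h^2 ]."""
    dx_vx = (vx[2:] - vx[:-2]) / (2.0 * h)
    dx2_v = (v[2:] + v[:-2] - 2.0 * v[1:-1]) / h**2
    return 12.0 / h**2 * (dx_vx - dx2_v)

N = 64
x = np.linspace(0.0, 1.0, N + 1)
h = 1.0 / N
u = np.sin(np.pi * x) ** 2                         # satisfies u = u' = 0 at x = 0, 1
err = np.abs(discrete_biharmonic(u, hermitian_derivative(u, h), h)
             + 8.0 * np.pi**4 * np.cos(2.0 * np.pi * x[1:-1]))   # exact u'''' = -8*pi^4*cos(2*pi*x)
print(err.max(), err[N // 2])   # near-boundary error dominates the mid-domain error

With this toy test, the largest error sits near the boundary while the interior error is much smaller, in line with the truncation-order statement above.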
We extend our study to time-dependent problems. We present a proof of the convergence of the scheme for the problem u_t = −u_xxxx and show that the error is bounded by C h^{4−ε} for arbitrary ε > 0. Then, we consider the more general time-dependent problem
u_t = −u_xxxx + b u_xx + c u_x + d u + f(x, t),  0 < x < 1,   (6)
u(0) = 0, u(1) = 0, u'(0) = 0, u'(1) = 0.
119
It is approximated by the compact scheme
(a)  (d/dt) v_j = −δ_x^4 v_j + b δ̃_x^2 v_j + c v_{x,j} + d v_j + f(x_j, t),  1 ≤ j ≤ N − 1,   (7)
(b)  v_0 = 0, v_N = 0, v_{x,0} = 0, v_{x,N} = 0,
where δ̃_x^2 v_j = 2 δ_x^2 v_j − δ_x v_{x,j} = δ_x^2 v_j − (h^2/12) δ_x^4 v_j.
We prove the following proposition.
Proposition 2. Let u(x, t) be the exact solution of (6) and assume that u has continuous derivatives with respect to x up to order eight on [0, 1] and up to order 1 with respect to t. Let v(t) be the approximation to u given by (7). Then, the error e_j(t) = v_j(t) − u(x_j, t) satisfies
max_{0≤t≤T} |e(t)|_h ≤ C(T) h^{3.5},   (8)
where C(T) depends only on f, g and T.
In addition, we have also proved convergence of the compact scheme for the time-dependent equation
u_xxt = u_xxxx + b u_xx + d u_x + c u + f(x, t)   (9)
on an interval, and of its approximation by the semi-discrete finite-difference scheme
(d/dt) δ̃_x^2 v_j = δ_x^4 v_j + b δ̃_x^2 v_j + c v_{x,j} + d v_j + f(x_j, t).   (10)
In this case too we prove that the error is bounded by C h^{3.5}.
Proposition 3. Let u(x, t) be the exact solution of (9) and assume that u has continuous derivatives with respect to x up to order eight on [0, 1] and up to order 1 with respect to t. Let v(t) be the approximation to u given by (10). Then, the error e_j(t) = v_j(t) − u(x_j, t) satisfies
max_{0≤t≤T} |e(t)|_h ≤ max_{0≤t≤T} |δ_x^+ e(t)|_h ≤ C(T) h^{3.5},   (11)
where C(T) depends only on f, g and T.
In addition, we study the eigenvalue problem u_xxxx = ν u_xx. This is related to
the stability of the linear time-dependent equation uxxt = νuxxxx . We derive the
full set of continuous eigenvalues and eigenfunctions. In addition, the discrete set
of eigenvalues and eigenvectors are computed. The latter are displayed graphically
and compared with the continuous eigenfunctions and eigenvalues.
References
[1] D. Fishelov, M. Ben-Artzi and J.-P. Croisille, Recent advances in the study of a fourth-order compact scheme for the one-dimensional biharmonic equation, J. Sci. Comput. 53 (2012), pp. 55–70.
Joint work with M. Ben-Artzi, and J.-P. Croisille.
120
Michel Flueck
EPFL, CH
Domain decomposition for computing ferromagnetic effects
Minisymposium Session MMHD: Thursday, 14:30 - 15:00, CO017
We consider a physical model for simulating the screen effect of ferromagnetic steel
plates in presence of very strong direct currents. There is no electric current in
the plates, and we assume in this model that the plates have no impact on the
surrounding currents.
First we study theoretically the mathematical model, which is expressed in terms of one unknown scalar field defined on the whole three-dimensional space. Then we give a
Dirichlet-Dirichlet domain decomposition method using Poisson’s formula to solve
this problem.
Finally we present a standard finite element approximation of that problem with
some numerical results. First we show the academic situation of an electric conductor on one side of a rectangular plate and we study the induction field on the
other side, showing a screening effect. Then we turn to an industrial example of an
aluminum electrolysis cell which is built in a big steel shell protecting the interior
of the cell from surrounding currents feeding that cell.
Joint work with J. Rappaz, and A. Janka.
121
Melina Freitag
University of Bath, GB
Computing Jordan blocks in parameter-dependent eigenproblems
Minisymposium Session NEIG: Thursday, 14:30 - 15:00, CO2
We introduce a general method for computing a 2-dimensional Jordan block in
a parameter-dependent matrix eigenvalue problem. The requirement to compute
Jordan blocks arises in a number of physical problems, for example panel flutter problems in aerodynamical stability, the stability of electrical power systems, and in
quantum mechanics.
The algorithm we suggest is based on the Implicit Determinant Method and requires the solution of a small nonlinear system instead of solving a large eigenvalue
problem. We provide theory and convergence properties for this method and give
numerical results for a number of problems arising in practice.
Joint work with Alastair Spence.
122
Maxim Frolov
Saint-Petersburg State Polytechnical University, RU
Reliable a posteriori error estimation for plane problems in Cosserat elasticity
Contributed Session CT2.4: Tuesday, 15:30 - 16:00, CO015
Functional type a posteriori error estimates are proposed for approximate solutions
to plane problems arising in the Cosserat theory of elasticity. In comparison with classical elasticity, this type of model possesses a richer spectrum of properties: it can more adequately describe materials with microstructure. Growing interest in generalizations of the classical elasticity theory arose at the beginning of the 1960s and, nowadays, it is important to provide an efficient procedure for reliable error estimation of numerical solutions. The implemented approach is
based on functional grounds (in particular, on the duality theory in the Calculus of
Variations). Estimates, which we present for the Cosserat model, are reliable under
quite general assumptions and are explicitly applicable not only to approximations
possessing the Galerkin orthogonality property.
This work is based on previous investigations of the authors (Probl. Mat. Anal.,
2011; J. Math. Sci., 2012) in which we only dealt with the case of isotropic
media with displacements and independent rotation given on the boundary of a
computational domain.
For numerical justification of the approach, we use approximation of the nonsymmetric stress tensor that is based on the lowest-order element suggested by
D.N. Arnold, D. Boffi, and R.S. Falk (SIAM J. Numer. Anal., 2005). According to recent numerical results for the functional type a posteriori error estimate
for linear elasticity problems (S.I. Repin. Radon Series on Computational and
Applied Mathematics, 4. Berlin: de Gruyter, 2008), this approach allowing a
non-symmetric stress approximation is promising. It provides a significant improvement of the efficiency index in comparison with continuous approximations
by the standard finite element procedure based on bilinear shape functions.
Joint work with Prof. Sergey I. Repin.
123
Petr Furmanek
Faculty of Mechanical Engineering, CTU in Prague, CZ
Numerical Simulation of Flow Induced Vibrations with Two Degrees of Freedom
Contributed Session CT4.5: Friday, 08:50 - 09:20, CO016
Aeroelastic effects that appear in real flows around wings and profiles usually have a huge influence on both the flow field and the profile itself. The possibilities of numerical simulation of these effects (such as buffeting or flutter) in commercial CFD codes are still limited, and such problems are often solved by problem-tailored software. The aim of this contribution is to show and investigate one such approach. The so-called
Modified Causon’s scheme of the 2nd order in space and time (based on TVD form
of the classical MacCormack scheme for finite volume method) is enhanced with
the use of the ALE method for simulations of unsteady flows, namely flow over
the NACA0012 profile. The profile is considered with two degrees of freedom: oscillation around a given reference point and vibration along the vertical axis. Motion in the given directions is induced by the flow itself and is described by a set of ordinary differential equations. Several initial velocities are considered
and the fluid is simulated as both incompressible and compressible. The final results are compared with each other and with computational data from NASTRAN and
in-house codes by P. Svacek and R. Honzatko from the Department of Technical
Mathematics, Faculty of Mechanical Engineering, CTU in Prague. Based on the
obtained results, the following conclusions can be drawn: there is a significant
difference in the computational demands of the compressible and incompressible models,
which allows for computations with much smaller velocities; critical velocities for
instability are in both cases in the same range. The intended future steps are the implementation of a turbulence model and the extension to three dimensions.
Joint work with Karel Kozel.
124
Lucia Gastaldi
University of Brescia, IT
Fictitious Domain Formulation for Immersed Boundary Method
Minisymposium Session ANMF: Tuesday, 10:30 - 11:00, CO1
The aim of this talk is to present a new variational formulation of the Immersed
Boundary Method (IBM) which presents improved stability properties. The
Immersed Boundary Method was proposed by Peskin (see [7] for a review) in order
to simulate the blood dynamics in the heart. It was then applied to several fluid-structure interaction problems. The main feature of the IBM is that the structure
is considered as a part of the fluid where additional forces are located. Hence
the Navier–Stokes equations have to be solved all over the domain, with a source
term which is localized on the structure by means of the Dirac delta function.
Moreover, the movement of the structure is imposed by constraining the velocity
of the structure to be equal to that of the fluid at the points where the structure is
located. The original discretization was based on finite differences which require
the approximation of the Dirac delta function. In [2, 3, 4, 1], we have used a
variational formulation of the Navier–Stokes equations, which allows one to deal with the Dirac delta function in a natural way, so that the finite element method can be applied for the space discretization of the IBM. In contrast, the position
of the structure is determined pointwise. The stability analysis of the space-time
discretization of the resulting scheme shows that a priori estimates for the energy
can be obtained provided a CFL condition involving the time step and the mesh
parameters of the fluid and structure domains is satisfied.
Here we present a new approach based on a totally variational formulation of the
problem. In this approach the equation describing the movement of the structure is seen as a constraint which links the equations for the fluid and for the
structure. Therefore we introduce a distributed Lagrange multiplier, so that the
final formulation can be interpreted as a Fictitious Domain approach to the fluid-structure interaction problem. We refer to the works [6, 5] for the fictitious domain method. The space-time scheme based on this formulation provides an improved stability condition with weaker restrictions on the discretization parameters.
References
[1] Daniele Boffi, Nicola Cavallini, and Lucia Gastaldi. Finite element approach
to immersed boundary method with different fluid and solid densities. Math.
Models Methods Appl. Sci., 21(12):2523–2550, 2011.
[2] Daniele Boffi and Lucia Gastaldi. A finite element approach for the immersed
boundary method. Comput. & Structures, 81(8-11):491–501, 2003. In honour
of Klaus-Jürgen Bathe.
[3] Daniele Boffi, Lucia Gastaldi, and Luca Heltai. On the CFL condition for
the finite element immersed boundary method. Comput. & Structures, 85(11-14):775–783, 2007.
[4] Daniele Boffi, Lucia Gastaldi, Luca Heltai, and Charles S. Peskin. On the
hyper-elastic formulation of the immersed boundary method. Comput. Methods
Appl. Mech. Engrg., 197(25-28):2210–2231, 2008.
125
[5] Vivette Girault, Roland Glowinski, and T. W. Pan. A fictitious-domain method
with distributed multiplier for the Stokes problem. In Applied nonlinear analysis, pages 159–174. Kluwer/Plenum, New York, 1999.
[6] R. Glowinski and Yu. Kuznetsov. Distributed Lagrange multipliers based on
fictitious domain method for second order elliptic problems. Comput. Methods
Appl. Mech. Engrg., 196(8):1498–1506, 2007.
[7] Charles S. Peskin. The immersed boundary method. Acta Numer., 11:479–517,
2002.
126
Ludwig Gauckler
TU Berlin, DE
Plane wave stability of the split-step Fourier method for the nonlinear Schrödinger
equation
Minisymposium Session TIME: Thursday, 14:30 - 15:00, CO015
The cubic nonlinear Schrödinger equation
i ∂_t ψ = −Δψ + λ|ψ|^2 ψ,   ψ = ψ(x, t),   (1)
with periodic boundary conditions in space (x ∈ R^d/(2πZ)^d) has plane-wave solutions: ψ(x, t) = ρ e^{i(m·x−ωt)} with m ∈ Z^d solves (1) for ω = |m|^2 + λρ^2. In the talk we will discuss the stability
of these solutions and the stability of their numerical approximation.
We first study the stability of plane waves in the exact solution. We show orbital
stability of plane waves over long times.
In the second part of the talk we study a very popular method for the numerical
discretization of the nonlinear Schrödinger equation, the split-step Fourier method.
This method combines a Fourier spectral method in space with a splitting integrator in time. We will pursue the question whether the stability of plane waves
in the exact solution transfers to this numerical discretization.
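As a purely illustrative companion to this description (not the authors' code; the step size, mode, amplitude and perturbation below are made-up values), one Strang split-step Fourier step for the one-dimensional case of (1) on [0, 2π) can be sketched as follows:

import numpy as np

# Hedged sketch of the split-step Fourier method for the 1-d cubic NLS (1) on [0, 2*pi)
# with periodic boundary conditions; all parameter values below are illustrative.

def split_step(psi, tau, lam, k):
    """One Strang step: half nonlinear flow, exact linear flow in Fourier space, half nonlinear flow."""
    psi = psi * np.exp(-1j * lam * np.abs(psi) ** 2 * tau / 2)        # flow of i psi_t = lam*|psi|^2*psi
    psi = np.fft.ifft(np.exp(-1j * k ** 2 * tau) * np.fft.fft(psi))   # flow of i psi_t = -psi_xx
    return psi * np.exp(-1j * lam * np.abs(psi) ** 2 * tau / 2)

n = 128
x = 2.0 * np.pi * np.arange(n) / n
k = np.fft.fftfreq(n, d=1.0 / n)           # integer wave numbers on the 2*pi-periodic grid
rho, m, lam, tau = 0.5, 1, 1.0, 1.0e-3
psi = rho * np.exp(1j * m * x) * (1.0 + 1.0e-4 * np.exp(2j * x))      # slightly perturbed plane wave
for _ in range(2000):
    psi = split_step(psi, tau, lam, k)
print(np.mean(np.abs(psi) ** 2))           # the discrete mass stays close to rho**2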
Joint work with Erwan Faou and Christian Lubich.
127
Jean-Frédéric Gerbeau
INRIA, FR
Luenberger observers for fluid-structure problems
Minisymposium Session NFSI: Thursday, 14:00 - 14:30, CO122
This talk is devoted to inverse problems in fluid-structure interaction problems, in
particular for blood flow in arteries. Our strategy is based on two kinds of methods:
Luenberger observers for the state variables and Unscented Kalman Filter for the
parameters. This presentation will mainly focus on Luenberger observers.
We analyze the performances of two types of observers, the Direct Velocity Feedback
and the Schur Displacement Feedback algorithms, originally devised for elastodynamics. The measurements are assumed to be restricted to displacements or
velocities in the solid. We first assess the observers using hemodynamics-inspired
test problems with the complete model, including the Navier-Stokes equations in
Arbitrary Lagrangian-Eulerian formulation, in particular. Then, in order to obtain more detailed insight, we consider several well-chosen simplified models, each of which allows a thorough analysis – emphasizing spectral considerations – while
illustrating a major phenomenon of interest for the observer performance, namely,
the added mass effect for the structure, the coupling with a lumped-parameter
boundary condition model for the fluid flow, and the fluid dynamics effect per se.
Whereas improvements can be sought when additional measurements are available
in the fluid domain in order to more effectively deal with strong uncertainties in the
fluid state, in the present framework this establishes Luenberger observer methods
as very attractive strategies – compared, e.g. to classical variational techniques
– to perform state estimation, and more generally for uncertainty estimation since
other observer procedures, like nonlinear filtering, can be conveniently combined
to estimate uncertain parameters.
Joint work with C. Bertoglio, D. Chapelle, M.A. Fernández, and P. Moireau.
128
Tomas Gergelits
Faculty of Mathematics and Physics, Charles University in Prague, CZ
Composite polynomial convergence bounds in finite precision CG computations
Contributed Session CT3.3: Thursday, 16:30 - 17:00, CO3
The convergence rate of the method of conjugate gradients (CG) used for solving
linear algebraic system
Ax = b
(1)
with a large and sparse Hermitian and positive definite (HPD) matrix A ∈ C^{N×N} with eigenvalues 0 < λ_1 < ... < λ_N is commonly associated with linear convergence bounds derived using shifted and scaled Chebyshev polynomials. However,
the CG method is nonlinear and its convergence tends to accelerate during the
iteration process (it exhibits the so-called superlinear convergence) and thus the
linear bounds are typically highly pessimistic. In order to describe the superlinear convergence, Axelsson [1] and Jennings [5] considered, in the presence of m large outlying eigenvalues, the composite polynomial
q_m(λ) χ_{k−m}(λ) / χ_{k−m}(0),   (2)
where χ_{k−m}(λ) denotes the Chebyshev polynomial of degree k − m shifted to the interval [λ_1, λ_{N−m}] and q_m(λ) has its roots at the outlying eigenvalues λ_{N−m+1}, ..., λ_N, which resulted in the bound
‖x − x_k‖_A / ‖x − x_0‖_A ≤ 2 ( (√κ_m(A) − 1) / (√κ_m(A) + 1) )^{k−m},   k = m, m + 1, ...,   (3)
where κ_m(A) ≡ λ_{N−m}/λ_1 is the so-called effective condition number. This quantity is typically substantially smaller than the condition number κ(A) ≡ λ_N/λ_1,
which indicates a possibly faster convergence after m initial iterations. All this
assumes, however, exact arithmetic.
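For illustration only (and, as stressed above, assuming exact arithmetic), the bound (3) can be evaluated for a synthetic spectrum with a few large outliers; the spectrum and iteration counts below are invented:

import numpy as np

# Illustrative only: evaluate the composite bound (3) for a synthetic spectrum with
# m large outlying eigenvalues; all values are made up, and the bound presumes
# exact arithmetic.

def composite_bound(eigs, m, k):
    """Right-hand side of (3) for iteration k >= m."""
    lam = np.sort(np.asarray(eigs, dtype=float))
    kappa_m = lam[-m - 1] / lam[0]                  # effective condition number lambda_{N-m} / lambda_1
    q = (np.sqrt(kappa_m) - 1.0) / (np.sqrt(kappa_m) + 1.0)
    return 2.0 * q ** (k - m)

eigs = np.concatenate([np.linspace(1.0, 10.0, 97), [1e3, 1e4, 1e5]])   # three outliers
for k in (5, 10, 20):
    print(k, composite_bound(eigs, m=3, k=k))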
In finite precision computations the CG convergence can be significantly delayed
due to rounding errors. Such delays are pronounced, in particular, in the presence of large outlying eigenvalues, and they can make the composite convergence
bounds practically useless; see [5], [3], [7], [2], [6, Chapter 5]. Despite the early
experimental warnings and theoretical arguments, misleading conclusions and inaccurate statements keep reappearing in the literature. This contribution emphasizes that, due to the requirement of keeping short recurrences, the effects of rounding errors must always be taken into consideration in practical CG computations.
Acknowledgement: This work has been supported by the ERC-CZ project LL1202,
by the GACR grant 201/09/0917 and by the GAUK grant 695612.
References
[1] O. Axelsson: A class of iterative methods for finite element equations. Comput.
Methods Appl. Mech. Engrg., 9, 2, pp. 123–127, 1976.
[2] T. Gergelits, Z. Strakoš: Composite convergence bounds based on Chebyshev
polynomials and finite precision conjugate gradient computations. Accepted for publication in Numerical Algorithms, April 2013.
129
[3] A. Greenbaum: Behaviour of slightly perturbed Lanczos and conjugate-gradient recurrences. Linear Algebra Appl., 113, pp. 7–63, 1989.
[4] M. R. Hestenes, E. Stiefel: Methods of conjugate gradients for solving linear
systems. J. Research Nat. Bur. Standards, 49, pp. 409–436, 1952.
[5] A. Jennings: Influence of the eigenvalue spectrum on the convergence rate of
the conjugate gradient method. J. Inst. Math. Appl., 20, 1, pp. 61–72, 1977.
[6] J. Liesen, Z. Strakoš: Krylov subspace methods: principles and analysis. Numerical Mathematics and Scientific Computation, Oxford University Press,
2012.
[7] Y. Notay: On the convergence rate of the conjugate gradients in presence of
rounding errors. Numer. Math., 65, 3, pp. 301–317, 1993.
Joint work with Zdenek Strakos.
130
Omar Ghattas
The University of Texas at Austin, US
Stochastic Newton MCMC Methods for Bayesian Inverse Problems, with Application to Ice Sheet Dynamics
Plenary Session: Thursday, 08:20 - 09:10, CO1
We address the problem of quantifying uncertainties in the solution of ill-posed
inverse problems governed by expensive forward models (e.g., PDEs) and characterized by high-dimensional parameter spaces (e.g., discretized heterogeneous
parameter fields). The problem is formulated in the framework of Bayesian inference, leading to a solution in the form of a posterior probability density. To
explore this posterior density, we propose several variants of so-called Stochastic
Newton Markov chain Monte Carlo (MCMC) methods, which employ, as MCMC
proposals, a local Gaussian approximation whose covariance is the inverse of a local Hessian of the negative log posterior, made tractable via randomized low rank
approximations and adjoint-based matrix-vector products. The stochastic Newton variants are applied to an inverse ice sheet flow problem governed by creeping,
viscous, incompressible, non-Newtonian flow. The inverse problem is to infer the
coefficient field of the basal boundary condition from surface velocity observations.
We assess the performance of the methods and interpret the resulting parameter
uncertainties with respect to the information content of both the prior and the
data.
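The proposal mechanism can be illustrated, in a heavily simplified scalar setting that is not the authors' implementation, by a Metropolis-Hastings step whose Gaussian proposal is centered at a Newton update and has the inverse Hessian of the negative log posterior as its variance; the toy posterior and the Hessian safeguard below are invented for this sketch:

import numpy as np

# Heavily simplified 1-d sketch, not the authors' code: a stochastic-Newton-type
# Metropolis-Hastings step with a Gaussian proposal centered at a Newton update
# and with variance equal to the inverse Hessian of the negative log posterior.

def neg_log_post(m):
    return 0.5 * m ** 4 - m ** 2 + 0.5 * m      # toy non-Gaussian posterior (invented)

def grad(m):
    return 2.0 * m ** 3 - 2.0 * m + 0.5

def hess(m):
    return max(6.0 * m ** 2 - 2.0, 1.0e-2)      # safeguarded to stay positive

def log_q(y, x):
    """Log density (up to a constant) of the proposal y ~ N(x - grad/hess, 1/hess) built at x."""
    H = hess(x)
    mu = x - grad(x) / H
    return 0.5 * np.log(H) - 0.5 * H * (y - mu) ** 2

rng = np.random.default_rng(0)
m, samples = 0.0, []
for _ in range(5000):
    H = hess(m)
    prop = m - grad(m) / H + rng.normal() / np.sqrt(H)        # local Gaussian (Newton) proposal
    log_alpha = (neg_log_post(m) - neg_log_post(prop)
                 + log_q(m, prop) - log_q(prop, m))           # MH correction for the asymmetric proposal
    if np.log(rng.random()) < log_alpha:
        m = prop
    samples.append(m)
print(np.mean(samples), np.std(samples))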
This is joint work with Tan Bui-Thanh, Carsten Burstedde, Tobin Isaac, James
Martin, Noemi Petra, and Georg Stadler.
131
Luc Giraud
Inria, FR
Recovery policies for Krylov solver resiliency
Minisymposium Session CTNL: Wednesday, 10:30 - 11:00, CO015
The advent of exascale machines will require the use of parallel resources at an
unprecedented scale with potentially billions of computing units leading to a high
rate of hardware faults. High Performance Computing applications that aim at
exploiting all these resources will thus need to be resilient, i.e., able to eventually compute a correct output in the presence of core faults. Contrary to checkpointing techniques or Algorithm Based Fault Tolerant (ABFT) mechanisms, strategies based on interpolation for recovering lost data do not require extra work or memory when no fault occurs. We apply this latter strategy to Krylov iterative solvers for systems of linear equations, which are often the most computationally intensive kernels in HPC simulation codes. We propose and discuss several variants able to possibly handle multiple simultaneous faults. We study the impact and the overhead of the recovery methods, the fault rate and the number of processors on the resilience of the most popular solvers, namely CG, GMRES and BiCGStab.
Joint work with E. Agullo, A. Guermouche, J. Roman, and M. Zounon.
132
Mohammad Golbabaee
CNRS, CEREMADE (Applied Math Research Centre), Universite Paris 9-Dauphine,
FR
Model Selection with Piecewise Regular Gauges
Minisymposium Session ACDA: Monday, 11:40 - 12:10, CO122
In this talk, we investigate in a unified way the structural properties of a large
class of convex regularizers for linear inverse problems. We consider regularizations
with convex positively 1-homogenous functionals (so-called gauges) which obey a
weak decomposability property. Weak decomposability promotes solutions of the
inverse problem conforming to some notion of simplicity/low complexity by living
on a low dimensional sub-space. This family of priors encompasses many special
instances routinely used in regularized inverse problems such as ℓ1, ℓ1-ℓ2 (group sparsity), the trace norm, or the ℓ∞ norm. The weak decomposability requirement
is flexible enough to cope with analysis-type priors that include a pre-composition
with a linear operator, such as for instance the total variation (TV). Weak decomposability is also stable under summation of regularizers, thus enabling to handle
mixed regularizations (e.g. Trace+l1l2).
We discuss the theoretical recovery performance of this class of regularizers. We
provide sufficient conditions that allow provable control of the deviation of
the recovered solution from the true underlying object, as a function of the noise
level. More precisely we show that the solution to the inverse problem is unique
and lives on the same low dimensional subspace as the true vector to recover, with
the proviso that the minimal signal to noise ratio is large enough. This extends
previous results well-known for the l1 norm, analysis l1 semi-norm, and the Trace
norm to the general class of weakly decomposable gauges.
Joint work with Samuel Vaiter, Jalal Fadili, and Gabriel Peyre.
133
Maria Gonzalez
Universidad de A Coruña, ES
A new a posteriori error estimator of low computational cost for an augmented
mixed FEM in linear elasticity
Contributed Session CT2.4: Tuesday, 14:30 - 15:00, CO015
We consider the augmented mixed finite element method introduced in [4, 5]
for the linear elasticity problem in the plane and extended in [6] to the three-dimensional case. When Dirichlet boundary conditions are prescribed, the corresponding Galerkin scheme is well-posed and free of locking for any choice of finite
element subspaces. This fact turns out to be the main advantage of this method.
The use of adaptive algorithms based on a posteriori error estimates guarantees
good convergence behavior of the finite element solution of a boundary value problem. Several a posteriori error estimators are already available in the literature
for the usual mixed finite element method in linear elasticity. Concerning the
a posteriori error analysis of the augmented scheme presented in [4], an a posteriori error estimator of residual type was introduced in [2] in the case of pure
homogeneous Dirichlet boundary conditions. That analysis was extended recently
to the cases of pure non-homogeneous Dirichlet boundary conditions and mixed
boundary conditions with non-homogeneous Neumann data; cf. [3]. The a posteriori error estimators derived in [2] and [3] are both reliable and efficient, but
involve the computation of eleven residuals per element in the case of homogeneous
Dirichlet boundary conditions, and thirteen residuals per element in the case of
non-homogeneous Dirichlet boundary conditions, including in both cases normal
and tangential jumps.
In this work, we derive a new a posteriori error estimator for the augmented dual-mixed method proposed in [4]-[6] in the case of Dirichlet boundary conditions. The analysis is based on the use of a projection of the error and allows us to derive an
a posteriori error estimator that only requires the computation of four residuals
per element in the case of homogeneous boundary conditions, and six residuals
per element in the case of non-homogeneous boundary conditions. In both cases,
the derived a posteriori error indicators do not require the computation of normal
nor tangential jumps across the edges or faces of the mesh, which simplifies the
numerical implementation, especially in the 3D case. Besides, we prove that the
new a posteriori error estimator is both reliable and locally efficient in the case of
homogeneous Dirichlet boundary conditions. When non-homogeneous boundary
conditions are imposed, it is reliable and locally efficient only in those elements
that do not touch the boundary. Finally, we provide numerical experiments that
illustrate the performance of the corresponding adaptive algorithms and support
the theoretical results.
References
[1] T.P. Barrios and G.N. Gatica. An augmented mixed finite element method
with Lagrange multipliers: A priori and a posteriori error analyses. J. Comput. Appl. Math. 200, 653–676 (2007).
[2] T.P. Barrios, G.N. Gatica, M. González and N. Heuer. A residual based a
posteriori error estimator for an augmented mixed finite element method in
linear elasticity. M2AN Math. Model. Numer. Anal. 40, 843–869 (2006).
134
[3] T.P. Barrios, E.M. Behrens and M. González. A posteriori error analysis of
an augmented mixed formulation in linear elasticity with mixed and Dirichlet
boundary conditions. Comput. Methods Appl. Mech. Engrg. 200, 101-113
(2011).
[4] G.N. Gatica. Analysis of a new augmented mixed finite element method for
linear elasticity allowing RT0 -P1 -P0 approximations. M2AN Math. Model.
Numer. Anal. 40, 1–28 (2006).
[5] G.N. Gatica. An augmented mixed finite element method for linear elasticity with non-homogeneous Dirichlet conditions. Electronic Transactions on
Numerical Analysis, vol. 26, pp. 421-438, (2007).
[6] G.N. Gatica, A. Márquez and S. Meddahi. An augmented mixed finite element
method for 3D linear elasticity problems. J. Comput. Appl. Math. 231, 2,
526–540 (2009).
Joint work with T.P. Barrios, and E.M. Behrens.
135
Simsek Gorkem
Eindhoven University of Technology, NL
Error Estimation for the Convective Cahn–Hilliard Equation
Contributed Session CT2.4: Tuesday, 15:00 - 15:30, CO015
The Cahn–Hilliard phase-field (or diffuse-interface) model has a wide range of applications where the interest is the modelling of phase segregation and evolution
of multiphase flow systems. In order to capture the physics of these systems,
diffuse-interface models presume a nonzero interface thickness between immiscible
constituents, see [1]. The multiscale nature inherent in these models (interface
thickness and domain size of interest) urges the use of space-adaptivity in discretization. In this contribution we consider the a-posteriori error analysis of
the convective Cahn–Hilliard [4] model for varying Péclet number and interface-thickness (diffusivity) parameter. The adaptive discretization strategy uses mixed
finite elements, a stable time-stepping algorithm and residual-based a-posteriori
error estimation [2, 5].
Let Ω ⊂ Rd be a bounded domain with d = 1, 2, 3 and ∂Ω be the boundary which
has an outward unit normal n. The convective Cahn-Hilliard equation can be
written as follows:
Find the real-valued functions (c, µ) : Ω × [0, T] → R for T > 0 such that
∂_t c − (1/Pe) Δµ + ∇·(u c) = 0        in Ω_T := Ω × (0, T],
µ = φ'(c) − ε^2 Δc                      in Ω_T,
c(·, 0) = c_0                           in Ω,
∂_n c = ∂_n µ = 0                       on ∂Ω_T := ∂Ω × (0, T],
where ∂_t(·) = ∂(·)/∂t, ∂_n(·) = n·∇(·) is the normal derivative, φ is the real-valued free energy function, u is a given function such that ∇·u = 0 in Ω and u·n = 0 on ∂Ω, Pe is the Péclet number and ε is the interface thickness.
The nonlinear energy function φ(c) is of double-well form and we consider the following C^2-continuous function:
φ(c) :=  (c + 1)^2           for c < −1,
         (1/4)(c^2 − 1)^2    for c ∈ [−1, 1],
         (c − 1)^2           for c > 1.
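A small sketch (illustrative only; names are not from the paper) of this C^2-continuous double-well energy and of its derivative, which enters the weak form below through φ'(c):

import numpy as np

# Illustrative sketch of the C^2-continuous double-well energy phi(c) defined above
# and its derivative phi'(c). Values, first and second derivatives of the three
# branches match at c = -1 and c = 1, so phi is C^2-continuous.

def phi(c):
    c = np.asarray(c, dtype=float)
    return np.where(c < -1.0, (c + 1.0) ** 2,
           np.where(c > 1.0, (c - 1.0) ** 2, 0.25 * (c ** 2 - 1.0) ** 2))

def dphi(c):
    c = np.asarray(c, dtype=float)
    return np.where(c < -1.0, 2.0 * (c + 1.0),
           np.where(c > 1.0, 2.0 * (c - 1.0), c * (c ** 2 - 1.0)))

print(phi([-1.5, 0.0, 1.0]), dphi([-1.0, 0.5, 2.0]))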
In order to obtain the weak formulation, we consider the following function space and the corresponding norm as a suitable space for µ:
V := L^2(0, T; H^1(Ω)),    ‖v‖^2_V := ∫_0^T ‖v(t)‖^2_{H^1(Ω)} dt,
and the space suitable for the phase variable c is W := {v ∈ V : v_t ∈ V'}, where V' := L^2(0, T; [H^1(Ω)]') is the dual space of V, with the norms
‖v‖^2_W := ‖v‖^2_V + ‖v_t‖^2_{V'}   and   ‖v_t‖^2_{V'} := ∫_0^T ‖v_t(t)‖^2_{[H^1(Ω)]'} dt.
Then the weak form of the problem becomes: Find (c, µ) ∈ W_{c_0} × V:
136
⟨c_t, w⟩ + (u·∇c, w) + (1/Pe)(∇µ, ∇w) = 0        ∀ w ∈ H^1(Ω),
(µ, v) − (φ'(c), v) − ε^2 (∇c, ∇v) = 0           ∀ v ∈ H^1(Ω),
for t ∈ [0, T], where W_{c_0} is the subspace of W whose trace at t = 0 coincides with c_0.
To derive an a-posteriori error representation, we will employ the mean-value-linearized adjoint problem. The dual problem can be defined in terms of dual variables (p, χ), where the dual variable p is a function in the space
W^q̄ := {v ∈ W : v(T) = q̄}.
Then the dual problem can be written as follows:
Find (p, χ) ∈ W^q̄ × V:
−∂_t p + u·∇p + ε^2 Δχ − φ'(c, ĉ) χ = q_1      in Ω × [0, T),
χ − (1/Pe) Δp = q_2                             in Ω × [0, T),
p = q̄                                           on Ω × {t = T},
∂_n p = ∂_n χ = 0                                on ∂Ω × [0, T],
where the nonlinear function φ'(c, ĉ) is the mean-value-linearized function
φ'(c, ĉ) = ∫_0^1 φ''(s c + (1 − s) ĉ) ds.
This analysis for the convective model forms a basic step in our research and will
be helpful for the coupled Cahn–Hilliard/Navier–Stokes system [3] which is the
desired model for future research.
References
[1] Anderson, D.M., McFadden, G.B. and Wheeler, A.A. Diffuse-Interface Methods in Fluid Mechanics. Annu. Rev. Fluid Mech. 30:139–65, 1998.
[2] Bartels, S., Müller, R. A-posteriori error controlled local resolution of evolving interfaces for generalized Cahn–Hilliard equations. Interfaces and Free Boundaries, 12:45–73, 2010.
[3] Boyer, F., Lapuerta, C., Minjeaud, S., Piar, B. and Quintard, M. Cahn–Hilliard Navier–Stokes Model for the Simulation of Three-Phase Flows. Transport in Porous Media, 82:463–483, 2010.
[4] Kay, D., Styles, V. and Süli, E. Discontinuous Galerkin Finite Element Approximation of the Cahn–Hilliard Equation with Convection. SIAM J. Numer. Anal., 47:2660–2685, 2009.
[5] Van der Zee, K. G., Oden, J. T., Prudhomme, S. and Hawkins-Daarud, A. Goal-oriented error estimation for Cahn–Hilliard models of binary phase transition. Numer. Methods Partial Differ. Equations, 27:160–196, 2011.
Joint work with Kris G. van der Zee, and E. Harald van Brummelen.
137
Alexandre Grandchamp
LCVMM-EPFL, CH
Multi-scale DNA Modelling and Birod Mechanics
Contributed Session CT1.4: Monday, 17:00 - 17:30, CO015
A standard description of DNA fragments is provided by the Kratky-Porod model
of bending and twisting, which corresponds in continuum mechanics language to
an inextensible and unshearable elastic rod [1,2]. Classical rod theory, which is
useful in applications over a wide range of length scales, e.g. from polymers to
plant tendrils and wire ropes, allows one to describe more complex mechanical
behaviours involving shear and extension which are also pertinent to DNA [3,4].
However, at short scales, strand separation is central to the function of DNA, which
requires a continuum mechanics theory of interacting double-stranded filaments
called birods [5]. We show that birod equilibrium configurations are solutions of
non-canonical Hamiltonian evolution in arc-length. For multi-scale DNA modeling we parametrize the birod Hamiltonian in a sequence-dependent way starting
from coarse-grained rigid-base models which are in turn parameterized by finer-grain molecular dynamics simulations [6]. Finally, we show how birod two-point
boundary value problems can be solved with parameter continuation in order to investigate features of DNA equilibrium probability distributions that can be probed
experimentally.
As time allows, we will present the analogous non-canonical Hamiltonian system
of N coupled Cosserat rods, as arises for example in collagen filaments (N=3) or
in bacterial flagellum (N=20).
[1] C. J. Benham, S. P. Mielke, DNA Mechanics, Annu. Rev. Biomed. Eng. 7:21–53, 2005.
[2] M. Doi, S. F. Edwards, The Theory of Polymer Dynamics, Clarendon Press, Oxford, 1986.
[3] S. S. Antman, Nonlinear Problems of Elasticity, Springer-Verlag, New York, 1995.
[4] D. J. Dichmann, Y. Li and J. H. Maddocks, Hamiltonian Formulations and Symmetry in Rod Mechanics, Math. Appr. to Biomol. Struct. and Dyn., 82 (1996), Springer, New York.
[5] M. Moakher, J. H. Maddocks, A Double-Strand Elastic Rod Theory, Arch. Rational Mech. Anal. 177 (2005), 53–91.
[6] O. Gonzalez, D. Petkeviciute, J. H. Maddocks, A Sequence-Dependent Rigid Base Model of DNA, J. Chem. Phys. 138, 055102 (2013).
Joint work with Prof. J. H. Maddocks and Jarosław Głowacki.
138
Gwenol Grandperrin
EPFL - SB - MATHICSE - CMCS, CH
Multiphysics Preconditioners for Fluid–Structure Interaction Problems
Minisymposium Session PSPP: Thursday, 15:00 - 15:30, CO3
We are interested in preconditioning problems arising in blood–flow simulations.
These are characterized by Fluid–Structure Interaction (FSI) in three dimensional
geometries. The nonlinearity of the equations is solved by using the Newton
method and the Jacobian system by preconditioned GMRES iterations. The preconditioner is based on an inexact factorization derived from a block Gauss-Seidel
method. Each factor represents a specific physics: the fluid, the structure, and
the harmonic extension. This allows for the selection of physics–specific preconditioners. For example, we take advantage of approximate versions of state of
the art preconditioners for the fluid part of the FSI model (modeled with the
Navier–Stokes equations), namely the Pressure Convection–Diffusion (PCD) preconditioner, and SIMPLE.
We focus on the important factors for measuring the parallel performance of a preconditioner: the independence of the number of iterations, in terms of CPU time (scalability of the preconditioner), on the mesh size (optimality), and on the physical parameters (robustness), as well as the strong and weak scalability. We use
these metrics to demonstrate the efficiency of our preconditioners in typical situations for blood–flow simulations. All the computations are carried out using LifeV
(http://www.lifev.org), an open source finite element library based on Trilinos.
Joint work with Dr. Paolo Crosetto, Dr. Simone Deparis, and Prof. Alfio Quarteroni.
139
Isabelle Greff
Laboratoire de Mathématiques et de leurs Applications - Pau, FR
Conservation of Lagrangian and Hamiltonian structure for discrete schemes
Contributed Session CT1.4: Monday, 18:00 - 18:30, CO015
Many problems arising in various fields (such as physics, mechanics, fluid mechanics or finance) are described using partial differential equations (PDEs). Although
explicit solutions are not available in general, important classes of PDEs do present
strong structural properties: classical examples are symmetry properties, maximum principle or conservation properties. It is quite essential for the numerical
methods to provide a translation of these structural properties from the continuous
level to the discrete level so enforcing the numerical solutions to obey qualitative
behaviours in agreement with the underlying physics of the problem.
Two fundamental notions arising in classical mechanics are Lagrangian and Hamiltonian structures. Lagrangian systems are made of one functional, called the Lagrangian functional, and a variational principle called the least action principle.
From the least action principle is derived a second order differential equation called
the Euler-Lagrange equation, see e.g. [1]. The Lagrangian structure is much more
fundamental than its associated Euler-Lagrange equation: it contains information
that the Euler-Lagrange equation does not. A range of numerical methods forget
about the Lagrangian to focus on the Euler-Lagrange equation itself.
Let us consider the following question: consider a PDE deriving from a Lagrangian/Hamiltonian and a least action principle. When discretising this PDE,
how is the attached Lagrangian/Hamiltonian structure embedded at the discrete level? More precisely, we ask whether the discretised PDE can be seen as deriving
from a discrete least action principle associated with a discrete Lagrangian/Hamiltonian
structure. Basically, in case the Lagrangian structure is embedded at the discrete
level, then the variational property of the original equation (at the continuous
level) may be preserved by the discrete problem. Our purpose is to study the conservation of variational properties for a given problem when discretising it. This
can be seen as an extension to PDEs of the works on variational integrators (as
[2, 3]). Precisely we are interested with Lagrangian or Hamiltonian structures and
thus with variational problems attached to a least action principle. Considering
a partial differential equation (PDE) deriving from such a variational principle, a
natural question is to know whether this structure at the continuous level is preserved at the discrete level when discretising the PDE. To address this question a
concept of coherence is introduced. Both the differential equation (the PDE translating the least action principle) and the variational structure can be embedded at
the discrete level. This provides two discrete embeddings for the original problem.
In case these procedures finally provide the same discrete problem we will say that
the discretisation is coherent.
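As a toy illustration of a discrete least action principle in the ODE setting of the variational integrators cited in [2, 3] (this example is not taken from the talk), choosing the discrete Lagrangian L_d(q_k, q_{k+1}) = h [ (1/2)((q_{k+1} − q_k)/h)^2 − V(q_k) ] and imposing the discrete Euler-Lagrange equations D_2 L_d(q_{k−1}, q_k) + D_1 L_d(q_k, q_{k+1}) = 0 reproduces the Störmer-Verlet scheme:

import numpy as np

# Toy ODE illustration (not from the talk): the discrete Euler-Lagrange equations of
#   L_d(q_k, q_{k+1}) = h * [ 0.5 * ((q_{k+1} - q_k) / h)**2 - V(q_k) ]
# reduce to q_{k+1} = 2*q_k - q_{k-1} - h**2 * V'(q_k), i.e. the Stoermer-Verlet scheme.

def dV(q):
    return q                          # V'(q) for the harmonic oscillator V(q) = 0.5*q**2

h = 0.1
q = np.zeros(200)
q[0], q[1] = 1.0, 1.0                 # two starting values determine the discrete trajectory
for k in range(1, len(q) - 1):
    q[k + 1] = 2.0 * q[k] - q[k - 1] - h**2 * dV(q[k])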
Our purpose is illustrated with the Poisson problem. Coherence for discrete embeddings of Lagrangian structures is studied for various classical discretisations
(finite elements, finite differences and finite volumes). Hamiltonian structures are
shown to provide coherence between a discrete Hamiltonian structure and the discretisation of the mixed formulation of the PDE, both for mixed finite elements
and mimetic finite differences methods.
Nevertheless, many PDEs do not derive from a variational formulation in the classical sense. However, it is possible to bypass the obstruction to the existence of a Lagrangian formulation; as an example, we will consider the convection-diffusion
140
equation.
References
[1] Vladimir I. Arnold. Mathematical methods of classical mechanics, volume 60
of Graduate Texts in Mathematics. Springer-Verlag, New York, second edition, 1989. Translated from the Russian by K. Vogtmann and A. Weinstein.
[2] Ernst Hairer, Christian Lubich, and Gerhard Wanner. Geometric numerical
integration, volume 31 of Springer Series in Computational Mathematics.
Springer-Verlag, Berlin, second edition, 2006. Structure-preserving algorithms for ordinary differential equations.
[3] Jerrold E. Marsden and Matthew West. Discrete mechanics and variational
integrators. Acta Numer., 10:357–514, 2001.
141
Sven Gross
Chair of Numerical Mathematics, RWTH Aachen University, DE
XFEM for pressure and velocity singularities in 3D two-phase flows
Minisymposium Session FREE: Tuesday, 10:30 - 11:00, CO2
Two-phase systems play an important role in chemical engineering, for example
mass transport between droplets and a surrounding liquid in extraction columns
(liquid-liquid system) or heat transfer in falling films (liquid-gas system). The
velocity and pressure field are smooth in the interior of each phase, but undergo
certain singularities at the interface Γ between the phases. Surface tension induces
a pressure jump across Γ, and a large viscosity ratio leads to a kink of the velocity
field at Γ, especially for liquid-gas systems.
If interface capturing methods (like VOF or level set techniques) are applied, the
finite element grid is usually not aligned with the interface. Then for standard
√
FEM the approximation of functions with such singularities leads to poor O( h)
convergence. The application of suitable extended finite element methods (XFEM)
provides optimal approximation properties, essentially reducing spurious currents
at the interface. Figure 1 shows the pressure jump of a static bubble induced by
surface tension using a standard and an extended finite element space.
In this talk we consider 3D flow simulations of such two-phase systems on adaptive
multilevel tetrahedral grids. We present a Heaviside enrichment of the pressure
space [1] yielding second order convergence of the L2 (Ω) pressure error [4] and
a ridge enrichment [3] of the velocity space leading to first order convergence of
the H 1 (Ω1 ∪ Ω2 ) velocity error. At the end of the talk, we present application
examples of 3D droplet and falling film simulations obtained by our 3D two-phase
flow solver DROPS [2, 5].
References
[1] S. Groß and A. Reusken. An extended pressure finite element space for two-phase incompressible flows with surface tension. J. Comput. Phys., 224:40–58,
2007.
[2] S. Groß and A. Reusken. Numerical Methods for Two-phase Incompressible
Flows, volume 40 of Springer Series in Computational Mathematics. Springer,
2011.
[3] N. Moës, M. Cloirec, P. Cartraud, and J.-F. Remacle. A computational approach to handle complex microstructure geometries. Comput. Methods Appl.
Mech. Engrg., 192:3163–3177, 2003.
[4] A. Reusken. Analysis of an extended pressure finite element space for two-phase
incompressible flows. Comput. Visual. Sci., 11:293–305, 2008.
[5] DROPS package for simulation of two-phase flows.
http://www.igpm.rwth-aachen.de/DROPS/.
142
Figure 1: Pressure jump of a static bubble using piecewise linear FEM (left) and
suitable XFEM (right).
Joint work with Arnold Reusken.
143
Marcus Grote
University of Basel, CH
Runge-Kutta based explicit local time-stepping methods for wave propagation
Minisymposium Session TIME: Thursday, 11:00 - 11:30, CO015
The efficient simulation of time-dependent wave phenomena is of fundamental
importance in a wide variety of applications from acoustics, electromagnetics and
elasticity. For acoustic wave propagation, the scalar damped wave equation
u_tt + σ u_t − ∇·(c^2 ∇u) = f   in Ω × (0, T),   σ ≥ 0,   (1)
often serves as a model problem. Next, we discretize (1) in space by using standard continuous (H 1 -conforming) finite elements with mass lumping or a nodal
DG discretization, while leaving time continuous. Either discretization leads to a
system of ordinary differential equations with an essentially diagonal mass matrix,
which can be written as a first-order system
y'(t) = By(t) + F(t).   (2)
Locally refined meshes impose severe stability constraints on explicit time-stepping
methods for the numerical solution of (1). Local time-stepping (LTS) methods
overcome that bottleneck by using smaller time-steps precisely where the smallest
elements in the mesh are located. In [1, 2], explicit second-order LTS integrators
for transient wave motion were developed, which are based on the standard leapfrog scheme. In the absence of damping, i.e. σ = 0, these time-stepping schemes,
when combined with the modified equation approach, yield methods of arbitrarily
high (even) order. To achieve arbitrarily high accuracy in the presence of damping,
while remaining fully explicit, explicit LTS methods based on Adams-Bashforth
multi-step schemes were derived in [3].
We now propose explicit LTS methods of high accuracy based on both explicit
classical and low-storage Runge-Kutta schemes. In contrast to Adams-Bashforth
methods, Runge-Kutta methods are one-step methods; hence, they do not require a starting procedure and easily accommodate adaptive time-step selection.
Although Runge-Kutta methods do require several further evaluations per time step, that additional work is compensated by a less stringent stability restriction
on the time-step. The resulting LTS-RK schemes have the same high rate of
convergence as the original classical or low-storage RK methods.
To illustrate the versatility of our approach, we consider a computational rectangular domain of size [0, 2] × [0, 1] with two rectangular barriers inside forming a
narrow gap. We use continuous P 2 elements on a triangular mesh, which is highly
refined in the vicinity of the gap, as shown in Fig. 1 (left). For the time discretization, we choose the third-order low-storage Runge-Kutta based LTS. Since the
typical mesh size inside the refined region is about p = 7 times smaller than that
in the surrounding coarser region, we take p local time steps of size ∆τ = ∆t/p for
every time step ∆t. Thus, the numerical method is third-order accurate both in
space and time with respect to the L2 -norm. As shown in Fig. 2 (right), a vertical
Gaussian pulse initiates two plane waves propagating in opposite directions. The
right-moving wave propagates until it impinges on the obstacle. A fraction of the
wave then penetrates the gap and generates a circular wave.
144
References
[1] J. Diaz, M. Grote: Energy conserving explicit local time-stepping for second-order wave equations. SIAM Journal on Scientific Computing, 31 (2009),
1985–2014.
[2] M. Grote, T. Mitkova: Explicit local time-stepping for Maxwell’s equations.
Journal of Computational and Applied Mathematics, 234 (2010), 3283–3302.
[3] M. Grote, T. Mitkova: High-order explicit local time-stepping methods for
damped wave equations. Journal of Computational and Applied Mathematics,
239 (2013), pp. 270–289.
Figure 1: The initial triangular mesh.
Figure 2: The numerical solution at time t = 0.6
Joint work with Michaela Mehlin, and Teodora Mitkova.
145
Nicola Guglielmi
University of L’Aquila, IT
Computing the distance to defectivity
Minisymposium Session NEIG: Thursday, 15:30 - 16:00, CO2
Let A be a complex or real matrix with distinct eigenvalues. We are interested in computing the distance of A from the set of complex/real defective matrices.
In order to compute such a distance we propose a novel method which is based on
a gradient system in a low-rank manifold of matrices.
We provide the main theoretical results and several illustrative examples.
This is a joint work with Paolo Butta’ and Silvia Noschese (Roma) and Manuela
Manetta (L’Aquila).
146
Elie Hachem
MINES ParisTech, FR
Unified variational multiscale method for compressible and incompressible flows
using anisotropic adaptive mesh
Minisymposium Session ADFE: Tuesday, 10:30 - 11:00, CO016
We propose in this work a unified numerical method to address easily the coupling between compressible and incompressible multiphase flows. The same set of
primitive unknowns and equations is described everywhere in the flow. A level-set function provides the precise position of the interfaces and homogeneous
physical properties for each subdomain. The coupling between the pressure and the
flow velocity is ensured by introducing mass conservation terms in the momentum
and energy equations. The system is then solved using a new derived Variational
Multiscale stabilized finite element method [1]. Combined with anisotropic mesh
adaptation, we show that the proposed method provides an accurate modeling
framework for two-phase compressible isothermal flows and for fluid-structure interaction problems. Therefore, a new a posteriori estimate based on the length
distribution tensor approach and the associated edge based error analysis is presented to ensure an accurate capturing of the discontinuities at the interfaces [2].
It enables the calculation of a stretching factor providing a new edge length distribution, its associated tensor and the corresponding metric. The optimal stretching
factor field is obtained by solving an optimization problem under the constraint
of a fixed number of edges in the mesh. With such an advantage, we can now
provide a useful tool for doing accurate numerical simulations [3]. We assess the
behaviour and accuracy of the proposed formulation in the simulation of 2D and
3D time-dependent numerical examples.
[1] E. Hachem, B. Rivaux, T. Kloczko, H. Digonnet, T. Coupez, Stabilized finite
element method for incompressible flows with high Reynolds number, Journal of
Computational Physics, Vol. 229 (23), 8643-8665, 2010
[2] T. Coupez, G. Jannoun, N. Nassif, H.C. Nguyen, H. Digonnet, E. Hachem,
Adaptive Time-step with Anisotropic Meshing for Incompressible Flows, accepted
in Journal of Computational Physics, http://dx.doi.org/10.1016/j.jcp.2012.12.010,
2013
[3] E. Hachem, S. Feghali, R. Codina and T. Coupez, Anisotropic Adaptive Meshing and Monolithic Variational Multiscale Method for Fluid-Structure Interaction,
Computer and Structures, http://dx.doi.org/10.1016/j.compstruc.2012.12.004, 2013
Joint work with Thierry Coupez.
147
Martin Hadrava
Charles University in Prague, Faculty of Mathematics and Physics, CZ
Space-time Discontinuous Galerkin Method for the Problem of Linear Elasticity
Contributed Session CT2.5: Tuesday, 14:00 - 14:30, CO016
The paper will be concerned with the numerical solution of the problem of dynamic
linear elasticity by several time-discretization techniques based on the application
of the discontinuous Galerkin method (DGM) in space.
The DGM is a class of numerical methods for solving partial differential equations. It combines features of the finite volume method (the ability to capture
discontinuities) and the finite element method (arbitrary polynomial degree yielding accurate high-order schemes). The method was initially introduced in 1970s
by Reed and Hill as a technique to solve neutron transport problems [3]. During
the subsequent decades, the DGM has been applied to various problems arising
from physics, biology and economics and in particular became a popular method
for solving problems in computational fluid dynamics, electrodynamics and plasma
physics. A detailed survey about the evolution of the DGM can be found in, e.g.,
[1].
The first part of the paper will be devoted to the description of the problem and
the derivation of the discretization schemes under investigation. We will present
several discretizations based on finite-difference approximations of the time derivative terms and the discretization based on the space-time discontinuous Galerkin
method (STDGM). In contrast to the standard applications of the DGM to nonstationary problems, in the STDGM the main concept of the discontinuous Galerkin
method - discontinuous piecewise polynomial approximation - is applied both in
space and in time and hence a more robust and accurate scheme is obtained. The
application of the STDGM to solve the nonlinear convection-diffusion equation
was presented in [2]. A general presentation of the theory and applications of
DG methods can be found in the manuscript of Rivière [4], where the STDGM is
applied to the parabolic equation. A detailed description of the application of the
STDGM to the problem of dynamic linear elasticity will be given in the paper.
In the second part we will investigate the numerical properties of the presented
methods by comparing the results obtained by numerical experiments. We will
show the estimated rate of convergence of all methods under investigation, while
keeping focus on showing the difference between the accuracy of the STDGM and
the discretizations based on finite-difference approximations. We will demonstrate
the efficiency of the STDGM - although it consumes more computational time, it
rewards us with a significantly more accurate approximate solution.
References
[1] B. Cockburn, G. E. Karniadakis and C. W. Shu, The development of discontinuous Galerkin methods, (1999).
[2] M. Feistauer, V. Kučera, K. Najzar and J. Prokopová, Analysis of space-time
discontinuous Galerkin method for nonlinear convection-diffusion problems,
Numer. Math. 117 (2011), pp. 251–288.
[3] W. Reed and T. Hill, Triangular Mesh Methods for the Neutron Transport
Equation, Technical Report LA-UR-73-479, Los Alamos Scientific Laboratory
(1973).
148
[4] B. M. Rivière, Discontinuous Galerkin Methods for Solving Elliptic and
Parabolic Equations: Theory and Implementation, Frontiers in Applied Mathematics (2008).
Joint work with Miloslav Feistauer, Adam Kosík, and Jaromír Horáček.
149
Ernst Hairer
Université de Genève, CH
Long-term analysis of numerical and analytical oscillations
Plenary Session: Monday, 09:50 - 10:40, Rolex Learning Center Auditorium
Two completely different topics will be addressed in this talk:
- the numerical solution of Hamiltonian systems with linear multistep methods
over long times,
- adiabatic invariants in highly oscillatory Hamiltonian differential equations.
In both situations, high oscillations are present and their influence on the long-time dynamics of solutions is of interest to us. Whereas in the second situation
oscillatory solutions arise from the special form of the differential equation, they are
due to the multistep character and the presence of parasitic solution components
in the first situation.
We show that certain symmetric multistep methods, when applied to second order
Hamiltonian systems, behave very similarly to symplectic one-step methods (excellent long-time energy preservation, near-preservation of angular momentum, linear
error growth for nearly integrable systems). On the other hand, for multiscale systems where harmonic oscillators with several high frequencies are coupled to a
slow system, near-preservation of the oscillatory energy over long times is shown
without any non-resonance condition.
For the proof of these results the technique of modulated Fourier expansions is
used. The surprising fact is that the same ideas that permit one to prove the near-preservation of the oscillatory energy in multiscale Hamiltonian systems can also
be applied to get insight into the long-time behavior of numerical solutions obtained by symmetric linear multistep methods.
The presented results have been obtained in collaboration with Christian Lubich,
David Cohen, Ludwig Gauckler, and Paola Console.
References
[1] E. Hairer, Ch. Lubich: Long-time energy conservation of numerical methods
for oscillatory differential equations. SIAM J. Numer. Anal. 38 (2001),
414–441.
[2] E. Hairer, Ch. Lubich, G. Wanner: Geometric Numerical Integration. Structure-Preserving Algorithms for Ordinary Differential Equations. 2nd edition.
Springer Verlag, Berlin, Heidelberg, 2006.
[3] L. Gauckler, E. Hairer, Ch. Lubich: Energy separation in oscillatory Hamiltonian systems without any non-resonance condition. To appear in Comm.
Math. Phys. (2013).
[4] P. Console, E. Hairer, Ch. Lubich: Symmetric multistep methods for constrained Hamiltonian systems. To appear in Numerische Mathematik (2013).
150
Abdul-Lateef Haji-Ali
King Abdullah University of Science and Technology - KAUST, SA
Optimization of mesh hierarchies for Multilevel Monte Carlo
Contributed Session CT4.8: Friday, 08:50 - 09:20, CO123
We consider the Multilevel Monte Carlo (MLMC) method in applications involving differential equations with random data where the underlying approximation
method of individual samples is based on uniform spatial discretizations of arbitrary approximation order and cost. We perform a general optimization of the
parameters defining the MLMC hierarchy in such cases.
We recall the MLMC estimator A of the quantity of interest g,
A = (1/M_0) Σ_{m=1}^{M_0} g_0(·; ω_{0,m}) + Σ_{ℓ=1}^{L} (1/M_ℓ) Σ_{m=1}^{M_ℓ} ( g_ℓ(·; ω_{ℓ,m}) − g_{ℓ−1}(·; ω_{ℓ,m}) ),
where g_ℓ(·; ω_{ℓ,m}) is a sample realization of g using mesh size h_ℓ in the underlying discretization method. We assume the strong and weak errors of g_ℓ are well approximated by
E[g − g_ℓ] ≃ h_ℓ^{q_1} Q_W,      Var[g_ℓ − g_{ℓ−1}] ≃ h_{ℓ−1}^{q_2} Q_S,
for problem- and method-specific constants Q_W, Q_S, q_1, q_2. For a given error tolerance, TOL, and confidence parameter, C_α, we solve the optimization problem
minimize    Work = Σ_{ℓ=0}^{L} M_ℓ h_ℓ^{−dγ},
subject to  h_L^{q_1} Q_W ≤ (1 − θ) TOL                                       [bias],
            Var(g_0)/M_0 + Σ_{ℓ=1}^{L} h_{ℓ−1}^{q_2} Q_S / M_ℓ ≤ (θ TOL / C_α)^2   [statistical error],
where γ is the order of solver cost, typically 1 ≤ γ ≤ 3 for linear systems, and d
is the spatial dimension of the discretization domain. The optimization parameters are the mesh sizes {h_ℓ}_{ℓ=0}^{L}, the numbers of samples {M_ℓ}_{ℓ=0}^{L}, the number of levels L + 1, and the optimal splitting between bias and statistical errors modeled by θ. Note that the numbers of samples {M_ℓ}_{ℓ=0}^{L} are constrained to integers. Moreover, {h_ℓ}_{ℓ=0}^{L} are constrained to discrete sets by possible uniform discretizations. However, in order to reduce the complexity of the optimization we do not
initially enforce these constraints on the parameters. We then use the optimal
unconstrained parameters as an initial guess to do a limited brute force search of
the best hierarchy that takes these constraints into account. Similarly, we do a
brute force search for the best integer L that minimizes the work.
The resulting hierarchies are different from typical MLMC hierarchies in that they
do not have a fixed ratio between successive mesh sizes. Moreover, our analysis
shows that θ, which determines the best splitting of the tolerance between bias and statistical errors, can be drastically different from the value 1/2 traditionally used in MLMC.
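For orientation, the plain (unoptimized) MLMC estimator A above can be sketched in a few lines of Python. The routines sample_level and sample_pair are hypothetical placeholders for the user's level-wise samplers, with sample_pair returning the coupled pair (g_ℓ, g_{ℓ−1}) computed from the same random event; this sketch does not include the hierarchy optimization discussed in the talk.

```python
import numpy as np

def mlmc_estimate(sample_level, sample_pair, M, rng):
    """Plain (unoptimized) MLMC estimator A of E[g]; a minimal sketch.

    sample_level(0, rng) -- hypothetical routine: one sample of g_0 on the coarsest mesh.
    sample_pair(l, rng)  -- hypothetical routine: one coupled pair (g_l, g_{l-1}) computed
                            from the same random event omega_{l,m}.
    M                    -- list of sample counts [M_0, ..., M_L].
    """
    # Level 0: plain Monte Carlo average of g_0.
    A = np.mean([sample_level(0, rng) for _ in range(M[0])])
    # Levels 1..L: averages of the coupled correction terms g_l - g_{l-1}.
    for l in range(1, len(M)):
        diffs = []
        for _ in range(M[l]):
            g_fine, g_coarse = sample_pair(l, rng)
            diffs.append(g_fine - g_coarse)
        A += np.mean(diffs)
    return A
```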
We will also present numerical results which highlight the functionality of the optimization by applying our method to an elliptic PDE with stochastic coefficients.
We will emphasize how the optimal hierarchies change from the standard MLMC
method as you include the effects of real problem parameters, such as the solver cost exponent.
151
Joint work with Nathan Collier, Abdul-Lateef Haji-Ali, Fabio Nobile, Erik von
Schwerin, and Raul Tempone.
152
Helmut Harbrecht
University of Basel, CH
On multilevel quadrature for elliptic stochastic partial differential equations
Minisymposium Session UQPD: Wednesday, 12:00 - 12:30, CO1
In this talk we show that the multilevel Monte Carlo method for elliptic stochastic
partial differential equations can be interpreted as a sparse grid approximation.
By using this interpretation, the method can straightforwardly be generalized to
any given quadrature rule for high dimensional integrals like the quasi Monte Carlo
method or the polynomial chaos approach. Besides the multilevel quadrature for
approximating the solution’s expectation, a simple and efficient modification of
the approach is proposed to compute the stochastic solution’s variance. Numerical results are provided to demonstrate and quantify the approach.
Joint work with Michael Peters, and Markus Siebenmorgen.
153
Markus Hegland
The Australian National University, AU
Solving the chemical master equations for biological signalling cascades using tensor factorisation
Minisymposium Session LRTT: Wednesday, 10:30 - 11:00, CO3
Signalling cascades are essential parts of biological organisms. In particular they
can amplify and also denoise signals. As they are based on chemical reactions with
relatively few copy numbers, they exhibit noise and their probability functions are described as solutions of a chemical master equation.
Cascades have a particularly simple structure. The matrix of their chemical master
equation is shown to have tensor train ranks of 3 for practically important cases.
For discrete time one can then establish a graphical model for the components
of the states. We discuss how to exploit this structure computationally and how
to use well-known algorithms from statistics for the determination of marginal
distributions.
We discuss various tensor factorisations for cascades and their application. In a practical example the Arnoldi method and the singular value decomposition are used to solve the stationary chemical master equation for simple cascades. Probability distributions for cascades with up to 10 stages have been computed. We discuss these solutions.
Joint work with Jochen Garcke.
154
Claus-Justus Heine
Universität Stuttgart, IANS, DE
Mean-Curvature Reconstruction with Linear Finite Elements
Contributed Session CT1.2: Monday, 18:00 - 18:30, CO2
We present a numerical method to construct an approximative curvature-vector
field from piece-wise linearly approximated co-ordinate data of a given n-dimensional hypersurface Γ ⊂ Rn+1 (n = 2, 3). The reconstruction uses an L2 -projection
with an additional Laplacian diffusion term and works with piece-wise linear Lagrangian finite elements. It can be shown that the curvature reconstruction converges with h2/3 in the L2 -sense where h denotes the discretisation parameter of
the underlying finite element mesh. The method can be applied to triangulated
surfaces as well as to discrete surfaces defined by level-sets of piece-wise linear
finite element functions.
155
Claus-Justus Heine
Universität Stuttgart, IANS, DE
Mean-Curvature Reconstruction with Linear Finite Elements
Minisymposium Session GEOP: Tuesday, 10:30 - 11:00, CO122
We present a numerical method to construct an approximative curvature-vector
field from piece-wise linearly approximated co-ordinate data of a given n-dimensional hypersurface Γ ⊂ Rn+1 (n = 2, 3). The reconstruction uses an L2 -projection
with an additional Laplacian diffusion term and works with piece-wise linear Lagrangian finite elements. It can be shown that the curvature reconstruction converges with h2/3 in the L2 -sense where h denotes the discretisation parameter of
the underlying finite element mesh. The method can be applied to triangulated
surfaces as well as to discrete surfaces defined by level-sets of piece-wise linear
finite element functions.
156
Patrick Henning
Uppsala University, SE
Error control for a Multiscale Finite Element Method
Minisymposium Session ADFE: Tuesday, 11:00 - 11:30, CO016
In this presentation, we introduce an adaptive mesh refinement strategy for the
multiscale finite element method (MsFEM) for solving elliptic problems with rapidly
oscillating coefficients. Starting from a general version of the MsFEM with oversampling, we present an a posteriori estimate for the H 1 -error between the exact
solution of the problem and a corresponding MsFEM approximation. Our estimate holds without any assumptions on scale separation or on the type of the
heterogeneity. The estimator splits into different contributions which account for
the coarse grid error, the fine grid error and even the oversampling error. Based
on the error estimate we construct an adaptive algorithm that is validated in numerical experiments.
Joint work with Mario Ohlberger, and Ben Schweizer.
157
Henar Herrero
Universidad de Castilla-La Mancha, ES
The reduced basis approximation applied to a Rayleigh-Bénard problem
Contributed Session CT3.5: Thursday, 17:30 - 18:00, CO016
The reduced basis approximation is a discretization method that can be implemented for solving parameter-dependent problems P(φ(µ), µ) = 0 with parameter
µ in cases of many queries. This method consists of approximating the solution
φ(µ) of P(φ(µ), µ) = 0 by a linear combination of appropriate, previously computed solutions φ(µ_i), i = 1, 2, ..., N, where the µ_i are parameters chosen by
an iterative procedure using a strong greedy algorithm [2, 4].
In this work [1] it is applied to a two dimensional Rayleigh-Bénard problem with
constant viscosity that depends on the Rayleigh number R, P (φ(R), R) = ~0, as
follows:
\[
\begin{aligned}
0 &= \nabla\cdot\vec v && \text{in } \Omega, &\quad (1)\\
\frac{1}{Pr}\left(\partial_t \vec v + \vec v\cdot\nabla\vec v\right) &= R\,\theta\,\vec e_3 - \nabla P + \nu\,\Delta\vec v && \text{in } \Omega, &\quad (2)\\
\partial_t\theta + \vec v\cdot\nabla\theta &= \Delta\theta && \text{in } \Omega, &\quad (3)
\end{aligned}
\]
with boundary conditions defined in Ref. [1], where Ω = [0, Γ] × [0, 1], φ = (v⃗, θ, P), v⃗ is the velocity vector field, θ the temperature field, P the pressure, e⃗₃ the unit vector in the vertical direction, and Pr the Prandtl number.
The classical approximation scheme used here to solve the stationary version of
equations (1-3) with the corresponding boundary conditions for different values
of the Rayleigh number R is a Legendre spectral collocation method. A linear
stability analysis of these solutions has been performed in [3]. The value of the aspect ratio Γ = 3.495 has been chosen and R varies in two intervals: [1150, 3000], associated with the upper branch of stationary solutions after the primary pitchfork bifurcation, and [1560, 3000], associated with the upper branch of stationary solutions after the secondary pitchfork bifurcation. We apply the reduced basis
method within this framework to compute the stable solutions corresponding to
many values of R on these two branches.
For each branch, the reduced basis has been obtained by a greedy approach on the corresponding branch. In Figure 1 the projection error of the stationary solutions on the space generated by the reduced basis is shown. From this figure we see that the maximum error is bounded by O(10⁻⁶), so the projection on the reduced basis space is a good approximation to a stationary solution.
The problem is numerically solved by the Galerkin variational formulation using the Legendre Gauss-Lobatto quadrature formulas together with the reduced basis {φ(R_i), i = 1, 2, ..., N}, such that φ(R) ≈ Σ_{i=1}^{N} λ_i φ(R_i). The difference between the solution obtained with the reduced basis method and the solution obtained with Legendre collocation for R ∈ [1150, 3000] on the upper branch is O(10⁻⁴). A rather simple post-processing allows one to recover the same accuracy as the projection, O(10⁻⁶), from the reduced basis Galerkin approximation. The reduced basis method makes it possible to speed up the computation of these solutions at any value of the Rayleigh number chosen in a fixed interval associated with a single bifurcation branch while maintaining accuracy.
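The central step of approximating φ(R) by a linear combination of precomputed snapshots can be conveyed by the following generic Python sketch; it illustrates only the projection/least-squares step, not the Legendre Gauss-Lobatto Galerkin formulation used in the work, and the snapshot matrix Phi is a hypothetical discrete representation of the snapshots.

```python
import numpy as np

def rb_coefficients(Phi, phi_new):
    """Least-squares coefficients lambda_i such that phi_new ~ sum_i lambda_i * Phi[:, i].

    Phi     -- n x N snapshot matrix whose columns are precomputed solutions phi(R_i)
               in some generic discrete representation (hypothetical input).
    phi_new -- vector to be approximated, e.g. a solution at a new Rayleigh number.
    """
    lam, *_ = np.linalg.lstsq(Phi, phi_new, rcond=None)
    return lam

# Tiny synthetic usage example (random data standing in for collocation snapshots):
rng = np.random.default_rng(0)
Phi = rng.standard_normal((200, 8))        # 8 snapshots of length 200
phi_new = Phi @ rng.standard_normal(8)     # lies in the snapshot space
lam = rb_coefficients(Phi, phi_new)
rel_err = np.linalg.norm(Phi @ lam - phi_new) / np.linalg.norm(phi_new)
```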
158
References
[1] H. Herrero, Y. Maday and F. Pla. RB (Reduced basis) for RB (Rayleigh-Bénard).
Computer Methods in Applied Mechanics and Engineering (to appear) (2013).
[2] Y. Maday, A.T. Patera and G. Turinici. Convergence theory for reduced-basis
approximations of single-parameter elliptic partial differential equations, J. Sci.
Comput. 17, no. 1-4, 437-446 (2002).
[3] F. Pla, A.M. Mancho and H. Herrero. Bifurcation phenomena in a convection problem with temperature dependent viscosity at low aspect ratio. Physica D 238, 572-580 (2009).
[4] C. Prud’homme, D.V. Rovas, K. Veroy, L. Machiels, Y. Maday, A.T. Patera
and G. Turinici. Reliable real-time solution of parametrized partial differential
equations: Reduced-basis output bound methods. Journal of Fluids Engineering, 124 (1), 70-80 (2002).
Figure 1: Errors of the projections of the global stationary solutions on the reduced
basis. Blue lines correspond to the branches of the primary bifurcation and red
lines to the branches of the secondary bifurcation.
Joint work with Yvon Maday, and Francisco Pla.
159
Martin Hess
Max Planck Institute Magdeburg, DE
Reduced Basis Methods for Maxwell’s Equations with Stochastic Coefficients
Contributed Session CT3.5: Thursday, 18:00 - 18:30, CO016
Parametrized partial differential equations (PDEs) in many-query and real-time
settings require the solution of high-dimensional systems for a large number of
parameter configurations. We discuss the Reduced Basis Method (RBM) for time-harmonic Maxwell's equations under deterministic and stochastic parameters.
We consider the time-harmonic Maxwell's equation
\[
\nabla \times \mu^{-1} \nabla \times E + i\omega\sigma E - \omega^2 \varepsilon E = i\omega J \quad \text{in } \Omega, \qquad (1)
\]
in the electric field E with permeability µ, conductivity σ and permittivity ε. The considered frequency is ω, J is the source current density and i the imaginary unit. The equation (1) is posed on the computational domain Ω.
To study material uncertainties, we introduce a stochastic coefficient ε(x; ω) with
x ∈ Ω and ω the stochastic inputs.
The RBM model reduction significantly reduces the system size while preserving a
certified accuracy by employing rigorous error estimators, see [1]. The uncertainty
in the coefficients is represented by the Karhunen-Loève expansion
\[
\varepsilon(x;\omega) = \sum_{k=1}^{\infty} \sqrt{\lambda_k}\; \xi_k(\omega)\,\varepsilon_k(x), \qquad (2)
\]
with λ_k and ε_k(x) the eigenvalues and eigenfunctions of the covariance operator and ξ_k(ω) uncorrelated random variables with zero mean and variance 1.
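A minimal Python sketch of drawing one realization of a truncated expansion (2) is given below; the discrete mean field, eigenvalues and eigenfunctions are hypothetical inputs, and the ξ_k are taken Gaussian purely for illustration (the expansion only requires uncorrelated variables with zero mean and unit variance).

```python
import numpy as np

def sample_kl(eps_mean, eigvals, eigfuns, rng, K=None):
    """Draw one realization of a truncated Karhunen-Loeve expansion (2).

    eps_mean -- mean field at the n grid points, shape (n,)          (hypothetical input)
    eigvals  -- eigenvalues lambda_k of the covariance operator, shape (K_max,)
    eigfuns  -- eigenfunctions eps_k at the grid points, shape (n, K_max)
    """
    K = len(eigvals) if K is None else K
    xi = rng.standard_normal(K)   # uncorrelated, zero mean, unit variance (Gaussian choice)
    return eps_mean + eigfuns[:, :K] @ (np.sqrt(eigvals[:K]) * xi)
```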
To quantify the statistical outputs like mean and variance for Maxwell’s equations
under stochastic uncertainties, we combine the techniques from [1] with sparse
grid approaches as presented in [3]. As an alternative, we review the method
introduced in [2] and show how to apply it to (1).
Numerical experiments are performed on large-scale 3D models of microwave semiconductor devices.
[1] B. Haasdonk, K. Urban, B. Wieland, Reduced Basis Methods for Parametrized Partial Differential Equations with Stochastic Influences using the Karhunen-Loève Expansion, SIAM Journal on Uncertainty Quantification (2013), accepted.
[2] P. Nair, A. Keane, Stochastic Reduced Basis Methods, American Institute
Aeronautics and Astronautics Journal (2002) 40, (8), 1653-1664.
[3] B. Peherstorfer, S. Zimmer, H.-J. Bungartz, Model Reduction with the Reduced Basis Method and Sparse Grids, in Sparse Grids and Applications, Volume 88 of Lecture Notes in Computational Science and Engineering, Springer, 2013.
Joint work with Peter Benner.
160
Jan Hesthaven
Brown University, USA
High-order accurate reduced basis multiscale finite element methods
Plenary Session: Thursday, 09:10 - 10:00, CO1
The development of numerical methods for problems with highly oscillating coefficients remains an active and important field of research. To overcome the
computational cost of resolving the fine scale(s), multiscale finite element methods
(MsFEM) have been proposed and developed by several authors - see [1] for an
overview and introduction. In such methods, accuracy is achieved by solving a local fine scale problem to build the multiscale finite element basis functions needed
to capture the small scale information of the leading order differential operator.
Several alternative approaches to this particular technique exist, e.g., the variational multiscale method [2] and the heterogeneous multiscale method (HMM) [3], although they all share some aspects.
In this presentation we focus on techniques most naturally formulated as multiscale finite element methods and develop a new multiscale finite element method for problems with highly oscillating coefficients. The method, discussed first in the context of elliptic problems, is inspired by the multiscale finite element method developed in [4]. However, rather than using a composition rule, a more explicit nonconforming multiscale finite element space is constructed. Accuracy is ensured by using a Petrov-Galerkin formulation and oversampling techniques to reduce the impact of the resonance error. We show that the method is natural for high-order finite element methods, used with advantage to solve the coarse-grained problem, and discuss optimal error estimates.
Following related past work [5,6,7], we consider the use of a reduced model to
accurately and efficiently represent the multiscale basis. For uniform rectangular
meshes, the local oscillating test functions are most naturally parametrized by the
centers of the elements. For triangular meshes, inspired by the idea that oversampled oscillating test functions yield a better approximation of the global map,
we propose to first build the reduced basis set on uniform rectangular elements
containing the original triangular elements and then restrict the oscillating test functions to the triangular elements. This approach allows for the development of efficient and accurate multiscale methods on general unstructured grids and can also be generalized to the case where the coefficients depend on other independent parameters.
Time permitting, we shall discuss the extension to include the development of efficient high-order accurate multiscale methods for wave problems, where the high-order accuracy of the coarse solver is of particular value. Throughout the presentation we shall illustrate the behavior and results with computational examples.
[1] Y. Efendiev and T. Y. Hou, Multiscale Finite Element Methods: Theory and
Applications, Springer, 2009.
[2] T. J. R. Hughes, G. R. Feijoo, L. Mazzei, and J.-B. Quincy, The variational multiscale method: a paradigm for computational mechanics, Computer Methods in Applied Mechanics and Engineering 166(1-2), 3–24, 1998.
[3] A. Abdulle, W. E, B. Engquist and E. Vanden-Eijnden, The heterogeneous
multiscale method, Acta Numerica, 2012, 1–87.
[4] G. Allaire and R. Brizzi, A multiscale finite element method for numerical
homogenization, SIAM MMS 4, (2005) 790-812.
161
[5] N.C. Nguyen, A multiscale reduced-basis method for parametrized elliptic partial differential equations with multiple scales. J. Comput. Phys., 227 (2007)
9807–9822.
[6] S. Boyaval, Reduced-Basis approach for homogenization beyond the periodic
setting, Multiscale Model. Simul. 7 (1) (2008) 466–494.
[7] A. Abdulle and Y. Bai, Reduced basis finite element heterogeneous multiscale method for high-order discretizations of elliptic homogenization problems, J. Comput. Physics, 231, 21, 2012, 7014–7036.
Joint work with S. Zhang, and X. Zhu.
162
Holger Heumann
Department of Mathematics, Rutgers University, New Brunswick, NJ, US
Stabilized Galerkin for Linear Advection of Differential Forms
Minisymposium Session MMHD: Thursday, 15:30 - 16:00, CO017
The spaces H(curl) and H(div) are the natural spaces of the various vector fields
in Maxwell’s equations and magnetohydrodynamics. In the language of exterior
calculus, the vector fields in these two spaces correspond either to 1-forms or 2-forms; that means that we differentiate between vector fields that have a well-defined action on lines (e.g. the electric field E) and vector fields with a well-defined action on surfaces (e.g. the magnetic induction B). The so-called Lie derivative is the natural linear advection operator for differential forms. It is the generalization of the directional derivative of scalar functions to differential forms and measures the rate of change of the action of a differential form on advected manifolds.
In this talk we will exploit such structural properties to formulate and analyze
stabilized Galerkin methods for linear advection problems of vector fields. We
will pay particular attention to stabilized Galerkin methods with H(curl)- and H(div)-conforming approximation spaces [1]. Vector fields of H(curl)- and H(div)-conforming approximation spaces are globally continuous only in certain components. Hence, stabilized Galerkin with these spaces is beyond the existing terminology of stabilized methods for globally continuous or discontinuous approximation spaces.
[1] H. Heumann and R. Hiptmair, Stabilized Galerkin methods for magnetic
advection, SAM-report, 2012-26, submitted to M2AN.
Joint work with Ralf Hiptmair.
163
Christian Himpe
WWU Muenster - Institute for Computational and Applied Mathematics, DE
Combined State and Parameter Reduction of Large-Scale Hierarchical Systems
Minisymposium Session ROMY: Thursday, 14:00 - 14:30, CO016
Hierarchical systems have widespread use in computer science and applied mathematics. One example is a hierarchical network distributing information from a single source. Such a system, with L levels and a maximum of M children per node, can be treated as an N × N linear system with N = (M^{L+1} − 1)/(M − 1). In large-scale settings, with many levels or many nodes per level, such that N ≫ 1, the issue of reducibility arises to cap computational cost. By treating this system as a linear control system with a single input and unit output of the leaf nodes, the proven model reduction concept of balanced truncation suggests itself. Here random but stable, rooted M-ary trees with parametrized edges are explored in terms of state reduction, parameter reduction and combined reduction by the use of empirical gramians from the emgr framework.
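To illustrate why reduction quickly becomes necessary, the following Python snippet evaluates N for a full M-ary tree; the closed form assumes a full tree, whereas the abstract considers random rooted trees with at most M children per node, so it is only indicative.

```python
def tree_size(M, L):
    # Unknowns of a full rooted M-ary tree with levels 0,...,L (indicative only;
    # the abstract considers random trees with at most M children per node).
    return (M**(L + 1) - 1) // (M - 1)

for L in (4, 8, 12):
    print(L, tree_size(3, L))   # N grows geometrically in L, motivating model reduction
```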
Joint work with Mario Ohlberger.
164
Michael Hintermueller
Humboldt-University of Berlin, DE
An adaptive finite element method for variational inequalities of second kind with
applications in L2-TV-based image denoising and Bingham fluids
Minisymposium Session ADFE: Wednesday, 12:00 - 12:30, CO016
Adaptive finite element methods for a class of variational inequalities of second
kind are studied. In particular problems related to the first order optimality system of a total variation regularization based variational model with L2-data-fitting
in image denoising (L2-TV problem) as well as to Bingham fluids are highlighted.
For a finite element discretization of the variational inequality problem, a residual-based a posteriori error estimator is derived and its reliability and (partial)
efficiency are established. The results are applied to solve the L2-TV problem
by means of the adaptive finite element method. The adaptive mesh refinement
relies on the newly derived a posteriori error estimator and, in the case of image processing, on an additional heuristic providing a local variance estimator to
cope with noisy data. The numerical solution of the discrete problem on each
level of refinement is obtained by a superlinearly convergent algorithm based on Fenchel duality and inexact semismooth Newton techniques, which is stable with respect to noise in the data. Numerical results justifying the advantage of adaptive finite element solutions are presented.
Joint work with M. M. Rincon-Camacho.
165
Michael Hintermueller
Humboldt-University of Berlin, DE
Optimal shape design subject to elliptic variational inequalities
Minisymposium Session GEOP: Tuesday, 11:30 - 12:00, CO122
The shape of the free boundary arising from the solution of a variational inequality is controlled by the shape of the domain where the variational inequality is
defined. Shape and topological sensitivity analysis is performed for the obstacle
problem and for a regularized version of its primal-dual formulation. The shape
derivative for the regularized problem can be defined and converges to the solution
of a linear problem. These results are applied to an inverse problem and to the
electrochemical machining problem.
Joint work with A. Laurain.
166
Marlis Hochbruck
Karlsruhe Institute of Technology, Germany
Error analysis of implicit Runge-Kutta methods for discontinuous Galerkin discretizations of linear Maxwell’s equations
Minisymposium Session TIME: Thursday, 10:30 - 11:00, CO015
In this talk we present an error analysis of implicit Runge-Kutta methods for linear Maxwell's equations. We start with the time discretization and consider Maxwell's equations as an abstract initial value problem
\[
u'(t) = Au(t) + f(t), \qquad u(0) = u_0,
\]
on a suitable Hilbert space H. Here, A is an unbounded operator with domain
D(A) ⊂ H. Since A is a skew-symmetric operator on its domain, we consider
Gauß collocation methods with constant time step size τ . These methods are
unconditionally stable and have nice geometric properties.
Our error analysis is based on the energy technique discussed in [1]. For s-stage Gauss
collocation methods we obtain an order reduction to order s + 1 instead of the
classical order 2s of Gauß collocation methods.
Next, we consider the full discretization error. We discretize Maxwell's equations in space using the discontinuous Galerkin finite element method. This yields the semidiscrete problem
\[
u_h'(t) = A_h u_h(t) + \pi_h f(t), \qquad u_h(0) = \pi_h u_0.
\]
Here h denotes the maximum diameter of the finite elements, Ah denotes the discrete
operator that approximates A, and πh denotes the L2 -projection onto the finite
element space. It is well known that the spatial error for general simplicial meshes
is of size O(hp+1/2 ), where p denotes the degree of the polynomials used in the
finite elements. As in the continuous case we apply Gauß collocation methods
to discretize in time. We can prove that the full discretization error is of size
O(hp+1/2 + τ s+1 ).
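As a minimal illustration of the time integrator analysed here, the following Python sketch advances an abstract ODE u'(t) = Au(t) + f(t) by one step of the one-stage Gauss collocation method (the implicit midpoint rule) for a dense matrix A; this is only a toy, not the DG/Maxwell setting of the talk.

```python
import numpy as np

def gauss1_step(A, f, u, t, tau):
    """One step of the one-stage Gauss collocation method (implicit midpoint rule)
    for u'(t) = A u(t) + f(t), with a dense matrix A (a toy, not the DG setting)."""
    n = len(u)
    rhs = A @ u + f(t + tau / 2.0)
    k = np.linalg.solve(np.eye(n) - (tau / 2.0) * A, rhs)   # stage derivative at t + tau/2
    return u + tau * k
```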
Finally, we illustrate our theoretical results by numerical experiments.
References
[1] C. Lubich, A. Ostermann, Runge-Kutta approximation of quasi-linear parabolic
equations, Mathematics of Computation, Vol. 64, No. 210, pp 601-628, 1995.
Joint work with Tomislav Pazur.
167
Haakon Hoel
KAUST university, SA
On non-asymptotic optimal stopping criteria in Monte Carlo simulations
Contributed Session CT4.8: Friday, 09:20 - 09:50, CO123
We consider the setting of estimating the mean µ of a random variable X by a sequential stopping rule Monte Carlo (MC) method: given small, fixed constants TOL > 0 and δ > 0, and the estimation goal
\[
P\!\left( \left| \frac{1}{M}\sum_{i=1}^{M} X_i - \mu \right| > \mathrm{TOL} \right) \le \delta, \qquad (1)
\]
our objective is to construct a sequential stopping rule method for determining
the number of samples M (T OL, δ) that is required to ensure that (1) is met.
The performance of a typical second moment based sequential stopping rule MC
method, determining M (T OL, δ) by means of sequential samples of the sample
variance, is shown to be unreliable in such settings both by numerical examples and
through analysis. By analysis and approximations, we construct a higher moment
based stopping rule which is shown in numerical examples to perform more reliably
and only slightly less efficiently than the second moment based stopping rule.
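For orientation, a minimal Python sketch of the second-moment-based baseline rule is given below; it re-estimates the sample variance and the normal-quantile-based sample size until an approximation of (1) is met. The exact rule analysed in the talk, and the proposed higher-moment variant, may differ in detail, and sampler is a hypothetical routine returning i.i.d. samples of X.

```python
import numpy as np
from scipy.stats import norm   # only for the normal quantile

def mc_with_stopping(sampler, TOL, delta, rng, batch=1000, max_samples=10**7):
    """Second-moment-based sequential stopping rule MC; a minimal sketch.
    The required sample size is re-estimated from the running sample variance
    via the normal approximation M ~ (c_delta * sigma / TOL)^2."""
    c = norm.ppf(1.0 - delta / 2.0)
    samples = sampler(batch, rng)
    while True:
        M_needed = int(np.ceil((c * np.std(samples, ddof=1) / TOL) ** 2))
        if len(samples) >= M_needed or len(samples) >= max_samples:
            return np.mean(samples), len(samples)
        samples = np.append(samples, sampler(min(M_needed, max_samples) - len(samples), rng))
```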
Figure 1: A Pareto-distributed r.v. X_i with parameters α = 3.1 and x_m = (α − 1)√((α − 2)/α) is sampled. Plots of the function f(TOL, δ) = δ⁻¹ P(|M(TOL, δ)⁻¹ Σ_{i=1}^{M(TOL,δ)} X_i − µ| > TOL), where M(TOL, δ) is generated by a second-moment-based stopping rule in the top plot and by our higher-moment-based stopping rule in the lower plot.
168
Joint work with Christian Bayer, Erik von Schwerin, and Raul Tempone.
169
Johan Hoffman
KTH Royal Institute of Technology, SE
Adaptive finite element methods for turbulent flow and fluid-structure interaction:
theory, implementation and applications
Minisymposium Session ADFE: Wednesday, 10:30 - 11:00, CO016
We present recent advances in the area of adaptive finite element methods for
turbulent flow and fluid-structure interaction, including high performance parallel algorithms and software implementation in the open source software project
FEniCS. Basic theory is presented, as well as applications to aerodynamics, aeroacoustics and biomedicine, including modeling of the turbulent flow past a full
aircraft, the blood flow in the human heart, and phonation by simulation of the
fluid-structure interaction of air and the vocal folds.
Joint work with Johan Jansson, Aurélien Larcher, Niclas Jansson, Rodrigo Vilela
De Abreu, Jeannette Hiromi Spühler, Kaspar Müller, and Cem Niyanzi Degirmenci.
170
Jiří Holman
CTU in Prague, Faculty of Mechanical Engineering, Dept. of Technical Mathematics, CZ
Numerical Simulation of Compressible Turbulent Flows Using a Modified EARSM Model
Contributed Session CT4.5: Friday, 09:20 - 09:50, CO016
This work deals with the numerical solution of compressible turbulent flows. Turbulent flows are modeled by the system of averaged Navier-Stokes equations [6]
closed by the Explicit Algebraic Reynolds Stress Model (EARSM) of turbulence
[5]. The EARSM model used in this work is based on Kok's TNT model equations [4]. A new set of model constants, more suitable for use in conjunction with the EARSM model, has been derived. The most crucial part is the calibration of the diffusion constants. Their values are determined from the simplified model behavior near the outer edges of the shear layer. Kok derived requirements in the form of a set of inequalities for the diffusion constants. Hellsten shows [2] that this set of inequalities is not valid for the nonlinear constitutive relations and proposes a new set of inequalities which respects the nonlinear behavior of the EARSM model. We use these inequalities to obtain new diffusion constants for the closure of the TNT model equations.
The recalibrated turbulence model, together with the system of averaged Navier-Stokes equations, is then solved by in-house software based on the finite volume method [1]. Inviscid numerical fluxes are approximated by the HLLC Riemann solver with piecewise linear MUSCL or WENO reconstruction [3]. Viscous numerical fluxes are approximated by central differencing with the aid of a dual mesh [3]. The resulting system of ordinary differential equations is then solved by an explicit two-stage TVD Runge-Kutta method with local time stepping and point-implicit treatment of the source terms [3].
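For reference, the explicit two-stage TVD (SSP) Runge-Kutta update for a semi-discrete system u' = L_op(u) can be sketched as follows in Python; the local time stepping and point-implicit source treatment mentioned above are omitted, so this is only a generic illustration.

```python
def tvd_rk2_step(L_op, u, dt):
    """One step of the explicit two-stage TVD (SSP) Runge-Kutta method for the
    semi-discrete system u' = L_op(u)."""
    u1 = u + dt * L_op(u)                  # first (forward Euler) stage
    return 0.5 * (u + u1 + dt * L_op(u1))  # convex combination gives the SSP/TVD property
```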
For validation purposes the subsonic flow over a flat plate was solved first. From Figure 1 one can see that the velocity profile obtained with the EARSM model with the original Kok model constants has a qualitatively wrong shape. On the other hand, the recalibrated model is in good agreement with the Hellsten model. An example of an application in external aerodynamics is the flow around the RAE 2822 airfoil (AGARD case 10: M∞ = 0.754, α∞ = 2.57°, Re = 6.2 · 10⁶). This problem is a transonic flow with a small separation of the boundary layer due to interaction with a shock wave. From Figure 2 we can see very good agreement of the modified EARSM model with experiment.
References
[1] Ferziger J. H., Peric M.: Computational Methods for Fluid Dynamics.
Springer, 1999.
[2] Hellsten, A.: New two-equation turbulence model for aerodynamics applications, Report A-21, Helsinki University of Technology, 2004
[3] Holman, J.: Numerical solution of compressible turbulent flows in external and internal aerodynamics, Diploma thesis, CTU in Prague, 2007 (in Czech)
[4] Kok, J. C.: Resolving the Dependence on Freestream Values for the k − ω Turbulence Model, AIAA Journal, Vol. 38, No. 7, 2000
171
[5] Wallin, S.: Engineering turbulence modeling for CFD with focus on explicit algebraic Reynolds stress models, Ph.D. thesis, Royal Institute of Technology, 2000
[6] Wilcox, D. C.: Turbulence Modeling for CFD, Second Edition, DCW Industries, 1994
Figure 1: Velocity profile
Figure 2: Comparison with experimental data
Joint work with Jiří Fürst.
172
Thomas Huckle
Technische Universitaet München, DE
Tensor representations of sparse or structured vectors and matrices
Minisymposium Session LRTT: Wednesday, 12:00 - 12:30, CO3
In this talk we derive tensor train representations for vectors/matrices with special
symmetries that often appear in applications. Typical symmetries are persymmetry, centrosymmetry, and translation invariance. These representations can be helpful in deriving faster and more accurate solutions, especially for matrix product states in quantum simulation.
Furthermore, we discuss tensor train forms of sparse vectors that might be useful for applications like high-dimensional compressed sensing or ill-posed inverse
problems with sparse solution data.
173
Matteo Icardi
KAUST, SA
Bayesian parameter estimation of a porous media flow model
Contributed Session CT2.7: Tuesday, 15:30 - 16:00, CO122
Porous materials appear in a number of important industrial applications, in particular related to subsurface flows, membranes and filters, materials modeling,
structural mechanics, etc. The exact micro-scale description of these materials is
often impractical, therefore, average models are usually used to describe, for example, the flow and transport in such porous media. These models are, however,
often affected by high uncertainty (and therefore unreliability) because of the high
heterogeneity and multi-scale structure of the media. Homogenization and spatial
averaging methods can be applied to the fluid flow equations in the pore space,
under the assumption that the scales are well separated, obtaining the well-known
Darcy equation (and Forchheimer extension for non-linear regime). In this case,
the parameters appearing in the Darcy equation (porosity and permeabilities) are
well defined and in general can be represented at the field scale with anisotropic
and inhomogeneous scalar fields. The same holds for other upscaled equations
such as the Advection-Dispersion-Reaction (ADR) equation for solute transport
and its parameters (reaction rates and diffusivity tensor). These parameters are
usually more affected by uncertainty and variability than the parameters of the
flow (Darcy) equation.
In this work, we assume an equation for macro-scale scalar transport (ADR equation) with unknown parameters (effective porosity and diffusivity) and we setup a
statistical method, based on the Bayesian framework, to validate the model and
estimate a distribution for the unknown parameters. The data used to solve the inverse problem are the results of accurate pore-scale computational fluid dynamics
(CFD) simulations. Three-dimensional geometries, representing actual sand samples, have been reconstructed and the flow field and scalar transport are solved
with a finite volume code on an unstructured octree-based mesh. The micro-scale
results are then averaged and used as data for solving the inverse problem at the
macro-scale. The one-dimensional macro-scale forward problem is solved numerically with standard finite difference techniques. A Bayesian method has been
implemented to estimate the overall hydrodynamic dispersion under different flow
rates and bulk (molecular) diffusivities. In this problem, in addition to the standard normally distributed measurement error, the possible modeling error in the selection of the macro-scale model should also be taken into account.
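The abstract does not specify which posterior sampling algorithm is used; purely for illustration, a generic random-walk Metropolis sketch in Python is shown below, where log_post is a hypothetical log-posterior combining the averaged pore-scale data with a prior on the macro-scale parameters (e.g. effective porosity and dispersion).

```python
import numpy as np

def metropolis(log_post, theta0, n_iter, step, rng):
    """Generic random-walk Metropolis sampler; log_post is a hypothetical
    log-posterior of the macro-scale parameters given the data."""
    theta = np.asarray(theta0, dtype=float)
    lp = log_post(theta)
    chain = []
    for _ in range(n_iter):
        prop = theta + step * rng.standard_normal(theta.shape)
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:   # accept/reject step
            theta, lp = prop, lp_prop
        chain.append(theta.copy())
    return np.array(chain)
```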
The results are in agreement with previous works on dispersion in porous media
but highlight the inaccuracy of the simple model assumed at the macro-scale for
realistic irregular three-dimensional geometries. Possible modifications and extensions are then discussed to take into account the stagnant zone and the effect of the boundary conditions.
174
Figure 1: Posterior distribution of porosity and dispersion parameters.
Figure 2: Solute concentration data, least square fitting and maximum a posteriori
estimation.
175
Reijer Idema
Delft University of Technology, NL
On the Convergence of Inexact Newton Methods
Contributed Session CT3.3: Thursday, 17:30 - 18:00, CO3
Assume an iterative method that, given current iterate xi , has a unique new iterate
x̂i+1 . If instead an approximation xi+1 of the exact iterate x̂i+1 is used to continue
the process, we speak of an inexact iterative method. Inexact Newton methods
are examples of inexact iterative methods.
Let δ c and εc be the distance of the current iterate xi to the exact iterate x̂i+1
and the solution x∗ respectively. Likewise, let δ n and εn be the distance of the
new (inexact) iterate xi+1 to the exact iterate x̂i+1 and the solution x∗ . The
superscript c denotes “current”, while the superscript n denotes “new”. Let further
ε̂ be the distance of the exact iterate x̂i+1 to the solution x∗ . For a graphical
representation, see Figure 1.
The ratio εⁿ/εᶜ is a measure for the improvement of the inexact iterate xi+1 over the current iterate xi, in terms of the distance to the solution x∗. Likewise, the ratio δⁿ/δᶜ is a measure for the improvement of the inexact iterate xi+1, in terms of the distance to the exact iterate x̂i+1. As the solution is unknown, so is the ratio εⁿ/εᶜ. Assume, however, that some measure for the ratio δⁿ/δᶜ is available and controllable. For example, for an inexact Newton method the relative linear residual norm ‖r_k‖/‖F(xi)‖ can be used as a measure for δⁿ/δᶜ.
The aim is to have an improvement in the controllable error translate into a similar improvement in the distance to the solution, i.e., to have
\[
\frac{\varepsilon^n}{\varepsilon^c} \le (1+\alpha)\,\frac{\delta^n}{\delta^c} \qquad (1)
\]
for some reasonably small α > 0.
We show that the minimum α for which Equation (1) is guaranteed to hold can be written as
\[
\alpha_{\min} = \frac{\gamma}{1-\gamma}\left[ \left(\frac{\delta^n}{\delta^c}\right)^{-1} + 1 \right], \qquad (2)
\]
where γ = ε̂/δᶜ is a measure for the quality of the exact iterate x̂i+1. This means that the smaller γ is, the smaller δⁿ/δᶜ can be made without compromising αmin.
We combine the above ideas with inexact Newton convergence theory to prove the following theorem, where J(x) denotes the Jacobian of the nonlinear problem and ηi are the forcing terms. The linearized equations are solved up to an accuracy ‖ri‖/‖F(xi)‖ ≤ ηi.
Theorem: Let ηi ∈ (0, 1) and choose α > 0 such that (1 + α) ηi < 1. Then there
exists an ε > 0 such that, if ‖x₀ − x∗‖ < ε, the sequence of inexact Newton iterates xi converges to x∗, with
\[
\|J(x^*)\,(x_{i+1} - x^*)\| < (1+\alpha)\,\eta_i\,\|J(x^*)\,(x_i - x^*)\|. \qquad (3)
\]
This theorem implies that, if the initial iterate is close enough to the solution, it is
always possible to choose forcing terms ηi such that the method converges without
oversolving. With proper choice of the forcing terms, if the initial iterate is close
enough to the solution, it is therefore possible to solve a nonlinear problem with an inexact Newton method in such a way that the nonlinear residual converges as fast as the linear residuals of the linearized equations.
176
Numerical experiments on
power flow problems [1,2] are presented that illustrate the practical value of these
results.
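A minimal Python sketch of an inexact Newton iteration with a constant forcing term η is given below; the inner solve is a plain conjugate gradient iteration stopped at relative residual η, so the Jacobian is assumed symmetric positive definite in this toy, whereas the power flow experiments in [1,2] use Newton-Krylov solvers.

```python
import numpy as np

def cg_to_tol(A, b, eta, max_iter=200):
    """Approximate solve of A s = b: stop once ||b - A s|| <= eta * ||b||.
    Plain conjugate gradients, so A is assumed symmetric positive definite."""
    s = np.zeros_like(b)
    r = b.copy()
    p = r.copy()
    rs = r @ r
    nb = np.linalg.norm(b)
    for _ in range(max_iter):
        if np.sqrt(rs) <= eta * nb:
            break
        Ap = A @ p
        alpha = rs / (p @ Ap)
        s += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return s

def inexact_newton(F, J, x0, eta=0.1, tol=1e-10, max_iter=50):
    """Inexact Newton iteration: each step solves J(x_i) s_i = -F(x_i) only up to
    relative linear residual eta (a constant forcing term, for simplicity)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) <= tol:
            break
        x = x + cg_to_tol(J(x), -Fx, eta)
    return x
```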
[1] R. Idema, D. J. P. Lahaye, C. Vuik, and L. van der Sluis. Scalable Newton-Krylov solver for very large power flow problems. IEEE Transactions on Power
Systems, 27(1):390–396, February 2012.
[2] R. Idema. Newton-Krylov Methods in Power Flow and Contingency Analysis.
PhD Thesis, Delft University of Technology, November 2012.
Figure 1: Inexact iterative step (the iterates xi, x̂i+1, xi+1, the solution x∗, and the distances εᶜ, δᶜ, ε̂, εⁿ, δⁿ).
Joint work with D.J.P. Lahaye, and C. Vuik.
177
Hiroki Ishizuka
Keio University, JP
Simulating information propagation by near-field P2P wireless communication
Contributed Session CT1.1: Monday, 18:00 - 18:30, CO1
The main target of this research is information propagation by near-field P2P
wireless communication. More specifically, this research focuses on information propagation in an ad-hoc network of P2P communication via Bluetooth on smartphones carried by pedestrians in a city.
This phenomenon is similar to the spread of disease, in that both are caused by person-to-person contact. There is a large body of research on the spread of disease. For example, Draief [1] carried out research into the spread of a virus by proposing a simple model of infection using graph theory and Markov chains. In that model, there are two walkers, one infectious and the other healthy. They perform random walks on a regular graph, and if the two random walkers meet at the same node by accident, the healthy walker is infected with a certain probability. Using Markov chains, Draief [1] derived an equation for the time that elapses before the healthy walker becomes infected.
Learning from the above study, we used graph theory to analyze information propagation with Bluetooth. Specifically, as Fig. 1 shows, we treated the field as an "almost 8-regular graph" and made pedestrians walk on this graph. The black pedestrian has the information and the gray pedestrians do not. The circle centered on the black pedestrian shows the transmission range of Bluetooth. All smartphones held by the pedestrians try to connect to other smartphones at certain intervals. If a pedestrian whose smartphone is in "try connect" mode moves into that circle while the black pedestrian's smartphone is also in "try connect" mode, he will receive the information.
The most difficult point of this research was that, in order to make the pedestrians' movement realistic, we could not use the random walk method, which also meant we could not use Markov chains. For this reason, it was difficult to express the phenomenon of information propagation with Bluetooth in a mathematical form. Therefore, we developed a "directed walk algorithm" reflecting real pedestrian movement, and used multi-agent simulation (MAS) to simulate this phenomenon. Changing the number of pedestrians in the field and the "try connect" interval, we found a percolation transition in the number of pedestrians who have the information.
[1] M. Draief, A. Ganesh, “A random walk model for infection on graphs: spread
of epidemics & rumours with mobile agent”, Discrete Event Dynamical Systems,
Vol.21, pp. 41-61, (2011).
178
Figure 1: Pedestrians walk on the graph
Joint work with Kenji Oguni.
179
Alessandra Jannelli
Department of Mathematics and Computer Science, University of Messina, IT
Quasi-uniform Grids and ad hoc Finite Difference Schemes for BVPs on Infinite
Intervals
Contributed Session CT2.6: Tuesday, 15:00 - 15:30, CO017
We consider finite difference schemes on quasi-uniform grids applied to the numerical solution of BVPs defined on infinite intervals. Quasi-uniform grids have been successfully applied to the numerical solution of partial differential equations on unbounded domains. We apply the proposed approach to the Falkner-Skan model
of boundary layer theory, to a problem of interest in foundation engineering and
to a nonlinear problem arising in physical oceanography.
Let us consider the smooth, strictly monotone quasi-uniform map x = x(ξ), the so-called grid generating function,
\[
x = -c\,\ln(1-\xi), \qquad (1)
\]
where ξ ∈ [0, 1], x ∈ [0, ∞], and c > 0 is a control parameter. We notice that x_{N-1} = c ln N for (1). The problem under consideration can be discretized by introducing a uniform grid ξ_n of N + 1 nodes in [0, 1] with ξ_0 = 0 and ξ_{n+1} = ξ_n + h, h = 1/N, so that x_n is a quasi-uniform grid in [0, ∞]. The last interval in (1), namely [x_{N-1}, x_N], is infinite, but the point x_{N-1/2} is finite, because the non-integer nodes are defined by
\[
x_{n+\alpha} = x\!\left(\xi = \frac{n+\alpha}{N}\right), \qquad (2)
\]
with n ∈ {0, 1, . . . , N − 1} and 0 < α < 1. This map allows us to describe the
infinite domain by a finite number of intervals. The last node of such a grid is placed at infinity, so the right boundary condition is taken into account correctly. The top frame of the figure shows the quasi-uniform mesh defined by (1) with c = 5 and N = 20.
We can define the values of u(x) at the mid-points of the grid,
\[
u_{n+1/2} \approx \frac{x_{n+1}-x_{n+1/2}}{x_{n+1}-x_n}\,u_n + \frac{x_{n+1/2}-x_n}{x_{n+1}-x_n}\,u_{n+1}. \qquad (3)
\]
As far as the first derivative is concerned, we apply the following approximation:
\[
\left.\frac{du}{dx}\right|_{n+1/2} \approx \frac{u_{n+1}-u_n}{2\,(x_{n+3/4}-x_{n+1/4})}. \qquad (4)
\]
This formula uses the value u_N = u_∞, but not x_N = ∞. In order to justify the finite difference formula (4), by considering u = u(ξ(x)), we can write
\[
\left.\frac{du}{dx}\right|_{n+1/2} = \left.\frac{du}{d\xi}\right|_{n+1/2}\left.\frac{d\xi}{dx}\right|_{n+1/2} \approx \frac{u_{n+1}-u_n}{\xi_{n+1}-\xi_n}\,\frac{2\,(\xi_{n+3/4}-\xi_{n+1/4})}{2\,(x_{n+3/4}-x_{n+1/4})}. \qquad (5)
\]
The last formula on the right-hand side of equation (5) reduces to the right-hand side of equation (4) because we are using a uniform grid for ξ and therefore 2(ξ_{n+3/4} − ξ_{n+1/4}) = ξ_{n+1} − ξ_n. The two finite difference approximations (3) and (4) have order of accuracy O(N⁻²). The figure shows the numerical solution of the Falkner-Skan model with β = 1 obtained by (1) with c = 5 for N = 80.
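A short Python sketch of the grid (1)-(2) and of the mid-point derivative formula (4) follows; it is only meant to illustrate that u_N = u_∞ is used while the infinite node x_N never enters the formulas.

```python
import numpy as np

def quasi_uniform_grid(c, N):
    """Quasi-uniform grid (1), x = -c*ln(1 - xi), on a uniform xi-grid in [0, 1];
    non-integer nodes follow (2).  The last node x_N is +infinity by construction."""
    x = lambda s: -c * np.log(1.0 - s)                 # grid generating function
    with np.errstate(divide="ignore"):
        xn = x(np.arange(N + 1) / N)                   # xn[N] = inf
    x14 = x((np.arange(N) + 0.25) / N)                 # x_{n+1/4}
    x34 = x((np.arange(N) + 0.75) / N)                 # x_{n+3/4}
    return xn, x14, x34

def first_derivative_midpoints(u, x14, x34):
    """Approximation (4) of du/dx at the mid-points; it uses u_N = u(infinity)
    but never the infinite node x_N."""
    return (u[1:] - u[:-1]) / (2.0 * (x34 - x14))
```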
180
[Figure: top frame, the quasi-uniform mesh x_n (c = 5, N = 20) on 0 ≤ x < ∞; bottom frame, the numerical solution u(x), u′(x), u″(x) of the Falkner-Skan model.]
Joint work with Riccardo Fazio.
181
Bärbel Janssen
Universität Bern, CH
The hp-adaptive Galerkin time stepping method for nonlinear differential equations
with finite time blow-up
Contributed Session CT1.5: Monday, 18:30 - 19:00, CO016
We consider hp-adaptive Galerkin time stepping methods for nonlinear ordinary
differential equations. The occurring nonlinearity is assumed to be bounded by
a constant times the solution to a power β which is larger than one. We prove
dual based a posteriori error estimates. Existence of discrete solutions is shown
using reconstruction techniques. By means of numerical examples we show that
the blow-up is well preserved.
Joint work with Thomas P. Wihler.
182
Manuel Jaraczewski
Helmut Schmidt University, DE
On the asymptotics of discrete Riesz energy with external fields
Contributed Session CT3.8: Thursday, 17:00 - 17:30, CO123
Potential theory has been intensively studied for a long time due to its intrinsic
relations to many other fields both in physics and in mathematics [2]. In particular, for plane sets, the close connection of logarithmic potentials and complex analysis
offers an extremely rich theory, see, e.g., [6]. During the last 20 years an increasing interest in algorithmic and computational aspects of potential theory has
arisen. This among other aspects motivated research in the discrete counterparts
of potentials and minimal energy.
Many techniques related to discrete minimal logarithmic energy or minimal Newton energy can be transferred to the more general setting of s-Riesz energy (s ≥ 0)
in Rd (d ≥ 1). It is well known that the discrete minimal s-Riesz energy of a system
of n points on a compact set Ω ⊆ Rd converges to its continuous counterpart as n
tends to infinity, if the latter exists, i.e., if 0 ≤ s < d. The corresponding asymptotic behavior and discrete minimal configurations have been intensively studied.
However, apart from many results in the complex plane (see, e.g., [6]) most of
the higher dimensional investigations focus on the sphere or on a torus: For the
sphere in Rd , e.g., explicit error bounds for the asymptotic approximation of the
continuous s-Riesz energy by discrete energy follow from results due to Wagner
[7], and due to Kuijlaars and Saff [1].
This work deals with two extensions of the theory of discrete minimal energy
problems: First, an estimate on the asymptotic behavior of the discrete s-Riesz
energy in the cases 0 < s ≤ d − 2 for a large class of sets is derived. It turned
out that Ahlfors-David regularity (see, e.g., [3]) is a suited notion of regularity
to derive an estimate on the asymptotic behavior of the discrete minimal s-Riesz
energy. This is a very mild hypothesis, which is fulfilled by a large class of sets,
including images of a ball under Bilipschitz maps. The s-Riesz potentials with
0 ≤ s ≤ d − 2 are (super) harmonic and, hence, the equilibrium measure of the
potentials concentrates on the outer boundary of the considered set Ω ⊆ Rd . As a
consequence, results for d-Ahlfors-David regular sets carry over to sets that bound
an Ahlfors-David regular set.
The second extension is related to minimal s-Riesz energy in the presence of an
external field. This has been intensively studied by, e.g., Saff and Totik in the case
of logarithmic potentials [5], where it leads to the notion of weighted extremal
points. We, hence, examine the asymptotic behavior of discrete minimal s-Riesz
energy under an external field. Finally, it is discussed to what extent known relations
can be transferred to the more general setting, such as, e.g., the connection of
extremal points and good quadrature formulae.
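For concreteness, a small Python sketch of evaluating the discrete s-Riesz energy of a point configuration, optionally with an external field Q, is given below; the precise normalisation of the weighted energy studied in the talk may differ, so this is only an illustrative convention.

```python
import numpy as np

def riesz_energy(points, s, Q=None):
    """Discrete s-Riesz energy (s > 0) of a point configuration in R^d, optionally
    with an external field Q (one common weighting convention):
        E = sum_{i<j} |x_i - x_j|^{-s} + n * sum_i Q(x_i)
    """
    X = np.asarray(points, dtype=float)
    n = len(X)
    dists = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    iu = np.triu_indices(n, k=1)                 # each pair counted once
    E = np.sum(dists[iu] ** (-s))
    if Q is not None:
        E += n * sum(Q(x) for x in X)
    return E
```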
References:
[1] A. B. J. Kuijlaars and E. B. Saff, Asymptotics for minimal discrete Energy
on the Sphere, Transactions of the American Mathematical Society, Volume 350,
Number 2, 523-538 (1998)
[2] N. S. Landkof, Foundations of Modern Potential Theory, Springer-Verlag Berlin
Heidelberg New York (1972)
[3] P. Mattila and P. Saaranen, Ahlfors-David regular sets and Bilipschitz maps,
Annales Academiæ Scientiarum Fennicæ Mathematica, Volume 34, 487-502 (2009)
183
[4] E. B. Saff and A. B. J. Kuijlaars, Distributing Many Points on a Sphere, The
Mathematical Intelligencer, Springer-Verlag New York, Volume 19, Number 1 ,
5-11 (1997)
[5] E. B. Saff and V. Totik, Logarithmic Potentials with external Fields, Grundlehren
der mathematischen Wissenschaften 316, Springer-Verlag Berlin Heidelberg (1997)
[6] M. Tsuji, Potential theory in modern function theory, 2nd edition, Chelsea
Publishing Company, New York (1975)
[7] G. Wagner, On Means of Distances on the Surface of a Sphere (Lower Bounds),
Pacific Journal of Mathematics, Volume 144, No. 2, 389-398 (1990)
Joint work with M. Stiemer.
184
Elias Jarlebring
KTH Royal Institute of Technology, SE
An iterative block algorithm for eigenvalue problems with eigenvector nonlinearities
Minisymposium Session NEIG: Thursday, 11:30 - 12:00, CO2
Let A(V ) ∈ Rn×n be a symmetric matrix depending on V ∈ Rn×k which is a
basis of a vector space, and suppose A(V ) is independent of the choice of basis of
the vector space. We here consider the problem of computing V such that (Λ, V )
is an invariant pair of the matrix A(V ), i.e., A(V )V = V Λ. We present a block
algorithm for this problem, where every step involves solving one or several linear
systems of equations. We show that the algorithm is a generalization of (shift-and-invert) simultaneous iteration for the standard eigenvalue problem and that the
generalization inherits many of its properties. The algorithm is illustrated with
the application to a model problem in quantum chemistry.
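The flavour of such methods can be conveyed by the following Python sketch of a self-consistent shift-and-invert simultaneous iteration, in which every step solves one block linear system; this is only a generic illustration under simplifying assumptions, not the specific block algorithm of the talk, and A_of_V is a hypothetical user routine.

```python
import numpy as np

def block_invert_iteration(A_of_V, V0, sigma=0.0, n_iter=50):
    """Shift-and-invert simultaneous iteration applied self-consistently to A(V);
    A_of_V(V) is a hypothetical routine returning the symmetric matrix A(V),
    assumed to depend only on span(V)."""
    V, _ = np.linalg.qr(np.asarray(V0, dtype=float))
    n = V.shape[0]
    for _ in range(n_iter):
        A = A_of_V(V)
        W = np.linalg.solve(A - sigma * np.eye(n), V)   # one block linear solve per step
        V, _ = np.linalg.qr(W)                          # re-orthonormalize the basis
    Lam = V.T @ A_of_V(V) @ V                           # block Rayleigh quotient
    return V, Lam
```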
185
Pavel Jiranek
CERFACS, FR
A general framework for algebraic multigrid methods
Minisymposium Session CTNL: Tuesday, 11:00 - 11:30, CO015
Algebraic multigrid methods (AMG) form a popular class of solvers and preconditioners for large sparse systems of linear algebraic equations arising mainly in the
context of discretised partial differential equations due to their scalability properties inherited from their geometric counterpart. Unlike in geometric multigrid,
AMG constructs the hierarchy of levels using solely the algebraic information contained in the system to be solved and thus can be easily applied in the “black-box”
manner in practice.
Various AMG algorithms and software packages implementing them exist nowadays and differ essentially in how the coarsening on the fine levels is realised and how the transfer operators are constructed for the given coarsening. Main representatives of different coarsening approaches are classical AMG methods (where the coarse grid is identified with a certain independent subset of the fine-grid variables) and aggregation-based methods (where the coarse grid is associated with disjoint subsets of the fine-grid variables). The basic AMG approaches for solving scalar problems can also usually be extended to more general
problems including systems of partial differential equations and indefinite saddle
point problems.
One of the drawbacks of most existing AMG implementations is their focus on a particular AMG scheme and, to some extent, on a fixed problem type, while there are certainly various multigrid components which are common to any AMG implementation. Our aim is to create a general object-oriented environment for AMG methods which would close this gap and allow essentially any kind of AMG method to be realised for a large variety of problem types in a single framework. The setup of the multigrid hierarchy is realised by a set of interconnected components, each implementing a certain elementary part of the coarsening algorithm; because of their hierarchical object structure they can be easily modified and extended with new features. The general design of the setup process also allows these elementary algorithms to be reused for more general types of problems, including structured saddle point systems arising, e.g., in the mixed finite element method.
We illustrate the use of the framework and its parallel performance on some academic test problems including practical problems arising in reservoir simulations.
Joint work with S. Gratton, X. Vasseur, and P. Henon.
186
Lorenz John
Institute of Computational Mathematics, Graz University of Technology, AT
A multilevel preconditioner for the biharmonic equation
Contributed Session CT4.2: Friday, 08:20 - 08:50, CO2
We present a multilevel preconditioner for the mixed finite element discretization
of the biharmonic equation of first kind. While for the interior degrees of freedom
a standard multigrid method can be applied, a different approach is required on
the boundary. The construction of the preconditioner is based on a BPX type
multilevel representation in fractional Sobolev spaces. Numerical examples illustrate the obtained theoretical results.
Joint work with Olaf Steinbach.
187
Pierre Jolivet
Laboratoire Jacques-Louis Lions, FR
How to easily solve PDE with FreeFem++ ?
Minisymposium Session PARA: Monday, 14:30 - 15:00, CO016
Implementing finite element software that can support arbitrary meshes and arbitrary finite element spaces can be highly time-consuming. In this talk, FreeFem++ will be presented. It is a simple Domain Specific Language that can be used to quickly solve partial differential equations given their variational formulation. In the first part of the talk, the inner workings of the language will be explained (lexical and syntactical analysis and code generation). Then, we will move on to the second part of the talk, which explains how FreeFem++ can be used in conjunction with a simple framework for domain decomposition methods to solve problems on large-scale architectures.
Joint work with Frédéric Hecht, Frédéric Nataf, and Christophe Prud’homme.
188
Mika Juntunen
Aalto University Department of Mathematics and Systems Analysis, FI
A posteriori estimate of Nitsche’s method for discontinuous material parameters
Contributed Session CT1.9: Monday, 17:30 - 18:00, CO124
One of the advantages of Nitsche's method is the simplicity of joining subdomains with non-matching meshes. If the division follows material boundaries,
the parameters are discontinuous over the subdomain interfaces. If the jump in
the material parameters is moderate, the straightforward extension of the method
as it was described in, e.g., [3] readily applies but large discontinuities may lead
to poor results [1,4]. Some of the problems are avoided with material parameter
weighted mean flux [1,5] but to fully avoid the problems the stabilizing terms need
to be modified too [2].
In this work we propose Nitsche’s method for discontinuous material parameters
and derive a residual based a posteriori estimate for the method. Both the method
and the a posteriori estimate take the discontinuity in the material parameter
carefully into account. If the material parameters are continuous, the method
reduces to the straightforward method. In the case of extreme discontinuity, the
method reduces to setting Dirichlet boundary conditions with Nitsche’s method.
The straightforward a posteriori estimate tends to over-refine the mesh near the
interface if the material parameters have large discontinuity. The proposed a
posteriori estimate inherits the good properties of the method and avoids the over-refinement even in the case of extreme discontinuity of the material parameters.
The derived method and a posteriori estimate are tested numerically for a Poisson
model problem.
[1] Chandrasekhar Annavarapu, Martin Hautefeuille, and John E. Dolbow. A
robust Nitsche’s formulation for interface problems. Computer Methods in
Applied Mechanics and Engineering, 225-228:44–54, 2012.
[2] Nelly Barrau, Roland Becker, Eric Dubach, and Robert Luce. A robust variant of NXFEM for the interface problem. C. R. Math. Acad. Sci. Paris, 350(15–16):789–792, 2012.
[3] Roland Becker, Peter Hansbo, and Rolf Stenberg. A finite element method
for domain decomposition with non-matching grids. M2AN Math. Model.
Numer. Anal., 37(2):209–225, 2003.
[4] Tod A. Laursen, Michael A. Puso, and Jessica Sanders. Mortar contact formulations for deformable-deformable contact: past contributions and new
extensions for enriched and embedded interface formulations. Comput.
Methods Appl. Mech. Engrg., 205/208:3–15, 2012.
[5] Rolf Stenberg. Mortaring by a method of J. A. Nitsche. In Computational
mechanics (Buenos Aires, 1998), pages CD–ROM file. Centro Internac.
Métodos Numér. Ing., Barcelona, 1998.
189
Ashraful Kadir
Royal Institute of Technology, SE
How accurate is molecular dynamics for crossings of potential surfaces?
Part II: numerical tests
Contributed Session CT4.9: Friday, 08:50 - 09:20, CO124
I will present numerical examples related to the talk given by Prof. Szepessy on
‘How accurate is molecular dynamics for crossings of potential surfaces? Part
I: error estimates’. The numerical tests show that the Schrödinger observables
are approximated with the error estimate O(pe + M −1/2 ) by molecular dynamics
observables, where pe is the probability for an electron to be in an excited state
and M is the nuclei-electron mass ratio.
A numerical algorithm is developed to approximate pe based on Ehrenfest molecular dynamics simulations, which enables the practical use of the error estimate.
I will compare the approximated pe with the solutions obtained from the discrete
time-independent Schrödinger eigenvalue problems for crossings and near avoided
crossings of potential surfaces, see Figure 1.
Based on numerical tests the talk will explain the approximation results: namely
the discretization error, the sampling error and the modeling error. The time discretization error comes from approximating the differential equation for molecular
dynamics with a numerical method, based on replacing time derivatives with difference quotients for a positive step size ∆t. The sampling error is due to truncating
the infinite T in an ergodic limit and using a finite value of T . The modeling error
originates from eliminating the electrons in the Schrödinger nuclei-electron system
and replacing the nuclei dynamics with their classical paths; this approximation
error was first analyzed by Born and Oppenheimer.
Figure 1: Plots showing pe + M −1/2 with conical intersections at (a1 , 0).
Joint work with Håkon Hoel (KAUST), Petr Plechac (Univ. Delaware), Mattias
Sandberg (KTH), and Anders Szepessy (KTH).
190
Dante Kalise
Johann Radon Institute for Computational and Applied Mathematics (RICAM),
Austria
An accelerated semi-Lagrangian/policy iteration scheme for the solution of dynamic programming equations
Minisymposium Session NMFN: Monday, 11:40 - 12:10, CO2
We present some recent results concerning the efficient numerical approximation
of static Hamilton-Jacobi-Bellman equations of the form
λu(x) + sup_{a∈A} { −f(x, a) · Du(x) − g(x, a) } = 0,    x ∈ R^n,
characterizing the value function u(x) of an optimal control problem in R^n. One of the main challenges in the solution of this equation is its high dimensionality, and the design of efficient methods is therefore a fundamental task.
In this talk we present a scheme based on a semi-Lagrangian/finite difference discretization [2] combined with an iterative scheme in the space of policies [1, 3].
Moreover, we exploit the idea that a reasonable initialization of the policy iteration procedure yields a faster numerical convergence to the optimal solution. For
this purpose, the scheme features a pre-processing step with value iterations on a coarse grid. A series of numerical tests, spanning a wide variety of applications,
assess the robust and efficient performance of the method.
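As a rough illustration of the policy iteration with a coarse value-iteration warm start, the Python sketch below applies the idea to a toy one-dimensional discounted control problem on a uniform grid with a small discrete action set. It is a minimal sketch in the spirit of the scheme described above, not the authors' implementation; the dynamics f, the running cost g and all numerical parameters are hypothetical placeholders.

import numpy as np

# Toy 1D problem (all data hypothetical): dynamics f(x,a) = a, running cost g(x,a) = x^2 + 0.1 a^2
lam, dt = 1.0, 0.05
beta = 1.0 / (1.0 + lam * dt)            # discount factor of the semi-Lagrangian scheme
actions = np.array([-1.0, 0.0, 1.0])

def interp_matrix(x, xq):
    """Row-stochastic matrix P with (P u)[i] = linear interpolation of u at xq[i]."""
    n = len(x)
    P = np.zeros((len(xq), n))
    xq = np.clip(xq, x[0], x[-1])
    j = np.clip(np.searchsorted(x, xq) - 1, 0, n - 2)
    w = (xq - x[j]) / (x[j + 1] - x[j])
    P[np.arange(len(xq)), j] = 1.0 - w
    P[np.arange(len(xq)), j + 1] = w
    return P

def solve(n_grid, u0=None, n_value_iters=0):
    x = np.linspace(-1.0, 1.0, n_grid)
    P = [interp_matrix(x, x + dt * a) for a in actions]      # one matrix per control value
    G = [x**2 + 0.1 * a**2 for a in actions]                 # running cost per control value
    u = np.zeros(n_grid) if u0 is None else u0.copy()
    q = lambda v: [beta * (dt * G[k] + P[k] @ v) for k in range(len(actions))]
    for _ in range(n_value_iters):                           # plain value iteration
        u = np.min(q(u), axis=0)
    policy = np.argmin(q(u), axis=0)                         # greedy policy from the current value
    for it in range(1, 101):                                 # policy iteration
        Ppi = np.array([P[policy[i]][i] for i in range(n_grid)])
        gpi = np.array([dt * G[policy[i]][i] for i in range(n_grid)])
        u = np.linalg.solve(np.eye(n_grid) - beta * Ppi, beta * gpi)   # exact policy evaluation
        new_policy = np.argmin(q(u), axis=0)
        if np.array_equal(new_policy, policy):
            break
        policy = new_policy
    return x, u, it

# Pre-processing: a few value iterations on a coarse grid, interpolated to the fine grid
xc, uc, _ = solve(21, n_value_iters=50)
xf = np.linspace(-1.0, 1.0, 201)
_, _, sweeps_warm = solve(201, u0=np.interp(xf, xc, uc))
_, _, sweeps_cold = solve(201)
print("policy-iteration sweeps: warm start", sweeps_warm, "cold start", sweeps_cold)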
References
[1] O. Bokanowski, S. Maroso, H. Zidani,Some convergence results for Howard’s
algorithm, SIAM Journal on Numerical Analysis 47 (2009), 3001–3026.
[2] E. Carlini, M. Falcone, R. Ferretti,An efficient algorithm for Hamilton-Jacobi
equations in high dimension, Computing and Visualization in Science (2004),
15–29.
[3] M.S. Santos and J. Rust,Convergence properties of policy iteration, SIAM J.
Control Optim., 42 (2004), 2094–2115.
Joint work with Alessandro Alla, and Maurizio Falcone.
191
Kenichi Kamijo
Graduate School of Life Sciences, Toyo University, JP
Numerical Method for Fractal Analysis on Discrete Dynamical Orbit in n-Dimensional Space Using Local Fractal Dimension
Contributed Session CT4.1: Friday, 08:50 - 09:20, CO1
The orbit of a discrete dynamical system in n-dimensional space can be considered
to be a kind of discrete signal. The local fractal dimension (LFD) has been defined
and calculated in a finite short “processing window” on the orbit. In order to evaluate the fractal structure in the orbit, a numerical method for signal processing
has been proposed. The moving LFD can then be obtained by sliding this window along the orbit. Logistic mapping has been selected at each
coordinate as an example, and a computer simulation has been carried out in this
paper. It is shown that the probability distribution of the moving LFD becomes
almost a normal distribution within the restricted range of the control parameter
concerned with logistic time development, in which these parameters raise the socalled chaotic fluctuations up as discrete dynamical orbits. Also, the relationships
between the control parameter and the mean or standard deviation of the moving
LFD, or SN ratio have been investigated. The proposed method can be applied to
statistical quality control or analysis for general random processes with the same
procedure.
Key words: discrete dynamical system, random process, logistic time development,
local fractal dimension, statistical quality control
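As a rough illustration of the sliding-window procedure (the authors' precise LFD definition is not reproduced in this abstract, so Higuchi's estimator is used here as a stand-in), the following Python sketch generates a logistic-map orbit and evaluates a local fractal dimension in each moving window; the window length and the control parameter are hypothetical choices.

import numpy as np

def higuchi_fd(x, kmax=8):
    """Higuchi estimate of the fractal dimension of a 1D signal (stand-in for the LFD)."""
    N = len(x)
    logL, logk = [], []
    for k in range(1, kmax + 1):
        Lk = []
        for m in range(k):
            idx = np.arange(m, N, k)
            if len(idx) < 2:
                continue
            # normalized curve length of the subsampled sequence
            L = np.sum(np.abs(np.diff(x[idx]))) * (N - 1) / (len(idx) - 1) / k / k
            Lk.append(L)
        logL.append(np.log(np.mean(Lk)))
        logk.append(np.log(1.0 / k))
    slope, _ = np.polyfit(logk, logL, 1)   # L(k) ~ k^(-D), so the slope estimates D
    return slope

# Logistic-map orbit x_{n+1} = r x_n (1 - x_n) in the chaotic regime (r hypothetical)
r, n, window = 3.9, 4000, 200
x = np.empty(n); x[0] = 0.3
for i in range(n - 1):
    x[i + 1] = r * x[i] * (1.0 - x[i])

# Moving LFD: slide the processing window along the orbit
lfd = np.array([higuchi_fd(x[i:i + window]) for i in range(0, n - window, 20)])
print("moving LFD: mean %.3f, std %.3f" % (lfd.mean(), lfd.std()))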
192
Bulent Karasozen
Institute of Applied Mathematics, TR
Adaptive Discontinuous Galerkin Methods for nonlinear Diffusion-Convection-Reaction
Models
Contributed Session CT3.4: Thursday, 17:00 - 17:30, CO015
Many engineering problems, such as chemical reaction processes, population dynamics, and groundwater contamination, are governed by coupled diffusion-convection-reaction partial differential equations (PDEs) with nonlinear source or sink terms.
In the linear case, when the system is convection dominated, stabilized finite elements and discontinuous Galerkin methods are capable of handling the nonphysical oscillations. Nonlinear reaction terms pose additional challenges. Nonlinear
transport systems are typically convection and/or reaction dominated with characteristic solutions possessing sharp layers. In order to eliminate spurious localized
oscillations in the numerical solutions, discontinuity- or shock-capturing techniques are applied in combination with the streamline upwind Petrov-Galerkin (SUPG)
method.
In contrast to standard Galerkin finite element methods, the discontinuous Galerkin
methods produce stable solutions without the need for extra stabilization techniques to
overcome the spurious oscillations for convection dominated problems. In this talk
we present the application of adaptive discontinuous Galerkin methods to convection dominated models containing quadratic and Monod type reaction rates. A
posteriori error estimates for linear problems in space discretization are extended
to PDEs with nonlinear reaction terms. Numerical results for steady state and
time-dependent coupled systems arising in contaminant biodegradation processes demonstrate the accuracy and efficiency of the adaptive DGFEM compared with the SUPG and shock-capturing techniques.
Joint work with Murat Uzunca.
193
Vladimir Kazeev
Seminar for Applied Mathematics, ETH Zürich, CH
Tensor-structured approach to the Chemical Master Equation
Minisymposium Session LRTT: Wednesday, 11:00 - 11:30, CO3
The Chemical Master Equation (CME) is a cornerstone of stochastic analysis and
simulation of models of biochemical reaction networks. Yet direct solutions of the
CME have remained elusive. Although several approaches overcome the infinite
dimensional nature of the CME through projections or other means, a common
feature of proposed approaches is their susceptibility to the curse of dimensionality, i.e. the exponential growth in memory and computational requirements with
respect to the number of problem dimensions.
To “lift” this curse of dimensionality, we use the recently proposed Quantized Tensor Train (QTT) decomposition of high-dimensional tensors. It relies on two key
ingredients. The first is quantization which consists in splitting each “physical”
dimension into a few virtual levels and results in a tensor with the same entries,
but more dimensions and smaller mode sizes. The second is the Tensor Train (TT)
representation of high-dimensional arrays based on the separation of variables.
The TT representation enjoys two crucial advantages. First, the low-rank approximation of a tensor in the TT format is related to the low-rank approximation
of certain matrices related to the tensor in question. Therefore, it can be performed with the use of standard, well-established matrix algorithms. Second, the
TT format has complexity and memory requirements which are linear or almost
linear in the number of dimensions for many applications. The use of quantization,
leading to the QTT decomposition, allows one to resolve even more structure in the
matrices and vectors involved and to further reduce the complexity and memory
requirements.
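The quantization and TT steps described above can be illustrated in a few lines of Python: a vector of length 2^d is reshaped into a d-dimensional 2×2×...×2 tensor and compressed by the standard TT-SVD sweep. The sampled function and the truncation tolerance are arbitrary choices for illustration only.

import numpy as np

def tt_svd(tensor, tol=1e-10):
    """Compress a d-dimensional array into TT cores by successive truncated SVDs."""
    d, dims = tensor.ndim, tensor.shape
    cores, ranks = [], [1]
    C = tensor.reshape(dims[0], -1)
    for k in range(d - 1):
        U, s, Vt = np.linalg.svd(C, full_matrices=False)
        r = max(1, int(np.sum(s > tol * s[0])))            # truncated TT rank
        cores.append(U[:, :r].reshape(ranks[-1], dims[k], r))
        ranks.append(r)
        C = (s[:r, None] * Vt[:r]).reshape(r * dims[k + 1], -1)
    cores.append(C.reshape(ranks[-1], dims[-1], 1))
    return cores, ranks[1:]

# Quantization: a vector of 2^d samples becomes a 2 x 2 x ... x 2 tensor
d = 16
x = np.linspace(0.0, 1.0, 2**d, endpoint=False)
v = np.exp(-5.0 * x) * np.sin(40.0 * np.pi * x)            # sample function (illustrative)
q = v.reshape([2] * d)                                     # "virtual" binary dimensions
cores, ranks = tt_svd(q)
print("QTT ranks:", ranks)
print("storage: %d numbers vs %d in the full vector" % (sum(c.size for c in cores), v.size))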
We analyze the QTT structure of the CME and apply the exponentially-converging
hp-discontinuous Galerkin discretization in time to reduce it to a QTT-structured
system of linear equations to be solved at each time step. As a solver of linear
systems, we use an algorithm based on the Density Matrix Renormalization Group
(DMRG) approach from quantum chemistry. While there is currently no estimate
of the convergence rate for the DMRG algorithm, our numerical experiments show
the solver to be highly efficient. We demonstrate the efficiency of our method,
compared to Monte Carlo simulations, by applying it to a few examples from
systems biology.
194
Radka Keslerova
Czech Technical University in Prague, Dep. of Tech. Mathematics, CZ
Numerical Simulation of Steady and Unsteady Flows for Viscous and Viscoelastic
Fluids
Minisymposium Session MANT: Tuesday, 11:30 - 12:00, CO017
This work deals with the numerical solution of viscous and viscoelastic fluid flow.
The governing system of equations is based on the system of balance laws for mass
and momentum for incompressible laminar fluids. Different models for the stress tensor are considered. For viscous fluid flow the Newtonian model is used. For describing the behaviour of the mixture of viscous and viscoelastic fluids the Oldroyd-B model is used.
div u = 0,
ρ ∂u/∂t + ρ (u·∇)u = −∇P + div Ts + div Te,
∂Te/∂t + (u·∇)Te = (2µe/λ1) D − (1/λ1) Te + (W Te − Te W) + (D Te + Te D),
where P is the pressure, ρ is the constant density, u is the velocity vector. The
symbols Ts and Te represent the Newtonian and viscoelastic parts of the stress
tensor and
Ts = 2µs D,    Te + λ1 δTe/δt = 2µe D,
where D is the symmetric part and W the antisymmetric part of the velocity gradient.
Numerical solution of the described models is based on a cell-centered finite volume method using explicit Runge–Kutta time integration. The steady-state solution is obtained in the limit t → ∞. In this case the artificial compressibility method can be applied.
The flow is modelled in a bounded computational domain whose boundary is divided into three mutually disjoint parts: a solid wall, an outlet and an inlet. At the inlet a Dirichlet boundary condition is used for the velocity vector, and Neumann boundary conditions are used for the pressure and the stress tensor. At the outlet the pressure value is given, and Neumann boundary conditions are used for the velocity vector and the stress tensor. On the wall the homogeneous Dirichlet boundary condition is used for the velocity vector, and Neumann boundary conditions are considered for the pressure and the stress tensor.
In the case of unsteady computations the dual-time stepping method is considered. The principle of the dual-time stepping method is the following: an artificial time τ is introduced, and the artificial compressibility method is applied in the artificial time. The system of Navier-Stokes equations is extended to unsteady flows by adding artificial time derivatives ∂W/∂τ to all equations.
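To make the dual-time idea concrete, the following Python sketch applies it to a scalar model equation dW/dt = R(W): the physical time derivative is discretized (here with a BDF2 formula, as one common choice) and, at each physical step, the resulting algebraic problem is driven to steady state in the artificial time τ by explicit pseudo-time iterations on ∂W/∂τ. The model residual, the step sizes and the bootstrap of the first step are illustrative placeholders, not the flow solver of this work.

import numpy as np

def R(W):
    """Illustrative model residual, dW/dt = R(W) (placeholder for the finite volume space residual)."""
    return -5.0 * W + np.sin(W)

def dual_time_step(Wn, Wnm1, dt, dtau=1e-3, tol=1e-10, max_iter=20000):
    """Advance one BDF2 physical step by marching to steady state in the artificial time tau."""
    W = Wn.copy()                          # initial guess for the new physical level
    for _ in range(max_iter):
        # unsteady residual: BDF2 time derivative minus the model residual
        res = (3.0 * W - 4.0 * Wn + Wnm1) / (2.0 * dt) - R(W)
        W = W - dtau * res                 # explicit pseudo-time step (forward Euler in tau)
        if np.max(np.abs(res)) < tol:      # converged: dW/dtau is (numerically) zero
            break
    return W

# March a few physical steps; the first step is bootstrapped with a backward-Euler-type iteration
dt, W0 = 0.1, np.array([1.0])
W1 = W0.copy()
for _ in range(10000):
    res = (W1 - W0) / dt - R(W1)
    W1 = W1 - 1e-3 * res
history = [W0, W1]
for n in range(2, 10):
    history.append(dual_time_step(history[-1], history[-2], dt))
print("W at t = %.1f: %.6f" % ((len(history) - 1) * dt, history[-1][0]))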
The presented mathematical models are tested in two- and three-dimensional branching channels.
Acknowledgment
This work was partly supported by the grant GACR 201/09/0917 and GACR
P201/11/1304.
195
References
[1] T. Bodnar and A. Sequeira: Numerical Study of the Significance of the Non-Newtonian Nature of Blood in Steady Flow through a Stenosed Vessel (Editors: R. Rannacher, A. Sequeira), Advances in Mathematical Fluid Mechanics (2010)
83–104.
[2] A.J. Chorin, A numerical method for solving incompressible viscous flow problems, Journal of Computational Physics, 135, 118–125 (1967).
[3] A. Jameson, W. Schmidt and E. Turkel, Numerical solution of the Euler equations by finite volume methods using Runge-Kutta time-stepping schemes,
AIAA 14th Fluid and Plasma Dynamic Conference California (1981).
[4] R. Keslerová and K. Kozel, Numerical solution of laminar incompressible generalized Newtonian fluids flow, Applied Mathematics and Computation, 217,
5125–5133 (2011).
[5] R. LeVeque, Finite-Volume Methods for Hyperbolic Problems. Cambridge
University Press, (2004).
[6] P. Louda, K. Kozel, J. Příhoda, L. Beneš, T. Kopáček: Numerical solution of
incompressible flow through branched channels, Journal of Computers & Fluids
46 (2011) 318–324.
196
Radka Keslerova
Czech Technical University in Prague, Dep. of Tech. Mathematics, CZ
Numerical Simulation of the Atmospheric Boundary Layer Flow over coal mine in
North Bohemia
Contributed Session CT4.5: Friday, 08:20 - 08:50, CO016
1 Introduction
This contribution presents numerical results and wind tunnel measurements obtained for the air flow over a real orography and a scaled-down model of the coal mine in North Bohemia.
Pollution dispersion is one of the critical aspects of economic development. Therefore the accurate prediction of pollutant propagation in the environment is crucial
for future industrialization as well as for natural resources exploitation. Within
this context the airborne dust dispersion in complex terrain is of major interest. In
order to be able to predict the dust pollution it is necessary to explore the air-flow
in detail first. The terrain (in-situ) measurements are naturally the best source of
information. They are however quite expensive and thus their availability is very
limited while solving real-life problems. Thus it is necessary to employ various
ways of physical and mathematical modeling as a complement and extension of
available meteorological data.
2 Problem description
The case solved here is directly based on the orography of the opencast coal mine in North Bohemia. The real area of the mine covers more than 30 km² of the landscape
with forests, villages and part of mountains. The wind-tunnel model is based on
orography of this mine. The model scale is 1 : 9000. The whole wind tunnel model
has horizontal dimensions of 1500 × 1500 mm. The detailed experimental data
were collected just in a small portion of this area. The nominal velocity is about
0.25 m/s, which means that the corresponding Reynolds number is of the order of 10³. More details about the experimental setup and methodology can be found
e.g. in [4].
3 Mathematical model and numerical methods
The mathematical model chosen for this case is the system of Navier-Stokes equations for viscous incompressible non-stratified flow.
Because of the rather low Reynolds number for the wind tunnel scale experiment,
the case is considered as laminar and thus no turbulence model is used.
The first numerical scheme used to solve this model is a modification of the semi-implicit finite difference method described in [2]. It uses the artificial compressibility formulation to resolve the pressure. The governing equations are discretized in a semi-implicit way using a combination of forward and backward differences at time levels n and n+1, which leads to a central scheme with second order of accuracy in space.
Numerical stabilization is carried out using a fourth order artificial viscosity. The
whole problem is solved by a time-marching technique, where the steady-state solution is sought as the limit of the unsteady solution for time t → ∞.
197
The second method is based on the finite volume formulation. The AUSM scheme is used for the spatial semidiscretization of the inviscid
fluxes. Quantities on the cell faces are computed using the MUSCL reconstruction
with the Hemker-Koren limiter. The scheme is stabilized by the pressure diffusion. The viscous fluxes are discretized using the central approach on a dual mesh
(diamond type scheme).
The spatial discretization results in a system of ODEs which is solved by the second-order BDF formula. The arising set of nonlinear equations is then solved by the artificial compressibility method in dual time, using the explicit 3-stage second-order Runge-Kutta method.
Numerical results obtained by both of these schemes are compared to each other, and a comparison with the experimental data is also presented. The influence of the boundary conditions is studied.
References
[1] T. Bodnár and L. Beneš. On some high resolution schemes for stably stratified
fluid flows. In Finite Volumes for Complex Applications VI, Problems &
Perspectives, volume 4 of Springer Proceedings in Mathematics, pages 145–
153. Springer Verlag, 2011.
[2] T. Bodnár, L. Beneš, and K. Kozel. Numerical simulation of flow over barriers
in complex terrain. Il Nuovo Cimento C, 31(5–6):619–632, 2008.
[3] I. Sládek, T. Bodnár, and K. Kozel. On a numerical study of atmospheric
2D and 3D - flows over a complex topography with forest including pollution
dispersion. Journal of Wind Engineering and Industrial Aerodynamics, 95(9–
11), 2007.
[4] Š. Nosek, Z. Jaňour, K. Jurčáková, R. Kellnerová, and L. Kukačka. Dispersion
over open-cut coal mine: wind tunnel modelling strategy. In P. Jonáš and
V. Uruba, editors, Proceedings of the Colloquium FLUID DYNAMICS 2011,
pages 27–28. Institute of Thermomechanics AS CR, 2011.
Joint work with L. Benes, and T. Bodnar.
198
Sebastian Kestler
University of Ulm, DE
On the adaptive tensor product wavelet Galerkin method in view of recent quantitative improvements
Minisymposium Session SMAP: Monday, 15:00 - 15:30, CO015
Based on the fundamental work of Cohen, Dahmen and DeVore (see [1]), in recent years much progress and many contributions have been made in the field of
adaptive wavelet methods for operator problems (see [6] and the references given
therein). In particular, it was shown that this type of method allows for quasi-optimal algorithms for different types of operator problems including linear elliptic and parabolic problems (for both local and non-local operators), non-linear (least-squares) problems as well as PDEs with stochastic influences.
In this talk, we first briefly review the basic principles behind wavelet discretizations of operator problems and adaptive wavelet (Galerkin) methods (see [2]). In
the main part of the talk, we present an optimal algorithm for the fast evaluation
of non-sparse stiffness matrices (see [4]) and a new efficient way of computing a reliable and effective a-posteriori error estimator within the adaptive tensor product
wavelet Galerkin method applied to linear operator problems (see [3]). We shall
also show how to solve operator problems on unbounded domains by adaptive
wavelet methods (see [5]).
References
[1] A. Cohen, W. Dahmen, and R. DeVore. Adaptive wavelet methods for elliptic operator equations – Convergence rates. Mathematics of Computations,
70(233):27–75, 2001.
[2] T. Gantumur, H. Harbrecht, and R. P. Stevenson. An optimal adaptive wavelet
method without coarsening of the iterands. Mathematics of Computations,
76(258):615–629, 2007.
[3] S. Kestler and R. P. Stevenson. An efficient approximate residual evaluation in
the adaptive tensor product wavelet method. Journal of Scientific Computing,
2013. doi: 10.1007/s10915-013-9712-1
[4] S. Kestler and R. P. Stevenson. Fast evaluation of system matrices w.r.t. multitree collections of tensor product refinable basis functions. Technical report,
2012. Submitted.
[5] S. Kestler and K. Urban. Adaptive wavelet methods on unbounded domains.
Journal of Scientific Computing, 53(2):342–376, 2012.
[6] R. P. Stevenson. Adaptive wavelet methods for solving operator equations:
An overview. In R. DeVore and A. Kunoth, editors, Multiscale, Nonlinear and
Adaptive Approximation: Dedicated to Wolfgang Dahmen on the Occasion of
his 60th Birthday, pages 543-598. Springer (Berlin), 2009.
Joint work with R.P. Stevenson, and K. Urban.
199
Venera Khoromskaia
Max-Planck Institute for Mathematics in the Sciences, DE
Hartree-Fock and MP2 calculations by grid-based tensor numerical methods
Minisymposium Session LRTT: Monday, 12:40 - 13:10, CO1
The Hartree-Fock eigenvalue problem governed by the 3D nonlinear integro-differential
operator represents the basic model in ab initio electronic structure calculations.
We present a fast “black-box” Hartree-Fock solver using tensor numerical methods based on the rank-structured calculation of the core Hamiltonian and of the two-electron integrals tensor (TEI), using a general, well separable basis discretized
on a sequence of n × n × n Cartesian grids [2,5]. The arising 3D convolution integrals are replaced by 1D algebraic operations in O(n log n) complexity, yielding
high resolution at low cost [1,2], due to approximation on large spatial grids of size up to n³ ≈ 10^14. The Cholesky decomposition of the TEI matrix is based on a new algorithm of multiple factorizations, which yields an almost irreducible number of product basis functions building the TEI tensor, depending on a threshold
ε > 0 [4]. The factorized TEI matrix is applied in tensor calculations of MP2 energy correction [6]. We demonstrate on-line Hartree-Fock simulations for compact
molecules using our prototype Matlab programs. The examples include glycine
and alanine amino acids [7].
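The replacement of 3D convolutions by 1D operations mentioned above rests on separability: for rank-1 (separable) factors, a 3D discrete convolution factorizes into three independent 1D convolutions. A minimal Python check of this identity (with arbitrary small test vectors, not the actual Hartree-Fock quantities) looks as follows.

import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(0)
n = 16
# Rank-1 (separable) 3D tensors F = fx o fy o fz and G = gx o gy o gz
fx, fy, fz = rng.standard_normal((3, n))
gx, gy, gz = rng.standard_normal((3, n))
F = np.einsum('i,j,k->ijk', fx, fy, fz)
G = np.einsum('i,j,k->ijk', gx, gy, gz)

# Direct 3D convolution on the full tensors
conv3d = fftconvolve(F, G)

# Separable route: three 1D convolutions, then an outer product
conv1d = np.einsum('i,j,k->ijk',
                   np.convolve(fx, gx),
                   np.convolve(fy, gy),
                   np.convolve(fz, gz))

print("max deviation:", np.max(np.abs(conv3d - conv1d)))   # agrees to machine precision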
[1] B. N. Khoromskij and V. Khoromskaia. Multigrid Tensor Approximation of
Function Related Arrays. SIAM J Sci. Comp., 31(4), 3002-3026 (2009).
[2] V. Khoromskaia. Computation of the Hartree-Fock Exchange in the Tensorstructured Format. Comp. Meth. in Appl. Math., Vol. 10(2010), No 2,
pp.204-218.
[3] B. N. Khoromskij, V. Khoromskaia and H.-J. Flad. Numerical solution of the
Hartree-Fock equation in Multilevel Tensor-structured Format. SIAM J Sci.
Comp., 33(1), 45-65 (2011).
[4] V. Khoromskaia, B.N. Khoromskij and R. Schneider. Tensor-structured Calculation of the Two-electron Integrals in a General Basis. Preprint 29/2012
MIS MPG, Leipzig, 2012. SIAM J. Sci. Comp., 2013, accepted.
[5] V. Khoromskaia, D Andrae and B.N. Khoromskij. Fast and Accurate Tensor
Calculation of the Fock Operator in a General Basis. Comp. Phys. Comm.,
183 (2012) 2392-2404.
[6] V. Khoromskaia, and B.N. Khoromskij. Møller-Plesset Energy Correction Using Tensor Factorizations of the Grid-based Two-electron Integrals. Preprint
26/2013 MIS MPG Leipzig, 2013.
[7] V. Khoromskaia. 3D grid-based Hartree-Fock solver by tensor methods. in
progress, 2013.
200
Boris Khoromskij
Max-Planck-Institute for Mathematics in the Sciences, DE
Quantized tensor approximation methods for multi-dimensional PDEs
Contributed Session CT2.8: Tuesday, 15:30 - 16:00, CO123
Modern numerical methods of separable approximation, combining the canonical,
Tucker, and matrix product states (MPS) – tensor train (TT) formats, allow the
low-parametric discretization of d-variate functions and operators on large n^{⊗d} grids with linear complexity in the dimension, O(dn) [2].
The recent quantics-TT (QTT) approximation method [1] is proven to provide
the logarithmic data-compression on a wide class of functions and operators. This
opens the way to solve high-dimensional steady-state and dynamical problems
using FEM approximation in quantized tensor spaces, with log-volume complexity scaling in the full-grid size, O(d log n), instead of O(n^d).
In this talk I will demonstrate how the canonical, QTT and QTT-Tucker tensor
approximations apply to multi-parametric PDEs [3, 4], and to some uncertainty
quantification problems for time-dependent models [5]. The efficiency of QTT-based tensor approximation is illustrated by numerical examples.
http://personal-homepages.mis.mpg.de/bokh
References
[1] B.N. Khoromskij. O(d log N )-Quantics Approximation of N -d Tensors in
High-Dimensional Numerical Modeling. J. Constr. Approx. v. 34(2), 257-289
(2011).
[2] B.N. Khoromskij. Tensor-structured Numerical Methods in Scientific Computing: Survey on Recent Advances. Chemometr. Intell. Lab. Syst. 110 (2012),
1-19.
[3] B.N. Khoromskij, and Ch. Schwab, Tensor-Structured Galerkin Approximation of Parametric and Stochastic Elliptic PDEs. SIAM J. Sci. Comp., 33(1),
2011, 1-25.
[4] B.N. Khoromskij, and I. Oseledets. Quantics-TT collocation approximation
of parameter-dependent and stochastic elliptic PDEs. Comp. Meth. in Applied
Math., 10(4):34-365, 2010.
[5] S. Dolgov, and B.N. Khoromskij. Tensor-product approach to global
time-space-parametric discretization of chemical master equation. Preprint
68/2012, MPI MiS, Leipzig 2012 (submitted).
201
Emil Kieri
Department of Information Technology, Uppsala University, SE
Accelerated convergence for Schrödinger equations with non-smooth potentials
Contributed Session CT4.9: Friday, 09:20 - 09:50, CO124
When numerically solving the time-dependent Schrödinger equation (TDSE) for
the electrons in an atom or molecule, the Coulomb singularity poses a challenge.
The solution will have limited regularity, and high-order spatial discretisations,
which are much favoured in the chemical physics community, are not performing
to their full potential. By exploiting knowledge about the kinks in the solution
we construct a correction, and show how this improves the convergence of Fourier
collocation from second to fourth order. The new method is applied to the higher
harmonic generation (HHG) from atomic hydrogen.
In HHG from atomic gases, atoms are ionised by a laser beam. The detached electrons are accelerated in the electric field of the laser, and may recombine with the
nucleus. A high energy photon, a higher harmonic, is then emitted. The process is
sketched in Figure 1. The HHG process has extensive applications in experimental
physics as the generator of short pulses in the extreme ultraviolet regime. Such
pulses can be used e.g. for time-resolved spectroscopy of electron dynamics [1]. For
different applications, different properties of the harmonic radiation are desirable,
and by shaping the incident pulse it is possible to tailor the harmonic spectrum.
Much work has gone into optimising the harmonic spectrum with respect to a
certain target experimentally [3]. The aim of this work is to improve the accuracy
of simulation of the HHG process. An application we have in mind is computational optimisation of the harmonic spectrum, for which efficient simulation of the
process is a necessary component.
For its simplicity we will consider HHG from atomic hydrogen, but the method
presented is applicable to any atom. Our model is the TDSE for the hydrogen
atom subject to a linearly polarised electric field. We use a two-dimensional model
in cylindrical coordinates,
iu_t = −(1/2) ∆u − u / √(r² + z²) − z e(t) u.
The coordinate system is centred at the nucleus with the z-axis aligned with the
electric field. The wave function u contains all retrievable information about the
electron. The initial wave function is taken as the atomic ground state.
The Coulomb potential is singular at the origin. This poses a challenge for numerical methods, especially if high order of accuracy is desired. Fourier collocation
is often used for the TDSE because of its spectral accuracy, but it is only second
order accurate for this problem due to lack of regularity.
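The loss of spectral accuracy caused by a kink can be seen directly from the decay of Fourier coefficients. The short Python sketch below (an illustration of this general phenomenon, not of the correction proposed in the talk) compares a smooth periodic function with one whose periodic extension has a kink: the coefficients of the latter decay only like k^(-2), which caps Fourier collocation at low algebraic order.

import numpy as np

N = 4096
x = np.linspace(-np.pi, np.pi, N, endpoint=False)

smooth = np.exp(np.cos(x))      # analytic and periodic: Fourier coefficients decay exponentially
kinked = np.exp(-np.abs(x))     # continuous, but the derivative jumps at x = 0 (a kink)

for name, f in (("smooth", smooth), ("kinked", kinked)):
    c = np.abs(np.fft.fft(f)) / N                       # Fourier coefficient magnitudes
    print(name, " ".join("|c_%d| = %.1e" % (k, c[k]) for k in (16, 64, 256)))

# For the kinked function |c_k| ~ k^(-2): quadrupling k reduces |c_k| by roughly 16
ck = np.abs(np.fft.fft(kinked)) / N
print("kinked: c_64 / c_256 = %.1f  (k^-2 predicts 16)" % (ck[64] / ck[256]))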
We discretise the radial direction using Bessel functions. This softens the Coulomb
singularity such that the potential becomes bounded and continuous. In the axial
direction we use Fourier collocation with a new correction term, which is constructed in the spirit of [2]. We derive the time evolution of the kink in the
solution, and use this knowledge to cancel the leading order error terms. One
would then expect the order of accuracy to increase by one, but we show that it
increases from two to four. This is confirmed by numerical experiments. For the
time discretisation we use the Magnus midpoint rule.
We conclude with a simulation for which we calculate the spectrum of emitted
higher harmonics. We use a laser pulse consisting of a Gaussian carrier envelope
202
and a base frequency ω0. The harmonic spectrum is approximated by the squared Fourier transform of the expectation value of the z-component of the dipole velocity ⟨µ̇⟩ = (u, i ∂u/∂z). The outcome is shown in Figure 2. In the harmonic spectrum, peaks show at odd multiples of ω0, less distinct and with smaller amplitude for higher frequencies.
References
[1] G. Doumy and L. F. DiMauro. Science, 322:1194–1195, 2008.
[2] J.-H. Jung. J. Sci. Comput., 39:49–66, 2009.
[3] C. Winterfeldt, C. Spielmann, and G. Gerber. Rev. Mod. Phys., 80:117–140,
2008.
Figure 1: Semi-classical sketch of the HHG process. (a) The electric field of the
laser beam tilts the potential. The electron may then tunnel through the potential
barrier and ionise the atom. (b) The potential tilts in the other direction. The
electron is then accelerated back towards the nucleus. (c) The electron recombines
with the nucleus, emitting a high-energy photon.
Figure 2: (left) The imaginary part of the wave function at the peak of the laser
pulse. Most of the wave function is bound around the nucleus, but some have
ionised and may generate higher harmonics. (right) The spectrum of emitted
harmonics, normalised with respect to the base frequency ω0 .
203
Michael Kirby
Colorado State University, US
Flag manifolds for characterizing information in video sequences
Contributed Session CT3.8: Thursday, 17:30 - 18:00, CO123
In many applications researchers are concerned with knowledge discovery from
large sets of digital imagery. Of particular interest is the problem of analyzing
large amounts of data generated by video sequences. We propose to explore this
problem using the mathematical framework of the flag manifold. We present a
method for computing flags from raw video sequences and a metric for computing
the distance between flags.
A flag is a nested sequence of subspaces Sk of a vector space V such that
S0 ⊂ S1 ⊂ S2 ⊂ S3 ⊂ · · · ⊂ SM
where S0 is the empty set, SM = V and the dimension of the spaces is increasing,
i.e.,
dim Si < dim Si+1 .
To begin, we consider full flag manifolds where each flag consists of M+1 nested
subspaces, i.e., dim Si = i with i = 0, . . . , M, and then proceed to partial flags where dim Si ≠ i. We are primarily interested in exploiting the manifold structure
of the flag where the vector spaces in question are defined over the real numbers.
We present a novel method for computing flag manifolds from video sequences.
This involves introducing an optimization problem that computes the mean of a
set of subspaces of possibly different dimensions. We observe that the Karcher
mean is a special instance of such an approach but is not generally associated to
a flag structure.
We apply our algorithm to the problem of characterizing information in video for
the purposes of classification.
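One simple way to associate a flag with a video clip (a plausible construction for illustration only; the authors' optimization-based approach, which also averages subspaces of possibly different dimensions, is more general) is to stack the vectorized frames as columns of a data matrix and let the leading left singular vectors generate the nested subspaces. A Python sketch with synthetic data:

import numpy as np

def flag_from_frames(frames, depths):
    """Return orthonormal bases spanning nested subspaces S_1 in S_2 in ... (illustrative)."""
    # frames: array of shape (num_frames, height, width)
    X = frames.reshape(frames.shape[0], -1).T        # columns = vectorized frames
    X = X - X.mean(axis=1, keepdims=True)            # center the pixel data
    Q, _, _ = np.linalg.svd(X, full_matrices=False)  # left singular vectors, ordered
    return [Q[:, :d] for d in depths]                # nested: each basis extends the previous

# Synthetic "video": a drifting bright blob on a noisy background (placeholder data)
rng = np.random.default_rng(1)
h = w = 32
frames = np.empty((40, h, w))
yy, xx = np.mgrid[0:h, 0:w]
for t in range(40):
    frames[t] = np.exp(-((xx - 8 - 0.4 * t) ** 2 + (yy - 16) ** 2) / 20.0)
    frames[t] += 0.05 * rng.standard_normal((h, w))

flag = flag_from_frames(frames, depths=(1, 3, 5))
# Check nesting: the span of the smaller basis lies inside the span of the larger one
P5 = flag[2] @ flag[2].T
print("nesting error:", np.linalg.norm(flag[0] - P5 @ flag[0]))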
Joint work with Bruce Draper, Justin Marks, Tim Marrinan, and Chris Peterson.
204
Alana Kirchner
Technical University of Munich, DE
Efficient computation of a Tikhonov regularization parameter for nonlinear inverse
problems with adaptive discretization methods
Minisymposium Session FEPD: Monday, 12:40 - 13:10, CO017
Parameter and coefficient identification problems for PDEs usually lead to nonlinear inverse problems, which require regularization techniques due to their instability. We will present a combination of Tikhonov regularization, Morozov’s
discrepancy principle, and adaptive finite element discretizations as a Tikhonov
parameter choice rule. The discrepancy principle is implemented via an inexact
Newton method, where we control the accuracy by means of mesh refinement based
on a posteriori goal oriented error estimators. In order to further reduce the computational costs, we apply a generalized Gauss-Newton approach for the optimal
control problem, where the stopping index for this iteration plays the part of an
additional regularization parameter, also determined by the discrepancy principle.
The obtained theoretical convergence results (optimal rates under usual source
conditions) will be illustrated by several numerical experiments.
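To convey the role of the discrepancy principle as a parameter choice rule, the Python sketch below applies it to a linear toy problem (a severely ill-conditioned matrix standing in for the discretized forward operator): the Tikhonov parameter is reduced until the residual matches the noise level up to a safety factor τ. The nonlinear, adaptively discretized setting of the talk is of course richer; all data here are synthetic.

import numpy as np

rng = np.random.default_rng(3)
n = 200
# Ill-conditioned forward operator (a discretized smoothing kernel as a stand-in)
t = np.linspace(0, 1, n)
A = np.exp(-40.0 * (t[:, None] - t[None, :]) ** 2) / n
x_true = np.sin(2 * np.pi * t) + t
delta = 1e-3                                  # noise level per sample
y = A @ x_true + delta * rng.standard_normal(n)

def tikhonov(alpha):
    """Minimize ||A x - y||^2 + alpha ||x||^2."""
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ y)

# Morozov's discrepancy principle: decrease alpha until ||A x_alpha - y|| <= tau * noise norm
tau, alpha = 1.5, 1.0
noise_norm = delta * np.sqrt(n)
while np.linalg.norm(A @ tikhonov(alpha) - y) > tau * noise_norm and alpha > 1e-14:
    alpha *= 0.5
x_rec = tikhonov(alpha)
print("chosen alpha = %.2e, relative reconstruction error = %.3f"
      % (alpha, np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true)))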
Joint work with Barbara Kaltenbacher, and Boris Vexler.
205
Axel Klawonn
Universität zu Köln, DE
A deflation based coarse space in dual-primal FETI methods for almost incompressible elasticity
Minisymposium Session PSPP: Thursday, 15:30 - 16:00, CO3
Domain decomposition methods of FETI-DP type have been successfully considered for mixed finite element discretizations of almost incompressible linear
elasticity problems. For discretizations with discontinuous pressure elements, a
zero net flux condition on each subdomain is needed to ensure a good condition
number for FETI-DP or BDDC domain decomposition methods, as has been shown by Li, Pavarino, Widlund, and others. Usually, this constraint is enforced
for each vertex, edge, and face of each subdomain separately. Here, a coarse space
is discussed where all vertex and edge constraints are treated as usual but where
all faces of each subdomain contribute only a single constraint. This approach
is presented within a deflation based framework for the implementation of coarse
spaces into FETI-DP methods.
Joint work with Sabrina Gippert, and Oliver Rheinbach.
206
Stefan Kleiss
RICAM, Austrian Academy of Sciences, AT
Guaranteed and Sharp a Posteriori Error Estimates in Isogeometric Analysis
Contributed Session CT1.5: Monday, 17:00 - 17:30, CO016
The potential and the performance of isogeometric analysis (IGA), introduced in
[1], have been well-studied for applications from many fields over the last years,
see the monograph [2]. Though not a pre-requisite, most of the studies of IGA
are based on non-uniform rational B-splines (NURBS). Since the straightforward
implementation of NURBS leads to a tensor-product structure, local mesh refinement methods are a subject of active current research. Although adaptive mesh refinement is closely linked to the question of reliable a posteriori error estimation, the latter is still in its infancy in isogeometric analysis.
Functional-type a posteriori error estimates, see the recent monograph [3] and the
references therein, which have also been studied for a wide range of problems,
provide reliable and sharp error bounds, which are fully computable and do not
contain any generic, undetermined constants.
We present functional-type a posteriori error estimates in isogeometric analysis.
By exploiting the properties of NURBS, we present efficient computation of these
error estimates. The numerical realization and the quality of the computed error
distribution are addressed. The potential and the limitations of the proposed
approach are illustrated using several computational examples.
References
[1] T.J.R. Hughes, J. Cottrell, and Y. Bazilevs. Isogeometric analysis: CAD, finite elements, NURBS, exact geometry and mesh refinement. Comput. Methods Appl. Mech. Engrg., 194(39-41):4135–4195, 2005.
[2] T.J.R. Hughes, J. Cottrell, and Y. Bazilevs. Isogeometric Analysis: Toward
Integration of CAD and FEA. Wiley, Chichester, 2009.
[3] S. Repin. A Posteriori Estimates for Partial Differential Equations. Walter
de Gruyter, Berlin, Germany, 2008.
Joint work with Satyendra K. Tomar.
207
Petr Knobloch
Department of Numerical Mathematics, Faculty of Mathematics and Physics,
Charles University in Pragu, CZ
Finite element methods for convection dominated problems
Plenary Session: Friday, 10:50 - 11:40, CO1
Many important applications involve a strong convective transport of physical quantities of interest, whereas diffusion effects play only a minor role. As a consequence, the solutions of such problems typically contain so-called layers, which are narrow regions where the solutions change abruptly. Unless the computational mesh is sufficiently fine, approximation of these solutions using standard discretization techniques usually leads to spurious oscillations of unacceptable size that pollute the solutions in a large part of the computational domain. Therefore, many different discretization approaches have been developed over more than four decades of intensive research, but the problem of solving convection-dominated problems numerically has not yet been resolved in a satisfactory way.
The talk will be devoted to various finite element techniques developed to solve convection dominated problems numerically. For clarity, most ideas will be explained
for simple scalar convection–diffusion or convection–diffusion–reaction equations.
Furthermore, applications to flow problems will be discussed. We shall mainly focus on linear and nonlinear stabilized methods. A basic difficulty connected with
these approaches is that they involve stabilization parameters that significantly
influence the quality of the computed solutions but for which the optimal choice is
usually unknown. Therefore, we shall also discuss various possibilities for defining these parameters. The properties of the methods will be illustrated by
both theoretical and numerical results.
208
Tzanio Kolev
Lawrence Livermore National Laboratory, US
Parallel Algebraic Multigrid for Electromagnetic Diffusion
Minisymposium Session MMHD: Thursday, 11:30 - 12:00, CO017
Numerical simulation of electromagnetic phenomena is of critical importance in a
number of practical applications and production codes. In many physical models,
Maxwell’s equations are reduced to a second-order PDE system for one of the vector fields or for a potential. The definite Maxwell equations, for example, arise after
discretization in time, while magnetostatics with a vector potential leads to the
semidefinite Maxwell problem. Motivated by the practical needs of such large-scale
simulations, we are developing parallel algebraic solvers for complicated systems
of partial differential equations, including the definite and semidefinite Maxwell
problem. One example of a typical application is the calculation of hydrodynamic
stresses caused by large currents in pulsed-power experiments at Lawrence Livermore National Laboratory. The plot in Figure 1 shows the transient magnetic
field and eddy currents occurring in a helical coil with two side-by-side wires. Fine
resolutions of this problem are not tractable with previous solvers, such as classical
algebraic multigrid (AMG) methods.
Recently, there has been a significant activity in the area of auxiliary-space methods for linear systems arising in electromagnetic diffusion simulations. Motivated
by a novel stable decomposition of the Nedelec finite element space due to Hiptmair and Xu, we implemented a scalable solver for second order (semi-)definite
Maxwell problems, which utilizes several internal AMG V-cycles for scalar and
vector nodal Poisson-like matrices. In this talk we describe this Auxiliary-space
Maxwell Solver (AMS) by reviewing the underlying theory, demonstrating its parallel numerical performance, and presenting some new developments in its theory
and implementation for new classes of electromagnetic problems. In particular,
we will report some large-scale scalability results with the AMS implementation in
the hypre library of scalable linear solvers, including the problem with 12 billion
unknowns on 125,000 cores shown in Figure 2.
209
Figure 1: Simulation of electromagnetic diffusion in a bifilar helical coil using the
Auxiliary-space Maxwell Solver.
Figure 2: Weak scaling study for AMS, where the problem size per core is
fixed (32768 elements) and we report the problem generation and the solver
setup/solve times.
Joint work with P. Vassilevski, and T. Brunner.
210
Igor Konshin
Institution of Russian Academy of Sciences Dorodnicyn Computing Centre of RAS,
RU
Continuous parallel algorithm of the second order incomplete triangular factorization with dynamic decomposition and reordering
Contributed Session CT1.6: Monday, 17:30 - 18:00, CO017
Modern software packages used for modeling problems of mathematical physics are often based on implicit approximation schemes. This requires a high-accuracy solution of ill-conditioned sparse systems of linear algebraic equations of large dimension. Another key point is the maximal parallel efficiency of the linear system solution. The paper deals with both of the above-mentioned problems.
Let us consider the solution of the linear system Ax = b, where A ∈ R^{N×N} is a given nonsingular sparse matrix of large dimension N, b ∈ R^N is a given right-hand side vector, and x ∈ R^N is the vector of unknowns. The main idea of the iterative solution
is based on the second order incomplete triangular factorization [1], which follows
from the relation
A + E = L(I) U(I) + L(I) U(II) + L(II) U(I),    (1)
where L(I) and U (I) are the lower and upper triangular parts of the preconditioner,
respectively, (the elements of the first order accuracy), L(II) and U (II) are the lower
and upper triangular parts of the preconditioner with the elements of the second
order accuracy, respectively, and E is an error matrix.
Some theoretical estimates of the preconditioning quality in the symmetric case are presented in [1]. Our aim is to provide a reliable approach to the construction
of the parallel preconditioner without a deterioration of convergence.
The commonly used parallelization methods are based on the block Jacobi preconditioning or overlapping additive Schwarz preconditioning [2]. The idea of
the continuous parallel algorithm for preconditioning of type (1) consists in reproducing the sequential algorithm for a dynamically chosen ordering and partitioning. The
above ordering is based on the sparsity of the current Schur complement and is
dynamically calculated by a nested dissection (ND) ordering algorithm, after that
a new decomposition to the processors and threads can be constructed.
The following computational model is introduced for the implementation of the
MPI+threads continuous parallel factorization. The notion of group tree of MPI
processes ‘th-tree’ is introduced in terms of processor groups (Fig. 1, left). Each
vertex of th-tree is associated with a corresponding th-block of the matrix as a set
of consecutive rows of the coefficient matrix. Each node of the tree that has children corresponds to the separators between several groups of MPI processes, whereas the leaves are the independent sets of the individual MPI processes.
After calculating the approximate Schur complement, the ND ordering is constructed for each node of the th-tree by using the ParMetis package, after which the binary block partitioning is constructed to provide a set of hyper blocks (h-blocks). This
corresponds to the partitioning to a binary tree of MPI processes, binary ‘h-tree’
(Fig. 1, center). Similarly, after the calculation of the approximate Schur complement on each node of the MPI processes tree the ND ordering is constructed to
separate the binary block partitioning to a set of blocks. The last tree is a binary
tree for computational threads ‘t-tree’ (Fig. 1, right).
The technique described above is used to dynamically construct the multilevel
preconditioner based on decomposition and proper reordering. This method is
211
used for the parallel solution of linear systems on parallel computers of heterogeneous architecture with up to several thousands of processors (threads). Numerical results are presented for linear systems arising from different applications (including structural mechanics problems).
[1] I.E.Kaporin. High quality preconditioning of a general symmetric positive
definite matrix based on its decomposition. Numer. Linear Algebra Appl. (1998)
Vol.5, 483-509.
[2] I.E.Kaporin, I.N.Konshin. A parallel block overlap preconditioning with inexact
submatrix inversion for linear elasticity problems. Numer. Linear Algebra Appl.
(2002) Vol.9, No.2, 141-162.
Figure 1: Hierarchy of the trees: th-tree of the MPI processes groups (left); binary h-tree of MPI processes (center); and binary t-tree of computational threads
(right).
Joint work with Sergey Kharchenko.
212
Adam Kosík
Charles University in Prague, Faculty of Mathematics and Physics, CZ
The Interaction of Compressible Flow and an Elastic Structure Using Discontinuous Galerkin Method
Contributed Session CT4.5: Friday, 09:50 - 10:20, CO016
In this paper we are concerned with the numerical simulation of the interaction of
fluid flow and an elastic structure in a 2D domain. For each individual problem
we employ the discretization by the discontinuous Galerkin finite element method
(DGM). We describe the application of the DGM to the problem of compressible
fluid flow in a time-dependent domain [1] and dynamic problem of the deformation
of an elastic body. For the static elasticity problem, the discretization method was
established in [2]. Finally, we describe our approach to the coupling of these two
independent problems: both are solved separately at a given time instant, but we
require the approximate solutions to satisfy certain transient conditions. These
transient conditions are met through several inner iterations. In each iteration a
calculation of both the elastic body deformation problem and the problem of the
compressible fluid flow is performed.
The application of the DGM to both problems is described. The DGM is a method
for solving various kinds of partial differential equations, taking in advance some
of the features of both the finite volume and the finite element methods. The
DGM approach is applied for the spatial discretization of both problems. The
time discretization is based either on finite-difference methods or on the space-time discontinuous Galerkin method (STDGM). The STDGM applies the main
concept of the DGM both to the time and space semi-discretizations.
Flow of viscous compressible fluid is described by the Navier-Stokes equations.
The time-dependent computational domain and a moving grid are taken into
account employing the arbitrary Lagrangian-Eulerian (ALE) formulation of the
Navier-Stokes equations. Solution of the deformation of the computational domain becomes another task which has to be dealt with to solve the problem of
interaction.
For the numerical solution of the dynamic 2D linear elasticity problem with mixed
boundary conditions we have developed a .NET library written in C#. The
library supports several time discretization techniques, built on top of the DG
discretization in space with an arbitrary choice of the degree of the polynomial
approximation. The time discretizations are based on the backward Euler formula,
the second-order backward difference formula and the STDGM with an arbitrary
choice of the degree of the polynomial approximation in time.
The presented method can be applied to solve a selection of problems of biomechanics and aviation. Specifically, in this paper we are focused on the simulation
of vibrations of vocal folds, which are caused by the airflow originating in human
lungs. This procedure leads to the formation of voice. We consider a simplified 2D
problem equipped with appropriate initial and boundary conditions. We define
the properties of the flowing fluid and the material properties of the elastic body,
which models the vocal folds. The geometry of the computational domain is inspired by measurements on real human vocal tract. The results are post-processed
in order to get a visualization of the obtained solution. We are especially interested in the visualization of the elastic body deformation and the visualization of
some chosen physical quantities.
213
References
[1] M. Feistauer, J. Horáček, V. Kučera, J. Prokopová: On the numerical solution
of compressible flow in time-dependent domains. Mathematica Bohemica, 137
(2012), 1–16.
[2] B. M. Rivière, Discontinuous Galerkin Methods for Solving Elliptic and
Parabolic Equations: Theory and Implementation, Frontiers in Applied Mathematics, 2008.
Joint work with Miloslav Feistauer, Martin Hadrava, and Jaromír Horáček.
214
Antti Koskela
University of Innsbruck, AT
A moment-matching Arnoldi method for phi-functions
Minisymposium Session TIME: Thursday, 15:30 - 16:00, CO015
We consider a new Krylov subspace algorithm for computing expressions of the form Σ_{k=0}^{p} h^k ϕ_k(hA) w_k, where A ∈ C^{n×n}, w_k ∈ C^n, and ϕ_k are matrix functions related to the exponential function. Computational problems of this form appear when applying exponential integrators to large dimensional ODEs in semilinear
form u′(t) = Au(t) + g(u(t)). Using Cauchy’s integral formula

ϕ_k(z) = (1/(2πi)) ∫_Γ e^λ / (λ^k (λ − z)) dλ
we give a representation for the error of the approximation and derive a priori
error bounds which describe well the convergence behaviour of the algorithm. In
addition an efficient a posteriori estimate is derived. Numerical experiments in
MATLAB illustrating the convergence behaviour are given.
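For small dense test matrices, the quantity Σ_{k=0}^{p} h^k ϕ_k(hA) w_k targeted by the Krylov algorithm can be evaluated directly from the recurrence ϕ_{k+1}(z) = (ϕ_k(z) − 1/k!)/z, which is handy as a reference when checking an implementation. A Python sketch (dense linear algebra only; the matrix and vectors are random placeholders, and hA is assumed nonsingular):

import numpy as np
from math import factorial
from scipy.linalg import expm

def phi_linear_combination(A, W, h):
    """Dense reference for sum_{k=0}^{p} h^k phi_k(hA) w_k, with W = [w_0, ..., w_p] column-wise."""
    n, p1 = W.shape
    hA = h * A                                # assumed nonsingular for the recurrence below
    Phi = expm(hA)                            # phi_0(hA) = exp(hA)
    result = Phi @ W[:, 0]
    for k in range(1, p1):
        # phi_k(z) = (phi_{k-1}(z) - 1/(k-1)!) / z, applied to the matrix argument hA
        Phi = np.linalg.solve(hA, Phi - np.eye(n) / factorial(k - 1))
        result = result + h**k * (Phi @ W[:, k])
    return result

# Small random test problem (illustrative data only)
rng = np.random.default_rng(7)
n, p, h = 6, 3, 0.1
A = rng.standard_normal((n, n))
W = rng.standard_normal((n, p + 1))
v = phi_linear_combination(A, W, h)

# Sanity check against the Taylor series phi_k(z) = sum_j z^j / (j + k)!
ref = sum(h**k * sum(np.linalg.matrix_power(h * A, j) / factorial(j + k)
                     for j in range(30)) @ W[:, k]
          for k in range(p + 1))
print("max deviation from truncated series:", np.max(np.abs(v - ref)))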
Joint work with Alexander Ostermann.
215
Felix Krahmer
Insitute for Numerical and Applied Mathematics, University of Göttingen, DE
The restricted isometry property for random convolutions
Minisymposium Session ACDA: Monday, 12:40 - 13:10, CO122
The theory of compressed sensing is based on the observation that many natural
signals are approximately sparse in appropriate representation systems, that is,
only few entries are significant. The goal of the theory is to devise methods to
recover such a signal x from linear measurements y = Φx. For example, it has
been shown [1] that under the assumption of a small restricted isometry constant
on the matrix Φ, approximate recovery via ℓ1-minimization

min_z ‖z‖_1    subject to Φz = y,

(where ‖z‖_p denotes the usual ℓp-norm) is guaranteed even in the presence of noise.
Here, for a matrix Φ ∈ R^{m×n} and s < n, the restricted isometry constant δ_s = δ_s(Φ) is defined as the smallest number such that

(1 − δ_s) ‖x‖_2^2 ≤ ‖Φx‖_2^2 ≤ (1 + δ_s) ‖x‖_2^2    for all s-sparse x.
If a matrix has a small restricted isometry constant, we also say that the matrix
has the restricted isometry property (RIP).
A class of measurement models that is of particular relevance for sensing applications is that of subsampled convolution with a random pulse. In such a model,
the convolution of a signal x ∈ R^n with a random vector ξ ∈ R^n, given by

x ↦ ξ ∗ x,    (ξ ∗ x)_k = Σ_{j=1}^{n} ξ_{(k−j) mod n} x_j ,
is followed by a restriction PΩ to a deterministic subset of the coefficients Ω ⊂
{1, . . . , n} and normalization of the columns. The resulting measurement map is
linear; its matrix representation Φ is given by

Φx = (1/√m) PΩ (ξ ∗ x)
is called a partial random circulant matrix. In the talk, we will focus on the case
that the random vector ξ is a Rademacher random vector, that is, its entries are independent random variables with distribution P(ξ_i = ±1) = 1/2. Note, however,
that the corresponding results in [2] consider more general random vectors. In the
talk, we present the following main result.
Theorem 4. ([2]) Let Φ ∈ R^{m×n} be a draw of a partial random circulant matrix generated by a Rademacher vector ξ. If

m ≥ c δ^{−2} s (log^2 s)(log^2 n),    (1)

then with probability at least 1 − n^{−(log n)(log^2 s)}, the restricted isometry constant of Φ satisfies δ_s ≤ δ. The constant c > 0 is universal.
This result improves the best previously known estimates for a partial random
circulant matrix [3], namely that m ≥ C_δ (s log n)^{3/2} is a sufficient condition for achieving δ_s ≤ δ with high probability. In particular, Theorem 4 removes the
216
exponent 3/2 of the sparsity s, which was already conjectured in [3] to be an
artefact of the proof.
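For experimentation, a partial random circulant measurement matrix of this kind is easy to generate: circular convolution with the Rademacher vector can be applied via the FFT and then restricted to an index set Ω. The Python sketch below builds such a Φ and reports how well it preserves the norm of a few random s-sparse vectors; this is an empirical illustration only, not a verification of the RIP, which requires a supremum over all s-sparse vectors.

import numpy as np

rng = np.random.default_rng(0)
n, m, s = 1024, 256, 10

xi = rng.choice([-1.0, 1.0], size=n)                      # Rademacher vector generating the circulant
Omega = np.sort(rng.choice(n, size=m, replace=False))     # index set (fixed once chosen)

def Phi(x):
    """Partial random circulant measurement: restriction of (xi circularly convolved with x) to Omega."""
    conv = np.real(np.fft.ifft(np.fft.fft(xi) * np.fft.fft(x)))   # circular convolution via FFT
    return conv[Omega] / np.sqrt(m)

# Empirical isometry check on random s-sparse unit vectors
ratios = []
for _ in range(200):
    x = np.zeros(n)
    support = rng.choice(n, size=s, replace=False)
    x[support] = rng.standard_normal(s)
    x /= np.linalg.norm(x)
    ratios.append(np.linalg.norm(Phi(x)) ** 2)            # expectation is 1 for unit-norm x
print("||Phi x||^2 over unit-norm s-sparse x: min %.3f, max %.3f" % (min(ratios), max(ratios)))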
The proof is based on the observation that the restricted isometry constant of a
partial circulant matrix Φ based on a Rademacher vector ξ can be expressed as

δ_s(Φ) = sup_{x ∈ S^{n−1}, |supp x| ≤ s} | ‖V_x ξ‖_2^2 − E‖V_x ξ‖_2^2 |,

where V_x is defined through V_x y := (1/√m) PΩ (x ∗ y).
As it turns out, the expression ‖V_x ξ‖_2^2 is a Rademacher chaos process, that is, it is of the form ⟨ξ, M ξ⟩. This observation was already exploited in [3] to obtain their suboptimal bounds. Our result, however, incorporates the additional observation that the matrix M in the above scenario is V_x^∗ V_x, hence positive semidefinite. In
the talk we present a bound for suprema of chaos processes under such structural
assumptions. The proof of this bound is based on decoupling and a chaining
argument, see [2]. This bound then allows us to establish the above theorem.
References
[1] E. J. Candès, J. Romberg, and T. Tao. Robust uncertainty principles: exact
signal reconstruction from highly incomplete frequency information. IEEE
Trans. Inform. Theory, 52(2):489–509, 2006.
[2] F. Krahmer, S. Mendelson, and H. Rauhut. Suprema of chaos processes and
the Restricted Isometry Property. Comm. Pure Appl. Math., to appear.
[3] H. Rauhut, J. K. Romberg, and J. A. Tropp. Restricted isometries for partial
random circulant matrices. Appl. Comput. Harmon. Anal., 32(2):242–254,
2012.
Joint work with Shahar Mendelson, and Holger Rauhut.
217
Stephan Kramer
Institut f. Numerische und Angewandte Mathematik, Universität Göttingen, DE
The Geometric Conservation law in Astrophysics: Discontinuous Galerkin Methods on Moving Meshes for the non-ideal Gas Dynamics in Wolf-Rayet Stars
Minisymposium Session NFSI: Thursday, 15:30 - 16:00, CO122
Wolf-Rayet stars are described by the inviscid Euler equations for compressible flow
enhanced by a coupling to radiation transport in the diffusion approximation and
a Poisson equation for the self-gravitation. Unlike standard aerospace applications
the closure is given by two equations of state, one for the pressure and one for the
energy density. These equations and the opacity of the star are to a large extent
only known in the form of lookup tables.
To understand the details of the nonlinear dynamics in the transient states of
a Wolf-Rayet star an accurate three-dimensional simulation of its atmosphere is
necessary. Especially the mass losses observed require a discretization scheme
which is locally conservative in space and time. To accommodate for shock waves
we employ arbitrary Lagrangian-Eulerian (ALE) methods where the mesh partly
moves with - or represents the motion of - a fluid particle. Due to the importance
of local conservation properties of the discretization scheme we choose a DG approach.
The key to a successful and consistent ALE-type DG discretization is to respect
the geometric conservation law: uniform flows should be preserved exactly for
arbitrary mesh motion. We follow Mavriplis et al. [1] and discuss a DG-ALE
discretization for the nonlinear gas dynamics in Wolf-Rayet stars.
[1] D. Mavriplis and C. Nastase. On the geometric conservation law for high-order discontinuous Galerkin discretizations on dynamically deforming meshes. 46th
AIAA Aerospace Sciences Meeting and Exhibit, 2008.
Joint work with Bartosz Kohnke, and Gert Lube.
218
Stephan Kramer
Insitut f. Numerische und Angewandte Mathematik, DE
Converting Interface Conditions due to Excluded Volume Interactions into Boundary Conditions by FEM-BEM Methods
Minisymposium Session FREE: Tuesday, 12:00 - 12:30, CO2
Recent impedance spectroscopy studies of ubiquitin in solution have revealed the
influence of conformational sampling of proteins on the direct current contribution to the dielectric loss spectrum. A detailed model for this has been derived
in [1]. Our contribution discusses the main numerical issues in setting up a
Poisson-Nernst-Planck model for the ion dynamics and the electrostatic potential in impedance spectroscopy of globular proteins in solution:
- The set of partial differential equations modeling impedance spectroscopy are
derived from the continuity equation and the electro-diffusive fluxes. This is a
set of convection-diffusion equations for the ion densities coupled to a Poisson
equation for the electrostatic potential.
- The simulation of the experiment on a generic, solvated globular protein needs
appropriate boundary conditions for the impedance cell and for the protein-solvent
interface.
The experimental setup introduces solvent-electrode interfaces which give rise to
dielectric double layers well-known from the electro-chemistry. The excluded volume interaction between protein and ions can be transformed into an integral
equation for the electrostatic potential on the protein-solvent interface. This is
helpful especially in the case of complicated molecular surfaces. When intramolecular dynamics are taken into account, these surfaces might start to move.
In the bulk the model is discretized by finite elements. The integral equation on
the protein-solvent interface is discretized by a boundary element method. Our
results show
- the interface problem can be replaced by a non-local boundary condition,
- how to set up the correct FEM-BEM coupling for its discretization,
- curvilinear approximation of cell boundaries enhances convergence.
[1] Stephan C. Kramer PhD thesis 2012, Universität Göttingen, link: http://ediss.unigoettingen.de/handle/11858/00-1735-0000-000D-FB52-0
Joint work with Gert Lube.
219
Marie Kray
Universität Basel, CH
A new approach to solve the inverse scattering problem for the wave equation
Contributed Session CT3.1: Thursday, 18:00 - 18:30, CO1
In paper [1], we propose a new method to solve the following inverse problem: we
aim at reconstructing, from boundary measurements, the location, the shape and
the wave propagation speed of an unknown inclusion surrounded by a medium
whose properties are known.
Our strategy combines two methods recently developed by the authors:
1. the Time-Reversed Absorbing Condition method (TRAC) first introduced
in [2]: It combines time reversal techniques and absorbing boundary conditions to reconstruct and regularize the signal in a truncated domain that
encloses the inclusion. This enables one to reduce the size of the computational domain where we solve the inverse problem, now from virtual internal measurements.
2. the Adaptive Inversion (AI) method initially proposed for the viscoelasticity
equation in [3]: The originality of this method comes from the parametrization of the problem. Instead of looking for the value of the unknown parameter at each node of the mesh, it projects the parameter into a basis
composed of eigenvectors of the Laplacian operator. Then, the AI method
uses an iterative process to adapt the mesh and the basis of eigenfunctions
from the previous approximation to improve the reconstruction.
The novelty of our work is threefold. Firstly, we present a new study on the regularizing power of the TRAC method. Secondly, we adapt the Adaptive Inversion
method to the case of the wave equation and we propose a new anisotropic version
of the iterative process. Finally, we present numerical examples to illustrate the
efficiency of the combination of both methods. In particular, our strategy allows
(a) to reduce the computational cost, (b) to stabilize the inverse problem and (c)
to improve the precision of the results.
In Figure 1, we display our results for a penetrable pentagon. We compare the
exact propagation speed (left column) to the reconstruction by using both methods, first without noise on the recorded data (center column), then with 20% level
of noise (right column). We denote by 20%-noisy TRAC data the virtual data
obtained after the TRAC process from 20%-noisy boundary measurements.
References:
[1] M. de Buhan and M. Kray, A new approach to solve the inverse scattering problem for waves: combining the TRAC and the Adaptive Inversion methods, submitted (available on HAL), 2013.
[2] F. Assous, M. Kray, F. Nataf, and E. Turkel, Time Reversed Absorbing Condition: Application to inverse problem, Inverse Problems, 27(6), 065003, 2011.
[3] M. de Buhan and A. Osses, Logarithmic stability in determination of a 3D viscoelastic coefficient and a numerical example, Inverse Problems, 26(9), 95006, 2010.
220
Figure 1: Shape and properties reconstruction of a penetrable pentagon by
using both TRAC and AI methods: (a) Propagation speed profile inside and
outside the inclusion. (b) Result obtained with 0%-noisy TRAC data, relative L2 -error = 1.72%. (c) Result obtained with 20%-noisy TRAC data, relative
L2 -error = 1.92%.
Joint work with Dr. Maya de Buhan (CNRS-Université Paris Descartes France).
221
Gunilla Kreiss
Uppsala University, SE
Imposing Neumann and Robin boundary conditions with added penalty term
Contributed Session CT1.9: Monday, 17:00 - 17:30, CO124
In a standard finite element method model for an elliptic problem Neumann and
Robin boundary conditions are imposed weakly. For smooth cases we expect the
normal derivative at the boundary to converge to the prescribed value, but at a
slower rate than the solution itself. This can be problematic when, for example, computing flow in porous media. In a typical porous-media case the pressure satisfies an elliptic equation with a Neumann boundary condition, for instance where the aquifer is bounded by impermeable rock. After solving for the pressure,
the pressure gradient gives the approximate flow. At boundaries the flow approximation will only satisfy the prescribed flux approximately. From an engineering
point of view a very good agreement would be desirable.
In this work we modify the weak form by including a penalty term so as to decrease
the error in the boundary normal derivative for the Neumann case. The same
technique can be applied to Robin boundary conditions. The new bilinear form is
symmetric, and the approach is inspired by Nitsche’s method for imposing Dirichlet
conditions weakly. We prove that in the interior of the domain the corresponding
discrete approximation converges at the same order as the solution obtained using
the standard method. Numerical experiments demonstrate that the convergence
rate of the normal derivative at the boundary can be improved by one order. This
is true for both Neumann and Robin boundary conditions.
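As a hedged sketch of what such a penalized weak form can look like for $-\Delta u = f$ with Neumann data $\partial_n u = g$ on $\Gamma_N$ (our guess at the general shape; the abstract does not give the exact terms, and γ, h denote the penalty parameter and the mesh size):
\[
a_\gamma(u_h, v_h) = \int_\Omega \nabla u_h\cdot\nabla v_h\, dx + \gamma\int_{\Gamma_N} h\, (\partial_n u_h)(\partial_n v_h)\, ds,
\qquad
\ell_\gamma(v_h) = \int_\Omega f\, v_h\, dx + \int_{\Gamma_N} g\, v_h\, ds + \gamma\int_{\Gamma_N} h\, g\, (\partial_n v_h)\, ds.
\]
The added terms keep the bilinear form symmetric and consistent, since they cancel for the exact solution with $\partial_n u = g$.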
In a second numerical example we compute streamlines based on a pressure solution on a square, with prescribed flux at the horizontal boundaries. At the right
half of the upper boundary, and at the left half of the lower boundary the prescribed flux is equal to zero. Thus the streamlines of the exact solution should
be parallel to the boundary there. In Figures 1 and 2 we have plotted streamlines based on the standard method and on our method, respectively. Note the streamlines almost parallel to the right half of the upper boundary. In Figure 2 the uppermost streamline is considerably more accurate than in Figure 1, where it
exits the no-flow boundary. An improved result is also found in the lower left half
of the boundary.
The numerical tests have been done on both Cartesian grids and quadrilateral
grids with bilinear finite elements.
222
Figure 1: Standard Neumann
Figure 2: Penalized Neumann
Joint work with Margot Gerritsen, and Annette Stephansen.
223
Wolfgang Krendl
Johannes Kepler University Linz, AT
Efficient preconditioning for time-harmonic control problems
Contributed Session CT4.2: Friday, 08:50 - 09:20, CO2
Based on analytic results on preconditioners for time-harmonic control problems
in the paper Stability Estimates and Structural Spectral Properties of Saddle
Point Problems (authors: Krendl W., Simoncini V., Zulehner W.: to appear in:
Numerische Mathematik), we discuss their efficient implementation, in particular for time-harmonic parabolic and time-harmonic Stokes control problems. For these problems we present practical preconditioners in combination with MINRES, which lead to robust convergence rates with respect to the mesh size, the frequency, and the cost parameters.
Joint work with Valeria Simoncini, and Walter Zulehner.
224
Daniel Kressner
EPF Lausanne, CH
Interpolation based methods for nonlinear eigenvalue problems
Minisymposium Session NEIG: Thursday, 12:00 - 12:30, CO2
This talk is concerned with numerical methods for matrix eigenvalue problems
that are nonlinear in the eigenvalue parameter. In particular, we focus on eigenvalue problems for which the evaluation of the matrix-valued function is computationally expensive. Examples of such problems arise, e.g., from boundary
integral formulations of elliptic PDE eigenvalue problems or coupled FEM/BEM
discretizations of fluid-structure interaction problems. The cost for evaluating the
matrix-valued function typically excludes the use of established nonlinear eigenvalue solvers. Instead, we propose the use of polynomial approximation combined
with non-monomial linearizations.
It can be shown that the obtained eigenvalue approximations converge exponentially as the degree of the polynomial increases. In turn, a degree between 10 and
20 is often sufficient to attain excellent accuracy. Still, this means that the size of
the eigenvalue problem is increased by a factor between 10 and 20, and hence the
storage requirements of, e.g., Krylov subspace methods increase by this factor. For
matrix polynomials in the monomial basis, the Q-Arnoldi methods and variants
thereof (SOAR, TOAR) are established techniques to largely avoid this increase.
If time permits, we will discuss the adaptation of TOAR and deflation techniques to
non-monomial bases.
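As a toy illustration of the polynomial-approximation-plus-linearization idea (in the monomial basis for simplicity, whereas the talk advocates non-monomial bases; the nonlinear function and all parameter values below are invented), one can sample T(λ), fit a matrix polynomial entrywise, and solve the resulting polynomial eigenvalue problem through a block companion linearization:

```python
import numpy as np

def polyeig(coeffs):
    """Eigenvalues of P(lam) = sum_k lam**k * coeffs[k] via block companion linearization."""
    d, n = len(coeffs) - 1, coeffs[0].shape[0]
    B = [np.linalg.solve(coeffs[-1], A) for A in coeffs[:-1]]   # make the polynomial monic
    C = np.zeros((d * n, d * n), dtype=complex)
    C[:n, :] = np.hstack([-B[d - 1 - j] for j in range(d)])     # first block row
    for j in range(1, d):
        C[j * n:(j + 1) * n, (j - 1) * n:j * n] = np.eye(n)     # identity sub-diagonal blocks
    return np.linalg.eigvals(C)                                  # problem size grows by the factor d

def fit_matrix_polynomial(T, nodes, degree):
    """Entrywise least-squares polynomial fit of the matrix-valued function T at the given nodes."""
    samples = np.array([T(z) for z in nodes])                    # shape (m, n, n)
    V = np.vander(nodes, degree + 1, increasing=True)            # monomial Vandermonde matrix
    c, *_ = np.linalg.lstsq(V, samples.reshape(len(nodes), -1), rcond=None)
    n = samples.shape[1]
    return [c[k].reshape(n, n) for k in range(degree + 1)]

# Hypothetical delay-type nonlinear eigenvalue problem, for illustration only.
A0 = np.diag([1.0, 2.0, 3.0])
A1 = 0.1 * np.array([[0.0, 1.0, 0.0], [0.0, 0.0, 1.0], [1.0, 0.0, 0.0]])
T = lambda lam: -lam * np.eye(3) + A0 + A1 * np.exp(-lam)

coeffs = fit_matrix_polynomial(T, np.linspace(0.5, 3.5, 20), degree=6)
evs = polyeig(coeffs)
real_in_box = evs[(abs(evs.imag) < 1e-2) & (evs.real > 0.5) & (evs.real < 3.5)]
print(np.sort(real_in_box.real))   # rough approximations to the eigenvalues of T in [0.5, 3.5]
```

The companion matrix has d times the original dimension, which is exactly the memory growth that the Q-Arnoldi/TOAR-type methods mentioned above are designed to mitigate.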
Parts of this work are based on collaborations with Jose Roman, Olaf Steinbach,
and Gerhard Unger.
Joint work with Cedric Effenberger.
225
Jochen Kroll
LANXESS Deutschland GmbH, DE
An alternative description of the visco-elastic flow behavior of highly elastic polymer melts
Minisymposium Session MANT: Wednesday, 10:30 - 11:00, CO017
The description of the visco-elastic behavior of polymer melts and solutions undergoing finite deformations is usually based on generalized Maxwell processes. Achieving a sufficient approximation quality of dynamical data requires – especially in the case of commercial and thus broadly distributed polymers – the introduction of a large number of parameters, with the latter being
of limited physical meaning.
The presented modeling approach is not only characterized by its significantly
reduced number of parameters but also by its direct link to the dynamical characterization of the material. In that way a connection between the molecular
information and the simulated flow behavior can be established.
226
Lev Krukier
Southern Federal University, Computer Center, RU
Symmetric - skew-symmetric splitting and iterative methods
Contributed Session CT3.3: Thursday, 17:00 - 17:30, CO3
Any matrix A can naturally be expressed as the sum of a symmetric matrix A0 and a skew-symmetric matrix A1. This splitting is called the symmetric - skew-symmetric splitting (SSS).
Consider the linear system
\[
Au = f, \qquad (1)
\]
where A is a non-symmetric matrix, u is the vector of unknowns, and f is the right-hand side vector.
Iterative methods based on the symmetric - skew-symmetric splitting were first proposed for such systems by Gene Golub.
If A0 is positive definite, then the matrix A is called positive real. We call the matrix A strongly non-symmetric if
\[
\|A_0\|_* \ll \|A_1\|_*,
\]
where $\|\cdot\|_*$ is some matrix norm.
It is well known that such linear systems become harder to solve because the matrix can lose the property of diagonal dominance. For these cases we propose symmetric-skew-symmetric iterative methods (SSIT).
Let us approach (1) by considering iterative methods of the following form:
\[
y^{n+1} = G y^{n} + \tau B^{-1} f, \qquad G = B^{-1}(\omega)\bigl(B(\omega) - \tau A\bigr), \qquad (2)
\]
where $f, y^{0} \in H$, H is an n-dimensional real Hilbert space, f is the right-hand side of (1), A and B(ω) are linear operators (matrices) in H, A is given by equation (1), B(ω) is invertible, $y^{0}$ is an initial guess, $y^{k}$ is the k-th iterate, τ, ω > 0 are iteration parameters, u is the solution that we seek, and $e^{k} = y^{k} - u$ and $r^{k} = A e^{k}$ denote the error and the residual at the k-th iteration, respectively.
Consider the following choices of the operator B. The class of triangular skew-symmetric iterative methods is defined by (2) with the matrix B chosen as
\[
B(\omega) = B_C + \omega\bigl((1 + j)K_L + (1 - j)K_U\bigr), \qquad j = \pm 1, \quad B_C = B_C^{*}. \qquad (3)
\]
The class of product triangular skew-symmetric iterative methods is defined by (2) with the matrix B chosen as
\[
B = (B_C + \omega K_U)\, B_C^{-1}\, (B_C + \omega K_L), \qquad B_C = B_C^{*}, \qquad (4)
\]
where $K_L + K_U = A_1$, $K_L = -K_U^{*}$, $B_C = B_C^{*}$.
The operator $B_C$ can be chosen arbitrarily, but it has to be symmetric. These methods belong to the SSIT class and are called the two-parameter triangular (TTM) and product triangular (TPTM) methods. Convergence of TTM and TPTM has been analyzed and proved. We compare TTM with the conventional SOR procedure and TPTM with the conventional SSOR procedure.
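A minimal numerical sketch of the splitting and of one generic iteration of the form (2) (the matrix B and the small test problem below are our own simple stand-ins, not the TTM/TPTM operators (3), (4)):

```python
import numpy as np

def sss_split(A):
    """Symmetric / skew-symmetric splitting A = A0 + A1."""
    A0 = 0.5 * (A + A.T)      # symmetric part
    A1 = 0.5 * (A - A.T)      # skew-symmetric part
    return A0, A1

def ssit(A, B, f, tau, y0, maxit=500, tol=1e-10):
    """Iteration y^{n+1} = y^n + tau * B^{-1} (f - A y^n), which is equivalent to (2)."""
    y = y0.copy()
    for _ in range(maxit):
        r = f - A @ y
        if np.linalg.norm(r) <= tol * np.linalg.norm(f):
            break
        y = y + tau * np.linalg.solve(B, r)
    return y

# Toy convection-diffusion-like matrix: symmetric diagonal part plus a skew convective part.
A = np.array([[ 2.0,  1.0,  0.0],
              [-1.0,  2.0,  1.0],
              [ 0.0, -1.0,  2.0]])
A0, A1 = sss_split(A)                       # here A0 = 2*I and A1 is a tridiagonal skew-symmetric part
f = np.ones(3)
y = ssit(A, B=A0, f=f, tau=1.0, y0=np.zeros(3))
print(np.linalg.norm(A @ y - f))            # should be small once the iteration has converged
```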
To examine the behavior of TPTM, the standard 5-point central difference scheme on a regular mesh has been used to approximate the convection-diffusion
227
equation with Dirichlet boundary conditions and a small parameter at the highest derivatives in an incompressible medium; a regular ordering transforms it into strongly non-symmetric linear systems. In the case of a central difference approximation of the convective terms, the operator A can naturally be expressed as the sum of a symmetric positive definite operator A0, which is a difference analogue of the Laplace operator, and a skew-symmetric operator A1, which is a difference analogue of the convective terms.
Numerical experiments show that in the considered particular cases the behavior of the methods is closely related to the technique of choosing the matrix $B_C$.
Joint work with B. L. Krukier, and O.A.Pichugina.
228
Vaclav Kucera
Charles University in Prague, Faculty of Mathematics and Physics, Czech Republic
On the use of reconstruction operators in discontinuous Galerkin schemes
Contributed Session CT2.5: Tuesday, 15:00 - 15:30, CO016
In this work we follow the methodology of higher order finite volume (FV) and
spectral volume (SV) schemes and introduce a reconstruction operator into the
discontinuous Galerkin (DG) method. In the standard FV method, such operators are used to increase the order of accuracy of the basic piecewise constant
scheme by constructing higher order piecewise polynomial approximations of the
exact solution from the lower order piecewise constant approximate solutions. In
the DG setting, the reconstruction operators will be used to construct higher order
piecewise polynomial reconstructions from the lower order DG scheme. This allows
us to increase the accuracy of existing DG schemes with a problem-independent
reconstruction procedure. In the talk, the technique will be presented for a nonstationary nonlinear convection equation, although the basic idea can be straightforwardly applied to any DG formulation of general evolutionary equations.
Unlike the FVM, where the reconstruction stencil size must be increased in order
to increase the order of accuracy, in the DG scheme the reconstruction stencil has
minimal size independent of the approximation order. For example, in one spatial
dimension, from a DG scheme of order n, one can reconstruct an approximate
solution of order 3n + 2 using the von Neumann neighborhood only. This represents a dramatic increase in accuracy. In two spatial dimensions, an approximate
solution of order n allows us to construct an approximation of order 2n + 1.
One may ask, whether such a reconstruction procedure brings any advantages over
using the corresponding DG scheme of higher order. However, there are several
reasons why using a lower order DG scheme is more advantageous. First, test
functions in the reconstructed scheme are from the lower order discrete space,
therefore lower order quadrature rules are needed in the evaluation of element and
boundary integrals and therefore fewer quadrature points and (numerical) flux
evaluations are needed. Furthermore, the stability conditions on the time step size
are inherited from the lower order scheme, therefore larger time steps can be taken,
which greatly increases the efficiency of the scheme. And finally, if orthogonal bases
are not used, the mass matrices resulting from the temporal discretization have
smaller dimension and can therefore be inverted faster. Numerical experiments
are provided to demonstrate the accuracy and efficiency of the proposed schemes.
Applying reconstruction procedures in DG schemes was already proposed in Dumbser et al. (2008) based on heuristic arguments; however, we provide a more rigorous derivation, which justifies the increased order of accuracy. Then we analyze properties of the reconstruction operators from the point of view of
classical finite element theory, using a generalized version of the Bramble-Hilbert
lemma. Furthermore, we show the equivalence of the reconstructed DG scheme
to a certain modification of the corresponding higher order DG scheme. This so-called auxiliary problem can be analyzed similarly to standard DG schemes and
although a complete theory of error estimates is not yet developed, this setting
gives a firm theoretical background to the reconstructed DG scheme.
The author is a junior researcher in the University Center for Mathematical Modelling, Applied Analysis and Computational Mathematics (Math MAC). The research is supported by the project P201/11/P414 of the Czech Science Foundation.
229
Dmitri Kuzmin
University Erlangen-Nuremberg, DE
Vertex-based limiters for continuous and discontinuous Galerkin methods
Plenary Session: Friday, 11:40 - 12:30, CO1
This talk is concerned with the design of constrained finite element methods for
convection-dominated transport equations and hyperbolic systems. We will begin
with a review of algebraic flux correction schemes for enforcing the discrete maximum principle for (low-order) continuous finite elements. After formulating sufficient conditions of positivity preservation, we will present a black-box approach
to limiting the antidiffusive part of the Galerkin transport operator. The limiting
techniques to be discussed are based on a generalization of the fully multidimensional flux-corrected transport (FCT) algorithm. Next, we will address the aspects
of slope limiting in discontinuous Galerkin (DG) methods. The representation of
finite element shape functions in terms of cell averages (coarse scales) and derivatives (fine scales) makes it possible to eliminate the unresolvable fine-scale features
using a vertex-based hierarchical moment limiter. The proposed limiting strategy
preserves the order of accuracy at smooth extrema and may serve as a parameter-free regularity estimator. We will highlight the existing similarities to variational
multiscale methods and explore the possibility of enriching a continuous (linear
or bilinear) coarse-scale approximation space with discontinuous basis functions
of higher order. Further topics to be discussed include the iterative treatment
of nonlinear systems and the extension of scalar limiting techniques to the Euler
equations of gas dynamics. The accuracy of the presented high-resolution schemes
will be illustrated by numerical examples including the first use of vertex-based
limiters in the context of hp adaptivity for hyperbolic conservation laws.
230
Pauline Lafitte
Ecole Centrale Paris, FR
Projective integration schemes for kinetic equations in the hydrodynamic limit
Minisymposium Session ASHO: Wednesday, 11:30 - 12:00, CO2
In order to introduce new asymptotic preserving schemes for kinetic equations in
regimes leading to hyperbolic systems of conservation laws appearing, e.g., in some
models of radiative transfer or fluid-particle interactions, we apply the projective
integration method developed by Gear and Kevrekidis in the context of large multiscale differential systems appearing in Chemistry.
Joint work with A. Lejon, and G. Samaey.
231
Omar Lakkis
University of Sussex, GB
Review of Recent Advances in Galerkin Methods for Fully Nonlinear Elliptic Equations
Minisymposium Session NMFN: Monday, 12:10 - 12:40, CO2
I will make a brief overview of all numerical methods, including their analysis where
available, for fully nonlinear elliptic equations based on Galerkin-type approximations, while mentioning other known methodologies, such as finite differences and
related monotone schemes. In the final part, I will focus on the finite element
Hessian methods introduced by Lakkis and Pryer (2010) via the nonvariational finite element method, a posteriori error estimates, and their potential for convergent
adaptive mesh refinement.
232
Jens Lang
Technische Universität Darmstadt, DE
Anisotropic Finite Element Meshes for Linear Parabolic Equations
Minisymposium Session TIME: Thursday, 15:00 - 15:30, CO015
In [1,2] anisotropic mesh adaptation methods for elliptic problems are studied. In
a next step, we have investigated the influence of anisotropic meshes upon the
time stepping and the conditioning of the linear systems arising from linear finite
element approximations of linear parabolic equations. Here, we present stability
results and estimates for the condition number. Both explicit and implicit time
integration schemes are considered. For stabilized explicit Runge-Kutta methods,
the stability condition is obtained. It is shown that the allowed maximal step size
depends only on the number of the elements in the mesh and a measure of the
non-uniformity of the mesh viewed in the metric specified by the inverse of the
diffusion matrix. Particularly, it is independent of the mesh non-uniformity in volume measured in the Euclidean metric. For the implicit time stepping situation,
bounds are obtained for the condition numbers of the coefficient matrices of the
linear system and preconditioned linear system with Jacobi preconditioning. It
is shown that the effects of the volume non-uniformity can be eliminated by the
Jacobi preconditioning. One of our main findings is that the alignment of the mesh
with the diffusion matrix plays a crucial role in the stability condition for the explicit stepping case and the condition number of the preconditioned linear system
by the Jacobi preconditioning for the implicit stepping case. When the mesh is
uniform with respect to the metric defined by the (symmetric and uniformly positive definite) diffusion matrix, the stability condition and the condition number behave as in the situation of constant, isotropic diffusion problems on a uniform mesh.
[1] W. Huang, L. Kamenski, J. Lang, A new anisotropic mesh adaptation method
based upon hierarchical a posteriori error estimates, J. Comp. Phys. 229 (2010),
pp. 2179-2198.
[2] W. Huang, L. Kamenski, J. Lang, Adaptive finite elements with anisotropic
meshes, Numerical Mathematics and Advanced Applications 2011: Proceedings of
ENUMATH 2011, the 9th European Conference on Numerical Mathematics and
Advanced Applications, Leicester, September 2011, A. Cangiani et al. (eds.), pp.
33-42, Springer 2013.
Joint work with Weizhang Huang, and Lennard Kamenski.
233
Toni Lassila
CMCS-MATHICSE, EPFL, CH
Space-time model reduction for nonlinear time-periodic problems using the harmonic balance reduced basis method
Minisymposium Session ROMY: Thursday, 12:00 - 12:30, CO016
In many applications of fluid dynamics, for example in simulations of turbomachinery flows or the human cardiovascular system, the behavior of the flow is such that
the solution converges towards a periodic steady-state starting from an arbitrary
initial state. Typically one is then only interested in computing the periodic steady-state solution. In this case, simulating the transient behavior of the unsteady flow
until a periodic steady-state is reached is not an efficient approach. The harmonic
balance method assumes that both the flow solution and the spatial operator of
the problem are time-periodic and can be written as their Fourier series expansions. These expansions are then truncated after the first few leading terms, and
the problem reduces to solving a set of fully-coupled nonlinear equations for the
Fourier coefficients.
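In the notation we would use (not taken from the abstract), the truncated harmonic balance ansatz for a flow field u with fundamental angular frequency ω and K retained harmonics reads
\[
u(x,t) \approx \hat u_0(x) + \sum_{k=1}^{K}\bigl(\hat u_k^{c}(x)\cos(k\omega t) + \hat u_k^{s}(x)\sin(k\omega t)\bigr),
\]
and inserting it into the time-periodic problem yields one fully coupled nonlinear system for the 2K+1 Fourier coefficient fields, to which the spatial reduction is then applied.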
In this talk, the harmonic balance method is coupled with the reduced basis
method for reduction in space to construct a computationally efficient space-time
reduced order model without the typical growth of error in time. It is well suited
to hemodynamics applications in large arteries, where a strong pulsatile inflow
drives the flow towards periodic regimes. We also discuss extending the Floquet
theory of the stability of linear time-periodic systems to analyze the stability of
the harmonic balance reduced basis solutions and to identify the critical Reynolds
number after which the flow undergoes a bifurcation and the periodic steady-state
solution becomes unstable.
234
Olivier Le Maitre
Duke University, US
Galerkin Method for Stochastic Ordinary Differential Equations with Uncertain
Parameters
Minisymposium Session UQPD: Thursday, 11:30 - 12:00, CO1
We propose a Galerkin method for the resolution of a certain class of Stochastic
Ordinary Differential Equations (SODE) driven by Wiener processes and involving some random parameters. The dependence of the solution with respect to the
uncertain parameters is treated by Polynomial Chaos expansions, with expansion
coefficients being random processes that are functions of the Wiener processes. A hybrid Monte-Carlo Galerkin method is then proposed to compute these expansion coefficients, allowing for a complete uncertainty analysis of the solution. In particular, we show that one can retrieve the dependence on the uncertain parameters of the stochastic noise in the solution. Examples of applications are shown for linear and nonlinear SODEs. Finally, the extension of the method to non-intrusive techniques and more general sources of stochasticity is discussed.
Joint work with Omar Knio.
235
Sanghyun Lee
Ph.D in Mathematics at Texas A&M University, US
Numerical simulation of Kaye effects
Minisymposium Session FREE: Monday, 15:00 - 15:30, CO2
The fascinating phenomenon of a leaping shampoo stream, the Kaye effect, is a property of non-Newtonian fluids first described by Alan Kaye in 1963. It manifests itself when a thin stream of non-Newtonian fluid is poured into a dish of the fluid: as pouring proceeds, a small stream of liquid occasionally leaps upward from the heap (Figure 1).
Since no mathematical model or numerical simulation has been studied before, as a first approach we have studied a mathematical model and an algorithm to find the range of parameters for which the Kaye effect can be observed. In this context we propose a modified projection method for the Navier-Stokes equations with an open boundary, a level set method for the free boundary, and adaptivity.
Also, in earlier studies it has been debated whether non-Newtonian effects are
the underlying cause of this phenomenon, making the jet glide on top of a shear-thinning liquid layer, or whether an entrained air layer is responsible. Here we show by numerical simulation that the jet slides on a lubricating air layer, which is identical to what is observed in physical experiments.
236
Figure 1: Kaye effect with shampoo
Joint work with Andrea Bonito, and Jean-Luc Guermond.
237
Jeonghun Lee
Aalto University, Department of Mathematics and Systems Analysis, FI
Hodge Laplacian problems with Robin boundary conditions
Contributed Session CT4.6: Friday, 08:20 - 08:50, CO017
In this work, we consider mixed methods of Hodge Laplacian problems with Robin
boundary conditions.
Mixed methods for the Hodge Laplacian problems were studied by Arnold, Falk
and Winther in [1, 2] using a framework of the de Rham complex, called the finite element exterior calculus (FEEC). In the work of Arnold, Falk and Winther,
they assume the homogeneous Dirichlet or homogeneous Neumann boundary conditions. However, it is reasonable to consider more general boundary conditions
in physical applications.
Recently, the scalar Poisson equation with Robin boundary conditions was studied
in [3]. Stenberg and his collaborators proved a priori error estimates and provided
an efficient and reliable a posteriori error estimator. The author generalizes this
to mixed methods of Hodge Laplacian problems with Robin boundary conditions,
for general differential k forms in the FEEC framework.
Robin boundary conditions for the scalar Poisson equation are well-known whereas
Robin boundary conditions for Hodge Laplacian problems of differential k forms
are not obvious for general k. Thus we propose appropriate Robin boundary
conditions in the language of differential forms and discuss well-posedness of the
problem. For discrete mixed forms of Hodge Laplacian problems, we use the $P_r\Lambda^k$ and $P_r^-\Lambda^k$ finite element families on triangular meshes. We prove the stability of
the numerical scheme, as well as discuss a priori and a posteriori error estimates.
References
[1] Douglas N. Arnold, Richard S. Falk and Ragnar Winther, Finite element exterior calculus, homological techniques, and applications, Acta Numer., 15 (2006), 1–155.
[2] Douglas N. Arnold, Richard S. Falk and Ragnar Winther, Finite element exterior calculus: from Hodge theory to numerical stability, Bull. Amer. Math. Soc., 47 (2010), no. 2, 281–354.
[3] Juho Könnö, Dominik Schötzau and Rolf Stenberg, Mixed finite element methods for problems with Robin boundary conditions, SIAM J. Numer. Anal., 49 (2011), no. 1, 285–308.
238
Annelies Lejon
Department of Computer Science, KU Leuven, BE
Higher order projective integration schemes for multiscale kinetic equations in the
diffusive limit
Contributed Session CT1.3: Monday, 18:30 - 19:00, CO3
1 Introduction
Multiscale systems (involving multiple timescales) can be found in many real world
applications, such as biological systems, traffic flow, plasma astrophysics, etc. In
this talk, we consider systems that can be described by a kinetic equation that
models the evolution of a distribution function in position-velocity phase space, which is expensive to simulate over the longer timescales we are interested in. We present a high-order projective integration scheme that is fully explicit and whose computational complexity depends only on the macroscopic time scales in the system. Moreover, we show an application of this technique to a semiconductor equation to illustrate
the numerical performance.
2 Methods
The kinetic equation describes the evolution of the probability f(x, v, t) of being at position x, moving with velocity v at time t,
\[
\partial_t f(x,v,t) + \frac{v}{\varepsilon}\,\partial_x f(x,v,t) = \frac{\rho(x,t) - f(x,v,t) + \varepsilon M(\rho)}{\varepsilon^{2}}, \qquad (1)
\]
in which ρ = ⟨f⟩, and we have introduced a small-scale parameter 0 < ε ≪ 1 and a diffusive scaling. The term M(ρ) has been introduced to obtain an advection-diffusion behaviour in the diffusion limit ε → 0.
The projective integration algorithm was developed by Gear and Kevrekidis (SIAM Journal on Scientific Computing, 4:1091–1106, 2003). It consists of the following steps:
1. Perform K small steps with a naive explicit integrator with time step δt = O(ε²) (this is called an inner integrator). When ε is small, this will enforce convergence of the fast modes to the slow manifold that is characterized by ρ.
2. One then performs a large time step ∆t by extrapolation in time (this is called an outer integrator); a minimal sketch of one such projective step is given below.
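A minimal sketch of projective forward Euler along these lines (our notation; the right-hand side, the step sizes and the toy two-scale test problem are placeholders):

```python
import numpy as np

def projective_forward_euler(rhs, u0, dt, Dt, K, n_outer):
    """K+1 small inner steps of size dt (of the order of the fast scale), then one
    extrapolation step covering the remainder of the large outer step Dt."""
    u = np.array(u0, dtype=float)
    for _ in range(n_outer):
        for _ in range(K + 1):                     # inner integrator: damp the fast modes
            u_prev = u.copy()
            u = u + dt * rhs(u)
        slope = (u - u_prev) / dt                  # estimate of the slow time derivative
        u = u + (Dt - (K + 1) * dt) * slope        # outer step: extrapolate the slow dynamics
    return u

# Toy two-scale problem: u[1] relaxes quickly (rate 1/eps) towards the slowly decaying u[0].
eps = 1e-4
rhs = lambda u: np.array([-0.5 * u[0], -(u[1] - u[0]) / eps])
u = projective_forward_euler(rhs, [1.0, 0.0], dt=eps, Dt=0.05, K=2, n_outer=20)
print(u, "vs. slow solution exp(-0.5) =", np.exp(-0.5))
```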
The application of projective integrations to kinetic equations was first studied for
first-order extrapolation (projective forward Euler) and a purely diffusive equation
(SIAM Journal on Scientific Computing, 34:A579–A602, 2012).
This work extends the method to higher order time integration, and provides a
numerical analysis in a more general advection-diffusion setting. We proved that the stability condition on ∆t is independent of ε for kinetic equations of type (1).
Also, the required number K of steps with the inner integrator is independent of
ε. We therefore constructed a stable and explicit method, with arbitrary accuracy
in time and space. For the numerical results, we used a 4th order Runge–Kutta method as the outer integrator and forward Euler as the inner integrator.
239
3 Stability Regions
Furthermore, we derived analytical expressions for the stability regions for the
higher order method. From Figure 1, it is clear that there are two distinct regions. One part is centered around the origin and can be used to capture the fast modes of the system, and the other one is located near (1, 0); the latter will capture
the slow modes.
Figure 1: In the left part of the figure the stability regions of the PRK4 method have been plotted for different values of δt and ∆t = 1 × 10−3, K = 3: δt = 1 × 10−6 (dashed), δt = 1 × 10−4 (dotted), δt = 1.6 × 10−5 (solid). The right part is a magnification of the region of slow eigenvalues.
Joint work with Pauline Lafitte, and Giovanni Samaey.
240
Martin Lilienthal
Graduate School of Computational Engineering / TU-Darmstadt, DE
Non-Dissipative Space-Time hp-Discontinuous Galerkin Method for the Time-Dependent Maxwell Equations
Contributed Session CT4.6: Friday, 08:50 - 09:20, CO017
A space-time finite element method for the time-dependent Maxwell equations is
presented. The method allows for local hp-refinement in space and time by employing a space-time Galerkin approach and is thus well suited for hp-adaptivity.
Inspired by the continuous Galerkin methods for ODEs, nonequal test and trial
spaces are employed in the temporal direction. Combined with a (centered) discontinuous Galerkin approach in the spatial directions, a stable non-dissipative
method is obtained. Numerical experiments in (3+1)D indicate that the method
is suitable for space-time hp-adaptivity on dynamic discretizations.
The work of M. Lilienthal is supported by the ’Excellence Initiative’ of the German Federal and State Governments and the Graduate School of Computational
Engineering at Technische Universität Darmstadt and the DFG under grant no.
SCHN 1212/1-1.
Joint work with Sascha Schnepp, and Thomas Weiland.
241
Lek-Heng Lim
University of Chicago, US
Symmetric tensors with positive decompositions
Minisymposium Session LRTT: Monday, 11:10 - 11:40, CO1
A symmetric d-tensor is positive semidefinite if, when viewed as a homogeneous form, it is always nonnegative, or equivalently, has all eigenvalues nonnegative. The dual cone of positive semidefinite tensors is the cone of symmetric tensors that have a decomposition into rank-1 symmetric tensors with all coefficients positive. Such tensors have many nice properties: a best rank-r approximation always exists, the decomposition is unique for small values of r without any additional requirements (such as Kruskal's condition), and there are provably correct algorithms (as opposed to heuristics like alternating least squares) for finding such decompositions. We will discuss these and other properties of symmetric tensors
with positive decompositions.
Joint work with Greg Blekherman.
242
Ping Lin
University of Dundee, GB
L2 projected finite element methods for Maxwell’s equations with low regularity
solution
Minisymposium Session MMHD: Thursday, 11:00 - 11:30, CO017
In the talk we will present an element-local L2 projected finite element method
to approximate the nonsmooth solution (not in H 1 ) of the Maxwell problem on
a nonconvex Lipschitz polyhedron with reentrant corners and edges. The key
idea is that element-local L2 projectors are applied to both the curl and div operators. The C^0 linear finite element (enriched with certain higher degree bubble functions) is employed to approximate the nonsmooth solution. The coercivity in the L2 norm is established uniformly in the mesh size. For the solution and its curl in H^r with r < 1 we obtain an error bound O(h^r) in an energy norm. Numerical examples confirm the theoretical error bound. The idea is also applied to the curl-div magnetostatic problem in multiply-connected Lipschitz polyhedra and to
eigenvalue problems. Desirable error bounds are obtained as well. The talk is
based on a few joint papers with H.Y. Duan and R. Tan.
243
Alexander Linke
Weierstrass Institute, DE
Stabilizing Mixed Methods for Incompressible Flows by a New Kind of Variational
Crime
Contributed Session CT2.3: Tuesday, 14:00 - 14:30, CO3
In incompressible flows with vanishing normal velocities at the boundary, irrotational forces in the momentum equations should be balanced completely by the
pressure gradient. Unfortunately, nearly all available discretizations for incompressible flows violate this property. The origin of the problem is that discrete
velocities are usually not divergence-free. Hence, the use of divergence-free velocity reconstructions is proposed wherever an L2 scalar product appears in the
discrete variational formulation - which actually means committing a new kind of
variational crime. The approach is illustrated and applied to several finite volume
and finite element discretizations for the incompressible Navier-Stokes equations.
In a finite element context, the new variational crime makes classical grad-div
stabilization unnecessary, and even delivers error estimates for the discrete velocities that are completely independent of the pressure. Several numerical examples
illustrate the theoretical results demonstrating that divergence-free velocity reconstructions may indeed increase the robustness and accuracy of existing convergent
flow discretizations in physically relevant situations.
244
Quan Long
King Abdullah University of Science and Technology, KSA
A Projection Method for Under Determined Optimal Experimental Designs
Contributed Session CT3.1: Thursday, 17:30 - 18:00, CO1
Shannon–type expected information gain can be used to evaluate the relevance
of a proposed experiment subjected to uncertainty. The estimation of such gain,
however, relies on a double-loop integration, and its numerical evaluation in multidimensional cases, e.g., when using Monte Carlo sampling methods, is computationally intractable for realistic physical models, especially those
involving the solution of partial differential equations. In this paper, we present a
new methodology, based on the Laplace approximation for the integration of the
posterior probability density function (pdf), to accelerate the estimation of the
expected information gains in the model parameters and predictive quantities of
interest for both determined and under determined models. We obtain a closed–
form approximation of the inner integral and the corresponding dominant error
term, such that only a single–loop integration is needed to carry out the estimation
of the expected information gain.
In this work, we extend that method to the general cases where the model parameters cannot be determined completely by the data from the proposed experiments.
We carry out the Laplace approximations in the directions orthogonal to the
null space of the corresponding Jacobian matrix, so that the information gain
(Kullback–Leibler divergence) can be reduced to an integration against the marginal
density of the transformed parameters which are not determined by the experiments. Furthermore, the expected information gain can be approximated by an
integration over the prior, where the integrand is a function of the projected posterior covariance matrix. To deal with the issue of dimensionality in a complex
problem, we use Monte Carlo sampling or sparse quadratures for the integration
over the prior probability density function, depending on the regularity of the
integrand function. We demonstrate the accuracy, efficiency and robustness of
the proposed method via several nonlinear under determined numerical examples.
They include the design for the scalar parameter in a one-dimensional cubic polynomial function with two indistinguishable parameters forming a linear manifold, and the boundary source locations for impedance tomography in a
square domain, considering the parameters as a piecewise linear continuous random field.
Joint work with Marco Scavino, Raul Tempone, and Suojin Wang.
245
Petr Louda
Czech Technical University in Prague, CZ
Numerical simulations of laminar and turbulent 3D flow over backward facing step
Contributed Session CT4.3: Friday, 08:20 - 08:50, CO3
The work deals with 3D numerical simulations of incompressible flow in a channel of rectangular cross-section with a backward facing step. The flow regimes considered are laminar as well as turbulent. The mathematical model is based on the Navier-Stokes equations for the laminar regime and the Reynolds-averaged Navier-Stokes equations for the turbulent regime. Two types of numerical methods are used:
• An implicit finite volume method solving the governing equations by the artificial compressibility method. The approximation of convective terms is based on third order interpolation on a structured grid of hexahedra, the discretization of the viscous term is second order accurate, and the time discretization is the backward Euler scheme.
• A stabilized finite element method solving the weak formulation of the governing equations. The flow velocity and pressure are approximated by continuous piecewise linear functions using the streamline-upwind/Petrov-Galerkin and pressure-stabilizing/Petrov-Galerkin methods together with div-div stabilization.
The results of both methods are compared in the laminar case and also in the turbulent cases. The turbulence is modelled by two-equation eddy-viscosity models (TNT k-ω, SST), by an explicit algebraic Reynolds stress model (EARSM, Wallin, Hellsten), and by the V2F model (Durbin). The numerical results are compared with
3D experimental data acquired using the PIV technique.
246
Joint work with P. Svacek, K. Kozel, and J. Prihoda.
247
Robert Luce
University of Pau, FR
Robust local flux reconstruction for various finite element methods
Minisymposium Session ADFE: Wednesday, 11:00 - 11:30, CO016
We are interested in local reconstructions of the gradient of primal finite element
approximations. We consider conforming, nonconforming and totally discontinuous (Galerkin) methods (abbreviated as CG, NC, DG in the following) of arbitrary
order. Such reconstructions have many applications, such as a posteriori error estimation and the numerical approximation of coupled systems.
Our first aim is to present a uniform approach to flux reconstruction. We start
from a hybrid formulation covering all considered finite element methods. The
Lagrange multipliers compensating for the different weak continuity conditions
yield approximations to the normal fluxes. It turns out that they can be computed
locally in all cases on patches defined by the support of the lowest-order basis
functions. Then these multipliers are used to define local corrections in broken
Raviart-Thomas spaces.
Our second aim is to study relations between the different methods. Especially
we prove that the DG-method with stabilisation parameter γ converges uniformly
in h with the convergence rate 1/γ towards the CG or NC solution, depending on
the employed form of stabilisation. In addition, the same convergence result holds
true for the reconstructed fluxes and therefore for the error estimators.
The theoretical results will be illustrated by numerical tests.
Joint work with Roland Becker, and Daniela Capatina.
248
Lin-Tian Luh
Providence University of Taiwan, TW
The Criteria of Choosing the Shape Parameter for Radial Basis Function Interpolations
Contributed Session CT1.2: Monday, 17:30 - 18:00, CO2
The main purpose of this report is to present concrete and useful criteria for
choosing the constant c contained in the famous radial function
\[
h(x) := \Gamma\!\left(-\frac{\beta}{2}\right)\left(c^{2} + |x|^{2}\right)^{\frac{\beta}{2}}, \qquad \beta \in \mathbb{R}\setminus 2\mathbb{N}_{\geq 0}, \quad c > 0, \qquad (1)
\]
which is called the multiquadric for β > 0 and the inverse multiquadric for β < 0, respectively. Here Γ denotes the classical gamma function. The optimal choice of c is a longstanding question and has occupied many experts in the field of radial basis functions (RBFs). Most of the time, all one can do is run experiments and try to build a model to predict the influence of c for some special cases. Here, we give a lucid clarification of its influence on the error estimates and express it with a concrete function, denoted by MN(c). The approximated functions lie in a function space which is equivalent to the Gaussians' native space and is denoted by $E_\sigma$. Then $|f(x) - s_f(x)| \leq MN(c)\cdot F(\delta)$ for all $f \in E_\sigma$, where $s_f$ is the frequently used interpolation function and δ is the fill distance, which measures the spacing of the data points. Both MN(c) and F(δ) contribute to the error bound, but MN(c) is more influential. The constant σ describes the rate of decay of the Fourier transform of f.
We find that MN(c) depends on four parameters: β, σ, the dimension n, and the fill distance δ. So the optimal choice of c, which minimizes the value of MN(c), also depends on these four parameters. There are three cases. Here l, contained in MN(c), corresponds to the fill distance and is inversely proportional to it.
Case 1. β < 0, |n + β| ≥ 1 and n + β + 1 ≥ 0. Let $f \in E_\sigma$ and h be as in (1). Then
\[
MN(c) := \left[\, c^{\frac{\beta - n + 1 - 4l}{4}}\, (\xi^{*})^{\frac{n + \beta + 1}{2}}\, e^{\,c\xi^{*} - \frac{(\xi^{*})^{2}}{\sigma}} \right]^{1/2},
\]
where
\[
\xi^{*} = \frac{c\sigma + \sqrt{c^{2}\sigma^{2} + 4\sigma(n + \beta + 1)}}{4}.
\]
Case 2. β = −1 and n = 1. Let $f \in E_\sigma$ and h be as in (1). Then
\[
MN(c) := c^{\frac{\beta}{2} - l}\left[\frac{1}{\ln 2} + 2\sqrt{3}\, M(c)\right]^{1/2},
\]
where
\[
M(c) :=
\begin{cases}
e^{\,1 - \frac{1}{c^{2}\sigma}} & \text{if } 0 < c \leq \frac{2}{\sqrt{3\sigma}},\\[4pt]
g\!\left(\dfrac{c\sigma + \sqrt{c^{2}\sigma^{2} + 4\sigma}}{4}\right) & \text{if } \frac{2}{\sqrt{3\sigma}} < c,
\end{cases}
\]
g being defined by $g(\xi) := c\,\xi\, e^{\,c\xi - \frac{\xi^{2}}{\sigma}}$.
Case 3. β > 0 and n ≥ 1. Let $f \in E_\sigma$ and h be as in (1). Then
\[
MN(c) := \left(\, c^{\frac{1 + \beta - n - 4l}{4}}\, (\xi^{*})^{\frac{1 + \beta + n}{2}}\, e^{\,c\xi^{*} - \frac{(\xi^{*})^{2}}{\sigma}} \right)^{1/2},
\]
249
where
\[
\xi^{*} = \frac{c\sigma + \sqrt{c^{2}\sigma^{2} + 4\sigma(1 + \beta + n)}}{4}.
\]
Example of Case 1: In the figure, b0 controls the domain size of the approximated functions and is, roughly speaking, the diameter of the domain. It is obvious that the optimal c in this situation is around 9. Since F(δ), which is independent of c, also contributes to the error bound, the actual error is much smaller than $10^{-16}$.
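A small numerical sketch of how one might scan the Case 1 bound for a minimizing c (the parameter values below are invented for illustration and are not those of the figure):

```python
import numpy as np

# Hypothetical parameters satisfying the Case 1 conditions (beta < 0, |n+beta| >= 1, n+beta+1 >= 0).
beta, sigma, n, l = -1.0, 1.0, 3.0, 2.0

def MN_case1(c):
    """Case 1 bound: MN(c) = [ c^{(beta-n+1-4l)/4} (xi*)^{(n+beta+1)/2} exp(c*xi* - xi*^2/sigma) ]^{1/2}."""
    xi = (c * sigma + np.sqrt(c**2 * sigma**2 + 4.0 * sigma * (n + beta + 1.0))) / 4.0
    return np.sqrt(c**((beta - n + 1.0 - 4.0 * l) / 4.0)
                   * xi**((n + beta + 1.0) / 2.0)
                   * np.exp(c * xi - xi**2 / sigma))

cs = np.linspace(0.1, 20.0, 2000)
vals = np.array([MN_case1(c) for c in cs])
print("approximate minimizer c* =", cs[np.argmin(vals)])
```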
250
Vladimir Lukin
Keldysh Institute of Applied Mathematics, RAS, RU
Mathematical modelling of radiatively accelerated canalized magnetic jets
Contributed Session CT1.3: Monday, 18:00 - 18:30, CO3
One of the most interesting classes of astrophysical processes is the formation of
jet outflows in active galactic nuclei (e.g., the galaxy M87), microquasars and many other objects. The jet consists of highly energetic magnetized matter bullets. It propagates inside a cone with a 6° opening angle. The matter velocity in the M87 jet reaches 0.8c (c being the speed of light).
We construct and investigate a mathematical model of radiative acceleration of the jet matter inside a channel above a hot gravitating object with a thin accretion disk. The model is based on the models of [1, 2] and includes the 2D axisymmetric radiative magnetohydrodynamic (RMHD) system of equations:
magnetohydrodynamic (RMHD) equation system:
∂B
∂ρ
+ ∇ρv = 0,
= ∇ × (v × B) ,
∂t
∂t
1
∂
(ρv + G) + ∇ · Π̂ + T̂ =
(∇ × B) × B + Fg ,
∂t
4π
∂
1
(e + U ) + ∇ · (v (e + p) + W) =
((∇ × B) × B) · v + Fg · v,
∂t
4πZ
Γ(t, x, ω, ω 0 )I(t, x, ω 0 ) dω 0 .
ω · ∇I(t, x, ω) + β(t, x)I(t, x, ω) = β(t, x)
(1)
(2)
(3)
(4)
Ω
Here ρ is the matter density, v the velocity vector, Π̂ the momentum flux density with $\Pi_{ij} = p\delta_{ij} + \rho v_i v_j$, p the gas pressure, e the gas energy, B the magnetic field, $F_g$ the gravitational force, I the radiation intensity, $\int_{\Omega}\Gamma(t,x,\omega,\omega')\, I(t,x,\omega')\, d\omega'$ the scattering integral, and β(t, x) the matter scattering coefficient. We consider the matter to be an ideal gas, so $e = \rho|v|^{2}/2 + p/(\gamma-1)$. We use the Rayleigh scattering function, and the scattering cross section is $\sigma_T = 6.652 \times 10^{-29}$ m². The gravitating body of mass M is situated at the origin of coordinates. Figure 1 shows the model scheme.
A numerical method for unstructured triangular grids based on splitting into physical processes is used to solve the system of equations. The calculation of the unknowns at each time step consists of the following phases: solution of the gas dynamics equations by the HLLC method; divergence-free approximation of Faraday's law on the staggered difference cell; integration of the radiation transfer equation (RTE) by the discrete directions method; and approximation of the geometrical and power sources on the right-hand side of the system.
A parallel numerical code for this method has been developed. For the numerical integration of the RTE (4) we used the following parallel strategies: the integration of the RTE along the beams traced on the grid for a given node of the spatial grid is independent for each node, so the shared-memory OpenMP technology is useful; following the nVidia CUDA technology, the computation of each of the $N_{scat}$ elementary integrals for a given spatial node is implemented using a thread block inside one graphics multiprocessor, and every elementary integral is computed by one thread inside the block.
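A sketch of the elementary angular quadrature that these strategies parallelize (the direction set, weights and the Rayleigh phase function normalization below are our assumptions, not taken from the paper):

```python
import numpy as np

def scattering_source(I, omegas, weights, beta):
    """S_i = beta * sum_j w_j * Gamma(omega_i . omega_j) * I_j for every discrete direction i.

    I       : radiation intensities per direction, shape (N,)
    omegas  : unit direction vectors, shape (N, 3)
    weights : angular quadrature weights, shape (N,)
    beta    : scattering coefficient at the current spatial node
    """
    def rayleigh(mu):                        # Rayleigh phase function of the scattering cosine
        return 3.0 / (16.0 * np.pi) * (1.0 + mu**2)
    mu = omegas @ omegas.T                   # cosine of the angle between every pair of directions
    return beta * (rayleigh(mu) * weights[None, :] * I[None, :]).sum(axis=1)
```

In the CUDA strategy described above, the elementary integrals for one spatial node are handled by one thread block, each integral being evaluated by one thread of that block.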
The obtained speedup is 10.8 times for the OpenMP along-the-beam integration using two 6-core Intel Xeon X5670 2.93 GHz processors and
251
82.3 times for the scattering integral calculation using an nVidia Tesla GPU. The calculations were performed on the K-100 cluster, KIAM RAS.
At the beginning of the calculations there is a magnetized channel inside the computational domain. The radiation field gives rise to a radiative accelerating force acting on the rarefied matter. Driven by the radiation force, the matter rapidly accelerates up to a velocity of c/5. The jet is well collimated, the magnetic field inside the channel preserves its structure, and the channel is surrounded by optically thick magnetized walls (see Figure 2). The jet flow contains velocity discontinuities of shock-wave type; dense matter bullets are formed at the front of every discontinuity. The bullet release period is 13 days.
The work has been partly supported by the Russian Foundation for Basic Research (projects 12-01-00109, 12-02-00687, 12-01-31193) and by the Science School
1434.2012.2.
References
[1] M.P. Galanin, Yu.M. Toropin, and V.M. Chechetkin, Astron. Rep. 43, 119
(1999).
[2] M.P. Galanin, V.V. Lukin, and V.M. Chechetkin, Math. Mod. and Comp. Sim.,
4, 3 (2012).
Figure 1: Scheme of the jet launching system model
Figure 2: Density and velocity modulus distributions
Joint work with M.P. Galanin, and V.M. Chechetkin.
252
Francisco Macedo
EPFL, CH
A low-rank tensor method for large-scale Markov Chains
Contributed Session CT1.1: Monday, 17:00 - 17:30, CO1
A number of practical applications lead to Markov Chains with extremely large
state spaces. Such an instance arises from models for calcium channels, which are
structures in the body that allow cells to transmit electrical charges to each other.
These charges are carried on a calcium ion which can travel freely back and forth
through the calcium channel. The state space of a Markov process describing these
interactions typically grows exponentially with the number of cells. More generally, Stochastic Automata Networks (SAN s) are networks of interacting stochastic
automata. The dimension of the resulting state space grows exponentially with the
number of involved automata. Several techniques have been established to arrive
at a formulation such that the transition matrix has Kronecker product structure.
This allows, for example, for efficient matrix-vector multiplications. However, the
number of possible automata is still severely limited by the need of representing
a single vector (e.g., the stationary vector) explicitly. We propose the use of low-rank tensor techniques to avoid this barrier. More specifically, an algorithm will be presented that allows us to approximate the solution of certain SANs very efficiently
in a low-rank tensor format.
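As a small illustration of the benefit of the Kronecker structure (a generic sketch, not the low-rank algorithm of the talk): matrix-vector products with the transition matrix can be formed without ever assembling it.

```python
import numpy as np

def kron_matvec(factors, x):
    """Compute y = (A_1 kron A_2 kron ... kron A_d) @ x without forming the Kronecker product."""
    dims = [A.shape[0] for A in factors]
    y = x.reshape(dims)                               # view the vector as a d-dimensional tensor
    for k, A in enumerate(factors):
        y = np.tensordot(A, y, axes=([1], [k]))       # apply A_k along mode k
        y = np.moveaxis(y, 0, k)                      # restore the original mode ordering
    return y.reshape(-1)

# Two interacting automata with 3 and 4 local states (made-up example).
A1, A2 = np.random.rand(3, 3), np.random.rand(4, 4)
x = np.random.rand(12)
assert np.allclose(kron_matvec([A1, A2], x), np.kron(A1, A2) @ x)
```

The state-space dimension still grows as the product of the local dimensions, which is why storing even a single explicit vector eventually becomes the bottleneck that the low-rank tensor format is meant to remove.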
Joint work with Prof. Daniel Kressner.
253
Pravin Madhavan
University of Warwick, GB
On a Discontinuous Galerkin Method for Surface PDEs
Contributed Session CT2.5: Tuesday, 14:30 - 15:00, CO016
Partial differential equations on manifolds have become an active area of research
in recent years due to the fact that, in many applications, models have to be
formulated not on a flat Euclidean domain but on a curved surface. For example,
they arise naturally in fluid dynamics (e.g. surface active agents on the interface
between two fluids) and materials science (e.g. diffusion of species along grain
boundaries) but have also emerged in areas as diverse as image processing and
cell biology (e.g. cell motility involving processes on the cell membrane, or phase
separation on biomembranes).
Finite element methods (FEM) for elliptic problems and their error analysis have
been successfully applied to problems on surfaces via the intrinsic approach in
Dziuk (1988) based on interpolating the surface by a triangulated one.
However, as in the planar case there are a number of situations where FEM may not
be the appropriate numerical method, for instance, advection dominated problems
which lead to steep gradients or even discontinuities in the solution.
DG methods are a class of numerical methods that have been successfully applied
to hyperbolic, elliptic and parabolic PDEs arising from a wide range of applications. Some of its main advantages compared to ‘standard’ finite element methods
include the ability of capturing discontinuities as arising in advection dominated
problems, and less restriction on grid structure and refinement as well as on the
choice of basis functions.
The main idea of DG methods is not to require continuity of the solution between
elements. Instead, inter-element behaviour has to be prescribed carefully in such
a way that the resulting scheme has adequate consistency, stability and accuracy
properties.
In my presentation I will investigate the issues arising when attempting to apply
DG methods to problems on surfaces. We restrict our analysis to a linear second-order elliptic PDE on a compact smooth connected and oriented surface. An interior penalty (IP) method is introduced on a discrete surface and we derive a priori error estimates by relating the latter to the original surface via the lift
introduced in Dziuk (1988).
The estimates suggest that the geometric error terms arising from the surface discretisation do not affect the overall convergence rate of the IP method when using
linear ansatz functions. This is then verified numerically for a number of test
problems.
Joint work with Andreas Dedner, and Bjorn Stinner.
254
Immanuel Maier
University of Stuttgart, DE
A reduced basis method for domain decomposition problems
Contributed Session CT4.4: Friday, 08:20 - 08:50, CO015
Reduced basis (RB) methods allow efficient model reduction of parametric partial
differential equations. We propose a new approach for combining model reduction
methods with domain decomposition techniques. Important components of the
RB technique, such as the decomposition into parameter-independent and parameter-dependent computations (offline/online decomposition), the greedy algorithm for generating the basis, and a posteriori error estimation, are maintained.
Some related RB methods for coercive homogeneous domain decomposition problems have already been developed. Starting from the RB element method (RBEM) [2], the scRBEM [3] and the RDF method [4] represent efforts to accommodate the decomposed nature of such problems. We point out the relationship to these
methods. In particular, they mainly address network-type problems. In contrast,
our approach [1] treats coercive problems, where the system’s topology is known
a priori. Expensive solutions of the full system can be computed offline and used
as snapshots for several RB spaces on the subdomains. The snapshots are chosen
offline by a greedy procedure. For the construction of RB spaces a framework
separating intra-domain and interface-associated functions is established. Online
an iterative RB solution method can be formulated; convergence is proven theoretically. The overall method is investigated numerically with respect to accuracy
and efficiency.
We present the abstraction in a general framework and consider the extension of
our method to heterogeneous domain decomposition problems. Possible problem
instantiations are the coupling of free flow with porous media flow modelled by the
Stokes and Darcy equations or the flow around an obstacle, modelled by Stokes in
an inner region and by Laplace's equation in an outer region (due to the neglect of viscous effects) [5].
References
[1] I. Maier and B. Haasdonk. An Iterative Domain Decomposition Procedure for
the Reduced Basis Method. SimTech Preprint, University of Stuttgart, 2012.
[2] Y. Maday and E.M. Rønquist. The Reduced Basis Element Method: Application to a Thermal Fin Problem. Journal of Scientific Computing, 26 (2004),
240–258.
[3] D.B.P. Huynh, D.J. Knezevic and A.T. Patera. A Static Condensation Reduced Basis Element Method: Approximation and A Posteriori Error Estimation. Submitted to M2AN, 2011.
[4] L. Iapichino. Reduced Basis Methods for the Solution of Parametrized PDEs
in Repetitive and Complex Networks with Application to CFD. PhD thesis,
EPFL, 2012.
[5] A. Quarteroni and A. Valli. Domain Decomposition Methods for Partial Differential Equations. Oxford University Press, 1999.
255
Joint work with Bernard Haasdonk.
256
Charalambos Makridakis
University of Sussex, GB
Consistent Atomistic / Continuum approximations to atomistic models.
Minisymposium Session MSMA: Monday, 16:00 - 16:30, CO3
We discuss recent results related to the problem of the atomistic-to-continuum
passage and the design of corresponding coupled methods for crystalline materials. In particular we will comment on issues related to the analysis of Cauchy–
Born/nonlinear elasticity approximations to atomistic models in two and three
space dimensions. We will present new coupled atomistic/continuum methods
which are consistent.
257
Olli Mali
University of Jyväskylä, FI
Estimates of Effects Caused by Incompletely Known Data in Elliptic Problems
Generated by Quadratic Energy Functionals
Contributed Session CT3.7: Thursday, 17:30 - 18:00, CO122
In mathematical modelling, the data of the problem is often known with limited accuracy. Instead of exact data values, some set of admissible data is known. This set
generates a family of problems and the respective set of solutions. We consider linear elliptic problems generated by quadratic energy functionals. The coefficients of
the problem and the respective right-hand side are considered to be known with limited accuracy. The knowledge we have is of the form mean value ± variations, which is motivated by engineering practice.
The quantity of interest is the radius of the solution set. It is the distance between
the solution related to the “mean” data and the most distant member of the solution
set. The relation between the magnitude of variations of the data and the radius
of the solution set is of special interest. This question has been studied in [1, 2]
for diffusion type problems in terms of the primal variable. Here, we study also
the relationship between the set of admissible data and the dual variable as well
as the primal–dual pair in a combined norm.
References
[1] O. Mali and S. Repin, Estimates of the indeterminacy set for elliptic
boundary–value problems with uncertain data, J. Math. Sci. 150, pp. 1869–1874, 2008.
[2] O. Mali and S. Repin, Two-sided estimates of the solution set for the reaction-diffusion problem with uncertain data, Applied and numerical partial differential equations, 183–198, Comput. Methods Appl. Sci., 15, Springer, New
York, 2010.
Joint work with S. Repin.
258
Gunar Matthies
Universität Kassel, DE
A two-level local projection stabilisation on uniformly refined triangular meshes
Contributed Session CT2.3: Tuesday, 15:30 - 16:00, CO3
The local projection stabilisation (LPS) has been successfully applied to scalar
convection-diffusion-reaction equations, the Stokes problem, and the Oseen problem.
A fundamental tool in its analysis is that the interpolation error of the approximation space is orthogonal to the discontinuous projection space. It has been shown
that a local inf-sup condition between approximation space and projection space
is sufficient to construct modifications of standard interpolations which satisfy this
additional orthogonality.
There are different versions of the local projection stabilisation on the market; we
will consider the two-level approach based on standard finite element spaces Yh
on a mesh Th and on projection spaces Dh living on a macro mesh Mh . Hereby,
the finer mesh is generated from the macro mesh by certain refinement rules. In
the usual two-level local projection stabilisation on triangular meshes, each macro triangle M ∈ Mh is divided by connecting its barycentre with its vertices. Three triangles T ∈ Th are obtained. Then, the pairs (P_{r,h}, P^{disc}_{r−1,2h}), r ≥ 1, of spaces of continuous, piecewise polynomials of degree r on Th and discontinuous, piecewise polynomials of degree r − 1 on Mh satisfy the local inf-sup condition and can be used within the LPS framework.
One disadvantage of this refinement technique is however that Th contains simplices
with large inner angles even in the case of a uniform decomposition Mh into
isosceles triangles. Another drawback is that this refinement rule leads to non-nested meshes and spaces, whereas the common refinement technique of one triangle into 4 similar triangles (called red refinement in adaptive finite elements) results in nested meshes and spaces.
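As an illustration of the nesting property just mentioned, the following sketch (in Python, with hypothetical names; it is not part of the presented work) performs one red refinement of a triangle by connecting the edge midpoints; every child is similar to its parent, so repeated refinement keeps the angles bounded and yields nested meshes.

    import numpy as np

    def red_refine(tri):
        # Split one triangle (3x2 array of vertex coordinates) into four
        # similar children by connecting the edge midpoints.
        a, b, c = tri
        ab, bc, ca = (a + b) / 2, (b + c) / 2, (c + a) / 2
        return [np.array(t) for t in ((a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca))]

    # One macro triangle and its red refinement; the children reproduce the
    # parent's angles, and their vertex sets contain the parent's vertices,
    # which is what makes the resulting meshes (and spaces) nested.
    macro = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
    children = red_refine(macro)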
We will show that in the two-dimensional case the pairs (P_{r,h}, P^{disc}_{r−1,2h}), r ≥ 2, satisfy the local inf-sup condition with the refinement of one triangle into 4 triangles.
Consequently, the LPS can also be applied on sequences of nested meshes and spaces while keeping the same error estimates. Finally, we compare the properties
of the two resulting LPS methods based on the different refinement strategies by
means of numerical test examples for convection-diffusion problems with dominating convection.
Joint work with Lutz Tobiska.
259
Miriam Mehl
Technische Universität München, DE
Towards massively parallel fluid-structure simulations – two new parallel coupling
schemes
Minisymposium Session NFSI: Thursday, 10:30 - 11:00, CO122
Multi-physics applications and in particular fluid-structure interactions have dramatically gained importance in various kinds of applications from biomedical computing to engineering design, due to both an increasing need for accuracy and, thus, more accurate models, and the huge computing power available on today’s supercomputers, which makes it possible to tackle the computational challenges of such simulations. Since
only a high grid resolution ensures a discretization accuracy that accounts for the
increased modelling accuracy as compared to single-physics models, scalability of
multi-physics simulation codes on massively parallel machines is mandatory.
However, complex multi-physics models not only pose large computational challenges but also implementational, software engineering and maintenance challenges.
The latter can be eased by using a partitioned approach, i.e., reusing existing and
trusted codes for the involved single-physics effects and combining them with suitable coupling methods to a multi-physics simulation environment. This reduces
the implementational effort substantially and, if done carefully, allows a flexible
exchange of the involved software components. The downside of partitioned approaches are stability issues induced by the high-level coupling of the underlying
interaction equations.
For fluid-structure interactions, stability issues become more severe, for example, with decreasing structural density, decreasing fluid compressibility, and increasing structure size relative to the size of the fluid domain. A lot of work has been invested
by various groups to overcome these difficulties and, indeed, sophisticated coupling methods have been found that ensure stability even for massless structures
and completely incompressible fluids. Hereby, the most common basic scheme is a Gauss-Seidel type coupling executing fluid and structure solver in an alternating manner, transferring forces as boundary conditions from the fluid to the structure and displacements and velocities from the structure to the fluid (Dirichlet-Neumann
coupling). For incompressible fluids, a strong coupling has to be ensured in general
in order to achieve stability of the transient simulation, that is, several iterations
of this staggered fluid-structure solve have to be executed within each time step.
Convergence of these methods is ensured either by Aitken underrelaxation [1] or
quasi-Newton interface methods [4]. In particular the latter lead to very good
convergence rates even in ’hard’ cases. However, there is one drawback of these schemes for parallel computing: fluid and structure solver have to be executed one after the other, which prevents good scalability due to the unbalanced computational needs: the structural solver is in general much cheaper than the fluid part and does not scale to a large number of processors.
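For illustration only, the following minimal sketch shows the staggered Dirichlet-Neumann fixed-point iteration with Aitken underrelaxation [1] for one time step; fluid_solver and structure_solver are hypothetical black-box placeholders, and this is the baseline scheme, not one of the parallel coupling schemes proposed in the talk.

    import numpy as np

    def coupled_time_step(d0, fluid_solver, structure_solver, tol=1e-8, max_iter=50):
        # Staggered (Gauss-Seidel type) Dirichlet-Neumann coupling: iterate
        # d = S(F(d)) on the interface displacement d; Aitken underrelaxation
        # adapts the relaxation factor omega from the last two residuals.
        d, omega, r_old = np.asarray(d0, float).copy(), 0.5, None
        for _ in range(max_iter):
            forces = fluid_solver(d)          # fluid solve with prescribed interface motion
            d_new = structure_solver(forces)  # structure solve with interface forces
            r = d_new - d                     # interface residual
            if np.linalg.norm(r) < tol:
                return d_new
            if r_old is not None:             # dynamic Aitken update of omega
                dr = r - r_old
                omega = -omega * np.dot(r_old, dr) / np.dot(dr, dr)
            d, r_old = d + omega * r, r
        return d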
Numerical methods executing fluid and structure solver in parallel have been applied mostly to problems with compressible fluids. Ross [3] for example solves fluid
and structure in parallel followed by a solve step of an interface equation, which,
however, needs discretization details of both solvers at the interface and is thus
not suited for coupling black-box solvers, which is our aim. Farhat [2] proposes a parallel Jacobi-like time-stepping which works quite well in a weak coupling setting executing fluid and structure solver only once per time step, but turns out to be equivalent to two separate staggered coupling schemes if done iteratively in
260
each time step.
We propose two new coupling methods combining the coupling ideas of [3] and [2]
with the interface quasi-Newton method from [4]. In the presentation, we show
a uniform formulation of all three methods – the original staggered scheme and
our two parallel schemes – in terms of fixed point problems. The quasi-Newton
method then uses a least squares method based on previous iteration data to
estimate the effect of an approximate Jacobian. For several test and benchmark
cases, we show that we achieve iteration numbers comparable to those achieved in
[4] for the staggered approach, which marks an important step towards efficient
multi-physics simulations in the ’exascale era’.
References
[1] Küttler, U. and Wall, W. Fixed-point fluid-structure interaction solvers with
dynamic relaxation. Comput. Mech. (2008) 43:61–72.
[2] Farhat, C. and Lesoinne, M. Two efficient staggered algorithms for the serial and parallel solution of three-dimensional nonlinear transient aeroelastic
problems. Comput. Method. Appl. M. (2000) 182:499–515.
[3] Ross, M.R., Felippa, C.A., Park, K.C. and Sprague, M.A. Treatment of acoustic fluid-structure interaction by localized Lagrange multipliers: Formulation.
Comput. Methods Appl. Mech. Eng. (2008) 197:3057–3079.
[4] Degroote, J., Bathe, K.-J. and Vierendeels, J. Performance of a new partitioned procedure versus a monolithic procedure in fluid-structure interaction.
Comput. Struct. (2009) 87:793–801.
Joint work with Hans-Joachim Bungartz, Bernhard Gatzhammer, and Benjamin
Uekermann.
261
Lina Meinecke
Uppsala University, SE
Stochastic simulation of diffusion on unstructured meshes via first exit times
Contributed Session CT3.2: Thursday, 17:00 - 17:30, CO2
In molecular biology it is of interest to simulate the diffusion and reactions of
molecules such as proteins in a cell. When simulating the biochemistry in a biological cell, many molecules are present in only very low copy numbers. As a result, a macroscopic or deterministic description with the reaction-diffusion equation is inaccurate and does not reproduce experimental data, and a stochastic description is needed [1]. The diffusion of the molecules is then given by Brownian dynamics, and the reactions between them occur with a certain probability.
For stochastic simulation of the diffusion, the cell is partitioned into compartments
or voxels in a mesoscopic model. The number of molecules in a voxel is recorded
and the molecules can jump between neighbouring voxels to model diffusion. In
order to accurately represent the geometry of the cell including outer and inner
curved boundaries it is helpful to use unstructured meshes for the voxels. The
probabilities to jump between the voxels are given in [2] by a discretization of the
Laplacian with the finite element method (FEM) on the mesh. Solutions of the
diffusion equation with FEM encounter problems on some unstructured meshes in
3D. If the mesh is of poor quality, the maximum principle may not be satisfied by
the FEM solution and the jump coefficients derived from it may be negative.
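The sign problem mentioned above can be reproduced on a single element; the hedged sketch below (illustrative only, not the first-exit-time method of the talk) assembles the P1 stiffness matrix of the Laplacian on one triangle and shows that an angle larger than 90 degrees produces a positive off-diagonal entry, i.e. a negative jump coefficient.

    import numpy as np

    def p1_stiffness(tri):
        # Element stiffness matrix of the Laplacian for linear (P1) elements
        # on a single triangle given as a 3x2 array of vertex coordinates.
        a, b, c = tri
        area = 0.5 * abs((b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0]))
        # gradients of the three barycentric (hat) basis functions
        g = np.array([[b[1] - c[1], c[0] - b[0]],
                      [c[1] - a[1], a[0] - c[0]],
                      [a[1] - b[1], b[0] - a[0]]]) / (2.0 * area)
        return area * g @ g.T

    # Acute triangle: all off-diagonal entries are negative, so -K_ij can be
    # interpreted as a non-negative jump rate between the adjacent voxels.
    print(p1_stiffness(np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 0.8]])))
    # Triangle with one angle larger than 90 degrees: one off-diagonal entry
    # turns positive, i.e. the FEM-derived jump coefficient would be negative.
    print(p1_stiffness(np.array([[0.0, 0.0], [1.0, 0.0], [0.9, 0.1]])))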
We present a new approach to diffusion simulation using first exit times that for
unstructured meshes guarantees positive jump coefficients. These first exit times
can be sampled from the survival probability for molecules within a voxel. It will
be shown that this approach yields accurate results on multidimensional Cartesian
meshes and on meshes with variable mesh size in 1D. This approach is extended
to unstructured 2D meshes of varying quality by solving the local equation for
the exit time or computing the exit time between the nodes along the edges. The
method is compared to the accuracy obtained with FEM coefficients and jump
coefficients determined by the finite volume method (FVM).
[1] A. Mahmutovic, D. Fange, O. G. Berg, and J. Elf, Lost in presumption: stochastic reactions in spatial models, Nature Methods 9, 1163-1166, 2012.
[2] S. Engblom, L. Ferm, A. Hellander, and P. Lötstedt. Simulation of stochastic reaction-diffusion processes on unstructured meshes. SIAM J. Sci. Comput.,
31(3):1774-1797, 2009.
262
Figure 1: Mesh of poor quality: one angle is bigger than 90 degrees.
Figure 2: Stochastic Simulation of Diffusion on an unstructured mesh of good
quality.
Joint work with Per Lötstedt.
263
Ward Melis
KU Leuven, BE
A relaxation method with projective integration for solving nonlinear systems of
hyperbolic conservation laws
Contributed Session CT4.1: Friday, 09:50 - 10:20, CO1
1 Introduction
Hyperbolic conservation laws are ubiquitous in domains such as fluid dynamics,
plasma physics, traffic modeling and electromagnetism. We present a general
strategy for systems of nonlinear hyperbolic conservation laws:
∂u/∂t + ∂F(u)/∂x = 0,   (1)
in which x ∈ R^D contains the independent variables; u ∈ R^I holds I conserved quantities u_i : R^D × [0, T] → R, i = 1, ..., I; and F ∈ R^I corresponds to the vector of flux functions F_i : R^I → R, i = 1, ..., I, which may be nonlinear in each of the functions u_i. The method is based on combining a relaxation method with projective integration.
In a relaxation method, the nonlinear conservation law is approximated by a system of kinetic equations, in which a small relaxation parameter 0 < ε ≪ 1 is present. The general idea of these methods is to eliminate the nonlinear flux term, at the expense of introducing a stiff nonlinear source term. The kinetic equation describes the evolution of a distribution function f(x, v, t) : R^D × R^D × [0, T] → R^I of particles with positions x ∈ R^D and velocities v ∈ R^D (see [?]):
∂f(x, v, t)/∂t + v ∂f(x, v, t)/∂x = (1/ε) ( M(P(f(x, v, t))) − f(x, v, t) ),   (2)
and is constructed such that, in the hydrodynamic limit (ε → 0), the solution converges to that of (1).
2 Method and results
The projective integration algorithm was developed by Gear and Kevrekidis (see
[?]). It consists of the following steps:
1. Perform K small steps for equation (2) with a naive explicit integrator with
time step δt = O(ε²) (this is called an inner integrator). When ε is small,
this will enforce convergence of the fast modes to the slow manifold that is
characterized by the density ρ.
2. Subsequently, perform a large time step ∆t by extrapolation in time (this is
called an outer integrator).
We show that the method allows both ∆t and K to be independent of ε, while
being fully explicit and general. Moreover, the method can be of arbitrary order
and its implementation is surprisingly simple, even for complex nonlinear systems.
We will present the method and illustrate its performance on the linear advection
equation, Burgers’ equation, and the Euler equations of fluid dynamics, in both one- and two-dimensional domains.
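A minimal sketch of the projective forward Euler idea described in steps 1 and 2 above is given below for a generic stiff system; the toy right-hand side, the step sizes, and the function names are hypothetical and only illustrate the inner/outer structure, not the kinetic schemes of the talk.

    import numpy as np

    def projective_forward_euler(f, u0, t_end, dt_outer, dt_inner, K):
        # K+1 small inner forward Euler steps of size dt_inner damp the fast
        # (stiff) modes; the last two inner iterates are then extrapolated
        # over the remainder of the large outer step dt_outer.
        u, t = np.asarray(u0, float), 0.0
        while t < t_end:
            for _ in range(K):
                u = u + dt_inner * f(u)
            u_prev = u
            u = u + dt_inner * f(u)
            u = u + (dt_outer - (K + 1) * dt_inner) * (u - u_prev) / dt_inner
            t += dt_outer
        return u

    # Toy stiff relaxation toward the slow manifold u2 = u1; eps plays the
    # role of the relaxation parameter, dt_inner is of the order of the fast
    # relaxation time, and dt_outer resolves only the slow dynamics.
    eps = 1e-3
    f = lambda u: np.array([-u[1], (u[0] - u[1]) / eps])
    print(projective_forward_euler(f, [1.0, 0.0], 1.0, 0.05, eps, 5))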
264
Figure 1: Left: stability plot of the projective forward Euler (PFE) method in terms
of the amplification factor τ of the inner integrator. Right: Order test for PFE with
FE as inner integrator and three different spatial orders. Solid lines represent the
calculated error, whereas the dotted lines show the expected error.
Joint work with Pauline Lafitte, and Giovanni Samaey.
265
Wim Michiels
KU Leuven, BE
Projection based methods for nonlinear eigenvalue problems and associated distance
problems
Minisymposium Session NEIG: Thursday, 14:00 - 14:30, CO2
We consider the nonlinear eigenvalue problem
( Σ_{i=1}^{m} A_i p_i(λ) ) v = 0,  λ ∈ C, v ∈ C^n,   (1)
where A_1, . . . , A_m are complex n × n matrices and the scalar functions p_i : C → C, i = 1, . . . , m, are entire. Problems of the form (1) do not only include polynomial eigenvalue problems but also eigenvalue problems arising from systems of delay differential equations.
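For concreteness, the small sketch below builds one instance of the problem class (1), a quadratic (polynomial) eigenvalue problem, and solves it by a standard companion linearization; this is only meant to fix the notation and is not the rational Krylov method presented in the talk.

    import numpy as np

    # Instance of (1) with p1 = 1, p2 = lambda, p3 = lambda^2, i.e. the
    # quadratic eigenvalue problem (A1 + lambda*A2 + lambda^2*A3) v = 0.
    rng = np.random.default_rng(0)
    n = 4
    A1, A2, A3 = (rng.standard_normal((n, n)) for _ in range(3))

    # Companion linearization: generalized eigenvalue problem L0 z = lambda L1 z
    # with z = [v; lambda*v].
    I, Z = np.eye(n), np.zeros((n, n))
    L0 = np.block([[Z, I], [-A1, -A2]])
    L1 = np.block([[I, Z], [Z, A3]])
    w, V = np.linalg.eig(np.linalg.solve(L1, L0))

    # Residual check for the first computed eigenpair
    lam, v = w[0], V[:n, 0]
    print(lam, np.linalg.norm((A1 + lam * A2 + lam**2 * A3) @ v))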
In the first part of the talk we present a rational Krylov method for solving the nonlinear eigenvalue problem (1). The method approximates A(λ) = Σ_{i=1}^{m} A_i p_i(λ)
by polynomial Newton and/or Hermite interpolation, resulting in a generalized
eigenvalue problem which is solved by the rational Krylov method. We show that,
by matching the interpolation points with the poles of the rational Krylov method,
the resulting algorithm can be constructed in a fully dynamic way, in the sense that
the degree of the interpolating polynomial does not need to be fixed beforehand.
New interpolation points can be added on the run (on the basis of the quality
of the obtained eigenvalue approximations), and arbitrary accuracy of eigenvalue
approximations can be obtained by a sufficiently large number of iterations. The
latter is in contrast with an ‘approximate plus solve’ approach, where the final
accuracy is limited by the chosen degree of the polynomial approximation. In case
of Hermite interpolation in one point, the Newton rational Krylov method reduces
to the infinite Arnoldi method, where the dynamic property is reflected in the
interpretation as the standard Arnoldi method applied to an infinite-dimensional
linear operator whose spectrum corresponds with the one of A(λ). We illustrate
that with an appropriate choice of interpolating points/poles, the method is suitable for a global search for eigenvalues in a region of interest, as well as for local
corrections on individual eigenvalues. Finally, for very large problems, we show
that the subspace generated by the Newton rational Krylov method can be used
to project (1), resulting in a small nonlinear eigenvalue problem, which can be
solved using a method of choice.
In the second part of the presentation we point out how these nonlinear eigenvalue
solvers can be used as building blocks in algorithms for distance problems. More
precisely, we consider the situation where (1) is perturbed to
( Σ_{i=1}^{m} (A_i + δA_i) p_i(λ) ) v = 0,  λ ∈ C, v ∈ C^n.   (2)
Assuming that (1) is stable in the sense that all eigenvalues are confined to the
open left half plane or the open unit disk, the distance to instability can be defined
as the smallest size of the perturbations in (2) which lead to instability. This definition depends on (i) the class of allowable perturbations and (ii) a global measure
of the combined perturbations on the different coefficient matrices. For both real
and complex valued allowable perturbations and for various perturbation measures
266
we present numerical algorithms for computing the distance to instability. As a
common feature, in all these cases it is sufficient to restrict the perturbations in
(2) to rank one or rank two perturbations. This leads to algorithms on manifolds
of low rank matrices. Both discrete iteration maps and differential equations on
such manifolds will be considered.
Joint work with Roel Van Beeumen, Karl Meerbergen, and Dries Verhees.
267
Agnieszka Miedlar
TU Berlin, DE
Multiscale adaptive finite element method for PDE eigenvalue/eigenvector approximations
Minisymposium Session NEIG: Thursday, 15:00 - 15:30, CO2
In this talk we present a multiscale adaptive finite element method for PDE eigenvalue problems which will use one scale, e.g., P1 finite elements, to approximate the solution and a finer scale, e.g., P2 finite elements, to capture the approximate
residual. Starting from the results of Grubišić and Ovall [GO09] on the reliable
and efficient asymptotically exact a posteriori hierarchical error estimators in the
self-adjoint case, we explore the possibility to use the enhanced Ritz values and
vectors to restart the iterative algebraic procedures within the adaptive algorithm.
Using higher order hierarchical polynomial finite element bases, as indicated by
Bank [Ban96] and by Le Borne and Ovall [LO12], our method generates discretization matrices which are almost diagonal. This construction can be repeated for
the complements of higher (even) order polynomials and yields a structure which
is particularly suitable for designing computational algorithms with low complexity. We present some numerical results for both the symmetric as well as the
nonsymmetric eigenvalue problems.
2010 Mathematics Subject Classification. 65F10, 65F15, 65N15, 65N22, 65N25,
65N30, 65M60
Key words. eigenvalue problems, FEM, finite element method, AFEM, adaptive
finite element method
References
[Ban96] R. E. Bank, Hierarchical bases and the finite element method, Acta numerica, 1996, Acta Numer., vol. 5, Cambridge Univ. Press, Cambridge, 1996,
pp. 1–43.
[GO09] L. Grubišić and J. S. Ovall, On estimators for eigenvalue/eigenvector approximations, Math. Comp. 78 (2009), no. 266, 739–770.
[LO12] S. Le Borne and J. S. Ovall, Rapid error reduction for block Gauss-Seidel based on p-hierarchical bases, Numer. Linear Algebra Appl. (2012),
Published online in Wiley Online Library (wileyonlinelibrary.com). DOI:
10.1002/nla.1841.
Joint work with Luka Grubisic, and Jeffrey S. Ovall.
268
Giovanni Migliorati
CSQI, MATHICSE, EPF Lausanne, CH
Adaptive polynomial approximation by random projection of multivariate aleatory
functions
Contributed Session CT1.1: Monday, 17:30 - 18:00, CO1
In this talk we present recent results on polynomial approximation by the Random
Discrete L2 Projection (RDP) of functions depending on multivariate random
variables distributed with a given probability density. The RDP is computed using
point-wise noise-free evaluations of the target function in independent realizations
of the random variables.
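In the simplest one-dimensional setting, the RDP is just a discrete least-squares fit on random samples; the following sketch (hypothetical target function and parameters, shown only to make the construction concrete) fits a Legendre polynomial space of degree N from M noise-free evaluations at independent uniform samples.

    import numpy as np
    from numpy.polynomial import legendre

    rng = np.random.default_rng(1)
    f = lambda y: 1.0 / (1.0 + 0.5 * y**2)   # hypothetical target function
    N, M = 8, 200                             # polynomial degree, number of samples
    y = rng.uniform(-1.0, 1.0, M)             # independent realizations of the random variable
    V = legendre.legvander(y, N)              # design matrix of Legendre polynomial values
    coef, *_ = np.linalg.lstsq(V, f(y), rcond=None)  # random discrete L2 projection

    # Accuracy of the projection evaluated on a fine reference grid
    yy = np.linspace(-1.0, 1.0, 1001)
    print(np.max(np.abs(legendre.legval(yy, coef) - f(yy))))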
First, we recall the main results achieved in [1, 2, 3, 4] concerning the stability and
accuracy of the RDP. In particular, we focus on the relation between the number
of sampling points and the dimension of the polynomial space that ensures an
accurate RDP, independently of the “shape” of the polynomial space. The effects
of the smoothness of the target function and of the number of random variables
are addressed as well.
Then we focus on the approximation of Quantities of Interest depending on the solution of a PDE with stochastic coefficients. For a class of isotropic PDE models
with “inclusion-type” coefficients parametrized by a moderately large number of
random variables we show that, with an a-priori optimal choice of the polynomial
space, the RDP approximation error in expectation converges subexponentially
w.r.t. the number of sampling points. Moreover, a comparison between the convergence rates of RDP and Stochastic Galerkin is established.
Lastly we discuss adaptive polynomial approximation to approximate best N-terms
sets of the coefficients in the polynomial expansion of the target function. We employ the results achieved in the theoretical analysis to devise strategies based on
RDP that adaptively explore the unknown anisotropy of the target function and
adaptively enrich the polynomial space. A critical issue that will be discussed
concerns how to increase the number of sampling points during the adaptive algorithm. Numerical results will be presented as well.
References
[1] G.Migliorati, F.Nobile, E.von Schwerin, R.Tempone: Analysis of the discrete
L2 projection on polynomial spaces with random evaluations, submitted. Also
available as MOX-report 46-2011.
[2] A.Cohen, M.A.Davenport, D.Leviatan: On the stability and accuracy of Least
Squares approximations, to appear on Found. Comput. Math.
[3] G.Migliorati, F.Nobile, E.von Schwerin, R.Tempone: Approximation of Quantities of Interest in stochastic PDEs by the random discrete L2 projection on
polynomial spaces, to appear on SIAM J. Sci. Comput.
[4] A.Chkifa, A.Cohen, G.Migliorati, F.Nobile, R.Tempone: Discrete least squares
polynomial approximation with random evaluations; application to parametric
and stochastic PDEs, in preparation.
Joint work with A.Chkifa, A.Cohen, F.Nobile, and R.Tempone.
269
Shinya Miyajima
Gifu University, JP
Fast verified computation for solutions of generalized least squares problems
Contributed Session CT2.1: Tuesday, 14:00 - 14:30, CO1
The generalized least squares problem considered in this talk is to find the n-vector x that minimizes
(Ax − b)^T B^{−1} (Ax − b),  A ∈ R^{m×n}, b ∈ R^m, B ∈ R^{m×m},   (1)
where m ≥ n, A, b and B are given, A has full column rank, and B is symmetric
positive definite. This problem arises in finding the least squares estimate of the
vector x when we are given the linear model b = Ax + w with w an unknown
noise vector of zero mean and covariance B. In several practical problems in
econometrics [J. Johnston, Econometric Methods, second ed., McGraw-Hill, New
York, (1972)] and engineering [D.B. Duncan, S.D. Horn, Linear dynamic recursive
estimation from the viewpoint of regression analysis, J. Amer. Statist. Assoc. 67,
815–821 (1972)], A and B will have special block structure. It is well known that the vector minimizing (1) is (A^T B^{−1} A)^{−1} A^T B^{−1} b.
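The closed-form expression above can be evaluated in ordinary floating-point arithmetic as in the hedged sketch below (Cholesky factor of B, hypothetical data); this is only to fix the quantity being enclosed and is in no way the verified algorithm proposed in the talk.

    import numpy as np
    from scipy.linalg import cho_factor, cho_solve

    def gls_estimate(A, b, B):
        # Floating-point generalized least squares estimate
        # x = (A^T B^{-1} A)^{-1} A^T B^{-1} b, applying B^{-1} through a
        # Cholesky factorization of the symmetric positive definite B.
        c = cho_factor(B)
        AtBinv = cho_solve(c, A).T            # equals A^T B^{-1} since B is symmetric
        return np.linalg.solve(AtBinv @ A, AtBinv @ b)

    # Small example with m = 6 observations and n = 2 parameters.
    rng = np.random.default_rng(2)
    A = rng.standard_normal((6, 2))
    x_true = np.array([1.0, -2.0])
    B = np.diag(rng.uniform(0.5, 2.0, 6))     # SPD covariance (diagonal here)
    b = A @ x_true                            # noise-free right-hand side
    print(gls_estimate(A, b, B))              # recovers x_true up to rounding errors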
Since B is symmetric positive definite, there exist matrices L satisfying B = LLT ,
which can be obtained by Cholesky decomposition or eigen-decomposition. In
several applications, L is more basic and important than B, so that it is assumed
in several papers (e.g. [C.C. Paige, Computer solution and perturbation analysis of
generalized linear least squares problems, Math. Comp. 33(145), 171–183 (1979)])
that L is given. Then the solution can be written as (L^{−1}A)^{+} L^{−1}b, where (L^{−1}A)^{+} denotes the Moore-Penrose inverse of L^{−1}A. In this talk, we treat both of the cases when B is given and when L is given.
Stable algorithms for solving (1) have been proposed in [C.C. Paige, Fast numerically stable computations for generalized linear least squares problems, SIAM J.
Numer. Anal. 16(1), 165–171 (1979)]. These algorithms are based on the idea
that (1) is equivalent to the problem of finding x which minimizes v^T v under the equality constraint b = Ax + Lv. In these algorithms, the equivalent problem is
solved via orthogonal transformation.
In this talk, we consider numerically enclosing (A^T B^{−1} A)^{−1} A^T B^{−1} b, specifically, computing error bounds of x̃ using floating point operations, where x̃ denotes a numerical result for (A^T B^{−1} A)^{−1} A^T B^{−1} b. As far as the author knows, algorithms for enclosing (A^T B^{−1} A)^{−1} A^T B^{−1} b in (1) have not been known. Although (A^T B^{−1} A)^{−1} A^T B^{−1} b can be enclosed by utilizing the INTLAB [S.M. Rump, INTLAB - INTerval LABoratory, in T. Csendes (ed.), Developments in Reliable Computing, Kluwer Academic Publishers, Dordrecht, 77–104 (1999)] routine, this approach involves a large computational cost, since intervals including B^{−1}A and B^{−1}b, or L^{−1}A and L^{−1}b, are required during the execution.
The purpose of this talk is to propose algorithms for enclosing (A^T B^{−1} A)^{−1} A^T B^{−1} b in both of the cases when B is given and when L is given. These algorithms do
not require the intervals described above, and allow the presence of underflow in
floating point arithmetic. In order to develop these algorithms, we establish theories for computing error bounds of x̃. The error bounds obtained by the proposed
algorithms are “verified” in the sense that all the possible rounding errors have
been taken into account. In the case when B is given, the proposed algorithms
do not assume but prove A and B to have full rank and to be positive definite,
270
respectively. In the case when L is given, the algorithms do not assume but prove
A and L to have full rank and to be nonsingular, respectively. We introduce a
technique for obtaining smaller error bounds and report numerical results to show
the properties of the proposed algorithms.
271
Olga Mula
LJLL and CEA, FR
The Generalized Empirical Interpolation Method: Analysis of the convergence and
application to the Stokes problem
Minisymposium Session ROMY: Thursday, 11:30 - 12:00, CO016
The extension of classical Lagrangian interpolation is an old problem in approximation theory that remains a field of current active research (see, e.g. [6], or the
kriging studies in the stochastic community such as [3]).
This development involves two main tasks that must be addressed together: the
generalization of the interpolating functions and of the position of the interpolating points so that the interpolation process is at least stable and close to the
best approximation in some sense. Indeed, since classical Lagrangian interpolation approximates general functions by finite sums of well-chosen, linearly independent interpolating functions (e.g. polynomial functions), the question of how to approximate general functions by general interpolating functions arises. As a
consequence, an investigation on how to optimally select the interpolating points
needs to be carried out (i.e. the well documented theory about the location of the
interpolating points in classical polynomial interpolation needs to be enlarged).
One step in this direction is the Empirical Interpolation Method (EIM, [1], [2],
[6]) that has been developed in the broad framework where the functions f to
approximate belong to a compact set F of a Banach space X . The structure of
F is supposed to make any f ∈ F be approximable by finite expansions of small
size of given basis functions. This is the case when the Kolmogorov n-width of F in X is small. Indeed, the Kolmogorov n-width of F in X, defined by
d_n(F, X) := inf_{X_n ⊂ X, dim(X_n) = n} sup_{x ∈ F} inf_{y ∈ X_n} ||x − y||_X
(see [4]), measures the extent to which F can be approximated by some finite dimensional space X_n ⊂ X of dimension
n. In general Xn is not known and the Empirical Interpolation Method builds
simultaneously and recursively in n the set of interpolating functions and the
associated interpolating points by a greedy selection procedure (see [1]), but note
however that the approach can be shortcut in case the basis functions are available,
then the interpolating points are the only output of EIM.
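A minimal sketch of the (plain) EIM greedy loop is given below for a hypothetical one-dimensional parametrized family; the names and the stopping rule are illustrative only, and the generalized version with arbitrary linear functionals (GEIM) discussed in the talk is not reproduced here.

    import numpy as np

    def eim_greedy(snapshots, n_max, tol=1e-10):
        # Empirical interpolation: greedily pick basis functions (columns of
        # 'snapshots', sampled on a common grid) and interpolation points so
        # that the interpolation error over the snapshot set is reduced.
        Q, pts = [], []
        for _ in range(n_max):
            if Q:
                B = np.array([[q[p] for q in Q] for p in pts])   # interpolation matrix
                coef = np.linalg.solve(B, snapshots[pts, :])
                resid = snapshots - np.column_stack(Q) @ coef    # interpolation errors
            else:
                resid = snapshots.copy()
            j = int(np.argmax(np.max(np.abs(resid), axis=0)))    # worst snapshot
            if np.max(np.abs(resid[:, j])) < tol:
                break
            p = int(np.argmax(np.abs(resid[:, j])))              # new interpolation point
            Q.append(resid[:, j] / resid[p, j])                  # basis normalized to 1 at p
            pts.append(p)
        return np.column_stack(Q), pts

    # Parameter-dependent family f_mu(x) = 1/(1 + mu*x^2) sampled on a grid
    x = np.linspace(0.0, 1.0, 200)
    snaps = np.column_stack([1.0 / (1.0 + mu * x**2) for mu in np.linspace(1, 50, 40)])
    Q, pts = eim_greedy(snaps, n_max=8)
    print(len(pts), pts)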
A recent generalization of this interpolation process consists in generalizing the
evaluation at interpolating points by application of a class of interpolating continuous linear functionals chosen in a given dictionary Σ ⊂ L(F ) and this gives
rise to the so-called Generalized Empirical Interpolation Method (GEIM, [5]). In
this newly developed method, the particular case where the space X = L^2(Ω) or X = H^1(Ω) is considered, with Ω being a bounded spatial domain of R^d and F
being a compact set of X .
In this context, the aim of the talk is twofold:
Since the efficiency of GEIM depends critically on the choice of the interpolating
functions, we will first analyze the quality of the finite dimensional subspaces
Xn ⊂ F built by the greedy selection procedure of GEIM. For this purpose, the
accuracy of the approximation in Xn of the elements of F will be compared to the
best possible performance, which is the Kolmogorov n-width d_n(F, L^2(Ω)). The
convergence uses the Lebesgue constant Λn that evaluates the operator norm of
the interpolation operator.
The second part of the talk will be devoted to a numerical example motivated by
an observation made in [5] where it was shown in a simple numerical experiment
272
(a parameter dependent elliptic problem) that the GEIM provides cases where
the Lebesgue constant Λn is uniformly bounded in n when evaluated in the L(L2 )
norm. We will extend the analysis to the Stokes equations and explain how we can
take advantage of this framework in order to use GEIM to approximate a solution
in the whole domain from the only knowledge of measurements from sensors.
References
[1 ] Barrault, M. and Maday, Y. and Nguyen, N.C. Y. and Patera, A.T., An
empirical interpolation method: Application to efficient reduced-basis discretization of partial differential equations, C. R. Acad. Sci. Paris, Série I.,
vol. 339, 667–672, 2004.
[2 ] Grepl, M.A. and Maday, Y. and Nguyen, N.C. and Patera, A.T., Efficient
reduced-basis treatment of nonaffine and nonlinear partial differential equations, M2AN (Math. Model. Numer. Anal.), vol. 41(3), 575-605, 2007.
[3 ] Kleijnen, J.P.C. and van Beers, W., Robustness of Kriging when interpolating in random simulation with heterogeneous variances: Some experiments,
European Journal of Operational Research, vol. 165, 826 - 834, 2005.
[4 ] Kolmogoroff, A., Über die beste Annäherung von Funktionen einer gegebenen Funktionenklasse, Annals of Mathematics, vol. 37, 107-110, 1936.
[5 ] Maday, Y. and Mula, O., A generalized empirical interpolation method:
application of reduced basis techniques to data assimilation, Analysis and
Numerics of Partial Differential Equations, vol. XIII, 221-236, 2013.
[6 ] Maday, Y. and Nguyen, N.C. and Patera, A.T. and Pau, G.S.H., A general
multipurpose interpolation procedure: the magic points, Commun. Pure
Appl. Anal., vol. 8(1), 383-404, 2009.
Joint work with Y. Maday, O. Mula, and G. Turinici.
273
Naofumi Murata
Keio University, JP
Analysis on distribution of magnetic particles with hysteresis characteristics and
field fluctuations
Contributed Session CT3.6: Thursday, 17:30 - 18:00, CO017
Numerical approaches to analyzing the behavior of magnetic particles have been widely developed in the fields of magnetic fluids and printer toners. It is well known that
chain-like clusters are formed as a result of dipole-dipole interactions between
particles. However, in most cases, the applied magnetic fields are constant and
non-temporal. In those simulations, hysteresis characteristics of each magnetic
particle are often neglected or approximated as constant or linear, not considering the whole hysteresis loop. Famous conventional methods such as free-energy
theory or Monte Carlo simulations give practical results under some of those particular conditions. However, these approximations smear out the effect of individual mutual interactions between particles, averaging the results. These methods therefore cannot be applied to problems with field fluctuations and hysteresis
characteristics of particles.
In this research, an analysis method for the behavior of magnetic particles with hysteresis characteristics under spatially and temporally fluctuating fields is proposed. In large systems where mutual interactions of particles appear strongly, the nonlinearity
of hysteresis and driving force from the field fluctuation in addition to the energy
dissipation by collisions might bring chaotic behavior and patterns of the particles.
This research aims to find the foothold of chaotic behavior which appears under
these conditions.
The proposed method starts from the discretization and interpolation of fields by
means of FEM rectangular elements. To model hysteresis characteristics of each
particle, sigmoid functions were used to express the major hysteresis loop while
minor loops were expressed with linear recoil lines. The hysteresis characteristic
adopted here basically obeys the Madelung’s rules [1]. Collisions and clustering of
particles were modeled by treating mechanical contacts, namely by solving Hertz’s
contact problem. Time integration was carried out by a fourth order symplectic
integrator. In the simulation, the behavior of 200 magnetic particles under spatially and temporally fluctuating fields was examined. The results imply that the nonlinearity of the hysteresis characteristics greatly affects the final clustering patterns.
[1] S.E. Zirka, Yu.I. Moroz, Hysteresis modeling based on transplantation, IEEE
Transactions on Magnetics 31 (6) (1995) 3509–3511.
Joint work with Kenji Oguni.
274
Gulcin Mihriye Muslu
Istanbul Technical University, TR
New Numerical Results on Some Boussinesq-type Wave Equations
Contributed Session CT2.2: Tuesday, 14:30 - 15:00, CO2
Boussinesq-type equations were proposed to model bi-directional propagation of
nonlinear dispersive waves arising in many areas of science and engineering. Elastic
waves and surface water waves are the two most studied phenomena in the literature within the context of a Boussinesq-type equation model. In this talk, we will
focus on a Fourier pseudo-spectral method for solving one-dimensional Boussinesq-type equations. Then we will present our preliminary numerical results concerning
the two standard test problems: the propagation of a single solitary wave and the
collision of two solitary waves. We also compare our numerical results with those
given in the literature in terms of both numerical accuracy and computational
cost. The numerical comparisons show that the Fourier pseudo-spectral method
provides very accurate results, at least for the two test problems stated above, and
has a promising potential for handling other problems based on Boussinesq-type
equations.
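The core ingredient of a Fourier pseudo-spectral method, differentiation of a periodic function in Fourier space combined with pointwise evaluation of nonlinear terms, can be sketched as follows (an illustrative toy example, not the Boussinesq solver of the talk).

    import numpy as np

    L, N = 2 * np.pi, 256
    x = np.arange(N) * L / N
    k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)       # angular wave numbers

    u = np.exp(np.sin(x))                            # smooth periodic test function
    u_hat = np.fft.fft(u)
    u_x = np.real(np.fft.ifft(1j * k * u_hat))       # spectral first derivative
    u_xx = np.real(np.fft.ifft(-(k**2) * u_hat))     # spectral second derivative

    # Spectral accuracy: compare with the exact derivative cos(x)*exp(sin(x))
    print(np.max(np.abs(u_x - np.cos(x) * u)))
    # A typical nonlinear term such as (u^2)_x is formed pointwise in physical
    # space and then differentiated in Fourier space:
    uu_x = np.real(np.fft.ifft(1j * k * np.fft.fft(u * u)))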
Joint work with Handan Borluk.
275
Bayramov Nadir
RICAM, AT
Finite element methods for transient convection-diffusion equations with small diffusion
Contributed Session CT2.3: Tuesday, 15:00 - 15:30, CO3
Transient convection-diffusion or convection-diffusion-reaction equations, with in
general small or anisotropic diffusion, are considered. A specific exponential fitting
scheme, resulting from finite element approximation, is applied to obtain a stable
monotone method for these equations.
In the first part of the talk error estimates are discussed for this method and a
comparison with the more commonly known SUPG method is drawn.
The second part focuses on the efficient solution of the arising linear systems. A
nonlinear algebraic multilevel iteration method is introduced in the framework of
flexible GMRES using piecewise constant coarse spaces which are based on matching in graphs. The uniform convergence of this method is demonstrated by various
numerical experiments.
Joint work with Johannes Kraus.
276
Federico Negri
EPFL - MATHICSE - CMCS, CH
Reduced basis methods for PDE-constrained optimization
Contributed Session CT4.4: Friday, 09:20 - 09:50, CO015
We present a reduced framework for the numerical solution of parametrized PDE-constrained optimization problems. In particular, we focus on parametrized quadratic optimization problems constrained by either advection-diffusion, Stokes or Navier-Stokes equations, where the control (or design, inversion) variables are infinite
dimensional functions, distributed in a portion of the domain or along its boundary. Parameters are not the object of the optimization, rather they may represent
physical and/or geometrical quantities describing the state system or they can be
related to observation measurements in the cost functional.
In this context, our goal is to design a strategy for the reduction of the complexity of the optimization problem by treating it as a whole, with respect to all its
variables (state and control) simultaneously. This framework is based on a suitable optimize-then-discretize-then-reduce approach which takes advantage of the
reduced basis (RB) method for the rapid and reliable solution of parametrized
PDEs. Indeed, we build our RB approximation directly upon the “truth” underlying finite element approximation of the optimality system. The saddle-point
structure of the reduced optimality system requires a careful design of the reduced
spaces for the state, control and adjoint variables, in order to ensure the stability
of the RB approximation. We propose an aggregated approach, possibly enriched
by supremizer solutions, which enables us to prove the well-posedness of the RB
approximation.
Then, we derive rigorous a posteriori error estimates on the solution variables as
well as on the cost functional: in the linear constraint case we exploit the Babuška
stability theory, while in the nonlinear constraint case we rely on the Brezzi-Rappaz-Raviart theory. The link between the sharpness of these error bounds, the
conditioning of the optimality system and the use of suitable “robust” norms will
be discussed.
We assess the properties and numerical performances of the methodology by solving some classical benchmark problems of vorticity minimization through suction/injection of fluid. Then we apply this framework to some problems arising in
haemodynamics, dealing with both data assimilation and optimal control of blood
flows.
Joint work with Andrea Manzoni, Alfio Quarteroni, and Gianluigi Rozza.
277
Thi Trang Nguyen
FEMTO-ST Institute, University of Franche-Comte, FR
Homogenization of the one-dimensional wave equation
Contributed Session CT2.2: Tuesday, 15:00 - 15:30, CO2
Homogenization of the wave equation in a bounded domain Ω with time-independent coefficients has been carried out in several papers. For example, in [2],
the solution of the homogenized problem is a weak limit, when period tends to 0,
of a subsequence of the solution. The latter has no fast time oscillations, so it cannot correctly model the physical solution. In order to overcome this problem, a
method for two-scale model derivation of the periodic homogenization of the wave
equation has been developed in [1]. It allows analyzing the oscillations occurring
on both time and space for low and high frequency waves. Unfortunately, the
boundary conditions of the homogenized model have not been found. Therefore,
establishing the boundary conditions of the homogenized model is critical and is
the main motivation of our work. In this presentation, we use the same method as in [1] for the homogenization of the wave equation in one dimension. A new
result on the asymptotic behavior of waves regarding the boundary conditions has
been obtained and will be presented for the first time. Numerical simulations will
also be provided.
For a bounded open set Ω = (0, 1) and a finite time interval I = [0, T ) ⊂ R+ , we
consider the wave equation with Dirichlet boundary conditions,

ρ(x/ε) ∂²_{tt} u^ε(t, x) − ∂_x( a(x/ε) ∂_x u^ε(t, x) ) = f^ε(t, x) in I × Ω,
u^ε(t = 0, x) = u^ε_0(x) and ∂_t u^ε(t = 0, x) = v^ε_0(x) in Ω,   (1)
u^ε(t, 0) = u^ε(t, 1) = 0 in I,
where ε > 0 denotes a small parameter intended to go to zero. The two functions a^ε = a(x/ε) and ρ^ε = ρ(x/ε) are Lipschitzian, positive, and periodic with respect to a lattice of reference cell εY ⊂ R. We reformulate (1) under the form of a system with unknown the vector of first-order derivatives U^ε := ( √(a^ε) ∂_x u^ε, √(ρ^ε) ∂_t u^ε ). Here we study the asymptotic behavior of U^ε.
For any fixed K ∈ N* and any fiber k ∈ L*_K, with the definition of the set L*_K of fibers introduced in [1], we consider (λ^{±k}_n, e^{±k}_n) the eigenvalues and eigenvectors of the Bloch wave spectral problem with ±k-quasi-periodic boundary conditions, and M^k a set of indices of all Bloch eigenvalues. We denote by Λ = (0, 1) a time unit cell. Starting with the observation that λ^k_n = λ^{−k}_n for all n ∈ M^k and k ∈ L*_K, we apply to U^ε the sum of the modulated two-scale transforms W^ε_k and W^ε_{−k}, defined in [1]. For a given k ≥ 0, W^ε_k U^ε + W^ε_{−k} U^ε converges weakly in L²(I × Λ × Ω × Y) to U^k(t, τ, x, y), which can be decomposed as
U^k(t, τ, x, y) = U_H(t, x, y) + Σ_{n∈M^k} [ U^k_n(t, x) e^{2iπ s_n τ} e^k_n(y) + U^{−k}_n(t, x) e^{−2iπ s_n τ} e^{−k}_n(y) ]   (2)
The term U_H is the low frequency part. The other terms represent the high frequency waves, and U^{±k}_n are solutions of a system of macroscopic equations whose boundary conditions constitute one of the main contributions of this work. We
278
deduce an approximation of the physical solution,
U^ε(t, x) ≈ U_H(t, x, x/ε) + Σ_{n∈M^k} [ U^k_n(t, x) e^{i s_n √(λ^k_{|n|}) t/ε} e^k_n(x/ε) + U^{−k}_n(t, x) e^{i s_n √(λ^k_{|n|}) t/ε} e^{−k}_n(x/ε) ],   (3)
which holds in the strong sense. The figures below represent the numerical results.
References
[1] M. Brassart, M. Lenczner, A two-scale model for the periodic homogenization of the wave equation, J. Math. Pures Appl. 93 (2010) 474 − 517.
[2] S. Brahim-Otsmane, G.A. Francfort, F. Murat, Correctors for the homogenization of the wave and heat equations, J. Math. Pures Appl. 71 (1992)
197 − 231.
Figure 1: At t = 0.665, comparison of the first component of U^ε and the corresponding homogenized solution. Figure 2: The error between the physical solution and the homogenized solution; the maximal error is 0.011.
Joint work with Michel Lenczner, and Matthieu Brassart.
279
Kirill Nikitin
Institute of Numerical Mathematics of Russian Academy of Sciences, RU
A monotone nonlinear finite volume method for diffusion equations and multiphase
flows
Minisymposium Session SDIFF: Monday, 11:40 - 12:10, CO123
We present a new nonlinear monotone finite volume method for diffusion equation
and its application to two-phase black oil model. We consider full anisotropic
discontinuous diffusion or permeability tensors on conformal polyhedral meshes.
The approximation of the diffusive flux uses the nonlinear two-point stencil which
provides the conventional 7-point stencil for the discrete diffusion operator on cubic
meshes. We show that the quality of the discrete flux in a reservoir simulator has
great effect on the front behavior and the water breakthrough time. We compare
two two-point flux approximations (TPFA), the proposed nonlinear TPFA and
the conventional linear TPFA, and multi-point flux approximation (MPFA). The
new nonlinear scheme has a number of important advantages over the traditional
linear discretizations.
Compared to the linear TPFA, the new nonlinear scheme demonstrates low sensitivity to grid distortions and provides an appropriate approximation in the case of a full
anisotropic permeability tensor. For non-orthogonal grids or full anisotropic permeability tensors the conventional linear TPFA provides no approximation, while
the nonlinear flux is still first-order accurate. The computational work for the
new method is higher than the one for the conventional TPFA, yet it is rather
competitive.
Compared to MPFA, the new scheme provides sparser algebraic systems and thus
is less computationally expensive. Moreover, it is monotone, which means that the
discrete solution preserves the non-negativity of the differential solution.
Joint work with K.Terekhov, and Yu.Vassilevski.
280
Caroline Nore
LIMSI-CNRS and University Paris-Sud, FR
Dynamo action in finite cylinders
Minisymposium Session MMHD: Thursday, 14:00 - 14:30, CO017
Using numerical simulations, we investigate two magnetohydrodynamics (MHD)
problems in a cylindrical cavity, namely a precessing cylinder and a short TaylorCouette set-up, both containers being filled with a conducting fluid. We use a
parallel code denoted SFEMaNS (Guermond et al., JCP, 2011) to integrate nonlinear MHD equations for incompressible fluids in heterogeneous domains with axisymmetric interfaces embedded in a vacuum. We numerically demonstrate that
precession is able to drive a dynamo and that a short Taylor-Couette set-up with
a body force can also sustain dynamo action. In the precessing cylinder, the
generated magnetic field is unsteady and quadrupolar (Nore et al., PRE, 2011).
These numerical evidences may be useful for an experiment now planned at the
DRESDYN facility in Germany. In the Taylor-Couette set-up, the nonlinear dynamo state is characterized by fluctuating kinetic and magnetic energies and a
tilted dipole whose axial component exhibits aperiodic reversals during the time
evolution (Nore et al., PoF, 2012). These numerical evidences may be useful for
developing an experimental device.
This work was performed using HPC resources from GENCI-IDRIS (Grant 90254).
Joint work with F. Luddens (LIMSI-CNRS and Univ. Paris-Sud, France), L.
Cappanera (LIMSI-CNRS and TAMU), J. Leorat (Obs. Meudon, France) and
J.-L. Guermond (TAMU, USA).
281
Takeshi Ogita
Tokyo Woman’s Christian University, JP
Backward error bounds on factorizations of symmetric indefinite matrices
Contributed Session CT2.1: Tuesday, 14:30 - 15:00, CO1
In this talk we are concerned with the rounding error analysis on block LDLT
factorizations of symmetric matrices. Let A be a real symmetric matrix. Then A
can be factorized as
P A P^T = L D L^T,
where L is a unit triangular matrix, D is a block diagonal matrix with each block
of order 1 or 2, and P is a permutation matrix according to some pivoting strategy.
It is called a block LDLT factorization with diagonal pivoting, which is known as
a stable numerical algorithm and widely used for solving symmetric and indefinite
linear systems. Moreover, it is also useful for checking the inertia of a symmetric
matrix. In practice floating-point arithmetic is extensively used for these purposes.
Since finite precision numbers are used, rounding errors are involved in computed
results.
For symmetric and positive definite matrices a backward error bound for a Cholesky
factorization has been given, for example, in [Demmel (1989), Higham (2002)].
Moreover, it is modified for sparse cases in [Rump (2006)], which is rigorous
and easy to compute efficiently. For symmetric and indefinite matrices, however,
Cholesky factorization cannot be applied, but LDLT or block LDLT factorization can. In terms of the stability of the algorithms, block LDLT factorization is
preferable.
Let L̃, D̃ and P̃ be floating-point block LDLT factors of A, which are approximations of L, D and P , respectively. Then the backward error ∆ of the floating-point
factorization is defined by
∆ := P̃ A P̃^T − L̃ D̃ L̃^T.   (1)
In some methods of verified numerical computations for the solution of a linear
system with A being a coefficient matrix [Rump (1994), Rump (1995), Rump
(1999)] and for the inertia of A [Yamamoto (2001)], it is mandatory to compute
an upper bound of k∆k, where k · k stands for the spectral norm. A main point of
this research is to derive a method of calculating the backward error bound that
is easy to compute.
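For orientation, the following sketch computes the residual ∆ of (1) for a floating-point block LDL^T factorization (here scipy's symmetric indefinite factorization, which folds the permutation into the returned factor) together with a plain floating-point estimate of its spectral norm; the verified bound of the talk accounts rigorously for all rounding errors, which this sketch does not.

    import numpy as np
    from scipy.linalg import ldl

    # Backward error Delta = P A P^T - L D L^T of a floating-point block
    # LDL^T factorization; scipy returns the permutation already applied to
    # the factor, so the residual is simply A - L D L^T here.
    rng = np.random.default_rng(3)
    n = 200
    A = rng.standard_normal((n, n))
    A = A + A.T                                   # symmetric, generally indefinite

    L, D, perm = ldl(A, lower=True)
    Delta = A - L @ D @ L.T

    # Floating-point estimate of the spectral norm of Delta (absolute and
    # relative to the norm of A).
    print(np.linalg.norm(Delta, 2), np.linalg.norm(Delta, 2) / np.linalg.norm(A, 2))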
There are several methods for block LDLT factorizations of symmetric matrices
with different pivoting strategies such as Bunch–Parlett (1971), Bunch–Kaufman
(1977) and so forth. There are also useful implementations, e.g. [Duff (2002),
Duff–Reid (1982)]. In addition, rounding error analyses are presented in [Fang
(2011), Higham (1997), Slapničar (1998)]. See [Fang (2011)] for details. From
a qualitative standpoint some rough estimations suffice to show the backward
stability of the algorithms. From the viewpoint of verified numerical computations,
however, rigorous and computable estimations are necessary, especially precise
estimations are preferable.
In order to obtain an upper bound of |∆| by backward error analysis it is necessary
to derive a backward error bound for 2 × 2 linear systems during a block LDLT
factorization. We present two ways for this purpose: one is to use a classical
rounding error analysis as in [Higham (2002)]. The other is to apply a direct
rounding error analysis. The latter gives much sharper bounds than the former.
282
Numerical results are also presented with some applications.
Joint work with Kenta Kobayashi.
283
Mario Ohlberger
University of Muenster, DE
Model reduction for nonlinear parametrized evolution problems
Minisymposium Session UQPD: Thursday, 11:00 - 11:30, CO1
In this contribution we present and discuss recent development of the reduced
basis method [8] in the context of model reduction for nonlinear parametrized
evolution problems. Our approach is based on the POD-Greedy algorithm, first
introduced in the linear setting in [6] and then extended to the nonlinear setting
in [7, 2]. The model reduction in nonlinear scenarios is based on empirical interpolation of nonlinear differential operators and their Frechet derivatives. As a
result, the POD-Greedy algorithm is generalized to the PODEI-Greedy algorithm
that simultaneously constructs the reduced and the collateral reduced basis space
employing the empirical interpolation of nonlinear differential operators. Efficient
online/offline decomposition is obtained for discrete operators that satisfy an H-independent DOF dependence for a certain set of interpolation functionals, where
H denotes the dimension of the underlying high dimensional discretization space.
The resulting reduced basis method is applied to nonlinear parabolic and hyperbolic equations based on explicit or implicit finite volume discretizations, as well
as to mixed elliptic-parabolic systems modeling two phase flow in porous media
[3]. We show that the resulting reduced scheme is able to capture the evolution
of both smooth and discontinuous solutions. In case of symmetries of the problem, the approach realizes an automatic and intuitive space-compression or even
space-dimensionality reduction. We perform empirical investigations of the error
convergence and runtimes. In all cases we obtain a runtime acceleration of at least
one order of magnitude. To speed up offline and/or online runtimes, adaptive
basis enrichment strategies and multiple bases generation approaches [5] can be
combined with the PODEI-Greedy approach.
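A stripped-down sketch of the POD-Greedy loop is shown below for a hypothetical linear toy problem; the true projection error replaces the a posteriori error estimator, and the empirical operator interpolation (PODEI) part is omitted, so this only illustrates the basis enrichment mechanism.

    import numpy as np

    def pod_greedy(solve_full, params, n_max):
        # POD-Greedy basis generation: pick the training parameter with the
        # worst trajectory error, compute its full trajectory, and enrich the
        # basis with the dominant POD mode of the projection error.
        n_dofs = solve_full(params[0]).shape[0]
        basis = np.zeros((n_dofs, 0))
        for _ in range(n_max):
            errs = []
            for mu in params:
                U = solve_full(mu)                       # snapshot trajectory (space x time)
                errs.append(np.linalg.norm(U - basis @ (basis.T @ U)))
            U = solve_full(params[int(np.argmax(errs))])
            E = U - basis @ (basis.T @ U)                # projection error of worst trajectory
            mode = np.linalg.svd(E, full_matrices=False)[0][:, 0]
            basis, _ = np.linalg.qr(np.column_stack([basis, mode]))
        return basis

    # Hypothetical parametrized evolution: two decaying Fourier modes
    x, times = np.linspace(0, 1, 100), np.linspace(0, 1, 50)
    solve_full = lambda mu: np.array(
        [np.exp(-mu * t) * np.sin(np.pi * x) + 0.3 * np.exp(-4 * mu * t) * np.sin(2 * np.pi * x)
         for t in times]).T
    print(pod_greedy(solve_full, np.linspace(0.5, 5.0, 20), n_max=3).shape)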
In [4] it has been shown that the POD-Greedy method is optimal in the sense that
exponential or algebraic convergence rates of the Kolmogorov n-width are maintained by the algorithm. Although this is a very nice result for situations with fast decay rates of the Kolmogorov n-width, it also shows limitations of the model
reduction approach, in particular in nonlinear hyperbolic scenarios with evolving
discontinuities. As a first attempt to address such classes of problems we will also
present a new model reduction approach that is based on a combination of the
PODEI-Greedy approach with the method of freezing that was originally introduced to study relative equilibria of evolution problems [9, 1]. Given the action
of a Lie group on the solution space, in this approach the original problem is reformulated as a partial differential algebraic equation system by decomposing the
solution into a group component and a spatial shape component and imposing
appropriate algebraic constraints on the decomposition. The system is then projected onto a reduced basis space. We show that efficient online evaluation of the
scheme is possible and study a numerical example showing its strongly improved
performance in comparison to a scheme without freezing.
References
[1] W.-J. Beyn, V. Thümmler. Freezing Solutions of Equivariant Evolution Equations. SIAM J. Appl. Dyn. Syst. 3:85–116, 2004.
284
[2] M. Drohmann, B. Haasdonk, and M. Ohlberger. Reduced basis approximation
for nonlinear parametrized evolution equations based on empirical operator
interpolation. SIAM J. Sci. Comput., 34:A937-A969, 2012.
[3] M. Drohmann, B. Haasdonk, and M. Ohlberger. Reduced basis model
reduction of parametrized two-phase flow in porous media. In: Proceedings
of the 7th Vienna International Conference on Mathematical Modelling
(MathMod), Vienna, 2012.
[4] B. Haasdonk. Convergence rates of the pod-greedy method. M2AN Math.
Model. Numer. Anal., doi:10.1051/m2an/2012045, 2013.
[5] B. Haasdonk, M. Dihlmann, and M. Ohlberger. A training set and multiple bases generation approach for parametrized model reduction based
on adaptive grids in parameter space. Math. Comput. Model. Dyn. Syst.,
17(4):423–442, 2011.
[6] B. Haasdonk, and M. Ohlberger. Reduced basis method for finite volume
approximations of parametrized evolution equations. M2AN Math. Model.
Numer. Anal., 42(2):277-302, 2008.
[7] B. Haasdonk, M. Ohlberger, and G. Rozza. A reduced basis method for evolution schemes with parameter-dependent explicit operators. Electronic Transactions on Numerical Analysis, 32: 145–161, 2008.
[8] A.T. Patera, and G. Rozza. Reduced Basis Approximation and a Posteriori
Error Estimation for Parametrized Partial Differential Equations. MIT, 2007.
Version 1.0, Copyright MIT 2006-2007, to appear in (tentative rubric) MIT
Pappalardo Graduate Monographs in Mechanical Engineering.
[9] C. W. Rowley, I. G. Kevrekidis, J. E. Marsden, K. Lust. Reduction and
reconstruction for self-similar dynamical systems. Nonlinearity 16:1257–1275,
2003.
Joint work with Martin Drohmann, Bernard Haasdonk, and Stephan Rave.
285
Rikard Ojala
Dept. of Numerical Analysis, KTH, Stockholm, SE
Accurate bubble and drop simulations in 2D Stokes flow
Contributed Session CT2.3: Tuesday, 14:30 - 15:00, CO3
This talk will be on moving interfaces and free boundaries in two dimensional
Stokes flow, where the flow is due to surface tension. For such flows in the Stokesian regime, with small Reynolds numbers, the resulting linear governing equations can be recast as an integral equation. This is a well-known and widely used
fact. What is frequently overlooked, however, is how to deal with interfaces that
are close to each other. In this case, the integral kernels are near-singular, and
standard quadrature approaches do not give accurate results. Phenomena such as
lubrication are then not captured correctly. We will discuss how to apply a general
special quadrature approach to resolve this problem. The result is a robust and accurate solver capable of handling a wide range of bubble and drop configurations.
An example of a fairly complex drop setup that can be treated is displayed below.
Joint work with Anna-Karin Tornberg.
286
Maxim Olshanskii
University of Houston, US
An adaptive finite element method for PDEs based on surfaces
Minisymposium Session ADFE: Wednesday, 11:30 - 12:00, CO016
An adaptive finite element method for numerical treatment of elliptic partial differential equations defined on surfaces is discussed. The method makes use of a
standard outer volume mesh to discretize an equation on a two-dimensional surface
embedded in R^3. The reliability of a residual type a posteriori error estimator is
proved and both reliability and efficiency of the estimator are studied numerically
in a series of experiments. A simple adaptive refinement strategy based on the
error estimator is demonstrated to provide optimal convergence rate in the H^1 and L^2 norms.
287
Maxim Olshanskii
University of Houston, US
Preconditioners for the linearized Navier-Stokes equations based on the augmented
Lagrangian
Minisymposium Session PSPP: Thursday, 14:00 - 14:30, CO3
We discuss block preconditioners based on the augmented Lagrangian formulation
of the algebraic equations of the linearized Navier-Stokes equations. We consider
incompressible fluids, and the resulting algebraic problems are of generalized saddle
point type. The talk reviews variants of augmented Lagrangian preconditioner
based on different forms of augmentation and certain simplifications to make the
approach computationally efficient. The preconditioned systems admit eigenvalue
and field-of-value analysis. We include numerical results for several fluid problems.
The talk reports on a joint work with Michele Benzi (Emory).
288
Christoph Ortner
University of Warwick, GB
Optimising Multiscale Defect Simulations
Minisymposium Session MSMA: Monday, 11:40 - 12:10, CO3
A universal quality measure for any numerical approximation scheme is its accuracy relative to its computational cost. This point of view seems to have gone
largely unnoticed in the analysis of atomistic-continuum multiscale simulations,
but it guarantees an unbiased approach to the construction and evaluation of
computational schemes. In this talk, I will focus on atomistic-to-continuum (quasicontinuum) methods for lattice defects, and some related schemes. I will first
review how the framework of numerical analysis leads to error estimates (accuracy)
in terms of the various approximation parameters such as domain size, atomistic
region size, finite element mesh, or interface correction. I will then discuss how
these estimates can be recast as error estimates in terms of computational cost.
Finally, this can be used to optimise the various approximation parameters. Interesting comparisons are, e.g., between the choices of coupling mechanisms or the
usage of nonlinear versus linear elasticity.
Joint work with Virginie Ehrlacher, Helen Li, Mitch Luskin, Alex Shapeev, and
Brian Van Koten.
289
Abderrahim Ouazzi
wiss.Ang., DE
Newton-Multigrid Least-Squares FEM for V-V-P and S-V-P Formulations of the
Navier-Stokes Equations
Contributed Session CT2.9: Tuesday, 14:30 - 15:00, CO124
Least-squares finite element methods are motivated, among other things, by the fact that, in contrast to standard mixed finite element methods, the choice of the finite element spaces is not subject to the LBB stability condition and the corresponding discrete linear system is symmetric and positive definite. We intend to benefit from these two attractive features: on the one hand, to use different types of elements representing the physics, for instance the capillary forces and mass conservation, and, on the other hand, to demonstrate the flexibility of geometric multigrid methods in handling the resulting linear systems efficiently. We numerically solve
the V-V-P, Vorticity-Velocity-Pressure, and S-V-P, Stress-Velocity-Pressure, formulations of the incompressible Navier-Stokes equations based on the least squares
principles using different types of finite elements, conforming, nonconforming and
discontinuous of low as well as high order. For the discrete systems, we use a conjugate gradient (CG) solver accelerated with a geometric multigrid preconditioner.
In addition, we employ a Krylov space smoother which allows a parameter-free
smoothing. Combining this linear solver with the Newton linearization results in
a robust and efficient solver. We analyze the application of this general approach,
of using different types of finite elements, and the efficient solver, geometric multigrid, for several prototypical benchmark configurations (driven cavity, flow around
obstacles), and we investigate the effects of pressure jumps for the capillary force
in multiphase flow simulations (static bubble configuration).
Key words: Least-squares FEM, geometric multigrid, first-order system least squares, capillary force, mass conservation, Navier-Stokes equations.
Joint work with M. Sc. Masoud Nickaeen, and Prof. Dr. Stefan Turek.
290
Katsuhisa Ozaki
Shibaura Institute of Technology, JP
Fast Interval Matrix Multiplication by Blockwise Computations
Contributed Session CT2.1: Tuesday, 15:00 - 15:30, CO1
Keywords: interval arithmetic, enclosure methods, a priori error analysis
Interval arithmetic [1] is widely applied in so-called verified numerical computations, which assess the reliability of approximate results obtained by numerical computation.
This talk is concerned with interval matrix multiplication. Let F be the set of floating-point numbers and let IF be the set of midpoint-radius intervals

    ⟨c, r⟩ = {x ∈ R | c − r ≤ x ≤ c + r, r ≥ 0, c, r ∈ F},

where c and r are the center and radius of the interval, respectively. For interval matrices A ∈ IF^{m×n} and B ∈ IF^{n×p}, an enclosure of the interval matrix product can be obtained by

    ⟨A_m, A_r⟩ ∗ ⟨B_m, B_r⟩ ⊆ ⟨A_m ∗ B_m, |A_m| B_r + A_r (|B_m| + B_r)⟩,    (1)
where | · | returns the matrix obtained by taking absolute values elementwise. For the study of interval matrix multiplication according to (1), algorithms are characterized by
the number of matrix products:
• 4 matrix products: Rump [2]
• 3 matrix products: Rump [4] and Ozaki-Ogita-Oishi-Rump [5]
• 2 matrix products: Ogita-Oishi [3], Rump [4] and Ozaki-Ogita-Oishi-Rump
[5]
• 1 matrix product: Ozaki-Ogita-Oishi-Rump [5]
Basically, there is a tradeoff between the number of matrix products and tightness
of computed intervals.
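A minimal NumPy sketch of the enclosure (1) is given below; it is illustrative only, since a rigorous routine must in addition control the rounding mode (or add a bound for the rounding error of each floating-point product), which is exactly what the cited algorithms do while also reducing the number of products.

    import numpy as np

    def interval_matmul(Am, Ar, Bm, Br):
        """Midpoint-radius enclosure of an interval matrix product, cf. (1).

        Illustrative sketch: directed rounding / floating-point error bounds
        are deliberately omitted here."""
        Cm = Am @ Bm                                    # midpoint part
        Cr = np.abs(Am) @ Br + Ar @ (np.abs(Bm) + Br)   # radius part
        return Cm, Cr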
In this talk, we show how to improve the tightness of the intervals without slowing down the computation. In the fastest method in [5], the following a priori error analysis for a floating-point result AB ≈ C ∈ F^{m×p} is used:

    |C − AB| ≤ γ_n |A||B|,    γ_n = nu / (1 − nu),    (2)

where u is the unit roundoff; in particular, u = 2^{−53} for binary64 in the IEEE 754
standard. If the matrix multiplication is computed blockwise, it is known from [6] that the bound (2) can be significantly improved. Therefore, we first implement a blockwise matrix multiplication suited to the a priori error analysis. Our simple implementation of block matrix multiplication does not slow the performance down compared to optimized BLAS (Basic Linear Algebra Subprograms). In addition, numerical results illustrate that the tightness of the intervals can be significantly improved.
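As simple arithmetic (not from the abstract) illustrating why the dimension entering the bound (2) matters: γ_n grows essentially linearly in n, so any blockwise scheme for which a smaller effective dimension enters the analysis (see [6]) yields proportionally smaller interval radii.

    def gamma(n, u=2.0 ** -53):
        """Constant gamma_n = n*u / (1 - n*u) from the a priori bound (2)."""
        return n * u / (1.0 - n * u)

    for n in (10 ** 3, 10 ** 4, 10 ** 6):
        print(n, gamma(n))   # grows roughly like n*u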
References:
[1] A. Neumaier, Interval Methods for Systems of Equations, Cambridge University Press, 1990.
291
[2] S. M. Rump, Fast and parallel interval arithmetic, BIT Numerical Mathematics, 39(3):539-560, 1999.
[3] T. Ogita, S. Oishi, Fast Inclusion of Interval Matrix Multiplication, Reliable Computing, 11(3):191-205, 2005.
[4] S. M. Rump, Fast Interval Matrix Multiplication, Numerical Algorithms, 61(1):1-34, 2012.
[5] K. Ozaki, T. Ogita, S. M. Rump, and S. Oishi, Fast algorithms for floating-point interval matrix multiplication, Journal of Computational and Applied Mathematics, 236(7):1795-1814, 2012.
[6] N. J. Higham, Accuracy and Stability of Numerical Algorithms, second edition, SIAM Publications, Philadelphia, 2002.
Joint work with Takeshi Ogita.
292
Jan Papez
Faculty of Mathematics and Physics, Charles University in Prague, CZ
Distribution of the algebraic, discretization and total errors in numerical PDE
model problems
Contributed Session CT1.5: Monday, 17:30 - 18:00, CO016
The finite element method (FEM) is widely used in numerical solution of partial
differential equations. This method generates an approximate solution in the form of
a linear combination of basis functions with local supports. Each basis function
(multiplied by the proper coefficient) thus approximates the desired solution only
locally. The global approximation property of the FEM discrete solution is then
ensured by solving a linear algebraic system for the unknown coefficients. If this
system is solved exactly, then the FEM discrete solution is obtained and its difference from the true solution is given by the discretization error. In practice, however, we do not solve exactly. For hard problems we may not even want to aim at a small algebraic error, as it might be too costly or even impossible to attain. One should
therefore take into consideration also the error caused by the inexact algebraic
computation. In particular, such consideration should include the spatial distribution of the algebraic error in the domain. There is no a priori evidence that this
distribution is analogous to the distribution of the discretization error. On the
contrary, as demonstrated in [1, Section 5.1] and [2], the spatial distribution of the
algebraic error can significantly differ from the distribution of the discretization
error. The results presented there for the FEM discretization of the simplest Poisson boundary value problem demonstrate that the algebraic error can have large
local components which can dominate the total error in parts of the domain.
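In commonly used notation (assumed here), with u the exact solution, u_h the exact FEM solution and u_h^{(n)} the computed approximation after n steps of an algebraic solver, the total error splits as

    u − u_h^{(n)} = (u − u_h) + (u_h − u_h^{(n)}),

i.e. discretization error plus algebraic error; the point made above is that the two terms may have very different spatial distributions.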
In this contribution we further elaborate on results from [1, Section 5.1] and [2].
Using various iterative and direct algebraic solvers, we compare spatial distribution
of the algebraic and discretization errors in the numerical solution of several boundary value problems from the literature.
Acknowledgment: This work was supported by the ERC-CZ project LL1202 and
by the GAUK grant 695612.
References
[1] J. Liesen and Z. Strakoš. Krylov subspace methods: principles and analysis.
Numerical Mathematics and Scientific Computation. Oxford University Press,
Oxford, 2012.
[2] J. Papež, J. Liesen, and Z. Strakoš. Distribution of the discretization and
algebraic error in numerical solution of partial differential equations. Preprint
MORE/2012/03, submitted for publication, 2013.
Joint work with Zdenek Strakos.
293
Luca Pavarino
University of Milan, IT
Isogeometric Schwarz preconditioners for mixed elasticity and Stokes systems
Minisymposium Session PSPP: Thursday, 12:00 - 12:30, CO3
Overlapping Schwarz preconditioners for the isogeometric mixed formulation of
almost incompressible linear elasticity and Stokes systems are here presented and
studied. The preconditioner is based on partitioning the domain of the problem
into overlapping subdomains, solving local isogeometric mixed problems on these
subdomains and solving an additional coarse isogeometric mixed problem associated with the subdomain mesh. Numerical results in 2D and 3D tests show that
this preconditioner is scalable in the number of subdomains and optimal in the
ratio between subdomain and overlap sizes. The numerical tests also show a good
convergence rate with respect to the polynomial degree p and regularity k of the
isogeometric basis functions, as well as with respect to the presence of discontinuous elastic coefficients in composite materials and to domain deformation.
References: L. Beirao da Veiga, D. Cho, L. F. Pavarino, S. Scacchi. Overlapping Schwarz methods for Isogeometric Analysis. SIAM J. Numer. Anal., 50(3):1394-1416, 2012.
L. Beirao da Veiga, D. Cho, L. F. Pavarino, S. Scacchi. Isogeometric Schwarz preconditioners for linear elasticity systems. Comput. Meth. Appl. Mech. Engrg.,
253: 439-454, 2013.
Joint work with L. Beirao da Veiga, D. Cho, L. F. Pavarino, and S. Scacchi.
294
Bengisen Pekmen
Atilim University, TR
Steady Mixed Convection in a Heated Lid-Driven Square Cavity Filled with a Fluid-Saturated Porous Medium
Contributed Session CT1.3: Monday, 17:00 - 17:30, CO3
Steady mixed convection flow in a lid-driven porous square cavity is studied numerically using the dual reciprocity boundary element method (DRBEM).
Two-dimensional, steady, laminar flow of an incompressible fluid is considered in a
homogeneous, isotropic porous medium. Viscosity, thermal conductivity, specific heat, thermal expansion coefficient, and permeability are assumed to be constant (except for the density variation in the buoyancy term), with a body force term in the momentum equations according to the Boussinesq approximation.
The governing non-dimensional equations in terms of the stream function ψ, temperature T and vorticity w are

    ∇²ψ = −w,                                                             (1)
    (1/Re) ∇²w = u ∂w/∂x + v ∂w/∂y − (Gr/Re²) ∂T/∂x + (1/(Da Re)) w,      (2)
    (1/(Pr Re)) ∇²T = u ∂T/∂x + v ∂T/∂y,                                  (3)

where u = ∂ψ/∂y, v = −∂ψ/∂x, w = ∂v/∂x − ∂u/∂y, and Re, Gr, Da, Pr are the Reynolds, Grashof, Darcy and Prandtl numbers, respectively.
Left and right lids of the cavity move with a velocity v = 1 while u = ψ = 0, and
u = v = ψ = 0 on the other walls. The left wall is the cold wall Tc = 0 and the
right wall is the hot wall Th = 1. Adiabatic condition (∂T /∂n = 0) is imposed on
the top and bottom walls.
Applying the DRBEM with linear boundary elements to the non-dimensional governing equations (1)-(3), the following matrix-vector equations are obtained
    H ψ^{m+1} − G ψ_q^{m+1} = −S w^m,                                                   (4)
    u^{m+1} = (∂F/∂y) F^{−1} ψ^{m+1},    v^{m+1} = −(∂F/∂x) F^{−1} ψ^{m+1},             (5)
    (H − Re S M − (1/Da) S) w^{m+1} − G w_q^{m+1} = −(Gr/Re) (∂F/∂x) F^{−1} T^{m+1},    (6)
    (H − Pr Re S M) T^{m+1} − G T_q^{m+1} = 0,                                          (7)

where S = (H Û − G Q̂) F^{−1} and M = (∂F/∂x) [u]_d^{m+1} F^{−1} + (∂F/∂y) [v]_d^{m+1} F^{−1}. H and G are BEM matrices containing integrals of the fundamental solution u* = ln r/(2π) and
its normal derivative, respectively. F is the coordinate matrix formed from the
radial basis functions approximating the inhomogeneities of the equations (1)-(3).
Û and Q̂ matrices are built from particular solutions and their derivatives.
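Read structurally, the superscript m in (4)-(7) indicates an iteration between the coupled systems. A schematic sketch of that loop, with the assembled boundary-element solves wrapped in assumed callables (no assembly shown), might look as follows:

    import numpy as np

    def drbem_iterate(solve_psi, compute_velocity, solve_w, solve_T,
                      w0, T0, tol=1e-6, max_iter=500):
        """Fixed-point iteration over the coupled systems (4)-(7); the callables
        are assumed to wrap the assembled DRBEM matrices (H, G, F, S, M)."""
        w, T = w0, T0
        psi = u = v = None
        for m in range(max_iter):
            psi = solve_psi(w)            # eq. (4)
            u, v = compute_velocity(psi)  # eq. (5)
            w_new = solve_w(u, v, T)      # eq. (6)
            T_new = solve_T(u, v)         # eq. (7)
            if max(np.max(np.abs(w_new - w)), np.max(np.abs(T_new - T))) < tol:
                return psi, u, v, w_new, T_new
            w, T = w_new, T_new
        return psi, u, v, w, T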
In the computations, the radial basis function f = 1 + r is used (where r is the distance between source and field points), N = 96 and L = 625 are taken, Pr = 0.71 is fixed, and the porosity of the medium is assumed to be unity.
In Figure 1, columnwise contours are streamlines, isotherms and vorticity, respectively. A decrease in Da is shown as Da = 0.01 and Da = 0.001 from top to
295
bottom. Fluid flows more slowly as Da decreases since the porosity of the medium increases (ψmax = 0.0405 for Da = 0.01, and ψmax = 0.0099 for Da = 0.001), and the heat transfer passes to the conductive mode, as can be seen from the isotherms. The effect of the moving lids diminishes. Vorticity becomes stagnant at the center, forming strong boundary layers along the vertical walls. Apart from this, the increase in the buoyancy effect with increasing Gr causes the heat transfer to become conduction dominant.
Owing to its boundary-only nature, the DRBEM gives very good accuracy at small computational expense for solving mixed convection flow.
Figure 1: Gr = 1000, Re = 100
Joint work with Münevver Tezer-Sezgin.
296
Juan Manuel Pena
Universidad de Zaragoza, ES
Accurate computations for some classes of matrices
Contributed Session CT2.1: Tuesday, 15:30 - 16:00, CO1
A square matrix is called a P-matrix if all its principal minors are positive. Subclasses of P-matrices very important in applications are the nonsingular totally
nonnegative matrices and the nonsingular M-matrices. For diagonally dominant
M-matrices and some subclasses of nonsingular totally nonnegative matrices, accurate methods for computing their singular values, eigenvalues or inverses have been
obtained, assuming that adequate natural parameters are provided. We present
some recent extensions of these methods to other related classes of matrices.
297
Simona Perotto
MOX, Dipartimento di Matematica, Politecnico di Milano, IT
Recent developments of Hierarchical Model (HiMod) reduction for boundary value
problems
Minisymposium Session SMAP: Monday, 12:40 - 13:10, CO015
The construction of surrogate models is a crucial step for bringing computational
tools to practical applications within the appropriate timeline. This can be accomplished by taking advantage of specific features of the problem at hand. For
instance, when solving flow problems in networks (in the modeling of blood, oil,
water, or air dynamics), the local dynamics is expected to develop mainly along
the edges of the network. The interaction between the local and network dynamics calls often for appropriate model reduction techniques. A possible approach
consists of introducing a modal discretization for computing the edge-transversal
dynamics and classical discretization methods (such as finite elements) for the
prevalent (mainstream) dynamics. The former is anticipated to be computed with
an acceptable accuracy by just a few modes. This approach is called Hierarchical Model (HiMod) reduction, since it can be regarded as a way of generating a hierarchy of one-dimensional models, locally improved, for the leading dynamics. More specifically, the number of employed modes determines the improvement of the
model. In an adaptive framework, the number of modes for the transverse solution is automatically detected by the solver, on the basis of a suitable a posteriori
estimator.
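Schematically, in generic HiMod notation (assumed here), the reduced solution takes the form

    u(x, y) ≈ Σ_{k=1}^{m} ũ_k(x) φ_k(y),

with modal functions φ_k resolving the transverse dynamics and one-dimensional finite element coefficients ũ_k along the mainstream; the adaptive procedure mentioned above selects the number of modes m, possibly locally, from the a posteriori estimator.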
In this talk, we present recent developments of this approach towards the effective
solution of real problems.
Joint work with Alessandro Veneziani, Department of Mathematics and Computer Science, Emory University, Atlanta, GA, USA.
298
Ilaria Perugia
University of Pavia, IT
Trefftz-discontinuous Galerkin methods for time-harmonic wave problems
Plenary Session: Wednesday, 09:10 - 10:00, Rolex Learning Center Auditorium
Several finite element methods used in the numerical discretization of wave problems in frequency domain are based on incorporating a priori knowledge about
the differential equation into the local approximation spaces. This can be done
by using Trefftz-type basis functions, namely functions which belong to the kernel
of the considered differential operator (e.g., plane, circular/spherical and angular waves). The resulting methods feature enhanced convergence properties with
respect to standard polynomial finite elements. Prominent examples of such methods are the ultra weak variational formulation (UWVF) by Cessenat and Després,
the partition of unity finite element method (PUFEM) by Babuška and Melenk,
the discontinuous enrichment method (DEM/DGM) by Farhat and co-workers,
the variational theory of complex rays (VTCR) by Ladevèze, and the wave based
method (WBM) by Desmet.
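A standard example of such Trefftz basis functions (recalled here only for illustration) are plane waves

    φ_j(x) = exp(i k d_j · x),    |d_j| = 1,    so that    −∆φ_j − k²φ_j = 0,

i.e. each basis function solves the homogeneous Helmholtz equation exactly.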
In this talk, we focus on a family of Trefftz-discontinuous Galerkin (TDG) methods,
which includes the UWVF as a special case. For the Helmholtz problem
−∆u − k²u = 0
in a bounded domain with connected boundary and impedance boundary condition, TDG methods are proved to be unconditionally well-posed and quasi-optimal in a mesh-dependent energy-type norm, i.e., well-posedness and quasi-optimality
hold for any value of the wave number and of the mesh size. High-order convergence is obtained by using new approximation estimates for plane and spherical
waves. These methods and their analysis framework can be generalized to the
time-harmonic Maxwell equation and to the Navier equation.
By duality arguments, L2-norm error estimates can be obtained for both the h- and the p-version of the TDG methods, with a (more or less) standard choice of DG numerical flux parameters in the former case, and with constant flux parameters
(like in the UWVF) in the latter case. On the other hand, for scattering problems
with complicated geometries, an hp-approach is advisable. In this case, a special
choice of the numerical flux parameters has been devised, which allows one to prove a
priori error estimates on locally refined meshes, with explicit dependence on the
local mesh size, local number of degrees of freedom and local regularity of the
analytical solution. Establishing the exponential convergence in the number of
degrees of freedom of a full hp-version of the TDG method would complete the
picture. Preliminary results in this direction will be presented.
299
Steffen Peter
Technische Universität München, DE
Damping Noise-Folding and Enhanced Support Recovery in Compressed Sensing
Minisymposium Session ACDA: Monday, 15:30 - 16:00, CO122
In practice, compressive sensing suffers significantly in terms of the efficiency/accuracy trade-off when the acquired signals are already noisy prior to measurement. It is rather common to find results treating only the noise affecting the measurements, thereby avoiding the so-called noise-folding phenomenon, which is related to the noise in the signal itself, possibly amplified by the measurement procedure. In this talk we present a
new decoding procedure, combining ℓ1-minimization with a subsequent selective least p-powers step, which not only is able to reduce this component of the original noise, but also has enhanced properties in terms of support identification with respect to plain ℓ1-minimization. We prove these features, providing relatively simple and precise theoretical guarantees. We additionally confirm and support the theoretical estimates by extensive numerical simulations, which provide statistics on the robustness of the new decoding procedure with respect to more classical ℓ1-minimization.
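For orientation, in standard compressed sensing notation (assumed here, not taken from the abstract), measurement noise corresponds to y = Ax + e, whereas noise on the signal itself leads to

    y = A(x + n) = Ax + An,

so the signal noise n is folded through the measurement matrix A, which typically amplifies its effect on the reconstruction in proportion to the undersampling.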
Joint work with Marco Artina, and Massimo Fornasier.
300
Johannes Pfefferer
Universität der Bundeswehr München, DE
On properties of discretized optimal control problems with semilinear elliptic equations and pointwise state constraints
Minisymposium Session FEPD: Monday, 14:30 - 15:00, CO017
This talk is concerned with the analysis of finite element discretized optimal control
problems governed by a semilinear elliptic state equation and subject to pointwise
state constraints. In this context two issues mainly arise: the convergence of the
discrete locally optimal controls to the related continuous ones and the convergence of the solution algorithm such as the SQP method. Imposing second-order
sufficient conditions (SSC) for the continuous problem allows us to prove a rate of
convergence of the discrete local solutions to the related continuous ones. Moreover, we elucidate that the SSC postulated for continuous locally optimal solutions
transfer to the discrete level. This contributes to the second issue since for instance
the proof of convergence of the SQP method relies on SSC.
Joint work with Ira Neitzel, and Arnd Rösch.
301
Marco Picasso
EPFL-MATHICSE, CH
Numerical simulation of extrusion with viscoelastic flows
Minisymposium Session MANT: Wednesday, 12:00 - 12:30, CO017
Numerical simulation of extrusion is important for the food processing industry,
pasta, chocolate, cereals, for instance. Extrusion is difficult to simulate since free
surfaces with complex shapes are involved. Using the numerical method proposed
in Bonito Picasso Laso, J. Comp. Phys. 2006, numerical experiments are reported
for several extrusion geometries and several viscoelastic fluids.
Joint work with Alexandre Caboussat, Alexandre Masserey, and Gilles Steiner.
302
Konstantin Pieper
Technische Universität München, DE
Finite element error analysis for optimal control problems with sparsity functional
Minisymposium Session FEPD: Monday, 11:40 - 12:10, CO017
We consider an elliptic optimal control problem with a sparsity functional, where
the control variable is searched for in the space of regular Borel measures.
    Minimize   (1/2) ‖u − ud‖²_{L²(Ωo)} + α ‖q‖_{M(Ωc)},    q ∈ M(Ωc),
    subject to A(u) = q in Ω.
Under suitable conditions on Ωo and Ωc the optimal solutions have highly sparse
structure, which suggests applications for the optimal placement of actuators.
For practical computations we discretize the elliptic equation with finite elements,
where the control is approximated by a sum of nodal Dirac delta functions. Using this discretization concept introduced by Casas, Clason and Kunisch, we are
able to obtain improved rates of convergence in the case of two and three spatial
dimensions. The new results agree with the generic regularity of the solutions as
well as with the numerical observations. In the case Ωc ⊂ Ωo additional regularity
for the optimal controls can be obtained by careful inspection of the optimality
system, which results in improved convergence estimates.
We also develop an a posteriori error estimator for this problem. To this end, an additional regularized problem is introduced, where a Tikhonov regularization term
is added to the objective functional. For the regularized problem the discretization error can then be estimated with a “dual weighted residual” type estimator
to provide indicators for local mesh refinement. The error introduced by the regularization is estimated with an asymptotic model. This error can be controlled
by the regularization parameter, which is chosen within an adaptive algorithm to
balance both contributions of the error.
Joint work with Boris Vexler.
303
Petra Pořízková
Czech Technical University in Prague, CZ
Compressible and incompressible unsteady flows in convergent channel
Contributed Session CT4.3: Friday, 08:50 - 09:20, CO3
This study deals with the numerical solution of a 2D unsteady flow of a viscous fluid
in a channel for low inlet airflow velocity. The unsteadiness of the flow is caused by
a prescribed periodic motion of a part of the channel wall with large amplitudes,
nearly closing the channel during oscillations. The channel is a simplified model
of the glottal space in the human vocal tract and the flow can represent a model
of airflow coming from the trachea, through the glottal region with periodically
vibrating vocal folds to the human vocal tract.
The goal of this work is the numerical simulation of flow in the channel, involving attributes of real flow that cause acoustic perturbations, such as the "Coandă phenomenon" (the tendency of a fluid jet to be attracted to a nearby surface), vortex convection and diffusion, jet flapping, etc., while keeping the computational cost low in view of the extension to 3D channel flow. Particular attention is paid to the acoustic analysis
of pressure signal from the channel.
Four governing systems are considered to describe the unsteady laminar flow of a viscous fluid in a channel:
1. Full system - 2D system of Navier-Stokes equations closed with the static pressure expression for an ideal gas, p = f(ρ, u, v, e); describes the flow of a compressible viscous fluid, 5 equations.
2. Iso-energetic system - 2D system of Navier-Stokes equations closed with a pressure expression independent of the total energy variable e, p = f(ρ, u, v); describes the flow of a compressible viscous fluid, 4 equations.
3. Adiabatic system - 2D system of Navier-Stokes equations closed with a pressure expression independent of the variables e, u, v; describes the flow of a compressible viscous fluid, 4 equations.
4. Incompressible system - 2D system of Navier-Stokes equations with density ρ = const; describes steady-state flow of an incompressible viscous fluid, 3 equations. The solution is computed using the Artificial Compressibility Method.
The numerical solution is implemented using the finite volume method (FVM) and
the predictor-corrector MacCormack scheme with Jameson artificial viscosity using
a grid of quadrilateral cells. The unsteady grid of quadrilateral cells is considered
in the form of conservation laws using Arbitrary Lagrangian-Eulerian method.
The numerical simulations of flow fields in the channel, acquired from a developed
program, are presented for inlet velocity û∞ = 4.12 m s⁻¹, Reynolds number Re∞ = 4481, and wall motion frequency 100 Hz.
Joint work with Karel Kozel, and Jaromir Horacek.
304
Stefan Possanner
Université Paul Sabatier, FR
Numerical integration of the MHD equations on the resistive timescale
Minisymposium Session ASHO: Tuesday, 11:30 - 12:00, CO123
The two-dimensional magneto-hydrodynamic (MHD) equations constitute a relatively simple, low-cost model for describing the interplay between plasma motion and magnetic field dynamics in laboratory (Tokamaks) and in astrophysical
plasmas. In these situations, the resistivity is usually small. Hence, resistive effects such as the tearing mode instability occur on large timescales of order ε⁻¹, where the asymptotic parameter ε is the inverse Lundquist number. In this talk, we elaborate on the MHD equations rescaled to the resistive time, which leads to a singularly perturbed problem as ε goes to zero. We present two reformulations
giving a well-posed problem in the limit, the first being based on a micro-macro
decomposition and the second stemming from a reordering of the equations. A finite difference method is applied for the numerical studies. We shall discuss to what degree
the obtained schemes can be categorized as ’asymptotic-preserving’, which is not
trivial because we observe boundary layers in time as well as in space. Finally,
simulation results for the magnetic reconnection process in the non-linear tearing
mode are presented. The significant reduction in computational cost due to the
new schemes with respect to conventional explicit schemes is highlighted.
305
Jerome Pousin
Université de Lyon ICJ INSA UMR CNRS 5208, FR
A posteriori estimate and adaptive partial domain decomposition
Contributed Session CT1.5: Monday, 18:00 - 18:30, CO016
The method of asymptotic partial decomposition of a domain (MAPDD) originates
with the works of G. Panasenko [1]. The idea is to replace an original 3D or 2D problem by a hybrid 3D-1D or 2D-1D one, where the dimension of the problem decreases in part of the domain. In the problem considered here, due to geometrical
considerations concerning the domain Ω it is assumed that the solution does not
differ very much from a function which depends only on one variable in a part of
the domain (subdomain Ω2). The a posteriori error estimate proved in this paper is able to measure the discrepancy between the exact solution and the hybrid solution. Moreover, the proposed method is able to determine the location of the junction (i.e., the location of the boundary Γ in the example treated) by using
optimization techniques combined with an a posteriori error estimate and an error
indicator. Let us also mention the interest of accurately locating the position of the junction in blood flow simulations [2].
The domain Ω = (0, 1) × (0, 1) is decomposed into two subdomains Ω1 = (0, a) × (0, 1) and Ω2 = (a, 1) × (0, 1), with interface Γ = ∂Ω1 ∩ ∂Ω2, and the boundary ∂Ω is divided into four subparts γ1 = {0} × (0, 1), γ2 = (0, 1) × {0}, γ3 = {1} × (0, 1), γ4 = (0, 1) × {1}. Define the following functional spaces:
    ₀H¹(Ω1) = {ϕ ∈ H¹(Ω1); ϕ|γ1 = 0};    ₀H¹(Ω2) = {ϕ ∈ H¹(Ω2); ϕ|γ3 = 0};
    V = ₀H¹(Ω1) × ₀H¹(Ω2);    Λ = span{1}.

Let us define (u1, u2, λ) ∈ V × Λ as the solution to

    Σ_{i=1,2} ∫_{Ωi} ∇ui · ∇vi dx1 dx2 + ∫_Γ λ (v1 − v2) dx2 = Σ_{i=1,2} ∫_{Ωi} f vi dx1 dx2    ∀v ∈ V;
    ∫_Γ ξ (u1 − u2) dx2 = 0    ∀ξ ∈ Λ.                                                           (1)

Lemma. Assume f ∈ L²(Ω) and f|_{Ω2} = f(x1); then there exists a unique (u1, u2, λ) ∈ V × Λ solution to Problem (1). Moreover, u2 depends only on x1 and we have

    ∂_{n1} u1 = −∂_{n2} u2 in L²(Γ);    u2|_Γ = (1/|Γ|) ∫_Γ u1 dx2.

Let (w1, w2, λ⁰) ∈ V × L²(Γ) be the solution to Problem (1) where the mortar space is L²(Γ); then the error e = (w − u, λ⁰ − λ) satisfies

    ‖e‖ ≤ (1/β) ‖u1 − u2‖_{0,Γ} = (1/β) ‖u1 − (1/|Γ|) ∫_Γ u1 dx2‖_{0,Γ}.                          (2)
Let a denote the position of the boundary Γ. Due to relation (2), the proposed
strategy is to minimize with respect to a the functional J(a) defined by:
    J(a) = ‖ũ1(a, x2) − (1/|Γa|) ∫_{Γa} ũ1(a, x2) dx2‖²_{0,Γ}
in order to locate precisely the position of the interface. In this presentation I will
discuss some numerical results, and I will show how to combine mesh refinement
and localisation of the interface in order to reduce the error.
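A schematic sketch of the interface-localisation step (assumed helper functions; the hybrid solve itself is not shown) could be a simple bounded one-dimensional minimisation of J:

    from scipy.optimize import minimize_scalar

    def locate_interface(solve_hybrid, J_of, a_min=0.1, a_max=0.9):
        """Minimise J(a) over admissible interface positions; solve_hybrid(a)
        is assumed to return the hybrid MAPDD solution for a given position a,
        and J_of(u, a) to evaluate the indicator J(a)."""
        def J(a):
            return J_of(solve_hybrid(a), a)
        res = minimize_scalar(J, bounds=(a_min, a_max), method="bounded")
        return res.x

In practice this would be interleaved with the mesh refinement driven by the a posteriori estimate, as discussed above.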
306
References
[1] G. P. Panasenko, Multi-scale Modelling for Structures and Composites, Springer, the Netherlands, 2005.
[2] A. Quarteroni and A. Veneziani, Analysis of a geometrical multiscale model based on the coupling of PDE's and ODE's for blood flow simulations, SIAM J. Multiscale Model. Simul., 1(2):173-195, 2003.
307
Catherine Powell
University of Manchester, GB
Fast solvers for stochastic FEM discretizations of PDEs with uncertainty
Minisymposium Session CTNL: Tuesday, 10:30 - 11:00, CO015
In modelling most physical processes we encounter uncertainties, both in the mathematical models we use as well as in the input data required to solve them. A
common approach is to view unknown inputs as stochastic processes (in one dimension) or random fields (in higher dimensions), giving rise to stochastic differential
equations. Starting from a statistical description of the data, our task is to obtain statistical information about output quantities of interest. This is known
as Uncertainty Quantification (UQ) and contrasts with traditional deterministic
modelling, where we simulate specific events corresponding to hypothesized models
with certain data and seek to assess only discretisation errors.
Extensive work has been carried out in the last decade to develop accurate numerical methods (stochastic Galerkin, stochastic collocation, Quasi-Monte Carlo) for
PDEs with uncertainty. Galerkin-based schemes lead to linear systems with much
higher complexity than their deterministic counterparts. The matrices are sums
of Kronecker products of smaller matrices associated with distinct discretizations
and the systems are large, reflecting the curse of dimensionality inherent in most
stochastic approximation schemes. For stochastically nonlinear problems, this is
compounded by the fact that the matrices are block-dense and the cost of a matrix vector product is non-trivial. On the other hand, sampling methods lead to
extremely long sequences of small similar linear systems. Challenges for the linear
algebra community include: coping with high dimensionality, non-assembled coefficient matrices, intriguing block structures, exploiting similarity and the influence
of statistical as well as discretization parameters on robustness of solvers.
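To make the Kronecker structure mentioned above concrete, a matrix-vector product with A = Σ_k G_k ⊗ K_k can be performed without ever assembling A (a generic sketch, with assumed dimensions):

    import numpy as np

    def sg_matvec(Gs, Ks, x):
        """y = (sum_k kron(G_k, K_k)) x, using (G kron K) vec(X) = vec(K X G^T)
        with column-major vec; Gs are m-by-m 'stochastic' factors, Ks are
        n-by-n 'deterministic' factors, and x has length m*n."""
        m, n = Gs[0].shape[0], Ks[0].shape[0]
        X = x.reshape(n, m, order="F")
        Y = sum(K @ X @ G.T for G, K in zip(Gs, Ks))
        return Y.reshape(-1, order="F")

This is one reason why the coefficient matrices are typically not assembled and why the cost per iteration is dominated by the products with the smaller factor matrices.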
In this talk, we discuss some of the linear algebra issues involved in applying
stochastic Galerkin and collocation schemes to saddle point problems with uncertain data. Model problems include: a mixed formulation of a second-order elliptic problem (with uncertain diffusion coefficient), the steady-state Navier-Stokes
equations (with uncertain Reynolds number) and a second-order elliptic PDE,
formulated on an uncertain domain.
308
Vladimír Prokop
CTU in Prague, CZ
Numerical Simulation of Generalized Oldroyd-B Fluid Flows in Bypass
Contributed Session CT1.7: Monday, 17:00 - 17:30, CO122
In this paper the numerical solution of viscous incompressible generalized Oldroyd-B fluid flows is described. The motivation for this work is to model blood flow in
vessels of small diameter and to evaluate the importance of taking into account
shear-thinning behaviour of blood in this case. The flows of Oldroyd-B fluids are
described by the system of conservation laws of mass and momentum. The extra
stress tensor is decomposed into a Newtonian and an elastic part. The latter part is
described by the Oldroyd-B model. In the generalized case of flows of Oldroyd-B
fluids, the viscosity function is specified to describe shear-thinning behaviour of
blood. In this case, the modified Cross model is used, where constants of the
model, such as asymptotic viscosity values at zero and infinite shear rates, are
taken from the literature. Energy conservation is not taken into account because the temperature variations are negligible in our case. A steady numerical solution of incompressible generalized Oldroyd-B flows is sought in the geometry of a stenotic
channel with bypass in 2D. An artificial compressibility method is used in numerical solution. In this case one can use marching in time to find steady solution
with steady boundary conditions in the same manner as in the case of compressible flow. The system of governing equations is discretized by the finite volume
method in space. The viscous fluxes are computed using dual finite volumes cells
of the diamond type. The convective fluxes are discretized in a central manner.
The resulting system of ordinary differential equations is then solved by the three-stage Runge-Kutta method. In the case of higher Reynolds numbers an artificial viscosity of Jameson's type is added to maintain stability of the numerical computation of the system of Navier-Stokes equations. The comparison of Newtonian,
generalized Newtonian, Oldroyd-B and generalized Oldroyd-B flows is presented
in the geometry of stenotic channel with bypass.
Joint work with Karel Kozel.
309
Maria Adela Puscas
Université Paris Est, FR
3d conservative coupling method between a compressible fluid flow and a deformable
structure
Minisymposium Session NFSI: Thursday, 11:00 - 11:30, CO122
In this work, we present a conservative method for three-dimensional inviscid fluid-structure interaction problems. On the fluid side, we consider an inviscid Euler
fluid in conservative form. The Finite Volume method uses the OSMP high-order
flux with a Strang operator directional splitting [1]. On the solid side, we consider
an elastic deformable solid. In order to examine the issue of energy conservation,
the behavior law is here assumed to be linear elasticity. In order to ultimately
deal with rupture, we use a Discrete Element method for the discretization of the
solid [2].
Body-fitted methods are not well-suited for this type of problem or even for large
displacements of the structure, since they involve possibly costly remeshing of the
fluid domain. We use an immersed boundary technique through the modification
of the finite volume fluxes in the vicinity of the solid. The method is tailored to
yield the exact conservation of mass, momentum and energy of the system and
exhibits consistency properties.
Since both fluid and solid methods are explicit, the coupling scheme is designed
to be globally explicit too. The computational cost of the fluid and solid methods
lies mainly in the evaluation of fluxes on the fluid side and of forces and torques
on the solid side. It should be noted that the coupling algorithm evaluates these
only once every time step, ensuring the computational efficiency of the coupling.
Our approach is an extension to the three-dimensional deformable case of the conservative method developed in [3]. We will present numerical results assessing the
robustness of the method in the case of a deformable solid with large displacements
coupled with a compressible fluid flow.
REFERENCES
[1] V. Daru and C. Tenaud. High-order one-step monotonicity-preserving schemes
for unsteady compressible flow calculations. Journal of Computational Physics,
193:563-594, 2004.
[2] L. Monasse and C. Mariotti. An energy-preserving Discrete Element Method
for elastodynamics. ESAIM: Mathematical Modelling and Numerical Analysis,
46:1527-1553, 2012.
[3] L. Monasse, V. Daru, C. Mariotti, S. Piperno and C. Tenaud. An embedded
Boundary method for the conservative coupling of a compressible flow and a rigid
body. Journal of Computational Physics, 231:2977-2994, 2012.
Joint work with Alexandre ERN, Laurent MONASSE, Virginie DARU, Christian
TENAUD, and Christian MARIOTTI.
310
Qingguo Hong
RICAM, Austrian Academy of Science, AT
A multigrid method for discontinuous Galerkin discretizations of Stokes equations
Minisymposium Session PSPP: Thursday, 14:30 - 15:00, CO3
In this talk, a multigrid algorithm for discontinuous Galerkin (DG) H(div)-conforming discretizations of the Stokes equations is presented. Using the augmented Uzawa method to solve this saddle point problem, a linear elasticity problem needs to be solved efficiently. A variable V-cycle and a W-cycle are designed for this purpose, since the bilinear forms arising from the DG discretizations are non-nested. The proposed method is proved to converge uniformly, independently of the Poisson ratio and the mesh size, which shows its robustness and optimality.
Joint work with Johannes Kraus, Jinchao Xu, and Ludmil Zikatanov.
311
Andreas Rademacher
Mathematical Institute, University of Cologne, DE
Model and mesh adaptivity for frictional contact problems
Contributed Session CT4.7: Friday, 08:20 - 08:50, CO122
Frictional contact problems play an important role in many production processes.
Here, the use of complex frictional laws to ensure accurate modelling leads to a high computational effort. One approach to reduce the effort is given by mesh
adaptivity based on goal oriented a posteriori error estimation, which is discussed,
for instance, in [1]. An advanced idea is now not only to adaptively modify the
mesh but also the underlying models based on a posteriori error estimators. In
this note, we shortly describe the approach in the case of Signorini’s problem with
friction in mixed form using the notation of [1]:
    (σ(u), ε(v)) + ⟨λn, vn⟩ + (λt, s vt)_{0,ΓC} = ⟨l, v⟩,
    ⟨µn − λn, un − g⟩ + (µt − λt, s ut)_{0,ΓC} ≤ 0,
for all v ∈ V , all µn ∈ Λn and all µt ∈ Λt . Here, s specifies the reference friction
model. This problem is discretized with a mixed finite element approach leading to
a discrete solution (uh , λn,H , λt,H ), where the usual nodal low order finite element
approach is used to discretize the displacement. For the discretization of the
Lagrange multipliers piecewise constant basis functions on coarser meshes as in
[1] or biorthogonal basis functions leading to Mortar methods, see, e.g., [2], are
applied.
The first and essential step for model adaptivity is to specify an admissible and
consistent model hierarchy. One example of such a model hierarchy for friction
laws is given by the following models: frictionless contact, Tresca friction, Coulomb
friction, and the friction model by Betten. We refer to [3, Section 4.2] for a detailed description of the single models. Now, a simplified friction model s^m is locally composed by choosing one of the models out of the hierarchy. The corresponding discrete solution is given by (u^m_h, λ^m_{n,H}, λ^m_{t,H}).
The aim is now to derive a posteriori error estimates for the error J(u, λn, λt) − J(u^m_h, λ^m_{n,H}, λ^m_{t,H}), where J is a user-specified, possibly nonlinear output functional. To this end, the contact conditions are formulated with the help of a
nonlinear complementarity (NCP) function such that we arrive at a semilinear
problem. Here, the NCP function is given by
    D(uh, λt,H)(µt,H) := ∫_{ΓC} µt,H ( max{s, ‖λt,H + ut,h‖} − s · (λt,H + ut,h) ) do
for the reference friction model s. For the model-adaptive friction law s^m, it is given by D^m. The approach presented in [4] to derive a posteriori error estimates concerning the model and the discretization error is applied to the given problem formulation. However, we have to pay special attention to the remainder terms due to the nondifferentiability of the NCP function. At last, we obtain the model error estimator η^m = D^m(uh, λt,H)(ξt,H) − D(uh, λt,H)(ξt,H) and the usual discretization error estimator η^h. The function ξt,H is the Lagrange multiplier concerning the frictional variable of the dual problem, which corresponds in this case to the last step of a primal-dual active set method for solving the reduced problem. The error estimators η^m and η^h are localized and normalized to obtain model and refinement indicators, respectively. In the adaptive strategy, the values
312
η^h and η^m are compared and, using a balancing strategy, it is decided whether the mesh, the model, or both should be improved. Then, standard techniques are applied for model improvement and mesh refinement, respectively. In Figure 1, we
present some numerical results. Here, the adaptively chosen friction model for a
three dimensional contact problem is depicted.
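A schematic sketch of the balancing step described above (assumed helper routines; the threshold is illustrative only):

    def adapt_step(eta_m_local, eta_h_local, improve_model, refine_mesh, c=1.0):
        """Compare the localized model and discretization error indicators and
        improve whichever contribution dominates (or both if comparable)."""
        eta_m, eta_h = sum(eta_m_local), sum(eta_h_local)
        if eta_m > c * eta_h:
            improve_model(eta_m_local)
        elif eta_h > c * eta_m:
            refine_mesh(eta_h_local)
        else:
            improve_model(eta_m_local)
            refine_mesh(eta_h_local)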
References
[1] Blum, H., Rademacher, A. and Schröder, A., Goal oriented error control for
frictional contact problems in metal forming, Key Engineering Materials, 504-506, 987-992 (2012).
[2] Wohlmuth, B., Variationally consistent discretization schemes and numerical
algorithms for contact problems, Acta Numerica, 20, 569-734 (2011).
[3] Wriggers, P., Computational Contact Mechanics, John Wiley & Sons, Chichester (2002).
[4] Braack, M. and Ern, A., A posteriori control of modelling errors and discretisation errors, Multiscale Model. Simul., 1, 221-238 (2003).
Figure 1: Exemplary results of the adaptive algorithm
313
Istvan Reguly
Oxford e-Research Centre, University of Oxford, GB
OP2: A library for unstructured grid applications on heterogeneous architectures
Minisymposium Session PARA: Monday, 15:30 - 16:00, CO016
Due to the physical limitations in building faster single core microprocessors, the
development and use of multi- and many-core architectures for general purpose scientific and engineering applications has received increasing attention for the past
few years. The greatest obstacle to the widespread usage of parallel computing is
the difficulty to program such devices in an efficient and scalable manner. It is
unreasonable to expect domain experts who want to write efficient applications to
learn different complex parallel programming languages and create hardware-specific code. In the past, traditional programming languages such as C and Fortran
scaled well over time with increasingly higher processor frequencies, however this
is no longer the case, because of the inevitable heterogeneity of high-performance
architectures. A more efficient solution to this issue is to provide high-level programming abstractions to the application developers, which permit them to focus
on the mathematical aspects of the problem, leaving the optimisation issue to a
corresponding framework that, thanks to the insight into the high-level program
abstractions, is capable of solving the performance portability issue across heterogeneous architectures, and other aspects such as code longevity.
OPlus (Oxford Parallel Library for Unstructured Solvers) [1], a research project
that had its origins in 1993 at the University of Oxford, provided such an abstraction framework for performing unstructured mesh based computations. OP2 [2]
is the successor of OPlus, bringing support for state-of-the-art hardware such as
many-core processors and heterogeneous systems. OPlus and OP2 can be viewed
as an instantiation of the AEcute (access-execute descriptor) programming model
that separates the specification of a computational kernel with its parallel iteration
space, from a declarative specification of how each iteration accesses its data.
We present the design of the current OP2 library, starting with the API that uses
the notion of sets and mappings between sets to define the mesh and its components. Data of arbitrary dimension can be assigned to the elements of any set.
Loops over these sets are defined through OP2’s API by specifying the set itself and
the datasets accessed either directly or indirectly via a mapping. The framework
takes care of data dependencies and indirect access, thus the operation performed
on each set element can be oblivious to the underlying mesh. OP2 utilizes sourceto-source translation and compilation so that a single application code written
using the OP2 API can be transformed into different parallel implementations for
execution on different back-end hardware platforms.
We briefly describe our code generation technique that only involves static parsing
of OP2 API calls, which define the access to all data in a given parallel loop. We
discuss how execution maps to different hardware and multiple levels of parallelism:
on multicore CPUs, different generations of GPUs and across MPI, and what
parameters are involved in this mapping that can have an impact on performance.
One of the core issues is the handling of data dependencies in accessing indirectly
referenced data. The OP2 run-time support solves this parallelism control problem
in different ways, depending on the target back-end architecture: race conditions
are handled using an owner-compute approach over MPI, a block coloring scheme
on the coarse shared-memory level, and for vector machines, such as the GPU,
an additional set element based coloring is also required. Since the difference
314
between the computing capacity and the bandwidth to off-chip memory has been
increasing on modern hardware, we discuss the impact of unstructured mesh data
layouts (array of structs vs. struct of arrays) on different architectures. Similarly,
we show how multi-level memory hierarchies can be exploited, such as the on-chip
cache and the explicitly managed scratch-pad memory. Data locality is one of the
most important factors affecting performance, by dividing the execution set into
mini-partitions and staging data we show how to improve data locality.
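As an illustration of the set-element coloring mentioned above (a generic greedy scheme, not the actual OP2 implementation), elements that increment the same indirectly referenced node are assigned different colors, so that all elements of one color can be processed concurrently without race conditions:

    def color_iteration_set(n_elems, elem_to_node):
        """Greedy coloring: elements sharing a node never share a color."""
        node_colors = {}                 # node -> colors already touching it
        colors = [0] * n_elems
        for e in range(n_elems):
            forbidden = set()
            for node in elem_to_node[e]:
                forbidden |= node_colors.setdefault(node, set())
            c = 0
            while c in forbidden:
                c += 1
            colors[e] = c
            for node in elem_to_node[e]:
                node_colors[node].add(c)
        return colors                    # equal-color elements run in parallel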
One of the key challenges is the ever-changing hardware landscape. We aim to
achieve near-optimal performance on different CPUs, GPUs and future many-core
architectures, but their parameters change even from generation to generation: a
good example is the shift in required levels of parallelism and amount of resources
used between the Fermi and the Kepler generation of GPUs. For this reason, we
have to re-evaluate and re-tune the code we generate for different back-ends for
new hardware. Additionally, there are parameters that depend on the application;
the tuning of these have to be carried out in the context of the application. We
demonstrate the tools available, and show the advantages of tuning.
Finally, through Volna [3], a tsunami simulation code that was ported to OP2, we
provide a contrasting benchmarking and performance analysis study on a range of
multi-core/many-core systems.
Acknowledgments
This research is funded by the UK Technology Strategy Board and Rolls-Royce plc.
through the Siloet project, the UK Engineering and Physical Sciences Research
Council projects EP/I006079/1 and EP/I00677X/1 on Multi-layered Abstractions for PDEs, and the Algorithms and Software for Emerging Architectures (ASEArch) project EP/J010553/1.
References
[1] Crumpton, P. I. and Giles, M. B. Multigrid aircraft computations using the
OPlus parallel library. Parallel Computational Fluid Dynamics: Implementations and Results Using Parallel Computers (1998), pp. 339 - 346.
[2] M.B. Giles, G.R. Mudalige, Z. Sharif, G. Markall, P.H.J. Kelly. Performance
Analysis and Optimization of the OP2 Framework on Many-Core Architectures. The Computer Journal (2011). ISSN 0010-4620
[3] D. Dutykh, R. Poncet and F. Dias, The VOLNA code for the numerical modeling of tsunami waves: Generation, propagation and inundation, European
Journal of Mechanics - B/Fluids (2011), vol. 30, issue 6, pp. 598-615
Joint work with M. B. Giles, G. R. Mudalige, C. Bertolli, and P.H.J. Kelly.
315
Gunhild Allard Reigstad
NTNU, NO
Numerical investigation of network models for Isothermal junction flow
Contributed Session CT4.3: Friday, 09:20 - 09:50, CO3
This paper deals with the issue of how to properly model fluid flow in pipe junctions. In particular we investigate the numerical results from three alternative
network models, all three based on the Isothermal Euler equations. Using two
different test cases we will focus on the physical validity of simulation results from
each of the models. We will also show how the different models may produce
results that are fundamentally different for a given set of initial data. Finally we
will give some attention to the selection of suitable test cases for network models.
Network models are used to find global weak solutions for hyperbolic conservation
laws defined on N segments of the real line, where all segments are connected by
junctions. In addition to flow in pipelines, such models are used to describe for
example traffic flow, data networks, and supply chains.
Each pipe in a network model is modelled along a local axis (x ∈ R+ ) and the
pipe-junction interface is at x = 0. Presupposing constant initial conditions, the
flow condition in each pipe may be found from the half-Riemann problem:
    ∂Uk/∂t + ∂F(Uk)/∂x = 0,
    Uk(x, 0) = Ūk if x > 0,    Uk(x, 0) = U*k if x < 0,                    (1)

where

    U*k(Ū1, . . . , ŪN) = lim_{x→0+} Uk(x, t)                              (2)

is by definition connected to Ūk by a Lax wave of the 2nd family and obeys a set of coupling conditions.
For the Isothermal Euler equations, the coupling conditions are related to mass and momentum:
CC1: Mass is conserved at the junction:

    Σ_{k=1}^{N} ρ*k v*k = 0.                                               (3)

CC2: There is a unique, scalar momentum related coupling constant at the junction:

    H*k(ρ*k, v*k) = H̃    ∀k ∈ {1, . . . , N}.                              (4)
Three different expressions for the momentum related coupling constant are considered in this paper. Equal pressure (CCp) and equal momentum flux (CCMF) have been frequently used in the literature.1,2,3,4 Equal Bernoulli (CCBI)
1 M. K. Banda, M. Herty and A. Klar, Gas flow in pipeline networks, Netw. Heterog. Media 1, 41–56,
(2006).
2 M. K. Banda, M. Herty and A. Klar, Coupling conditions for gas networks governed by the isothermal Euler equations, Netw. Heterog. Media 1, 295–314, (2006).
3 R. M. Colombo and M. Garavello, A well posed Riemann problem for the p-system at a junction,
Netw. Heterog. Media 1, 495–511, (2006).
4 M. Herty and M. Seaïd, Simulation of transient gas flow at pipe-to-pipe intersections, Netw. Heterog. Media 56, 485–506, (2008).
316
was recently proposed in Reigstad et al. 5 The physical validity of the network
model results is evaluated by an entropy condition 3 . Analytical investigations
in Reigstad et al. showed that for certain ranges of flow rates in a junction with
three connected pipes, the coupling constants CCp and CCMF would produce unphysical results. Using equal Bernoulli (CCBI) would yield physical results for all
subsonic initial data.
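For the isothermal pressure law p(ρ) = a²ρ (a the speed of sound), plausible forms of the three coupling constants are

    CCp:   H*k(ρ, v) = p(ρ),
    CCMF:  H*k(ρ, v) = p(ρ) + ρv²,
    CCBI:  H*k(ρ, v) = v²/2 + a² ln ρ;

these expressions are given here only to fix ideas, the precise definitions being those of the cited works.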
In the present paper, the numerical test cases will be used to verify this analysis
and to explore the behaviour of the different models. The first case consists of a
junction connecting 5 pipes. The case illustrates how the network model easily
may be applied to a large number of pipes connected at a junction. We will also show how the results of the three models relate to the entropy condition.
The second case consists of three pipes connected by two junctions such that a
closed system is constructed. We will show how the different models produce
fundamentally different results in terms of rarefaction and shock waves. The total
energy of the system as a function of time will also be presented in order to
display the effect of having non-entropic solutions.
Acknowledgements
This work was financed through the research project Enabling low emission LNG
systems. The authors acknowledge the project partners; Statoil and GDF SUEZ,
and the Research Council of Norway (193062/S60) for support through the Petromaks programme.
Joint work with Tore Flåtten.
5 G. A. Reigstad, T. Flåtten, N. E. Haugen and T. Ytrehus, Coupling constants and the generalized Riemann problem for isothermal junction flow, Submitted (2012).
317
Knut Reimer
Christian Albrechts Universität zu Kiel, DE
H2 -matrix arithmetic and preconditioning
Contributed Session CT4.2: Friday, 09:50 - 10:20, CO2
The discretisation of integral and partial differential equations leads to high-dimensional systems of linear equations. Usually these systems are ill-conditioned. Thus preconditioners are needed to ensure fast convergence of iterative methods. The approximate inverse and the approximate LU decomposition are established approaches. For this purpose, H-matrix arithmetic is commonly known as an efficient technique with log-linear complexity. A further development of the H-matrices are the H2-matrices, which enable storage and evaluation in linear, instead of log-linear, complexity. To adapt the H-matrix technique to H2-matrices, it is essential to design an efficient low-rank update for every block of an H2-matrix.
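A generic sketch of the kind of truncated low-rank update needed for a single block is given below (the actual H2 algorithm must in addition maintain the nested cluster bases, which is the difficult part and is not shown here):

    import numpy as np

    def truncated_lowrank_update(U, V, X, Y, rank):
        """Recompress A = U V^T + X Y^T to the given rank via thin QR and SVD."""
        Qu, Ru = np.linalg.qr(np.hstack([U, X]))
        Qv, Rv = np.linalg.qr(np.hstack([V, Y]))
        W, s, Zt = np.linalg.svd(Ru @ Rv.T)
        k = min(rank, s.size)
        return Qu @ W[:, :k] * s[:k], Qv @ Zt[:k, :].T   # new U, new V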
The talk presents the idea of the low-rank update and a sketch of the inversion and the LU decomposition. It concludes with some numerical results for both the inversion and the LU decomposition.
Joint work with Steffen Börm.
318
Sergey Repin
University of Jyvaskyla, FI
On Poincaré Type Inequalities for Functions With Zero Mean Boundary Traces
and Applications to A Posteriori Analysis of Boundary Value Problems
Contributed Session CT4.7: Friday, 08:50 - 09:20, CO122
We discuss Poincaré type inequalities for functions having zero mean value on the whole boundary of a Lipschitz domain or on a measurable part of the boundary. For some basic domains (rectangles, cubes, and right triangles) exact and easily computable constants in these inequalities can be derived [1]. With the help of these inequalities, a new type of a posteriori estimates is derived. The major difference with respect to the well-known a posteriori error estimates of the functional type (see an overview in [2]) is that the new estimates are applicable to a much wider set of approximations. They can be useful for approximations violating boundary conditions and for nonconforming approximations. Another application is related to modeling errors arising as a result of coarsening of a boundary value problem. In this case, the estimates yield a directly computable bound of the modeling error encompassed in the coarsened solution. Constants
in Poincaré type inequalities enter all these estimates, which provide guaranteed
error control of the corresponding approximation and modeling errors. Finally, we
discuss possible applications to a posteriori estimates of nonlinear elliptic problems.
References
[1] A. Nazarov, S. Repin, Exact constants in Poincaré type inequalities for functions with zero mean boundary traces. Preprint, V. A. Steklov Inst. Math., St. Petersburg, 2012 (arXiv:1211.2224 [math.AP]).
[2] S. Repin. A posteriori error estimates for partial differential equations. Walter
de Gruyter, Berlin, 2008.
319
Thomas Richter
Universität Heidelberg, DE
A Fully Eulerian Formulation for Fluid-Structure Interactions
Minisymposium Session NFSI: Thursday, 11:30 - 12:00, CO122
In this contribution, we present a monolithic formulation for fluid-structure interactions, where both subproblems - fluid and solid - are given in Eulerian coordinates on a fixed background mesh.
This formulation comes without the necessity of artificial transformations of domains and meshes. It can be used to describe problems with very large deformation, free motion of the solid in the fluid and it can model contact problems.
As a front-capturing method on a fixed background mesh, the interface moves
freely throughout the mesh. We present the Initial Point Set as an alternative to
Levelset formulations capturing this interface.
Joint work with Thomas Wick.
320
Marco Rozgic
Helmut Schmidt University of the Federal Armed Forces Hamburg, DE
Mathematical optimization methods for process and material parameter identification in forming technology
Contributed Session CT3.1: Thursday, 16:30 - 17:00, CO1
Parameter identification plays a crucial role in various mathematical applications
and technological fields. Both to determine good parameter sets and to judge the
quality of a computed set of parameters a rigorous mathematical theory is needed.
A common method to determine optimal parameters is to solve an inverse problem. Typical inverse problems that arise in forming processes are material and
process parameter identification [2]. The framework presented by Taebi et al. [3]
shows the impact of the ability to choose good process parameters when exploring a new technology. The presented methodology finds optimal parameters in a
quasi static forming process combined with an electromagnetic high speed forming
method in order to extend classical forming limits. The parameter space comprises
contributions of the triggering current (e.g., frequency, amplitude, damping, etc.),
geometric descriptions of the tool coil as well as deep drawing parameters (e.g.,
drawing radii or tribological parameters). The quality of a given parameter set
is determined by computing the distance of the simulated forming result to the
prescribed ideal shape. Every evaluation of the objective function requires a full
coupled (mechanical and electromagnetic) finite element simulation. Reliable and
fast computable material models are needed to perform efficient numerical finite
element simulations. Recently introduced anisotropic models [4] take into account
nonlinear kinematic and isotropic hardening. To identify material parameters we
introduce an optimization method based on a simulation of a uniaxial tensile test.
The objective is to fit the simulated data to the experimental results. Again the
objective function is only accessible by a finite element simulation, which makes
its evaluation expensive. In contrast to optimal control approaches often used
in the context of finite element based problems [1], we state the inverse problem
as a classical discretized nonlinear optimization problem where objective and constraint evaluation require full simulations of the underlying differential equations.
The arising nonlinear optimization problems can be solved by various methods.
Derivative-free methods like genetic algorithms or simulated annealing usually require more function evaluations and parameter tuning than gradient-based
methods; on the other hand, gradient information is not always accessible. In our
approach we focus on the use of interior point methods as proposed in [5], which
are known to be fast and efficient. Furthermore, the fact that the
parameters computed by the algorithm lie in the interior of the region defined by the constraints is often beneficial in technological applications. The required derivative
information is computed by finite differences. To tackle the resulting
increase in the number of objective and constraint function evaluations, adaptive
control heuristics are needed. Based on the duality gap, such a heuristic could, for
example, decide whether the full model or a reduced function, as in active set methods, should be used to perform a Newton step. Furthermore, the size of the duality gap
can be used to decide whether a coarsely discretized finite element simulation or a more
precise, finely discretized simulation should be performed. In case of a large duality
gap, a less accurate simulation can yield sufficiently good descent directions in less
time. Finally, the underlying elastoviscoplastic models have to be investigated in
order to assess the quality of an identified parameter set. Questions about
321
constraint qualifications and necessary optimality conditions can only be answered by
a close observation of the whole discretization scheme. We will discuss the use
of interior point methods in the scope of material and process parameter identification for technological forming. A systematic approach to study the properties of
the resulting optimization problems is introduced. Furthermore, we will point towards
a duality-gap-based heuristic that can help to reduce the number of
expensive function calls.
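As a purely schematic illustration of this workflow (our own sketch, using SciPy's generic constrained optimizer rather than the interior point code of [5], and a trivial stand-in for the forming simulation):

```python
import numpy as np
from scipy.optimize import minimize

def forming_simulation(params):
    # Stand-in for the fully coupled electromagnetic/mechanical finite element
    # simulation; in the real setting every call is an expensive FE run.
    return np.tanh(params)

def objective(params, ideal_shape):
    # Distance of the simulated forming result to the prescribed ideal shape.
    return np.linalg.norm(forming_simulation(params) - ideal_shape)

def identify_parameters(x0, bounds, ideal_shape):
    # Gradients are approximated by finite differences (the default when no
    # Jacobian is supplied); bounds keep the parameters in the admissible region.
    return minimize(objective, x0, args=(ideal_shape,),
                    method="trust-constr", bounds=bounds)
```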
References
[1] R. Becker and R. Rannacher. An optimal control approach to a posteriori error
estimation in finite element methods. Acta numerica, 10(1):1–102, 2001.
[2] J.-L. Chenot, E. Massoni, and J. Fourment. Inverse problems in finite element simulation of metal forming processes. Engineering computations,
13(2/3/4):190–225, 1996.
[3] F. Taebi, O. Demir, M. Stiemer, V. Psyk, L. Kwiatkowski, A. Brosius, H. Blum,
and A. Tekkaya. Dynamic forming limits and numerical optimization of combined quasi-static and impulse metal forming. Computational Materials Science, 54(0):293 – 302, 2012.
[4] I. Vladimirov and S. Reese. Anisotropic finite plasticity with combined hardening and application to sheet metal forming. International Journal of Material
Forming, 1:293–296, 2008.
[5] A. Wächter and L. T. Biegler. On the implementation of an interior-point filter line-search algorithm for large-scale nonlinear programming. Mathematical
Programming, 106(1):25–57, 2006.
Joint work with Robert Appel, and Marcus Stiemer.
322
Gianluigi Rozza
SISSA, International School for Advanced Studies, mathLab, IT
A reduced computational and geometrical framework for viscous optimal flow control in parametrized systems
Minisymposium Session ROMY: Thursday, 15:30 - 16:00, CO016
Any computational problem is filled with uncertain elements, such as (i) material parameters and coefficients, boundary conditions, and (ii) geometrical configurations. Usually such factors cannot be completely identified to the point of
absolute certainty; the former may be recovered from measurements, while the
latter can be obtained as a result of a shape identification process. In general,
inverse identification problems entail very large computational efforts, since they
involve iterative optimization procedures that require several input/output evaluations. Incorporating geometrical configurations into the framework, e.g. when
dealing with problems of optimal shape design, makes the inverse problem even
less affordable. Given a parameterized PDE model of our system, the forward
problem consists in evaluating outputs of interest (depending on the state solution of the PDE) for specified parameter inputs. On the other hand, whenever
some parameters are uncertain, we aim at inferring their values (and/or distributions) from indirect observations (and/or measures) by solving an inverse problem:
given an observed output, can we deduce the value of the parameters that resulted
in this output? Such problems are often encountered as problems of parameter
identification, variational data assimilation, flow control or shape optimization.
Computational inverse problems are characterized by two main difficulties:
1. The forward problem is typically a nonlinear PDE, possibly incorporating
coupled multiphysics phenomena. State-of-the-art discretization methods
and parallelized codes are therefore required to solve them up to a tolerable
accuracy. This is exacerbated by the fact that solving the inverse problem
requires multiple solutions of the forward problem. Hence, if the forward
problem can be replaced with an inexpensive (but reliable) surrogate, solving
the inverse problem is much more feasible.
2. Uncertainty in the model parameters can be large when the parameters describe geometric quantities such as shape. This is especially true in biomedical applications, where observed geometries are often patient-specific and
only observable through medical imaging procedures which are highly susceptible to measurement noise.
In this talk we propose a general framework for computationally solving inverse
and optimal flow control problems using reduced basis methods, and apply it to
some inverse identification problems in haemodynamics. An implementation of
the reduced basis method is presented by considering different shape or domain
parametrizations by non-affine maps with flexible techniques, such as free-form
deformations or radial basis functions. In order to develop efficient numerical
schemes for inverse problems related with shape variation such as shape optimization, geometry registration and shape analysis through parameter identification,
we combine a suitable low-dimensional parametrization of the geometry (yielding a geometrical complexity reduction) with reduced basis methods (yielding a
reduction of computational complexity). The analysis will focus on the general
properties (stability, reliability, accuracy) and performance of the reduced basis
323
method for Stokes and Navier-Stokes equations and we will highlight its special
suitability for the numerical study of viscous flows in parametrized geometries with
emphasis on cardiovascular problems.
Joint work with Andrea Manzoni, Federico Negri, and Alfio Quarteroni.
324
Karl Rupp
Argonne National Laboratory, US
ViennaCL - Portable High Performance at High Convenience
Minisymposium Session PARA: Monday, 12:10 - 12:40, CO016
High-level application programming interfaces (API) are said to be the natural
enemy of performance. Even though suitable programming techniques as well as
just-in-time compilation approaches have been developed in order to overcome
most of these limitations, the advent of general purpose computations on graphics
processing units (GPUs) has led to a renaissance of the widespread use of low-level
programming languages such as CUDA and OpenCL.
Porting existing code to GPUs is, however, in many cases a very time consuming
process if low-level programming languages are used. They require the programmer
to understand many details of the underlying hardware and often consume a larger
amount of development time than what is saved by a reduced total execution time.
On the other hand, high-level libraries for GPU computing can significantly reduce
the porting effort without changing too much of existing code.
ViennaCL is one of the most widely used libraries offering a high-level C++ API
for linear algebra operations on multi-core CPUs and many-core architectures
such as GPUs. In particular, we demonstrate that a high-level API for linear
algebra operations can still be provided without sacrificing performance on GPUs.
Furthermore, the generic implementations of algorithms such as iterative solvers
allow for code reuse beyond device and library boundaries, making the transition
from purely CPU-based code to GPU-accelerated code as seamless as possible.
Also, we explain why and how ViennaCL manages different parallel computing
backends and assess the role of autotuning for achieving portable performance.
Benchmark results for GPUs from NVIDIA and AMD as well as for Intel’s MIC
platform are presented along with a discussion of techniques for achieving portable
high performance.
325
Figure 1: STREAM-like benchmark for the performance of vector additions (top)
and performance comparison for 50 iterations of the conjugate gradient method
(bottom) on different computing hardware.
Joint work with Philippe Tillet, Florian Rudolf, and Josef Weinbub.
326
Daniel Ruprecht
Institute of Computational Science, University of Lugano, CH
Convergence of Parareal for the Navier-Stokes equations depending on the Reynolds
number
Contributed Session CT1.6: Monday, 18:00 - 18:30, CO017
1 Introduction
The number of cores in state-of-the-art high-performance computing systems is
rapidly increasing and has reached the order of millions already. This requires
new inherently parallel algorithms that feature a maximum degree of concurrency.
A promising approach for time-dependent partial differential equations are methods
that parallelize in time. The Parareal parallel-in-time method has been introduced
in (Lions et al., 2001) and the first study considering its application to the NavierStokes equations is (Fischer et al., 2003). Another study reporting speedups for
experiments on up to 24 processors with a Reynolds number of Re = 1000 can be
found in (Trindade and Pereira, 2004). A larger scale study using up to 2,048 cores
is conducted in (Croce et al., 2012), but also only for Re = 1000. It has been
demonstrated numerically and theoretically that Parareal can exhibit instabilities
when applied to advection-dominated problems, see e.g. (Ruprecht and Krause,
2012) and references given there. Thus it can be expected that Parareal will cease
to converge for increasing Reynolds numbers.
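To fix notation, a generic sketch of the Parareal correction iteration (our own illustration, not the code used in this study; the names coarse and fine stand for the coarse and fine propagators) could look as follows:

```python
def parareal(u0, t, coarse, fine, n_iter):
    """Generic Parareal iteration (illustrative sketch, not the solver used in the study).

    coarse(u, t0, t1) and fine(u, t0, t1) propagate a state u from t0 to t1 with
    the coarse and fine integrators; states must support + and - (e.g. NumPy arrays).
    """
    N = len(t) - 1                                   # number of time subintervals
    u = [u0]
    for n in range(N):                               # initial coarse sweep
        u.append(coarse(u[n], t[n], t[n + 1]))
    for _ in range(n_iter):
        fine_vals = [fine(u[n], t[n], t[n + 1]) for n in range(N)]    # parallel in practice
        coarse_old = [coarse(u[n], t[n], t[n + 1]) for n in range(N)]
        u_new = [u0]
        for n in range(N):                           # sequential correction sweep
            g_new = coarse(u_new[n], t[n], t[n + 1])
            u_new.append(g_new + fine_vals[n] - coarse_old[n])
        u = u_new
    return u
```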
2 Intended Experiments and Preliminary Results
The present study aims at providing a detailed numerical investigation of the convergence behavior of the Parareal method over a wide range of Reynolds numbers.
The setup will be a two-dimensional driven cavity flow. The employed code solves
the incompressible Navier-Stokes equations in dimensionless form, that is
$$u_t + u \cdot \nabla u + \nabla p = \frac{1}{Re}\,\Delta u, \qquad (1)$$
$$\nabla \cdot u = 0, \qquad (2)$$
so that the Reynolds number can be directly controlled as a parameter. Based on
the documented instability of Parareal for advection-dominated flows, we expect
an instability to arise for increasing Reynolds number. A first hint is given in Figure 1: It shows the "residuals" of the Parareal, that is the maximum change from
the previous to the current iteration, for different Reynolds numbers. While for
small Reynolds numbers the iteration converges very quickly, for larger Reynolds
numbers the iteration stalls for several iterations before starting to converge. The
simulations used a small time-step of ∆t = 1/250 in the coarse method and half
this step size in the fine propagator. The spatial resolution of ∆x = 1/32. The
coarse propagator run alone is stable and provides a reasonable solution for all
depicted Reynolds numbers.
327
3 Outlook
The talk will present a comprehensive study of the behavior of Parareal for large
Reynolds numbers. It will illustrate how the convergence behavior depends on the
Reynolds number and also explore the effect of spatial and temporal resolution.
Also, studies with Parareal using implicit as well as explicit integrators will be
performed. For the linearized Navier-Stokes equations, the effect of adding a
stabilization as in (Ruprecht and Krause, 2012) will be analyzed.
Figure 1: Maximum defect between current and previous iteration in Parareal for
Reynolds numbers ranging from Re = 10^2 to Re = 10^5.
Joint work with Johannes Steiner, Robert Speck, and Rolf Krause.
328
Oxana Sadovskaya
Institute of Computational Modeling SB RAS, RU
Parallel Software for the Analysis of Dynamic Processes in Elastic-Plastic and
Granular Materials
Contributed Session CT3.6: Thursday, 17:00 - 17:30, CO017
The universal mathematical model for numerical solution of 2D and 3D problems of
the dynamics of deformable media with constitutive relationships of rather general
form is worked out [1]. The model for description of the process of deformation of
elastic bodies can be represented as the system of equations:
$$A\,\frac{\partial U}{\partial t} = \sum_{i=1}^{n} B^{i}\,\frac{\partial U}{\partial x_i} + Q U + G\,, \qquad (1)$$
where $U$ is the unknown vector-function, $A$ is a symmetric positive definite matrix of
coefficients of the time derivatives, $B^{i}$ are symmetric matrices of coefficients of the
derivatives with respect to the spatial variables, $Q$ is an antisymmetric matrix, $G$
is a given vector, and $n$ is the spatial dimension of the problem (2 or 3). The dimension
of the system (1) and the concrete form of the coefficient matrices are determined by the
mathematical model used. When taking into account the plastic deformation of a
material, the system of equations (1) is replaced by the variational inequality:
material, the system of equations (1) is replaced by the variational inequality:
$$(\tilde U - U)\left(A\,\frac{\partial U}{\partial t} - \sum_{i=1}^{n} B^{i}\,\frac{\partial U}{\partial x_i} - Q U - G\right) \ge 0\,, \qquad \tilde U,\, U \in F\,, \qquad (2)$$
where $F$ is a given convex set, by means of which some constraints are imposed
on possible states of a medium, and $\tilde U$ is an arbitrary admissible element of $F$. In the
problems of mechanics of granular media with plastic properties a more general
variational inequality
$$(\tilde V - V)\left(A\,\frac{\partial U}{\partial t} - \sum_{i=1}^{n} B^{i}\,\frac{\partial V}{\partial x_i} - Q V - G\right) \ge 0\,, \qquad \tilde V,\, V \in F\,, \qquad (3)$$
takes place, where the vector–functions V and U are related by the equations
$$V = \lambda\, U + (1 - \lambda)\, U^{\pi}\,, \qquad U = \frac{1}{\lambda}\, V - \frac{1 - \lambda}{\lambda}\, V^{\pi}\,.$$
Here $\lambda \in (0, 1]$ is the parameter of regularization of the model characterizing the
ratio of elastic moduli in tension and compression, and $U^{\pi}$ is the projection of the
solution vector onto the given convex cone $K$, by means of which the different resistance
of a material to tension and compression is described. The set $F$ of admissible
variations, included in (2) and (3), can be defined by the von Mises yield condition:
$F = \{\, U : \tau(\sigma) \le \tau_s \,\}$, where $\sigma$ is the stress tensor, $\tau(\sigma)$ is the intensity of
tangential stresses, and $\tau_s$ is the yield point of particles. As a convex cone $K$ of
stresses allowed by the strength criterion, the von Mises–Schleicher circular cone
$K = \{\, U : \tau(\sigma) \le æ\, p(\sigma) \,\}$ can be used, where $p(\sigma)$ is the hydrostatic pressure and æ
is the parameter of internal friction.
In the framework of the considered mathematical model, a parallel computational algorithm is proposed for the numerical analysis of dynamic processes in elastic-plastic
and granular materials. The system of equations (1) is solved by means of the
329
splitting method with respect to the spatial variables. An explicit monotone finite-difference ENO-scheme is applied for solving the 1D systems of equations at the stages
of the splitting method. Variational inequalities (2) and (3) are solved by splitting with
respect to physical processes, which leads to the system (1) and a procedure of solution
correction taking into account the plastic properties of a material. This procedure
consists of determining a fixed point of a contractive mapping and is implemented by the method of successive approximations. Granularity of materials
is accounted for by means of the procedure for finding the projection onto the convex cone $K$ of admissible stresses. The parallelization of computations is carried
out using the MPI library. The data exchange between processors occurs at the
"predictor" step of the finite-difference scheme. First, each processor exchanges the
boundary values of its data with neighboring processors, and then calculates the
required quantities in accordance with the difference scheme. Mathematical models are embedded in the programs by means of software modules that implement the
constitutive relationships, the initial data and the boundary conditions of the problems.
The universality of the programs is achieved by a special packing of the variables,
used at each node of the cluster, into large 1D arrays. A detailed description of the
parallel algorithm can be found in [1].
The program system allows one to simulate the propagation of elastic-plastic waves produced by external mechanical effects in a body assembled from an arbitrary
number of heterogeneous blocks. Some computations of dynamic problems were
performed on the cluster MVS-100k of the Joint Supercomputer Center of RAS (Moscow).
In Fig. 1 one can see examples of computations for the 2D Lamb problem on
the action of a concentrated impulsive load on the boundary of an elastic medium.
Level curves of the normal stress are shown for a homogeneous material (left) and for a block
medium consisting of 6 blocks with interlayers of a more pliant material (right).
This work was supported by RFBR (grant no. 11–01–00053) and Complex Fundamental Research Program no. 18 of the Presidium of RAS.
References
[1] O. Sadovskaya, V. Sadovskii. Mathematical Modeling in Mechanics of Granular
Materials. Ser.: Advanced Structured Materials, Vol. 21. Springer, Heidelberg
– New York – Dordrecht – London (2012).
Figure 1: 2D Lamb’s problem for a homogeneous elastic medium (left) and for
a block medium consisting of 6 blocks with pliant interlayers (right)
330
Vladimir Sadovskii
Institute of Computational Modeling SB RAS, RU
Hyperbolic Variational Inequalities in Elasto-Plasticity and Their Numerical Implementation
Contributed Session CT3.6: Thursday, 18:00 - 18:30, CO017
Thermodynamically consistent systems of conservation laws were first obtained
by Godunov for the models of reversible thermodynamics – elasticity theory, gas
dynamics and electrodynamics [1]. Such a form of the equations assumes setting so-called generating potentials $\Phi(U)$ and $\Psi_j(U)$, depending on the unknown vector-function $U$ of the state variables. The first of them must be a strongly convex
function of $U$. By means of the generating potentials the governing system is written
in the following form:
$$\frac{\partial}{\partial t}\,\frac{\partial \Phi}{\partial U} = \sum_{j=1}^{n} \frac{\partial}{\partial x_j}\,\frac{\partial \Psi_j}{\partial U}\,, \qquad (1)$$
or in a more general form, including terms that are independent of derivatives.
The additional conservation law
$$\frac{\partial}{\partial t}\left(U\,\frac{\partial \Phi}{\partial U} - \Phi\right) = \sum_{j=1}^{n} \frac{\partial}{\partial x_j}\left(U\,\frac{\partial \Psi_j}{\partial U} - \Psi_j\right) \qquad (2)$$
is valid for this system, which may be a conservation law of energy or of entropy.
Thermodynamically consistent systems of conservation laws in the form (1), (2)
turn out to be very useful in justifying the mathematical correctness of models. They admit an integral generalization, which allows one to construct discontinuous solutions. For the numerical analysis of the system (1), (2), effective
shock-capturing methods, such as Godunov's method, adapted to the computation
of solutions with discontinuities, may be applied.
This talk addresses the generalization and application of this approach to the analysis of thermodynamically irreversible models of mechanics of deformable media,
taking into account plastic deformation of materials. For such models the governing systems are formulated as variational inequalities for hyperbolic operators
with one-sided constraints, describing the transition to the plastic state:
$$(\tilde U - U)\left(\frac{\partial}{\partial t}\,\frac{\partial \Phi}{\partial U} - \sum_{j=1}^{n} \frac{\partial}{\partial x_j}\,\frac{\partial \Psi_j}{\partial U}\right) \ge 0\,, \qquad U,\, \tilde U \in F\,. \qquad (3)$$
Here F is a convex set, whose boundary describes the yield surface of a material
in the space of stresses. On this basis a priori integral estimates are constructed in
characteristic cones of operators, from which follows the uniqueness and continuous
dependence on initial data of solutions of the Cauchy problem and of boundary-value problems with dissipative boundary conditions. With the help of an integral
generalization of variational inequalities the relationships of strong discontinuity in
dynamic models of elastic-plastic and granular media are obtained, whose analysis
allows us to calculate velocities of shock waves and to construct discontinuous
solutions.
Original shock-capturing algorithms are developed, which can be considered as a
realization of the splitting method with respect to physical processes [2]. Such
algorithms automatically satisfy the properties of monotonicity and dissipativity
331
on the discrete level. They are applicable for the computation of solutions with singularities of the type of strong discontinuities (elastic-plastic shock waves) and of
discontinuities of displacements.
The approximation of the differential operator and of the constraint, illustrated on the example of
inequality (3), leads to the following discrete problem:
$$(\tilde U - \hat U^{k+1})\left(\frac{\partial \Phi^{k+1}}{\partial U} - \frac{\partial \Phi^{k}}{\partial U} - \Delta t \sum_{j=1}^{n} \Lambda_j\,\frac{\partial \Psi_j^{k}}{\partial U}\right) \ge 0\,, \qquad \hat U^{k+1},\, \tilde U \in F\,,$$
where $\hat U^{k+1}$ is a special combination of $U^{k+1}$ and $U^{k}$, $\Delta t$ is the time step of the grid,
and $\Lambda_j$ is the difference operator approximating the partial derivative with respect
to the spatial variable $x_j$. In the case $\hat U^{k+1} = U^{k+1}$ the problem is simplest.
Its solution can be found in two steps: first the vector
$$\frac{\partial \bar\Phi^{k+1}}{\partial U} = \frac{\partial \Phi^{k}}{\partial U} - \Delta t \sum_{j=1}^{n} \Lambda_j\,\frac{\partial \Psi_j^{k}}{\partial U}\,,$$
which implements the explicit finite-difference scheme at each time step for the system of equations (1), is calculated, and then the solution correction is made in
accordance with the variational inequality
$$(\tilde U - U^{k+1})\left(\frac{\partial \Phi^{k+1}}{\partial U} - \frac{\partial \bar\Phi^{k+1}}{\partial U}\right) \ge 0\,, \qquad U^{k+1},\, \tilde U \in F\,,$$
which, by convexity of $\Phi(U)$, is equivalent to the problem of conditional minimization of the function $\Phi(U^{k+1}) - U^{k+1}\,\partial\bar\Phi^{k+1}/\partial U$ under the constraint $U^{k+1} \in F$.
If the generating potential $\Phi(U)$ is a quadratic function, then the solution correction reduces to determining the projection onto the convex set $F$ with respect to
the corresponding norm. This method of correction was first used by Wilkins in the numerical solution of elastic-plastic problems and is now widespread. However,
there exist more accurate algorithms, which are realized in a rather simple way in [3].
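As an illustration of the projection-type correction just mentioned (our own sketch of a Wilkins-type radial return under a common convention for the intensity of tangential stresses, not the algorithm of [3]):

```python
import numpy as np

def von_mises_correction(sigma, tau_s):
    """Project a 3x3 stress tensor onto the set {tau(sigma) <= tau_s} (sketch).

    tau(sigma) is taken here as sqrt(1/2 * s:s) with s the stress deviator,
    which is one common convention; tau_s is the yield point.
    """
    p = np.trace(sigma) / 3.0                     # hydrostatic part
    dev = sigma - p * np.eye(3)                   # deviatoric part
    tau = np.sqrt(0.5 * np.tensordot(dev, dev))   # intensity of tangential stresses
    if tau > tau_s:
        dev *= tau_s / tau                        # radial return to the yield surface
    return p * np.eye(3) + dev
```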
With the help of these algorithms the results of simulation of the wave motion in
elastic-plastic and granular media are obtained.
This work was supported by the Russian Foundation for Basic Research (grant
no. 11–01–00053) and the Complex Fundamental Research Program no. 18 of the
Presidium of RAS.
References
[1] S.K. Godunov, E.I. Romenskii. Elements of Continuum Mechanics and Conservation Laws. Kluwer Academic / Plenum Publishers, New York – Boston –
Dordrecht – London – Moscow (2003).
[2] O. Sadovskaya, V. Sadovskii. Mathematical Modeling in Mechanics of Granular
Materials. Ser.: Advanced Structured Materials, Vol. 21. Springer, Heidelberg
– New York – Dordrecht – London (2012).
[3] V.M. Sadovskii. Discontinuous Solutions in Dynamic Elastic-Plastic Problems.
Fizmatlit, Moscow (1997) [in Russian].
332
Davood Saffar Shamshirgar
PhD student, Applied and Computational Mathematics, KTH, SE
The Spectrally Fast Ewald method and a comparison with SPME and P3M methods
in Electrostatics
Contributed Session CT1.8: Monday, 17:30 - 18:00, CO123
The Ewald summation formula is the basis for different methods used in molecular
dynamics simulations in which the computation of the long-range interactions in a
periodic setting is important. The P3M method by Hockney and Eastwood (1981)
and SPME method by Essmann et al. (1995) are FFT-based Ewald methods that
scale as O(N log(N )), where N is the number of particles. These methods are
used in several major molecular dynamics packages e.g. GROMACS, NAMD and
AMBER. The new spectrally accurate FFT based algorithm developed by Lindbo
and Tornberg (2011), is similar in structure to the above mentioned methods. In
the new method, approximation errors can however be controlled without increasing the resolution of the FFT grid, in contrast to the other methods. Hence, the
Spectral Ewald (SE) method significantly reduces both the computational cost
for the FFTs and the associated memory use. The price to pay is an increased
"spreading" or "interpolation" cost when evaluating the grid functions, arising from
the larger support of the truncated Gaussians used for the spreading. The use of
Gaussians instead of polynomial interpolation is what allows for the spectral accuracy. Another key aspect of the new method is the error estimate yielding a
straightforward parameter selection.
The SE method has earlier been shown by Lindbo and Tornberg (2011) to be more
efficient than the SPME and P3M methods for all but very low accuracies. To have
a fair comparison, the new algorithm is implemented as a plug-in into GROMACS,
a molecular dynamics package primarily developed for the simulation of lipids
and proteins, where the other methods are already available and highly optimized.
Joint work with Anna-Karin Tornberg.
333
Mehmet Sahin
Istanbul Technical University, TR
Parallel Large-Scale Numerical Simulations of Purely-Elastic Instabilities with a
Template-Based Mesh Refinement Algorithm
Minisymposium Session MANT: Tuesday, 10:30 - 11:00, CO017
The parallel large-scale unstructured finite volume method proposed in [Sahin, A
stable unstructured finite volume method for parallel large-scale viscoelastic fluid
flow calculations. J. Non-Newtonian Fluid Mech., 166 (2011) 779–791] has been
incorporated with a template-based mesh refinement algorithm in order to investigate viscoelastic fluid flow instabilities. The numerical method is based on a
side-centered finite volume formulation where the velocity vector components are defined at the mid-point of each cell face, while the pressure term and the extra stress
tensor are defined at element centroids. The present arrangement of the primitive
variables leads to a stable numerical scheme and it does not require any ad-hoc
modifications in order to enhance the pressure-velocity-stress coupling. The combination of the present numerical method with the log-conformation representation
proposed in [R. Fattal, R. Kupferman, Constitutive laws for the matrix-logarithm
of the conformation tensor. J. Non-Newtonian Fluid Mech. 123 (2004) 281–285]
and the geometric non-nested multilevel preconditioner for the Stokes system have
enabled us to simulate large-scale viscoelastic fluid flow problems on highly parallel
machines. The calculations are presented for an Oldroyd-B fluid past a confined
circular cylinder in a rectangular channel and a sphere falling in a circular tube
at relatively high Weissenberg numbers. The present numerical calculations reveal
three-dimensional purely-elastic instabilities in the wake of a confined single cylinder which is in accord with the earlier experimental results in the literature. In
addition, the flow field is found to be no longer symmetric at high Weissenberg
numbers.
Figure 1: Computed surface streamtraces at W e = 2.0 on the cylinder surface
(r = 1.01R) for an Oldroyd-B fluid past a confined circular cylinder in a rectangular
channel with a periodic boundary condition in the spanwise direction (β = 0.59).
The streamtrace color indicates v−velocity components.
Joint work with Evren ONER.
334
Giovanni Samaey
Numerical Analysis and Applied Mathematics, Dept. Computer Science, KU Leuven, BE
A micro-macro parareal algorithm: application to singularly perturbed ordinary
differential equations
Contributed Session CT4.1: Friday, 09:20 - 09:50, CO1
We introduce a micro-macro parareal algorithm for the time-parallel integration of
multiscale-in-time systems. The algorithm first computes a cheap, but inaccurate,
solution using a coarse propagator (simulating an approximate slow macroscopic
model), which is iteratively corrected using a fine-scale propagator (accurately
simulating the full microscopic dynamics). This correction is done in parallel over
many subintervals, thereby reducing the wall-clock time needed to obtain the solution, compared to the integration of the full microscopic model. We provide a
numerical analysis of the algorithm for a prototypical example of a micro-macro
model, namely singularly perturbed ordinary differential equations. We show that
the computed solution converges to the full microscopic solution (when the parareal
iterations proceed) only if special care is taken during the coupling of the microscopic and macroscopic levels of description. The convergence rate depends on the
modeling error of the approximate macroscopic model. We illustrate these results
with numerical experiments.
Joint work with Frederic Legoll, and Tony Lelievre.
335
Mattias Sandberg
KTH Mathematics, SE
An Adaptive Algorithm for Optimal Control Problems
Contributed Session CT4.7: Friday, 09:50 - 10:20, CO122
The analysis and performance of numerical computations for optimal control problems is complicated by the fact that they are ill-posed. It is for example often the
case that optimal solutions depend discontinuously on data. Moreover, the optimal control, if it exists, may be a highly non-regular function, with many points
of discontinuity, etc. On the other hand, optimal control problems are well-posed in
the sense that the associated value function is well-behaved, with such properties
as continuous dependence on data.
I will present an error representation for approximation of the value function when
the Symplectic Euler scheme is used to discretize the Hamiltonian system associated with the optimal control problem. It is given by
$$\bar u(x_0, t_0) - u(x_0, t_0) = \sum_{n} \Delta t_n^2\, \rho_n + R\,, \qquad (1)$$
where ū is the approximation of the value function u, and the term ρn is an error
density which is computable from the Symplectic Euler solution. I will show a
theorem which says that the remainder term R is small compared with the error
density sum in (1). The proof uses two fundamental facts:
1. The value function solves a non-linear PDE, the Hamilton-Jacobi-Bellman
equation. When this property is used, we take advantage of the well-posed
character of the optimal control problem.
2. The Symplectic Euler scheme corresponds to the minimization of a discrete
optimal control problem.
Using this error representation I will show an example of an adaptive algorithm,
and illustrate its performance with numerical tests. I will also discuss the applicability of the adaptive algorithm in cases where the Hamiltonian is non-smooth.
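A generic sketch of what an error-density-driven refinement loop of this kind can look like (our own illustration using the computable densities from (1), not the specific adaptive algorithm of the talk):

```python
import numpy as np

def refine_time_grid(t, rho, tol):
    """Bisect subintervals whose estimated error contribution is too large (sketch).

    t   : increasing array of time points t_0 < ... < t_N
    rho : computable error densities rho_n, one per subinterval, cf. (1)
    tol : error tolerance, distributed equally over the subintervals
    """
    t = np.asarray(t, dtype=float)
    dt = np.diff(t)
    contributions = dt**2 * np.abs(np.asarray(rho))   # local terms Delta t_n^2 * rho_n
    new_points = list(t)
    for n, too_large in enumerate(contributions > tol / len(dt)):
        if too_large:                                  # split the offending subinterval
            new_points.append(0.5 * (t[n] + t[n + 1]))
    return np.array(sorted(new_points))
```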
Joint work with Jesper Karlsson, Stig Larsson, Anders Szepessy, and Raul Tempone.
336
Giancarlo Sangalli
Università di Pavia, IT
Isogeometric elements for the Stokes problem
Minisymposium Session ANMF: Monday, 16:00 - 16:30, CO1
In this work I discuss the application of IsoGeometric Analysis to incompressible
viscous flow problems. We consider, as a prototype problem, the Stokes system and
we propose various choices of compatible Spline spaces for the approximations to
the velocity and pressure fields. The proposed choices can be viewed as extensions
of the Taylor-Hood, Nédélec and Raviart-Thomas pairs of finite element spaces,
respectively. We study the stability and convergence properties of each method
and discuss the conservation properties of the discrete velocity field in each case.
Joint work with Andrea Bressan, Annalisa Buffa, and Carlo De Falco.
337
Dmitry Savostyanov
University of Southampton, GB
Alternating minimal energy methods for linear systems in higher dimensions. Part
I: the framework and theory for SPD systems
Contributed Session CT2.8: Tuesday, 14:30 - 15:00, CO123
We propose a new algorithm for the approximate solution of large-scale high-dimensional tensor-structured linear systems. It can be applied to high-dimensional differential equations, which allow a low-parametric approximation of the
multilevel matrix, right-hand side and solution in the tensor train format. We combine the Alternating Linear Scheme approach with the basis enrichment idea using
Krylov-type vectors. We obtain a rank-adaptive algorithm with a theoretical
convergence estimate not worse than that of the steepest descent method. The practically observed convergence is significantly faster, comparable to or even better than
the convergence of the DMRG–type algorithm. The complexity of the method is
still at a level of ALS. The method is successfully applied for a high–dimensional
problem of quantum chemistry, namely the NMR simulation of a large peptide.
Keywords: high–dimensional problems, tensor train format, ALS, DMRG, steepest
descent, convergence rate, superfast algorithms, NMR.
Joint work with Sergey Dolgov.
338
Robert Scheichl
University of Bath, GB
Hierarchical Multilevel Markov Chain Monte Carlo Methods and Applications to
Uncertainty Quantification in Subsurface Flow
Minisymposium Session UQPD: Wednesday, 11:30 - 12:00, CO1
In this talk we address the problem of the prohibitively large computational cost
of Markov chain Monte Carlo (MCMC) methods for large–scale PDE applications
with high dimensional parameter spaces. We propose a new multilevel version of a
standard Metropolis-Hastings algorithm, and give an abstract, problem dependent
theorem on the cost of the new multilevel estimator.
The parameters appearing in PDE models of physical processes are often impossible to determine fully and are hence subject to uncertainty. It is of great
importance to quantify the resulting uncertainty in the outcome of the simulation. A popular way to incorporate uncertainty is to model the input parameters
in terms of random processes. Based on the information available, a probability
distribution (the prior) is assigned to the input parameters. If in addition to this
assumed distribution, we have some dynamic data (or observations) related to the
model outputs, it is possible to condition on this data to reduce the overall uncertainty (the posterior). However, in most situations, this posterior distribution
is intractable and exact sampling from it is unavailable. One way to circumvent
this problem, is to generate samples using a Metropolis-Hastings type MCMC approach, which consists of two main steps: proposing a new sample, e.g. using a
random walk from a previous sample, and then comparing the likelihood (i.e. the
data fit) with that of the previous sample. The proposed sample gets accepted and
used for inference, or rejected and a new sample is proposed. A major problem
with this approach, e.g. in subsurface applications, is that each evaluation of the
likelihood involves the numerical solution of a PDE with highly varying coefficients
on a fine spatial grid. The likelihood has to be calculated also for samples that
end up being rejected, and so the overall cost of the algorithm becomes extremely
expensive. This is particularly true for high-dimensional parameter spaces, typical
in realistic subsurface flow problems, where the acceptance rate of the algorithm
can be very low.
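For reference, the standard single-level random-walk Metropolis-Hastings loop described above can be sketched as follows (a generic illustration of our own, not the new multilevel algorithm; every likelihood evaluation stands for a PDE solve on the fine grid):

```python
import numpy as np

def random_walk_mh(log_likelihood, log_prior, theta0, n_samples, step):
    """Plain random-walk Metropolis-Hastings (single-level sketch, for illustration only)."""
    theta = np.asarray(theta0, dtype=float)
    log_post = log_likelihood(theta) + log_prior(theta)
    chain = []
    for _ in range(n_samples):
        proposal = theta + step * np.random.randn(*theta.shape)
        log_post_prop = log_likelihood(proposal) + log_prior(proposal)
        # accept with probability min(1, exp(log_post_prop - log_post))
        if np.log(np.random.rand()) < log_post_prop - log_post:
            theta, log_post = proposal, log_post_prop
        chain.append(theta.copy())
    return np.array(chain)
```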
The key ingredient in our new multilevel MCMC algorithm is a two-level proposal
distribution that ensures (as in the case of multilevel Monte Carlo based on i.i.d.
samples) that we have a dramatic variance reduction on the finer levels of the
multilevel estimator, leading to an overall variance reduction for the same computational cost, or conversely to a significantly lower computational cost for the
same variance. For a typical model problem in subsurface flow with lognormal
prior permeability, we then provide a detailed analysis of the assumptions in our
abstract complexity theorem and show gains in the ε-cost of at least one whole
order over the standard Metropolis-Hastings estimator. This requires a judicious
"partitioning" of the prior space across the levels. One of the crucial theoretical
observations is that on the finer levels the acceptance probability tends to 1 as the
mesh is refined and as the dimension of the prior is increased. Numerical experiments confirming the analysis and demonstrating the effectiveness of the method
are presented with consistent gains of up to a factor 50 in our tests.
Joint work with C. Ketelsen, and A.L. Teckentrup.
339
Friedhelm Schieweck
Department of Mathematics, University of Magdeburg, DE
An efficient dG-method for transport dominated problems based on composite finite
elements
Minisymposium Session ANMF: Monday, 15:30 - 16:00, CO1
The discontinuous Galerkin (dG) method applied to transport dominated problems
has the big advantage that it delivers with its upwind version for convective terms
a parameter-free stabilization of higher order. On the other hand, it has compared
to continuous finite element methods the disadvantages that it needs much more
unknowns as well as much more couplings between the unknowns and that a static
condensation for higher order elements is not possible.
In this talk, we propose a modification of the underlying finite element space
that keeps the advantage and removes the disadvantages of the usual dG-method.
The idea is to use composite finite elements, i.e., for instance, quadrilateral or
hexahedral elements each of which is composed of a fixed number of triangular
or tetrahedral sub-elements. Then, the finite element space is constructed from
functions whose restrictions to the sub-elements are polynomials of some maximal
order k and which are continuous along the common faces of neighbouring quadrilateral or hexahedral elements. Jumps are allowed only along common faces of
sub-elements which lie inside the same composite element.
Thus, the total number of unknowns is reduced substantially compared to the classical discontinuous finite element space. Moreover, all the degrees of freedom which
are related to the interior of the composite elements can be removed from the
global system of equations by means of static condensation. Finally, we can prove
for the convection-diffusion-reaction equation that the usual (upwind) dG-method
applied to such a composite finite element space has the same good stability and convergence properties as for the classical discontinuous finite element space. For this
model problem, we show some numerical examples in the convection dominated
case and compare the classical with our new approach.
340
Claudia Schillings
SAM, ETH Zuerich, CH
Sparsity in Bayesian Inverse Problems
Minisymposium Session UQPD: Thursday, 12:00 - 12:30, CO1
We present a novel, deterministic approach to inverse problems for identification
of unknown, parametric coefficients in differential equations from noisy measurements. Based on new sparsity results on the density of the Bayesian posterior,
we design, analyze and implement a class of adaptive, deterministic sparse tensor
Smolyak quadrature schemes for the efficient numerical evaluation of expectations under the posterior. Convergence rates for the quadrature approximation
are shown, both theoretically and computationally, to depend only on the sparsity class of the unknown and, in particular, are provably higher than those of
Monte-Carlo (MC) and Markov-Chain Monte-Carlo methods.
This work is supported by the European Research Council under FP7 Grant
AdG247277.
Joint work with Christoph Schwab.
341
Karin Schnass
University of Sassari, IT
Non-Asymptotic Dictionary Identification Results for the K-SVD Minimisation
Principle
Minisymposium Session ACDA: Monday, 12:10 - 12:40, CO122
In this presentation we give theoretical insights into the performance of K-SVD, a
dictionary learning algorithm that has gained significant popularity in practical applications. The particular question studied is when a dictionary $\Phi \in \mathbb{R}^{d \times K}$ can be
recovered as local minimum of the minimisation criterion underlying K-SVD from
a set of N training signals yn = Φxn . A theoretical analysis of the problem leads
to two types of identifiability results assuming the training signals are generated
from a tight frame with coefficients drawn from a random symmetric distribution.
First, asymptotic results show that in expectation the generating dictionary
can be recovered exactly as a local minimum of the K-SVD criterion if the coefficient distribution exhibits sufficient decay. This decay can be characterised by
the coherence of the dictionary and the $\ell_1$-norm of the coefficients. Based on the
asymptotic results it is further demonstrated that, given a finite number of training samples $N$ such that $N/\log N = O(K^3 d)$, except with probability $O(N^{-Kd})$
there is a local minimum of the K-SVD criterion within distance $O(K N^{-1/4})$ of
the generating dictionary.
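For orientation only (this formulation is not spelled out in the abstract), the minimisation criterion underlying K-SVD is commonly written, for a training matrix $Y = (y_1, \dots, y_N)$, coefficient matrix $X = (x_1, \dots, x_N)$ and sparsity level $S$, as
$$\min_{\Phi,\, X} \; \|Y - \Phi X\|_F^2 \qquad \text{subject to} \qquad \|x_n\|_0 \le S \ \text{ for all } n,$$
with the columns of $\Phi$ normalised; the local minima referred to above are local minima of this criterion.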
342
Reinhold Schneider
TU Berlin, DE
Convergence of dynamical low rank approximation in hierarchical tensor formats
Minisymposium Session LRTT: Tuesday, 11:00 - 11:30, CO3
In tensor product approximation, the Hierarchical Tucker tensor format (Hackbusch)
and Tensor Trains (TT) (Tyrtyshnikov) have been introduced recently, offering
stable and robust approximation at low-order cost. For many problems which
could not be handled so far, this approach has the potential to circumvent
the curse of dimensionality. For numerical computations, we cast the computation of an approximate solution into an optimization problem constrained by the
restriction to tensors of prescribed ranks r. For approximation by elements from
this highly nonlinear manifold, we apply the Dirac-Frenkel variational principle by
observing the differential geometric structure of the novel tensor formats. We analyse the (open) manifold of such tensors and its projection onto the tangent space,
and investigate the convergence and possible convergence rates in this framework.
Literature:
1. C. Lubich, T. Rohwedder, R. Schneider and B. Vandereycken Dynamical approximation of hierarchical Tucker and tensor train tensors SPP1324 Preprint
(126/2012)
2. B. Khoromskij, I. Oseledets and R. Schneider Efficient time-stepping scheme
for dynamics on TT-manifolds, MIS Preprint 80/2011
343
Katharina Schratz
INRIA and ENS Cachan Bretagne, FR
Efficient numerical time integration of the Klein-Gordon equation in the nonrelativistic limit regime
Minisymposium Session TIME: Thursday, 14:00 - 14:30, CO015
We consider the Klein-Gordon equation in the non-relativistic limit regime, i.e.
the speed of light c formally tending to infinity. Due to the highly-oscillatory
nature of the solution in this regime, its numerical simulation is very delicate.
Here we will construct an asymptotic expansion for the exact solution in terms of
the small parameter c−2 which allows us to filter out the highly-oscillatory phases.
We will see that in the first approximation the numerical task reduces to the time
integration of a system of non-linear c-independent Schrödinger equations.
Thus, this approach allows us to construct numerical schemes that are robust with
respect to the large parameter c producing high oscillations in the exact solution.
Joint work with Erwan Faou.
344
Mauricio Sepúlveda
Universidad de Concepción, CL
Convergent Finite Volume Schemes for Nonlocal and Cross Diffusion Reaction
Equations. Applications to biology
Contributed Session CT2.6: Tuesday, 14:00 - 14:30, CO017
In this work, we consider reaction-diffusion systems with nonlocal and cross diffusion. We construct a finite volume scheme for this system, establish existence
and uniqueness of the discrete solution, and also show that the scheme
converges to the corresponding weak solution of the model studied. The convergence proof is based on the use of the discrete Sobolev embedding inequalities
with general boundary conditions and a space-time L1 compactness argument
that mimics the compactness lemma due to S. N. Kruzhkov. The first example of
application is the description of three interacting species in a HP food chain structure. The second example of application corresponds to a mathematical model
with cross-diffusion for the indirect transmission between two spatially distributed
host populations having non-coincident spatial domains, transmission occurring
through a contaminated environment. We also give several numerical examples.
References
[1] M. Bendahmane and M. Sepúlveda, Convergence of a finite volume scheme for
nonlocal reaction-diffusion systems modelling an epidemic disease. Discrete and
Continuous Dynamical Systems - Series B. Vol. 11, 4 (2009) 823-853.
[2] V. Anaya, M. Bendahmane and M. Sepúlveda, Mathematical and numerical
analysis for reaction-diffusion systems modeling the spread of early tumors. Boletin de la Sociedad Espanola de Matematica Aplicada. Vol 47, (2009), 55-62.
[3] V. Anaya, M. Bendahmane and M. Sepúlveda, A numerical analysis of a
reaction-diffusion system modelling the dynamics of growth tumors. Mathematical Models and Methods in Applied Sciences. Vol. 20, 5 (2010) 731-756.
[4] V. Anaya, M. Bendahmane and M. Sepúlveda, Mathematical and numerical
analysis for predator-prey system in a polluted environment. Networks and Heterogeneous Media. Vol. 5, 4 (2010) 813-847.
[5] V. Anaya, M. Bendahmane and M. Sepúlveda, Numerical analysis for HP
food chain system with nonlocal and cross diffusion. Submitted. Prepublicación
2011-11, DIM, Universidad de Concepción.
Joint work with Verónica Anaya, and Mostafa Bendahmane.
345
Alexander Shapeev
University of Minnesota, US
Atomistic-to-Continuum coupling for crystals: analysis and construction
Minisymposium Session MSMA: Monday, 11:10 - 11:40, CO3
Atomistic-to-continuum (AtC) coupling is a popular approach of utilizing an atomistic resolution near the defect core while using the continuum model to resolve
the elastic far-field. In my talk I will
(1) give a brief introduction to AtC coupling,
(2) present one of the recent developments in construction of a consistent energybased AtC coupling method,
(3) and discuss the complexity of computations.
346
Natasha Sharma
University of Heidelberg, DE
Convergence Analysis of an Adaptive Interior Penalty Discontinuous Galerkin
Method for the Helmholtz Problem
Minisymposium Session MMHD: Thursday, 15:00 - 15:30, CO017
We consider the numerical solution of the 2D Helmholtz equation by an adaptive
Interior Penalty Discontinuous Galerkin method based on adaptively refined simplicial triangulations of the computational domain. The a posteriori error analysis
involves a residual type error estimator consisting of element and edge residuals
and a consistency error which, however, can be controlled by the estimator. The
refinement is taken care of by the standard bulk criterion (Dörfler marking) known
from the convergence analysis of adaptive finite element methods for linear second
order elliptic PDEs. The main result is a contraction property for a weighted sum
of the energy norm of the error and the estimator which yields convergence of the
adaptive IPDG approach. Numerical results are given that illustrate the performance of the method.
Joint work with Ronald H.W. Hoppe.
347
Zhiqiang Sheng
Institute of Applied Physics and Computational Mathematics, CN
The nonlinear finite volume scheme preserving maximum principle for diffusion
equations on polygonal meshes
Minisymposium Session SDIFF: Monday, 12:10 - 12:40, CO123
We further develop the nonlinear finite volume scheme for diffusion equations on
polygonal meshes, and construct a nonlinear finite volume scheme which satisfies
the discrete maximum principle. Our scheme is locally conservative and has only
cell-centered unknowns. Numerical results are presented to show how our scheme
works for preserving discrete maximum principle and positivity on various distorted meshes.
Joint work with Guangwei Yuan.
348
Corina Simian
University of Zurich, CH
Conforming and Nonconforming Intrinsic Discretization for Elliptic Partial Differential Equations
Contributed Session CT4.6: Friday, 09:20 - 09:50, CO017
The aim of this presentation is to introduce a general method for the construction of intrinsic conforming and non-conforming finite element spaces. As a model
problem we consider the Poisson equation, however this approach can be applied
for the discretization of more general elliptic equations. We will derive piecewise
polynomial intrinsic conforming and non-conforming finite element spaces and local basis functions for these spaces. In the conforming case our method leads
to a finite element space spanned by the gradients of the well known hp-finite
elements. In the non-conforming case we employ the stability and convergence
theory for non-conforming finite elements based on the second Strang Lemma and
derive, from these principles, weak compatibility conditions for non-conforming
finite elements across the boundary, for domains Ω ⊂ Rd , d ∈ {2, 3}. For d = 2
our space contains all gradients of hp-finite element basis functions enriched by
some edge-type non-conforming basis functions for even polynomial degree and by
some triangle-type non-conforming basis functions for odd polynomial degree.
Joint work with Stefan Sauter.
349
Valeria Simoncini
Università di Bologna, IT
Solving Ill-posed Linear Systems with GMRES
Minisymposium Session CTNL: Wednesday, 11:30 - 12:00, CO015
Almost singular linear systems arise in discrete ill-posed problems. Either because
of the intrinsic structure of the problem or because of preconditioning, the spectrum of the coefficient matrix is often characterized by a sizable gap between a
large group of numerically zero eigenvalues and the rest of the spectrum. Correspondingly, the right-hand side has leading eigencomponents associated with the
eigenvalues away from zero. In this talk the effect of this setting in the convergence of the Generalized Minimal RESidual (GMRES) method is considered. It
is shown that in the initial phase of the iterative algorithm, the residual components corresponding to the large eigenvalues are reduced in norm, and these can
be monitored without extra computation. The analysis is supported by numerical
experiments on singularly preconditioned ill-posed Cauchy problems for partial
differential equations with variable coefficients.
Joint work with Lars Eldén, Linköping University, Sweden.
350
Jonathan Skowera
ETH Zurich, CH
Entanglement via algebraic geometry
Minisymposium Session LRTT: Monday, 11:40 - 12:10, CO1
Tensor rank generalizes matrix rank and admits a formulation in algebraic geometry in terms of secant varieties of Segre embeddings. Entangled quantum states
appear as states of tensor rank greater than one. States related by stochastic local operations and classical communication fall into the same entanglement class.
A unitary group action preserves tensor rank while orbits of a semisimple group
action correspond to entanglement classes. Entanglement classes are furthermore
characterized by polytopes arising as images under symplectic moment maps. We
review these notions in elementary terms and discuss connections.
351
Iain Smears
University of Oxford, GB
Discontinuous Galerkin finite element approximation of HJB equations with Cordès
coefficients
Minisymposium Session NMFN: Monday, 12:40 - 13:10, CO2
Hamilton–Jacobi–Bellman (HJB) equations are fully nonlinear second order PDE
that arise in the study of optimal control of stochastic processes. For problems
with Cordès coefficients, we present an hp-version discontinuous Galerkin FEM
that is consistent, stable and high-order, with convergence rates that are optimal
with respect to mesh size, and suboptimal in the polynomial degree by only half
an order. The scheme is obtained by coupling the residual of the numerical solution to discrete analogues of identities that are central to the analysis of the
continuous problem. Numerical experiments illustrate the accuracy and computational efficiency of the scheme, with particular emphasis on problems with strongly
anisotropic diffusions.
I. Smears and E. Süli, Discontinuous Galerkin finite element approximation of
Hamilton–Jacobi–Bellman equations with Cordès coefficients, Tech. Report NA
13/03, Univ. of Oxford, 2013. In Review. http://eprints.maths.ox.ac.uk/1671/
I. Smears and E. Süli, Discontinuous Galerkin finite element approximation of
non-divergence form elliptic equations with Cordès coefficients, Tech. Report NA
12/17, Univ. of Oxford, 2012. In Review. http://eprints.maths.ox.ac.uk/1623/
Joint work with Endre Suli.
352
Kathrin Smetana
Massachusetts Institute of Technology, US
The Hierarchical Model Reduction-Reduced Basis approach for nonlinear PDEs
Minisymposium Session SMAP: Monday, 16:00 - 16:30, CO015
Many phenomena in fluid dynamics have dominant spatial directions along which
the essential dynamics occur. Nevertheless, the processes in the transverse directions are often too relevant for the whole problem to be neglected. For such
situations we present a new problem adapted version of the hierarchical model
reduction approach. The hierarchical model reduction approach (see [3] and references therein) uses a truncated tensor product decomposition of the solution and
hierarchically reduces the full problem to a small lower dimensional system in the
dominant directions, coupled by the transverse dynamics. In previous approaches
[3] these transverse dynamics are approximated by a reduction space constructed
from a priori chosen basis functions such as trigonometric or Legendre polynomials. We present the hierarchical model reduction-reduced basis approach [2] where
the reduction space is constructed a posteriori from solutions (snapshots) of appropriate reduced parametrized problems in the transverse directions. To get an
efficient lower-dimensional approximation also for nonlinear PDEs, we introduce,
for the approximation of the nonlinear operator, the adaptive Empirical Projection
Method, which employs the Empirical Interpolation Method [1]. An a posteriori error estimator which includes both the errors caused by the model reduction
and the approximation of the nonlinear operator is presented. Numerical experiments demonstrate that the hierarchical model reduction-reduced basis approach
converges exponentially fast with respect to the model order for problems with
smooth solutions but also for some test cases where the source term belongs to
$C^0(\Omega)$ only. Run-time experiments verify a linear scaling of the proposed method
in the number of degrees of freedom used for the computations in the dominant
direction.
References
[1] M. Barrault, Y. Maday, N. C. Nguyen, and A. T. Patera, An ’empirical interpolation’ method: application to efficient reduced-basis discretization of partial differential equations, C. R. Math. Acad. Sci. Paris Series I, 339 (2004),
pp. 667–672.
[2] M. Ohlberger and K. Smetana, A new problem adapted hierarchical model
reduction technique based on reduced basis methods and dimensional splitting,
Preprint FB 10, University Munster num. 03/10 (2010).
[3] S. Perotto, A. Ern, and A. Veneziani, Hierarchical local model reduction for
elliptic problems: a domain decomposition approach, Multiscale Model. Simul.
8 (2010), pp. 1102–1127.
Joint work with Mario Ohlberger.
353
Alexandra Smirnova
Georgia State University, US
A Novel Stopping Criterion for Iterative Regularization with Undetermined Reverse Connection
Minisymposium Session STOP: Thursday, 14:30 - 15:00, CO1
In our talk, we address the problem of minimizing a (non)linear functional
Φ(u) := ||F(u) − y||²,   F : DF ⊂ X → Y,
on a pair of Hilbert spaces. We investigate the generalized Gauss-Newton scheme [B09, BS10a, BS10b]
un+1 = ξn − θ(F′*(un)F′(un), τn) F′*(un){F(un) − yδ − F′(un)(un − ξn)},   u0, ξn ∈ DF ⊂ X,    (1)
and consider three basic groups of generating functions. In this framework, we
present a nonstandard approximation of the pseudoinverse through a "gentle" iterative
truncation, and prove its optimality on the class of generating functions with the
same correctness coefficient.
The convergence analysis of (1) in the noise-free case has been carried out in [B09]; a priori and a posteriori stopping rules have been justified in [BS10a] and [BS10b], respectively. In [B09, BS10a, BS10b], the modified source condition
û − ξn = (F′*(un)F′(un))^p ωn,   p ≥ 1/2,    (2)
which depends on the current iteration point un, is proposed. We call condition (2) the undetermined reverse connection. It has been shown in [B09, BS10a, BS10b] that even though (2) still contains the unknown solution û, the norm of ωn in (2) is greater than the norm of ω in the case of the source condition with ξ fixed. Moreover, the norm of ωn can even tend to infinity as n → ∞. Specifically, at every step of iterative process (1), the element ξn may be such that
û − ξn = (F′*(un)F′(un))^p ωn,   ||ωn|| ≤ ε/τn^k,   1/2 ≤ p − k,   p ≥ 1/2,   ε ≥ 0.    (3)
The main disadvantage of the undetermined reverse connection (2) is the need to find ξn satisfying (2) at each step of the iteration. How can such a ξn be found in practice? The problem is similar to the one with a single ξ: no general recipe is known, and one can only hope to succeed after trying different ξ's. In the case of (2), one can argue that the set of potential candidates for the test function is larger due to (3). Still, with n source conditions in place of one, it is unlikely that (2) will hold at every step "by chance". Therefore, in this talk we look into the convergence analysis of algorithm (1) under a more realistic "noisy" source condition:
û − ξn = (F′*(un)F′(un))^p ωn + ζn,   ||ωn|| ≤ ε/τn^k,   ||ζn|| ≤ Δ,   1/2 ≤ p − k,   p ≥ 1/2,   ε ≥ 0.    (4)
To that end, we introduce a novel a posteriori stopping rule. Let N = N(δ, Δ, yδ) be the number of the first transition of ||F(un) − yδ|| through the level σn^μ, 1/2 ≤ μ < 1, i.e.,
||F(uN(δ,Δ,yδ)) − yδ|| ≤ σN^μ   and   σn^μ < ||F(un) − yδ||,   0 ≤ n < N(δ, Δ, yδ),    (5)
354
where
σn := δ + C3 Δ √τn / C1,   and   ||y − yδ|| ≤ δ.    (6)
The constant ∆ in (6) is usually harder to estimate than the noise level δ. However
for the asymptotic behavior of the approximate solution u = uN(δ,Δ,yδ) as δ and Δ tend to zero, this is not relevant. It follows from (6) that, due to the factor √τn, the contribution of Δ to the total error of the model approaches zero as n → ∞. In other words, the error in the source condition disappears in the overall noise as we iterate. Notice that if F′*(·)F′(·) is compact and the null space of F′*(·)F′(·) is {0}, then the range of F′*(·)F′(·) is dense in X, so in any neighborhood of un there are points ξn for which (5) holds with Δ = 0. On the other hand, since the range of F′*(·)F′(·) is not closed, in the same neighborhood there are also points ξn for which (5) holds with Δ ≠ 0. In practice, one can try different ξn's and choose those for which the iterative scheme works better, that is, convergence is more rapid and the algorithm is more stable.
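As an illustration of how the stopping rule (5)-(6) can be monitored inside a generalized Gauss-Newton loop of type (1), consider the following sketch (not part of the abstract); the forward map F, its Jacobian, the generating function θ, the sequence τn, and the constants are placeholder callables and values.

```python
import numpy as np

def stopping_index(F, Jac, y_delta, u0, xi, tau, theta,
                   delta, Delta, C1=1.0, C3=1.0, mu=0.75, n_max=50):
    """Generalized Gauss-Newton iteration (1), stopped by the a posteriori rule (5)-(6).

    F, Jac  : forward operator and its Jacobian (placeholder callables)
    y_delta : noisy data with ||y - y_delta|| <= delta
    u0, xi  : starting guess and test element xi_n (kept fixed here for simplicity)
    tau     : callable n -> tau_n (regularization parameters)
    theta   : callable (A, t, r) -> theta(A, t) applied to the vector r
    """
    u = u0.copy()
    for n in range(n_max):
        residual = F(u) - y_delta
        sigma_n = delta + C3 * Delta * np.sqrt(tau(n)) / C1   # level from (6)
        if np.linalg.norm(residual) <= sigma_n**mu:           # first transition, rule (5)
            return n, u
        J = Jac(u)
        A = J.T @ J
        rhs = J.T @ (residual - J @ (u - xi))
        u = xi - theta(A, tau(n), rhs)                        # Gauss-Newton step (1)
    return n_max, u
```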
To illustrate the practical aspects related to the convergence result, numerical simulations for a large-scale image de-blurring problem are presented.
References
[B09] Bakushinsky, A. B. [2009] Iterative methods with fuzzy feedback for solving
irregular operator equations, Dokl. Russian Acad. Sci. 428 N5, 1-3.
[BS10a] Bakushinsky, A. B. and Smirnova, A. [2010] Irregular operator equations
by iterative methods with undetermined reverse connection, Journal of Inverse
and Ill-Posed Problems, 18 N2, 147–165.
[BS10b] Bakushinsky, A. B. and Smirnova, A. [2010] Discrepancy principle for generalized GN iterations combined with the reverse connection control, Journal of Inverse and Ill-Posed Problems, 18 N4, 421–432.
Joint work with A. Bakushinsky and Hui Liu.
355
Benjamin Stamm
Laboratoire J.-L. Lions, Paris 6 and CNRS, FR
Domain decomposition for implicit solvation models
Minisymposium Session MSMA: Monday, 12:40 - 13:10, CO3
In this talk, we present a domain decomposition algorithm for implicit solvent
models that are widely used in computational chemistry. We show that, in the
framework of the COSMO model, with van der Waals molecular cavities and classical charge distributions, the electrostatic energy contribution to the solvation
energy, usually computed by solving an integral equation on the whole surface of
the molecular cavity, can be computed more efficiently by using an integral equation formulation of Schwarz’s domain decomposition method for boundary value
problems. In addition, the so-obtained potential energy surface is smooth, which
is a critical property to perform geometry optimization and molecular dynamics
simulations.
We present the methodology, set up the mathematical foundations of the approach,
and present a numerical study of the accuracy and convergence rates of the resulting algorithm. Time permitting, we present the applicability of the method to
large molecular systems of biological interest and illustrate that computing times
and memory requirements scale linearly with respect to the number of atoms.
Joint work with E. Cancès, F. Lipparini, Y. Maday, and B. Mennucci.
356
Simeon Steinig
University of Stuttgart, DE
Convergence Analysis and A Posteriori Error Estimation for State-Constrained
Optimal Control Problems
Minisymposium Session FEPD: Monday, 16:00 - 16:30, CO017
In our talk we present a convergence result for finite element discretisations on
non-quasi-uniform meshes of optimal control problems with constraints involving
the state or the gradient of the state.
In a second step we present an a posteriori error estimator involving the actually computed discrete solutions to a regularised problem that provides an upper
bound for the error up to constants.
Joint work with Prof. A. Roesch, and Prof. K.G. Siebert.
357
Rolf Stenberg
Department of Mathematics and Systems Analysis, FI
Mixed Finite Element Methods for Elasticity
Plenary Session: Wednesday, 08:20 - 09:10, Rolex Learning Center Auditorium
During the last decade the theory of mixed finite element methods has been recast with the aid of differential geometry. This was first done for methods for the scalar second order elliptic equation, e.g. the Raviart-Thomas-Nedelec and Brezzi-Douglas-Marini-Duran-Fortin families. Lately, the theory has been extended to methods for linear elasticity. Both methods with a symmetric approximation for the stress tensor (Fraeijs de Veubeke, Watwood-Hartz, Johnson-Mercier, Arnold-Douglas-Gupta, ...), and methods where the symmetry is imposed weakly (Fraeijs de Veubeke, Arnold-Brezzi-Douglas, Stenberg, Arnold-Falk-Winther, ...),
have been analyzed. The purpose of this talk is to highlight an alternative and more
elementary way of analysis, which, nevertheless, gives optimal error estimates. The
approach is that of using mesh dependent norms, first used by Babuska, Osborn
and Pitkäranta in 1980. In this, the norm used for the “stress” variable is the L2
norm, which has the physical meaning of energy. For the “displacement” variable
the broken H 1 norm (now well-known from Discontinuous Galerkin Methods) is
used. The stability of the methods follows directly from local scaling arguments.
The second ingredient is the so-called “equilibrium condition”, which the methods
fulfil. Using these, the quasi-optimal error estimate for the stress follows by the
classical saddle point theory. For the displacement the analysis yields a superconvergence result for the distance between the L2 projection onto the discrete
space and the finite element solution. This is utilized to postprocess the displacement yielding an approximation of two polynomial degrees higher, with an optimal
convergence rate. This postprocessing turns out to be crucial in a posteriori estimates. Based on the hypercircle idea of Prager and Synge one obtains a posteriori
estimates with explicitly computable constants.
358
Christian Stohrer
Department of Mathematics and Computer Science, University of Basel, CH
Micro-Scales and Long-Time Effects: FE Heterogeneous Multiscale Method for the
Wave Equation
Contributed Session CT2.2: Tuesday, 15:30 - 16:00, CO2
For a limited time the propagation of waves in a highly oscillatory medium is well-described by the non-dispersive homogenized wave equation. With increasing time,
however, the true solution deviates from the classical homogenization limit, as a
large secondary wave train develops unexpectedly. In [1] a finite element heterogeneous multiscale method (FE-HMM) was proposed and convergence to the
homogenized solution was shown. However, neither the homogenized solution, nor
the FE-HMM of [1] capture these dispersive effects. We propose a new FE-HMM
which is able to recover not only the short-scale macroscopic behavior of the wave
field, but also those secondary long-time dispersive effects.
Effective dispersive equation. Let Ω ⊂ R^d be a domain and T > 0. We consider the wave equation
∂tt uε − ∇ · (aε ∇uε) = F   in Ω × (0, T),
uε(x, 0) = f(x),   ∂t uε(x, 0) = g(x)   in Ω,
where aε(x) ∈ (L∞(Ω))^(d×d) is highly oscillatory and where we suppose that aε is symmetric, uniformly elliptic and bounded.
Various formal asymptotic arguments suggest that the linearized improved Boussinesq equation may serve as an effective dispersive equation which describes well
the long-time macroscopic behavior of wave propagation, e.g. [3]. Moreover, for
d = 1 and ε-periodic aε it was shown in [4] that uε converges to the solution of
∂tt(ueff − ε² b ∂xx ueff) − a0 ∂xx ueff = F    (1)
for T ∈ O(ε−2 ). Here a0 denotes the homogenized coefficient from classical homogenization theory and b > 0. The weak formulation of this dispersive effective
equation motivates the design of an FE-HMM, where not only an effective bilinear
but in addition an effective inner product is used.
Multiscale Algorithm We now give a brief description of our new FE-HMM
scheme, more details are given in [2]. First, we generate a macro triangulation TH
and choose an appropriate macro FE space S(Ω, TH). By macro we mean that H ≫ ε is allowed. The FE-HMM solution uH is given by the following problem:
Find uH : [0, T ] → S(Ω, TH ) such that
(∂tt uH, vH)Q + BH(uH, vH) = (F, vH)   for all vH ∈ S(Ω, TH),
uH(0) = fH,   ∂t uH(0) = gH   in Ω,    (2)
where the initial data fH and gH are approximations of f and g in S(Ω, TH). The
bilinear form BH is the standard FE-HMM bilinear form as in [1], but the effective
inner product (·, ·)Q is novel. It consists of two parts: The first part corresponds
to a standard approximation of the L2 -inner product by numerical quadrature,
whereas the second part is a correction needed to capture the long-time dispersive effects. It can be shown that this correction is of order O(ε²), which is in
359
good correspondence with (1). The computation of BH and (·, ·)Q relies on the
numerical solution of micro problems in sampling domains. Since BH is elliptic
and bounded and (·, ·)Q is a true inner product, the FE-HMM is well-defined for
all H > 0.
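Purely as an illustration of how a semi-discrete problem of the form (2) can be advanced in time (a sketch under the assumption that the FE-HMM stiffness matrix and the matrix representing the modified inner product (·,·)Q have already been assembled; all inputs are placeholders, and this is not the authors' implementation), a standard leapfrog step reads:

```python
import numpy as np
from scipy.sparse.linalg import spsolve

def fehmm_leapfrog(M_Q, B_H, F_of_t, f_H, g_H, dt, n_steps):
    """Leapfrog time stepping for  M_Q u'' + B_H u = F(t), cf. problem (2).

    M_Q     : sparse matrix of the modified inner product (.,.)_Q
    B_H     : FE-HMM stiffness matrix
    F_of_t  : callable returning the load vector at time t
    f_H,g_H : discrete initial data
    """
    u_prev = f_H.copy()
    # first step from a Taylor expansion: u_1 = u_0 + dt*g_H + dt^2/2 * u''(0)
    acc0 = spsolve(M_Q.tocsc(), F_of_t(0.0) - B_H @ u_prev)
    u = u_prev + dt * g_H + 0.5 * dt**2 * acc0
    for n in range(1, n_steps):
        acc = spsolve(M_Q.tocsc(), F_of_t(n * dt) - B_H @ u)
        u_next = 2.0 * u - u_prev + dt**2 * acc
        u_prev, u = u, u_next
    return u
```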
The usefulness of the method can be seen in Figure 1. We set aε(x) = √2 + sin(2πx/ε) with ε = 1/50 and computed a reference solution by fully resolving the micro structure. The new FE-HMM succeeds in capturing the long-time effects. In contrast, the solution of the FE-HMM of [1] is unable to capture those dispersive effects.
References
[1] A. Abdulle and M. J. Grote, Finite Element Heterogeneous Multiscale Method
for the Wave Equation, Multiscale Model. Simul., 9 (2011), pp. 766–792.
[2] A. Abdulle, M. J. Grote and C. Stohrer, Finite Element Heterogeneous Multiscale Method for the Wave Equation: Long-Time Effects, in prep.
[3] J. Fish, W. Chen and G. Nagai, Non-local dispersive model for wave propagation in heterogeneous media. Part 1: one-dimensional case, Int. J. Numer.
Meth. Eng., 54 (2002), pp. 331–346.
[4] A. Lamacz, Dispersive Effective Models for Waves in Heterogeneous Media,
Math. Models Methods Appl. Sci., 21 (2011), pp. 1871–1899.
Figure 1: Reference solution (ref.), FE-HMM from [1], and new FE-HMM at T = 100.
Joint work with Assyr Abdulle, and Marcus J. Grote.
360
Martin Stoll
MPI Magdeburg, DE
Fast solvers for Allen-Cahn and Cahn-Hilliard problems
Minisymposium Session CTNL: Tuesday, 12:00 - 12:30, CO015
We consider the efficient solution of an Allen-Cahn variational inequality subject
to volume constraints as well as a Cahn-Hilliard variational inequality both obtained from the gradient flow of a Ginzburg-Landau energy. Using an implicit
time discretization this is formulated as an optimal control problem with pointwise constraints. For both problems we employ a non-smooth potential, which in
turn requires the solution of the optimal control problem via a semi-smooth Newton method. This method then requires the efficient solution of large structured
linear systems. For realistic problems the matrix size easily becomes intractable
for direct methods and we propose the use of preconditioned Krylov subspace
methods. Our goal is to present preconditioners that are tailored towards both
the Allen-Cahn and the Cahn-Hilliard equations and show robust performance
with respect to crucial parameters such as the mesh size or the value of the regularization parameter. Numerical results illustrate the competitiveness of this approach.
Joint work with Luise Blank, Lavinia Sarbu, Jessica Bosch, and Peter Benner.
361
Zdenek Strakos
Faculty of Mathematics and Physics, Charles University in Prague, CZ
Remarks on algebraic computations within numerical solution of partial differential
equations
Minisymposium Session CTNL: Wednesday, 12:00 - 12:30, CO015
Numerical solution of partial differential equations (PDE) starts with a finite dimensional approximation of the mathematical model. This is typically done (e.g.
in the finite element method) using some spatial meshes over the given domain
and by some form of time discretization. The unknown functions are then approximated as linear combinations of a finite number of basis functions, which
leads to a finite dimensional representation of the original model. As the mesh
refines, the state-of-the-art paradigm investigates convergence of the finite dimensional solution to the solution of the original model. Proving such convergence
often requires fine mathematical techniques. Here a priori error analysis indicates
how the error (asymptotically) decreases as the mesh is refined. Resulting bounds
are not computable because they typically involve the unknown solution of the
problem. A posteriori error analysis estimates the size of the actual error of the
computed solution, and it can provide a tool for stopping the computations when
sufficient accuracy is reached. The decisive criterion should be the accuracy to
which the results of computation reflect the properties of the genuine (analytical)
solution of the given PDE.
For difficult problems, the numerical solution process represents a challenge.
Despite the fact that PDE discretisation, a priori and a posteriori error analysis and algebraic (matrix) computations represent well-established fields, there
are important issues which are under investigation. They should not be studied
separately within the particular fields. Modeling with its mathematical analysis
together with discretisation, error estimation and solving the resulting finite dimensional discrete problems should be considered closely related tasks of a single
solution process. A failure in a subtask may not be identifiable within the same
subtask. It may show up later in the form of difficulties in numerical computations
and/or in interpretation of the obtained numerical approximations.
The fact that the state-of-the-art results may offer only partial answers can be
illustrated by the approach to proving convergence of the discrete approximate
solution when the mesh refines using some form of adaptation. The proofs are
based on seeing individual mesh refinement steps (adaptation cycles) as contractions for some error estimators, where the contraction parameter is independent
of the refinement step. This seemingly allows reaching an arbitrary prescribed
accuracy in a finite number of contraction steps. In practical computations, however, arbitrary accuracy cannot be reached, simply because the discretized algebraic problem cannot be solved exactly, and the restriction on the maximal attainable accuracy can be significant for difficult problems.
In practical computations we may not aim at highly accurate numerical solutions of the discretized problems since that could make the whole solution process
unfeasible. The principal questions are therefore what the maximal attainable accuracy of numerical computations is, whether the prescribed user-specified accuracy can be reached, and at what price.
Construction of efficient numerical algorithms requires for challenging problems
a global communication between the information obtained at different (possibly
distant) parts of the solution domain. This can be achieved via incorporating
362
coarse space components (e.g. using domain decomposition or multigrid methods).
Efficient preconditioning can be seen as another tool for achieving the same goal.
Preconditioning should reflect the physical nature of the problem expressed in the
mathematical model. It can be motivated using a functional analytic operator
description (operator preconditioning).
Finally, the construction of fully computable a posteriori error estimators which allow for local error control and comparison of the size of the error from different sources (discretisation, linearization, inexact algebraic computation) is a prerequisite for reliable, robust and efficient adaptive approaches. This requires a combination of rather diverse techniques from functional analysis through numerical
analysis to analysis of iterative matrix computations including effects of rounding
errors.
This contribution will review some approaches to the questions mentioned above.
It will use a combination of the function spaces and algebraic settings, and it will
illustrate recent progress as well as difficulties which need to be resolved.
363
Hiroaki Sumitomo
Keio University, JP
GPU accelerated Symplectic Integrator in FEA for solid continuum
Contributed Session CT1.6: Monday, 18:30 - 19:00, CO017
A GPU-accelerated symplectic integrator in FEA for solid continuum is proposed.
Time integration methods such as the Newmark-beta method are widely used to evaluate the dynamic response of solid continuum in FEA. A set of simultaneous
equations should be solved for every time step in these time integration methods.
This results in high computational cost.
PDS (Particle Discretized Scheme)-FEM[1] could be a solution to avoid this intensive computation. It applies particle discretization to a displacement field; the
domain is decomposed into a set of Voronoi blocks and the non-overlapping characteristic functions for the Voronoi blocks are used to discretize the displacement
function. Each block is connected to adjacent blocks by springs in this discretized
field. Spring constants are equivalent to the corresponding components of the stiffness matrix of general FEM. Thus, PDS-FEM enables us to compute a deformation
of solid continuum using particle simulation, instead of solving the simultaneous
equations. Therefore, time integration required for analysis of deformable solid
continuum can be handled in the framework of analytical dynamics.
Symplectic Integrator is a numerical integration scheme for particle simulation and
is widely used in the discrete element method and molecular dynamics. The Hamiltonian is conserved by the symplectic integrator. The interaction between two particles is
computed by a matrix operation. In FEA for solid continuum, a particle is affected only by the adjacent particles; thus, the matrix operation is represented as an SpMV (Sparse Matrix-Vector product). SpMV consumes most of the computational time in Krylov subspace solvers, and numerous GPU algorithms for it have been proposed [2]. In this study, a GPU-accelerated symplectic integrator was implemented, focusing on the acceleration of SpMV to reduce the computational time.
The main points are shown below:
1. The performance of SpMV was improved compared with the conventional
method. We focused on the form of the matrix in the 3D problem to decrease the amount of data transferred between device memory and cores.
2. All calculations including SpMV, vector update, and applying boundary conditions were implemented only on the GPU. No communication between host
and device memory via a PCI-Express is necessary.
3. Approximately 90 % efficiency was achieved when 3 GPUs were used. Communication was overlapped with computation in SpMV and the total amount
of communication data was reduced by using domain decomposition method.
The symplectic integrator using 3 GPUs achieved up to a 121× speedup compared with an Intel Xeon CPU.
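As a purely illustrative sketch (not from the abstract), the particle update at the heart of such a scheme can be written around a sparse matrix-vector product; here K is a placeholder for the assembled spring/stiffness matrix of the particle discretization, a lumped diagonal mass matrix is assumed, and a semi-implicit (symplectic) Euler step is used in place of the authors' GPU implementation.

```python
import numpy as np
import scipy.sparse as sp

def symplectic_euler_step(K, M_inv, u, v, f_ext, dt):
    """One symplectic (semi-implicit) Euler step for  M u'' = f_ext - K u.

    K     : sparse stiffness matrix (the SpMV below is the dominant cost)
    M_inv : inverse of a lumped, diagonal mass matrix, stored as a vector
    u, v  : displacement and velocity vectors
    """
    a = M_inv * (f_ext - K @ u)   # SpMV: internal forces from adjacent particles
    v = v + dt * a                # update momenta first ...
    u = u + dt * v                # ... then positions (symplectic ordering)
    return u, v
```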
References
[1] M. Hori, K. Oguni and H. Sakaguchi: Proposal of FEM implemented with particle discretization for analysis of failure phenomena, Journal of the Mechanics
and Physics of Solids, 53, 3, 681-703, (2006).
364
[2] F. Vazquez, J. J. Fernandez, E. M. Garzon: A new approach for sparse matrix vector product on NVIDIA GPUs, Concurrency Computat.: Pract. Exper., Vol. 23, 815-826, (2010).
Joint work with Kenji Oguni.
365
Anders Szepessy
Royal Institute of Technology, SE
How accurate is molecular dynamics for crossings of potential surfaces?
Part I: Error estimates
Contributed Session CT4.9: Friday, 08:20 - 08:50, CO124
The difference between the values of observables for the time-independent Schrödinger equation, with matrix-valued potentials, and the values of observables for ab initio Born-Oppenheimer molecular dynamics of the ground state depends on the probability of being in excited states. In this talk I will present a method to determine
the probability to be in excited states from Landau-Zener like dynamic transition
probabilities, based on Ehrenfest molecular dynamics and stability analysis of a
perturbed eigenvalue problem. A perturbation pE, in the dynamic transition probability for a time-dependent Schrödinger WKB-transport equation, yields through resonances a larger probability of the order O(pE^(1/2)) to be in an excited state for
the time-independent Schrödinger equation, in the presence of crossing or nearly
crossing electron potential surfaces. The stability analysis uses Egorov’s theorem
and shows that the approximation error for observables is O(M^(−γ/2) + pE^(1/2)) for
large nuclei-electron mass ratio M , provided the molecular dynamics has an ergodic limit which can be approximated with time averages over the period T and
convergence rate O(T −γ ), for some γ > 0. Numerical simulations verify that the
transition probability pE can be determined from Ehrenfest molecular dynamics
simulations.
Joint work with Håkon Hoel (KAUST), Ashraful Kadir (KTH), Petr Plechac
(Univ. Delaware), and Mattias Sandberg (KTH).
366
Lorenzo Tamellini
Ecole Polytechnique Fédérale de Lausanne / Politecnico di Milano (Italy), CH
Quasi-optimal polynomial approximations for elliptic PDEs with stochastic coefficients
Contributed Session CT3.7: Thursday, 18:00 - 18:30, CO122
Partial differential equations with stochastic coefficients conveniently model problems in which the data of a given PDE (coefficients, forcing terms, boundary
conditions) are affected by uncertainty, due e.g. to measurement errors, limited
data availability or intrinsic variability of the described system. In this talk we
consider a particular case that arises in a number of different engineering fields,
i.e. the case of an elliptic PDE with diffusion coefficient depending on N random
variables y1 , . . . , yN .
In this context, the solution u of the PDE at hand can be seen as a random
function, u = u(y1 , . . . , yN ), and common goals include computing its mean and
variance, or the probability that it exceeds a given threshold; such analysis is
usually referred to as “Uncertainty Quantification”. This could be achieved with
a straightforward Monte Carlo method, which may however be very demanding in
terms of computational costs. Methods based on polynomial approximations of
u(y1 , . . . , yN ) have thus been introduced, aiming at exploiting the possible degree
of regularity of u with respect to y1 , . . . , yN to alleviate the computational burden.
Such polynomial approximations can be obtained e.g. with Galerkin projections
or collocation methods over the parameters space.
Although effective for problems with a moderately low number of random parameters, these methods suffer from a degradation of their performance as the number
of random parameters increases (“curse of dimensionality”). Minimizing the impact of the “curse of dimensionality” is therefore a key point for the application of
polynomial methods to high-dimensional problems.
In this talk we will explore possible strategies to determine efficient polynomial
approximations of u with given computational cost (the so-called “best M terms”
approximation of u). In particular, we will consider a “knapsack approach”, in
which we estimate the cost and the “error reduction” contribution of each possible
component of the polynomial approximation, and then we choose the components
with the highest “error reduction”/cost ratio. The estimates of the “error reduction” are obtained by a mixed “a-priori”/“a-posteriori” approach, in which we first
derive a theoretical bound and then tune it with some inexpensive auxiliary computations. We will present theoretical convergence results obtained for some specific problems as well as numerical results showing the efficiency of the proposed
approach. Extension to the case where N → ∞ will also be discussed.
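A rough illustration of the "knapsack" selection described above (a sketch, not the authors' implementation; the error-reduction and cost estimates in the example are made up):

```python
import heapq
import math

def knapsack_selection(candidates, error_reduction, cost, work_budget):
    """Greedy 'best M terms' selection of polynomial components by profit ratio."""
    # profit = estimated error reduction per unit cost; process largest first
    heap = [(-error_reduction(idx) / cost(idx), idx) for idx in candidates]
    heapq.heapify(heap)
    selected, used = [], 0.0
    while heap:
        _, idx = heapq.heappop(heap)
        if used + cost(idx) <= work_budget:
            selected.append(idx)
            used += cost(idx)
    return selected

# hypothetical example: total degree <= 3 in N = 2 random variables, with
# made-up estimates exp(-|idx|) for the error reduction and 1 + |idx| for the cost
cands = [(i, j) for i in range(4) for j in range(4) if i + j <= 3]
print(knapsack_selection(cands, lambda idx: math.exp(-sum(idx)),
                         lambda idx: 1.0 + sum(idx), work_budget=10.0))
```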
References
[1] J. Beck, F. Nobile, L. Tamellini, R. Tempone, “On the optimal polynomial
approximation of stochastic PDEs by Galerkin and collocation methods”, Math.
Mod. Methods Appl. Sci. (M3AS), 22, 2012.
[2] J. Beck, F. Nobile, L. Tamellini, R. Tempone, “Convergence of quasi-optimal
Stochastic Galerkin Methods for a class of PDEs with random coefficients”, to
appear in Comput. Math. Appl. Also available as MATHICSE Technical report
24/2012, École Polytechnique Fédérale de Lausanne, Switzerland.
367
[3] J. Beck, F. Nobile, L. Tamellini, R. Tempone, “A quasi-optimal sparse grids
procedure for groundwater flows”, to appear in Proceedings of the International
Conference on Spectral and High-Order Methods 2012 (ICOSAHOM’12),
Lecture Notes in Computational Science and Engineering, Springer, 2012.
Also available as MATHICSE Technical report 46/2012, École Polytechnique Fédérale de Lausanne, Switzerland.
[4] L. Tamellini, “Polynomial approximation of PDEs with stochastic coefficients”,
Ph.D. thesis, Politecnico di Milano - Italy.
Joint work with J. Beck, R. Tempone, and F. Nobile.
368
Mattia Tani
Università di Bologna, IT
CG methods in non-standard inner product for saddle-point algebraic linear systems with indefinite preconditioning
Contributed Session CT4.2: Friday, 09:20 - 09:50, CO2
Developing a good solver for saddle-point algebraic linear systems is often a challenging task, due to indefiniteness and poor spectral properties of the coefficient
matrix. In the past few years, the employment of indefinite preconditioners leading to systems which are symmetric (and sometimes even positive definite) in a
non-standard inner product has drawn significant attention.
In its basic form, the method works as follows: given the linear system Ax = b, let
P be a preconditioner and D be a symmetric and positive definite matrix such
that the preconditioned system is symmetric in the inner product defined by D,
that is, DP −1 A = (P −1 A)T D. If, in addition, DP −1 A is positive definite, then
the Conjugate Gradients method in the D−inner product can be employed on the
preconditioned system, and the rate of convergence of the method, measured in the DP⁻¹A-norm of the error, depends only on the (all real) eigenvalues of P⁻¹A.
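A minimal sketch of this idea (illustrative only; A, D and the preconditioner solve are placeholder inputs) is a conjugate gradient iteration on the preconditioned system in which every inner product is weighted by D:

```python
import numpy as np

def cg_in_D_inner_product(A, apply_Pinv, D, b, x0, tol=1e-8, maxit=200):
    """CG for P^{-1} A x = P^{-1} b with inner products weighted by the SPD matrix D.

    Assumes D P^{-1} A is symmetric positive definite, as described in the abstract;
    apply_Pinv(v) must return P^{-1} v (placeholder preconditioner solve).
    """
    x = x0.copy()
    r = apply_Pinv(b - A @ x)              # preconditioned residual P^{-1}(b - A x)
    p = r.copy()
    rDr = r @ (D @ r)
    for _ in range(maxit):
        Bp = apply_Pinv(A @ p)             # action of P^{-1} A
        alpha = rDr / (p @ (D @ Bp))
        x += alpha * p
        r -= alpha * Bp
        rDr_new = r @ (D @ r)
        if np.sqrt(rDr_new) < tol:
            break
        p = r + (rDr_new / rDr) * p
        rDr = rDr_new
    return x
```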
The aim of this presentation is twofold. Firstly, we report on some advances in
the spectral estimates of one of the preconditioned matrices known in the literature [1].
Secondly, we explore the sometimes overlooked relation between the non-standard
minimized norm of the error and the Euclidean one. Particular emphasis is given
to the case when D is close to singular.
References
[1] M. Tani and V. Simoncini, Refined spectral estimates for preconditioned saddle
point linear systems in a non-standard inner product, pp. 1–12, November 2012.
Joint work with Valeria Simoncini.
369
Raul Tempone
MATHEMATICS, KAUST, SA
Numerical Approximation of the Acoustic and Elastic Wave Equations with Stochastic Coefficients
Minisymposium Session UQPD: Thursday, 10:30 - 11:00, CO1
Partial Differential Equations with stochastic coefficients are a suitable tool to
describe systems whose parameters are not completely determined, either because
of measurement errors or intrinsic lack of knowledge on the system. In the case
of linear elliptic PDEs with random inputs, an effective strategy to approximate
the state variables and their statistical moments is to use polynomial-based approximations like the Stochastic Galerkin or Stochastic Collocation methods. These
approximations exploit the high regularity of the state variables with respect to
the input random parameters and for a moderate number of input parameters, are
remarkably more effective than classical sampling methods. However, the performance of polynomial approximations deteriorates as the number of input random
variables increases, an effect known as the curse of dimensionality. To address
this issue, we proposed strategies to construct optimal polynomial spaces and related generalized sparse grids. In this talk we focus instead on the second order
wave equation with a random wave speed and a related generalization to elastodynamics, presenting our recent results from [1] and [2]. Here, the propagation
speed is piecewise smooth in the physical space and depends on a finite number
of random variables. In particular, we show that, unlike for elliptic and parabolic
problems, the solution to hyperbolic problems is not in general analytic with respect to the input random variables. Therefore, the rate of convergence for a
Stochastic Collocation method may in principle only be algebraic. We show that
faster convergence rates are still possible for some quantities of interest and for
the wave solution with particular types of data. These theoretical results agree
with our numerical examples.
References: [1] A Stochastic Collocation Method for the Second Order Wave Equation with a Discontinuous Random Speed, by M. Motamed, F. Nobile and R.
Tempone. Numerische Mathematik, Volume 123, Issue 3, pp. 493-536, 2013.
[2] Analysis and computation of the elastic wave equation with random coefficients,
by M. Motamed, F. Nobile, R. Tempone, 2012.
Joint work with Mohammad Motamed, and Fabio Nobile.
370
Jan ten Thije Boonkkamp
Eindhoven University of Technology, NL
Harmonic complete flux schemes for conservation laws with discontinuous coefficients
Contributed Session CT2.6: Tuesday, 14:30 - 15:00, CO017
The complete flux scheme is a discretization method for conservation laws of
advection-diffusion-reaction type. Basically, the numerical flux is determined from
the solution of a local boundary value problem for the entire equation, including
the source term. Consequently, the integral representation of the flux contains a
homogeneous and an inhomogeneous part, corresponding to the advection-diffusion
operator and the source term, respectively. Suitable quadrature rules give the
numerical flux. We distinguish complete flux schemes for scalar equations and
systems of equations. In the latter case, the coupling between the constituent equations is taken into account in the discretization.
In this talk we consider conservation laws where the diffusion coefficient/matrix
is a discontinuous function of the space coordinate. From its integral representation, we show that the scalar numerical flux at an interface can be considered as
the constant-coefficient complete flux with the diffusion coefficient replaced by the
harmonic average of the diffusion coefficients in the adjacent grid points. Likewise, for systems of equations, we obtain a similar expression for the numerical
flux vector at an interface, containing the (matrix) harmonic average of the diffusion matrices in the adjacent grid points. We collectively refer to these schemes as
harmonic complete flux schemes. The harmonic complete flux schemes turn out to
be more accurate than the standard complete flux schemes. We will demonstrate
the performance of the schemes for several test problems.
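Purely as an illustration of the harmonic averaging referred to above (a one-dimensional, cell-centered sketch; not the authors' scheme for systems, where the matrix harmonic average is used):

```python
def harmonic_interface_coefficient(eps_left, eps_right):
    """Harmonic average of the diffusion coefficients in the two adjacent grid points."""
    return 2.0 * eps_left * eps_right / (eps_left + eps_right)

# e.g. a jump by a factor 100 across the interface:
print(harmonic_interface_coefficient(1.0, 100.0))   # ~1.98, dominated by the small value
```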
Joint work with L. Liu, and J. van Dijk.
371
Francesco Tesei
École Polytechnique Fédérale de Lausanne, CH
Multi Level Monte Carlo methods with Control Variate for elliptic SPDEs
Contributed Session CT4.8: Friday, 08:20 - 08:50, CO123
We consider the numerical approximation of a partial differential equation (PDE)
with random coefficients. Nowadays such problems can be found in many applications in which the lack of available measurements makes an accurate reconstruction
of the coefficients involved in the mathematical model unfeasible. In particular we
focus on a model problem given by an elliptic partial differential equation in which
the randomness is given by the diffusion coefficient, modeled as a random field
with limited spatial regularity. This approach is inspired by the groundwater flow
problem which has a great importance in hydrology: in this context the diffusion
coefficient is given by the permeability of the subsoil and it is often modeled as
a lognormal random field. Several models have been proposed in the literature
leading to realizations having varying spatial smoothness for the covariance functions. In particular, a widely used covariance model is the exponential one that
has realizations with Hölder continuity C^(0,α) with α < 1/2.
Models with low spatial smoothness pose great numerical challenges. The first
step of their numerical approximation consists in building a series expansion of
the input coefficient; we use here a Fourier expansion; whenever the random field
has low regularity, such expansions converge very slowly and this makes the use
of deterministic methods such as Stochastic Collocation on sparse grids highly
problematic since it is not possible to parametrize the problem with a relatively
small number of random variables without a significant loss of accuracy. A natural
choice is to try to solve such problems with a Monte Carlo type method. On the
other hand it is well known that the convergence rate of the standard Monte Carlo
method is quite slow, making it impractical to obtain an accurate solution since
the associated computational cost is given by the number of samples of the random field multiplied by the cost needed to solve a single deterministic PDE which
requires a very fine mesh due to the roughness of the coefficient. Multilevel Monte
Carlo methods have already been proposed in the literature in order to reduce the
variance of the Monte Carlo estimator, and consequently reduce the number of
solves on the fine grid.
In this work we propose to use a multilevel Monte Carlo approach combined with
an additional control variate variance reduction technique on each level. The control variate is obtained as the solution of the PDE with a regularized version of
the lognormal random field as input random data and its mean can be successfully
computed with a Stochastic Collocation method on each level. The solutions of
this regularized problem turn out to be highly positively correlated with the solutions of the original problem.
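To fix ideas, a schematic (and purely illustrative) form of such a multilevel estimator with a level-wise control variate might look as follows; the level solvers, the regularized solver and the collocation means are placeholder callables, and the choice of sample sizes per level is left open.

```python
import numpy as np

def mlmc_with_control_variate(sample_level, sample_regularized,
                              collocation_mean, n_samples):
    """Multilevel Monte Carlo estimator with a control variate on each level.

    sample_level(l, omega)       : quantity of interest Q_l for sample omega (fine solver)
    sample_regularized(l, omega) : same quantity for the regularized (smoothed) field
    collocation_mean(l)          : mean of the regularized quantity on level l (collocation)
    n_samples                    : list with the number of samples per level
    """
    rng = np.random.default_rng(0)
    estimate = 0.0
    for l, n_l in enumerate(n_samples):
        diffs = []
        for _ in range(n_l):
            omega = rng.standard_normal()    # placeholder random input
            q_fine = sample_level(l, omega) - (sample_level(l - 1, omega) if l > 0 else 0.0)
            q_reg = sample_regularized(l, omega) - (sample_regularized(l - 1, omega) if l > 0 else 0.0)
            diffs.append(q_fine - q_reg)     # the control variate removes the correlated part
        # add back the (cheaply computed) mean of the control variate on this level
        cv_mean = collocation_mean(l) - (collocation_mean(l - 1) if l > 0 else 0.0)
        estimate += np.mean(diffs) + cv_mean
    return estimate
```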
Within this Monte Carlo framework the choice of a suitable regularized version of
the input random field is the key element of this method; we propose to regularize
the random field by convolving the log-permeability with a Gaussian kernel. We
analyze the mean square error of the estimator and the overall complexity of the
algorithm. We also propose possible choices of the regularization parameter and of
the number of samples per grid so as to equilibrate the space discretization error,
the statistical error and the error in the computation of the expected value of the
control variate by Stochastic Collocation. Numerical examples demonstrate the
effectiveness of the method. A comparison with the standard Multi Level Monte
372
Carlo method is also presented.
Joint work with Fabio Nobile, Raul Tempone, and Erik von Schwerin.
373
Benjamin Tews
University of Kiel, DE
Optimal control of incompressible two-phase flows
Minisymposium Session ANMF: Monday, 15:00 - 15:30, CO1
We consider an optimal control problem of two incompressible and immiscible
Newtonian fluids. The motion of the interface between these two fluids can be
captured by a phase field model or a level set method. Both methods are the subject of this talk. The state equation includes surface tension and is discretized by a
discontinuous Galerkin scheme in time and a continuous Galerkin scheme in space.
In order to resolve the interface propagation we also apply adaptive finite elements
in space and time. We derive first order optimality conditions including the adjoint equation, which is also formulated in a strong sense. The optimality system
on the discrete level is solved by Newton’s method. In the numerical examples we
compare the level set method with a phase field model.
Joint work with Malte Braack.
374
Münevver Tezer-Sezgin
Middle East Technical University, TR
DRBEM Solution of Full MHD and Temperature Equations in a Lid-driven Cavity
Contributed Session CT1.3: Monday, 17:30 - 18:00, CO3
This study proposes the dual reciprocity boundary element method (DRBEM)
solution for full magnetohydrodynamics (MHD) equations coupled with the heat
transfer in a lid-driven square cavity by means of the Boussinesq approximation.
The two-dimensional, unsteady, laminar, incompressible MHD flow and energy
equations are given in terms of non-dimensional stream function ψ, temperature
T , induced magnetic field components Bx , By , and vorticity w as
∇²ψ = −w,    (1)
(1/(PrRe)) ∇²T = ∂T/∂t + u ∂T/∂x + v ∂T/∂y,    (2)
(1/Rem) ∇²Bx = ∂Bx/∂t + u ∂Bx/∂x + v ∂Bx/∂y − Bx ∂u/∂x − By ∂u/∂y,    (3)
(1/Rem) ∇²By = ∂By/∂t + u ∂By/∂x + v ∂By/∂y − Bx ∂v/∂x − By ∂v/∂y,    (4)
(1/Re) ∇²w = ∂w/∂t + u ∂w/∂x + v ∂w/∂y − (Ra/(PrRe²)) ∂T/∂x
             − (Ha²/(ReRem)) [ Bx ∂/∂x (∂By/∂x − ∂Bx/∂y) + By ∂/∂y (∂By/∂x − ∂Bx/∂y) ],    (5)
where u = ∂ψ/∂y, v = −∂ψ/∂x and w = ∂v/∂x − ∂u/∂y.
The bottom wall of the unit square cavity is the cold wall Tc = −0.5, and the top
wall is hot, Th = 0.5. The top lid moves with a velocity u = 1 and the no-slip
condition is imposed on the other walls. Externally applied magnetic field with
an intensity B0 = (0, 1) is in +y-direction.
In the DRBEM procedure, the right hand sides of equations (1)-(5) are approximated by using radial basis functions f = 1 + r + ... + r^n, which are related to the Laplacian with particular solution û as ∇²û = f. Thus, the fundamental solution of the Laplace equation (u* = ln r/(2π)) is made use of for both sides of equations (1)-(5).
Discretizing the boundary of the cavity by using N linear boundary elements and taking the arbitrarily required L interior points, systems of equations
Hϕ − Gϕq = (HÛ − GQ̂)F⁻¹b,    (6)
are obtained, where H and G are BEM matrices containing the boundary integrals
of u∗ and q ∗ = ∂u∗ /∂n evaluated at the boundary nodes, respectively. The vectors
ϕ and ϕq = ∂ϕ/∂n represent the known and unknown information of ψ, T, Bx , By
or w at the nodes. Û and Q̂ are constructed columnwise from ûj and q̂j = ∂ûj/∂n, respectively, and are matrices of size (N + L) × (N + L). The vector b represents
collocated values of right hand sides of equations (1)-(5), and F is the (N +
L) × (N + L) coordinate matrix containing radial basis functions fj ’s as columns
evaluated at the N + L points. All the space derivatives are calculated by using the coordinate matrix F, and the time derivatives are discretized using the backward Euler formula.
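Schematically, once the matrices H, G, Û, Q̂, F and the collocated right-hand side b of one of the equations (1)-(5) are available, the algebra behind (6) reduces to dense linear solves; the sketch below is illustrative only, with placeholder matrices and a deliberately simplified treatment of the boundary conditions (known values everywhere, unknown normal derivatives).

```python
import numpy as np

def drbem_rhs(H, G, U_hat, Q_hat, F, b):
    """Right-hand side of (6): (H*U_hat - G*Q_hat) F^{-1} b."""
    return (H @ U_hat - G @ Q_hat) @ np.linalg.solve(F, b)

def drbem_solve_flux(H, G, rhs, phi_known):
    """If phi is known at all nodes, (6) reduces to  -G * phi_q = rhs - H * phi."""
    return np.linalg.solve(-G, rhs - H @ phi_known)
```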
f = 1 + r, N = 160, and L = 1521 are used with 16−point Gaussian integration
in the construction of the H and G matrices. Computations are carried out for Prandtl
375
number P r = 0.1. Unknown boundary conditions for vorticity are extracted from
its definition by using coordinate matrix F .
The results are depicted with respect to varying physical parameters such as
Reynolds (Re), magnetic Reynolds (Rem), Hartmann (Ha) and Rayleigh (Ra)
numbers. The increase in Re causes new cells to emerge at the bottom corners of the cavity, and convective heat transfer develops. Heat transfer passes
to the conductive mode due to the decrease in buoyancy, and streamlines are divided into counter rotating cells inside the cavity as Ra increases. The circulation
in induced magnetic field lines with the increase in Rem shows the dominance of
convection terms in the induction equations. The well-known MHD characteristics with the increase in Ha, namely the flattening tendency in the velocity and the suppression of convective heat transfer, are well observed (Figure 1). In the
figure, the visualized contours are streamlines, isotherms, vorticity and induced
magnetic field lines from left to right; Ha increases from top to bottom, with Ha = 10 and Ha = 100.
As Ha increases, boundary layer formation starts in the flow and vorticity concentrates completely on the moving lid. Induced magnetic field weakens due to
the dominance of external magnetic field applied in +y direction.
DRBEM is an efficient, boundary-only numerical scheme for solving the MHD heat transfer problem in a lid-driven cavity.
Figure 1: Re = 400, Rem = 40, Ra = 1000
Joint work with Bengisen Pekmen.
376
Mechthild Thalhammer
University of Innsbruck, AT
Multi-revolution composition methods for time-dependent Schrödinger equations
Minisymposium Session ASHO: Tuesday, 11:00 - 11:30, CO123
The error behaviour of the recently introduced multi-revolution composition methods is analysed for a class of highly oscillatory evolution equations posed in Banach
spaces. The scope of applications in particular includes time-dependent linear
Schrödinger equations, where the realisation of the composition approach is based
on time-splitting pseudo-spectral methods. The theoretical error bounds for the
resulting space and time discretisations are confirmed by numerical examples.
Joint work with Philippe Chartier and Florian Mehats.
377
Lutz Tobiska
Otto von Guericke University, DE
On stability properties of different variants of local projection type stabilizations
Minisymposium Session ANMF: Monday, 14:30 - 15:00, CO1
The local projection stabilization (LPS) is one way to stabilize standard Galerkin
finite element methods for solving convection-dominated convection-diffusion equations. In recent years, different variants have been developed and analyzed, e.g.,
the one-level LPS, the two-level LPS, the LPS with exponential enrichments, the
LPS with overlapping projection spaces. In the talk we will discuss the different
stabilization properties and compare them with the popular streamline diffusion
method (SDFEM).
378
Lutz Tobiska
Otto von Guericke University, DE
Influence of surfactants on the dynamics of droplets
Minisymposium Session GEOP: Wednesday, 10:30 - 11:00, CO122
We propose a finite element method for studying the influence of surfactants on
the dynamics of droplets. The mathematical model for a free surface flow with
surfactants consists of the Navier-Stokes equation and the surfactant concentration equation in the bulk coupled with the transport equation on the evolving free
surface [1,2]. In the proposed finite element scheme, the free surface is tracked by
an arbitrary Lagrangian-Eulerian (ALE) approach, and the coupled partial differential equations are spatially discretized by finite elements. This approach can be
extended to consider two-phase flows. We prefer discontinuous pressure approximations to suppress spurious velocities and to get a better local mass conservation.
However, the use of fully discontinuous pressure approximations leads to too many
additional degrees of freedom to satisfy the Babuška-Brezzi-condition between the
spaces approximating velocity and pressure. Therefore, a relaxed discontinuous
pressure approximation is used for which the pressure in each phase is continuous.
Numerical experiments for a growing droplet below a capillary, for a rising bubble,
and for Taylor flows [3] will be presented.
References
[1] S. Ganesan, L. Tobiska, Arbitrary Lagrangian-Eulerian finite element method
for computation of two-phase flows with soluble surfactants. J. Comp. Physics
231(2012), 3685–3702
[2] S. Ganesan, A. Hahn, K. Held, L. Tobiska, An accurate numerical method for
computation of two-phase flows with surfactants. European Congress on Computational Methods in Applied Sciences and Engineering (ECCOMAS 2012), J. Eberhardsteiner et al. (eds.), Vienna, Austria, September 10-14, CD-ROM, ISBN: 978-3-9502481-9-7
[3] S. Aland, S. Boden, A. Hahn, F. Klingbeil, M. Weismann, S. Weller, Quantitative comparison of Taylor flow simulations based on sharp- and diffuse-interface
models. Int. J. Numer. Methods in Fluids (submitted)
Joint work with S. Ganesan, A. Hahn, and K. Held.
379
Rony Touma
Lebanese American University, LB
Central finite volume schemes on nonuniform grids and applications
Contributed Session CT2.6: Tuesday, 15:30 - 16:00, CO017
In this work we develop a new unstaggered central scheme on nonuniform grids for
the numerical solution of general hyperbolic systems of conservation laws in one
space dimension. Many problems arising in physics and engineering sciences can be
formulated mathematically using hyperbolic systems of conservation laws or, in the
case of systems with a source term, hyperbolic systems of balance laws. Such problems occur for example in aerodynamics, magnetohydrodynamics (MHD), hydrodynamics and many more. Central schemes are particularly attractive for solving
hyperbolic systems as they avoid the resolution of the Riemann problems arising
at the cell interfaces, thanks to a layer of staggered cells. Central schemes first
appeared with the staggered version of Lax-Friedrichs’ scheme, where a piecewise
constant numerical solution was alternatingly evolved on two staggered grids. The
resulting scheme is first-order accurate with a stability number of 0.5. In 1990
Nessyahu and Tadmor (NT) [2] presented a predictor-corrector type, second-order
accurate scheme that is an extension of the Lax-Friedrichs scheme [1] which evolves
a piecewise linear numerical solution on two staggered grids. The NT scheme uses
a first-degree Taylor expansion in time to determine the numerical solution at
the intermediate time; furthermore slopes limiting reduces spurious oscillations in
the vicinity of discontinuities. In this work we propose a new one-dimensional
unstaggered central scheme on nonuniform grids for the numerical solution of homogeneous hyperbolic systems of conservation laws of the form
ut + f(u)x = 0,   u(x, t = 0) = u0(x),    (1)
where u(x, t) = (u1, u2, ..., up) is the unknown p-component vector and f(u)
is the flux vector. System (1) is assumed to be hyperbolic, i.e., the Jacobian
matrix ∂f /∂u has p real eigenvalues and p linearly independent eigenvectors. We
discretize the computational domain [a, b] using n subintervals centered at the
nodes xk with different lengths ∆xk for k = 1, · · · , n. The proposed scheme evolves
a piecewise linear numerical solution Li (x, t) defined at the cell centers xi of the
control cells Ci = [xi−1/2 , xi+1/2 ] of a nonuniform grid, and avoids the resolution of
the Riemann problems arising at the cell interfaces, thanks to a layer of staggered
cells used implicitly. The evolved piecewise linear interpolant is defined by:
u(x, tn) ≈ Li(x, tn) = ui^n + (x − xi)(ui^n)′,   ∀x ∈ Ci,    (2)
where (ui^n)′ ≈ ∂u(x, tn)/∂x |x=xi approximates the slope to first-order accuracy. Spurious oscillations are avoided using a slope limiting procedure. The developed
scheme is then validated and used to solve classical problems arising in gas dynamics and in hydrodynamics. The obtained numerical results are in perfect agreement with corresponding ones appearing in the recent literature, thus confirming
the efficiency and potential of the proposed method to handle two-phase gas-solid flow problems.
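A minimal sketch of the piecewise linear reconstruction (2) with limited slopes on a nonuniform grid (illustrative only; the minmod limiter used here is one possible choice of slope limiter):

```python
import numpy as np

def minmod(a, b):
    """Standard minmod limiter, applied componentwise."""
    return np.where(a * b > 0.0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def limited_slopes(x, u):
    """First-order accurate, limited slopes (u_i^n)' on a nonuniform grid.

    x : cell-center coordinates x_k (nonuniform spacing allowed)
    u : cell-center values u_k^n
    """
    s = np.zeros_like(u)
    fwd = (u[2:] - u[1:-1]) / (x[2:] - x[1:-1])      # one-sided difference to the right
    bwd = (u[1:-1] - u[:-2]) / (x[1:-1] - x[:-2])    # one-sided difference to the left
    s[1:-1] = minmod(fwd, bwd)
    return s

def reconstruct(x, u, s, xq, i):
    """Evaluate L_i(x, t^n) = u_i^n + (x - x_i) (u_i^n)'  at a point xq in cell C_i, cf. (2)."""
    return u[i] + (xq - x[i]) * s[i]
```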
380
References
[1] P.D. Lax, Weak solutions of nonlinear hyperbolic equations and their numerical computation, Comm. Pure and Applied Math., 7 (1954), 159-193.
[2] H. Nessyahu and E. Tadmor, Non-oscillatory central differencing for hyperbolic
conservation laws, J. Comp. Phys., 87, 2, (1990), 408-463.
381
Paolo Tricerri
CMCS-EPFL - CEMAT-IST, CH
Fluid-Structure Interaction simulation of cerebral aneurysm using anisotropic model
for the arterial wall
Minisymposium Session NFSI: Thursday, 15:00 - 15:30, CO122
Brain aneurysms are abnormal dilatations of the cerebral arterial wall originated
from a localized weakening of the arterial tissue. It is estimated that millions of people around the world are affected by cerebral aneurysms, for which the incidence
of patient death or serious morbidity following aneurysm rupture justifies the increasing attention that this disease is receiving, similarly to other cardiovascular
disorders. The coupling of the arterial tissue models (i.e. the structure) with
blood flow models (i.e. the fluid) together with the simulations of the coupled
Fluid-Structure Interaction (FSI) system aims at providing a better understanding of the physiological phenomena and, moreover, it can provide a flexible, reliable, and noninvasive predictive tool for medical decisions. Different aspects of
this illness have been addressed trying to correlate the onset or the evolution of
cerebral aneurysms to specific haemodynamic or morphological conditions [1]. In
literature, many studies have analyzed the influence on the numerical results of
the coupled FSI problem for different modeling choices (ranging from the blood
flow model [6] to parametric studies on the boundary conditions applied on the
external wall of the aneurysm to simulate the surrounding tissues [7]). However,
only limited investigations have been focused on the discussion and choice of the
arterial wall model [4, 5]. Indeed, typically the arterial wall is described as an
isotropic material even though the mechanical response of the tissue is strongly
anisotropic as experimentally observed.
This work aims at the numerical simulation of the coupled FSI system in the case of
cerebral aneurysms when considering anisotropic models for the arterial wall. More
precisely, the arterial tissue will be described by an anisotropic constitutive law
[2, 3] in order to model the highly nonlinear and anisotropic mechanical response
of the tissue. The blood flow will be described by the Navier-Stokes equations and
the coupled FSI problem will be solved using a monolithic approach. Physiological
boundary conditions will be applied at the inlet and outlet of the fluid domain in
order to properly describe the blood flows [7]. Idealized geometries that mimic
anatomically realistic geometries of cerebral aneurysms are considered. Indeed,
when idealized geometries of aneurysms are used, the spatial distribution of the
collagen fibers can be analytically prescribed and the containment effect on the
deformation field due to the collagen fibers can be analyzed. We will investigate the
role of the structural models by analyzing the spatial distribution of quantities of
interests that are typically associated with the development of cerebral aneurysms
(e.g. wall shear stress, wall shear stress gradient, wall stresses, flow impingement).
Keywords: arterial tissue structural models, fluid-structure interaction, numerical
simulations, anisotropic constitutive law.
References
[1] Sforza D., Putman C.M., Cebral J.R., Haemodynamics of cerebral aneurysms,
Annual Reviews of Fluid Mechanics, 41 (2009), 91-107.
382
[2] Dalong L., Robertson A.M., A structural multi-mechanism constitutive equation for cerebral arterial tissue, International Journal of Solids and Structures
46 (2009), 2920-2928.
[3] Holzapfel G.A., Ogden R.W., Constitutive modelling of arteries. Proceedings
of the Royal Society A, 2010 466:1551-1597.
[4] Torii R., Oshima M., Kobayashi T., Takagi K., Tezduyar T.E., FluidStructure Interaction modelling of a patient-specific cerebral aneurysm: influence of structural modelling, Computational Mechanics, 43 (2008), 151-159.
[5] Torii R., Oshima M., Kobayashi T., Takagi K., Tezduyar T.E., Influence
of wall thickness on fluid-structure interaction computations of cerebral
aneurysms, International Journal for Numerical Methods in Biomedical Engineering, 26 (2010), 336-347.
[6] Cebral J.R., Mut F., Weir J., Putman C.M., Association of haemodynamic
characteristics and cerebral aneurysm rupture, American Journal of Neuroradiology, 32 (2011), 264-270.
[7] Malossi A.C.I., Partitioned solution of geometrical multiscale problems for
the cardiovascular system: models, algorithms, and applications, PhD Thesis,
École Polytechnique Fédérale de Lausanne, Switzerland, 2012.
Joint work with Luca Dedè, Adélia Sequeira, and Alfio Quarteroni.
383
Julie Tryoen
INRIA Bordeaux Sud-Ouest, FR
A semi-intrusive stochastic inverse method for uncertainty characterization and
propagation in hyperbolic problems
Contributed Session CT2.7: Tuesday, 15:00 - 15:30, CO122
Let U ≡ U (x, t, D) be the solution of a forward hyperbolic model M depending
on space, time and an uncertain data vector D = {D1, ..., DN} ∈ Ξ ⊂ RN. Relying
on nobs observations of the solution {y 1 , . . . , y nobs } ∈ (Rm )nobs corresponding to
measurement points (x1 , t1 ), . . . , (xnobs , tnobs ), our interest is to build a probability
description of D which can be used in an efficient way for uncertainty propagation.
To this end, we rely on a Bayesian setting, which provides rigorous tools to solve
such problems, namely stochastic inverse ones, taking into account measurement
and/or model uncertainty [Kaipio and Somersalo 2010]. The difference between
the predicted solution and the observed one is supposed to be described by the
following additive relation
y^k = U(x^k, t^k, D) + e^k,   k = 1, ..., nobs,    (1)
where the measurement/model error ek ∈ Rm is a realization of a random vector
with probability distribution pe (commonly taken as multinormal). Let ppr be a
prior probability distribution for D, non-informative in the case of very limited
prior knowledge on D. Supposing independent measurements and applying Bayes’
theorem, the posterior probability distribution for D follows:
ppost(D) ∝ ppr(D) ∏_{k=1}^{nobs} pe(y^k − U(x^k, t^k, D)).    (2)
To avoid complex numerical integrations, Markov chain Monte Carlo (MCMC) methods are used to generate iterative samples that behave asymptotically as ppost [Gilks et
al. 1996]. From a MCMC sample, one can then estimate moments of D, marginal
distributions of its components from density kernel estimations, or posterior confidence intervals.
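For illustration only (a sketch, not the authors' implementation), a random-walk Metropolis sampler targeting the posterior (2) can be written as follows; the forward model, the prior and the noise log-densities are placeholder callables.

```python
import numpy as np

def metropolis_posterior(log_prior, log_noise, forward, data, D0,
                         step=0.1, n_iter=5000, seed=0):
    """Random-walk Metropolis sampling of p_post(D) in (2).

    log_prior(D)  : log p_pr(D)
    log_noise(e)  : log p_e(e), e.g. a multinormal log-density
    forward(k, D) : predicted observation U(x_k, t_k, D)
    data          : list of observations y_k
    """
    rng = np.random.default_rng(seed)

    def log_post(D):
        return log_prior(D) + sum(log_noise(y - forward(k, D))
                                  for k, y in enumerate(data))

    D = np.asarray(D0, dtype=float)
    lp = log_post(D)
    chain = []
    for _ in range(n_iter):
        D_prop = D + step * rng.standard_normal(D.shape)   # random-walk proposal
        lp_prop = log_post(D_prop)
        if np.log(rng.uniform()) < lp_prop - lp:           # Metropolis accept/reject
            D, lp = D_prop, lp_prop
        chain.append(D.copy())
    return np.array(chain)
```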
We would like to propagate the uncertainty on D obtained from Bayesian inference into
the solution U of the forward model. The classic approach described above supplies
samples of posterior distribution of D, from which uncertainty can be propagated
by a classic Monte Carlo approach. Despite its robustness, this method presents
a very low convergence rate in the computation of statistics of U with respect to
the number of realizations. Recently, promising methods have been proposed to
deal with uncertainty propagation in hyperbolic problems where stochastic discontinuities can appear in finite time [Lin et al. 2006, Lin et al. 2008, Poette et
al. 2009, Tryoen et al. 2010], relying on a stochastic spectral representation of
the output [Ghanem and Spanos 2003]. A semi-intrusive method has also been
introduced by Abgrall and Congedo [2013], to propagate input data uncertainty of
any probability distribution into hyperbolic models. The purpose of this study is
to couple the latter approach with the Bayesian framework; to this end, the object
to infer is no more the input data vector D, but a description via their conditional
expectancies on a partition of the stochastic domain Ω = ∪P
j=1 Ωj , where P is the
number of stochastic elements. More precisely, we supposed for the time being the
components of
independent, and we rebuild the probability distributions of
R D as
−1
E(Di |Ωj ) = Ωj FD
(ω)dω, for i = 1, . . . , N and j = 1, . . . , P , where FDi is the
i
384
cumulative distribution function of D_i. The methodology is assessed on a quasi-1D Euler test case, with subsonic boundary conditions and an uncertainty on the output pressure.
References
[Kaipio and Somersalo 2010] Kaipio, J. and Somersalo, E., "Statistical and Computational Inverse Problems", Applied Mathematical Sciences, Vol. 160, Springer, 2010.
[Gilks et al. 1996] Gilks, W., Richardson, S., and Spiegelhalter, D., “Markov Chain
Monte Carlo in Practice”, Chapman & Hall, 1996.
[Lin et al. 2006] Lin, G., Su, C.-H., and Karniadakis, G., “Predicting shock dynamics in the presence of uncertainties”, J. Comput. Phys. 217, no. 1, p. 260–276,
2006.
[Lin et al. 2008] Lin, G., Su, C.-H., and Karniadakis, G., “Stochastic modeling of
random roughness in shock scattering problems : theory and simulations”, Comput. Methods Appl. Mech. Engrg. 197, no. 43-44, p. 3420–3434, 2008.
[Poette et al. 2009] Poette, G., Després, B., and Lucor, D., “Uncertainty quantification for systems of conservation laws”, J. Comput. Phys. 228, no. 7, p.
2443–2467, 2009.
[Tryoen et al. 2010] Tryoen, J., Le Maître, O., Ndjinga M., and Ern, A., “Intrusive
Galerkin methods with upwinding for uncertain nonlinear hyperbolic systems”, J.
Comput. Phys. 229, no. 18, p. 6485–6511, 2010.
[Ghanem and Spanos 2003] Ghanem, R. and Spanos, P., “Stochastic Finite Elements : A Spectral Approach”, Dover, 2nd edition, 2003.
[Abgrall and Congedo 2013] Abgrall, R. and Congedo, P.M., “A semi-intrusive
deterministic approach to uncertainty quantification in non-linear fluid flow problems”, J. Comput. Phys. 235, p. 828–845, 2013.
Joint work with P.M. Congedo, and R. Abgrall.
385
Stefan Turek
TU Dortmund, Applied Mathematics and Numerics, DE
3D Level Set FEM techniques for (non-Newtonian) multiphase flow problems with
application to pneumatic extension nozzles and micro-encapsulation
Minisymposium Session FREE: Tuesday, 11:30 - 12:00, CO2
Multiphase flow problems are very important in many industrial applications, and their accurate, robust and efficient numerical simulation has been the object of numerous research and simulation projects for many years. Particularly in the case of pneumatic extension nozzles, which are often used for the generation of droplets, the accurate description of the interaction between the dispersed liquid phase and the surrounding gas phase is essential, especially if uniform droplet sizes are required. In this work, implementation details of the Level Set approach in the 3D parallel FEM-based open source software package FeatFlow will be shown. Special emphasis will be placed on the surface tension effects and on the interface reconstruction, which on the one hand guarantees the exact identification of the interface and on the other hand offers the advantage of exploiting the underlying multilevel structures to perform an efficient, octree-fashioned reinitialization of the Level Set field. Validation of the corresponding 3D code is to be presented with respect to numerical test cases and experimental data. The corresponding applications involve the classical rising bubble problem for various parameters and the generation of droplets from a viscous liquid jet in a coflowing surrounding fluid. Moreover, numerical simulations involving different regulation strategies are to be presented in order to reveal the possibilities for regulating the underlying droplet generation process, in terms of the resulting monodisperse droplet sizes, by means of periodic flow-rate modulations of the dispersed phase. Preliminary results of additional extensions to the developed 3D multiphase flow solver, such as non-Newtonian (shear-dependent) rheological models and the Fictitious Boundary Method (FBM) based particulate flow module, are to be presented in the context of particle encapsulation processes.
Joint work with Otto Mierka.
386
Eugene Tyrtyshnikov
Institute of Numerical Mathematics of Russian Academy of Sciences, RU
Tensor decompositions in the drug design optimization problems
Minisymposium Session LRTT: Tuesday, 11:30 - 12:00, CO3
Tensor decompositions, especially the Tensor Train (TT) and Hierarchical Tucker (HT) formats, are rapidly becoming useful and widely used computational instruments in numerical analysis and numerous applications. As soon as the input vectors are presented in the TT (HT) format, basic algebraic operations can be efficiently implemented in the same format, at least in a large class of practical problems. A crucial issue, however, is to acquire the input vectors in this format. In many cases this can be accomplished via the TT-CROSS algorithm, which is a far-reaching extension of the matrix cross interpolation algorithms. We discuss some properties of TT-CROSS that allow us to adapt it to the needs of solving a global optimization problem. After that, we present a new global optimization method based on special transformations of the scoring functional and TT decompositions of multi-index arrays of values of the scoring functional.
We show how this new method works in the direct docking problem, which is
a problem of accommodating a ligand molecule into a larger target protein so
that the interaction energy is minimized. The degrees of freedom in this problem
amount to several tens. The most popular techniques are genetic algorithms, Monte Carlo methods, and molecular dynamics. We have found that the new method can
be up to one hundred times faster on typical protein-ligand complexes.
Joint work with Dmitry Zheltkov.
387
André Uschmajew
EPFL, ANCHP, CH
On asymptotic complexity of hierarchical Tucker approximation in L^2 Sobolev classes
Minisymposium Session LRTT: Monday, 12:10 - 12:40, CO1
In this talk we would like to draw attention to the asymptotic convergence rate of hierarchical Tucker approximations of functions from unit balls in certain periodic Sobolev classes in L^2, with respect to the hierarchical rank. In particular, we consider the isotropic spaces and the spaces of bounded mixed derivatives. The (almost) exact rates can be determined straightforwardly from the quasi-optimality of the higher-order SVD (HOSVD) approximation and known results on bilinear approximation rates. These latter results are due to Temlyakov. Based on the convergence rate, the asymptotic complexity required to achieve a prescribed accuracy can be estimated. When d ≥ 3, the storage complexity is dominated by the storage cost of the transfer tensors. This is different from the case d = 2 recently discussed by Griebel and Harbrecht, where the storage of the HOSVD bases in the leaves is most expensive. In any case, for functions of dominating mixed smoothness the estimates are worse than those obtained using sparse grids, which, however, has to be expected.
Joint work with Reinhold Schneider.
388
Kristoffer Van der Zee
Eindhoven University of Technology, NL
Adaptive Modeling for Partitioned-Domain Concurrent Continuum Models
Minisymposium Session SMAP: Monday, 15:30 - 16:00, CO015
In this contribution adaptive modeling strategies are considered for the control of
modeling errors in so-called partitioned-domain concurrent multiscale models. In
these models, the exact fine model is considered intractable to solve throughout
the entire domain. It is therefore replaced by an approximate multiscale model
where the fine model is only solved in a small subdomain, and a coarse model
is employed in the remainder. We review two approaches to adaptively improve
the approximate model in a general framework assuming that the fine and coarse
model are described by (local) continuum models separated by a sharp interface;
see [1]. In the classical approach [2] an a posteriori error estimate is computed,
and the model is improved in those regions with the largest contributions to this
estimate. In the recent shape-derivative approach [3] the interface between the
fine and coarse model is perturbed so as to decrease a shape functional associated
with the error. Several numerical experiments illustrate the strategies.
[1] K.G. van der Zee, S. Prudhomme and J.T. Oden, Adaptive modeling for
partitioned-domain multiscale continuum models: A posteriori estimates and shape-derivative strategies, submitted
[2] J. T. Oden and S. Prudhomme. Estimation of modeling error in computational
mechanics. J. Comput. Phys., 182:496–515, 2002.
[3] H. Ben Dhia, L. Chamoin, J. T. Oden, and S. Prudhomme. A new adaptive
modeling strategy based on optimal control for atomic-to-continuum coupling simulations. Comput. Methods Appl. Mech. Engrg., 200:2675–2696, 2011.
Joint work with Kristoffer G. van der Zee, Serge Prudhomme, and J. Tinsley Oden.
389
Pierre Vandergheynst
EPFL, CH
Compressive Source Separation: an efficient model for large scale multichannel
data processing
Minisymposium Session ACDA: Monday, 16:00 - 16:30, CO122
Hyperspectral imaging (HSI) systems produce large amounts of data and efficient
compression is therefore crucial in their design. However current HSI compression
techniques, such as those based on 3D wavelets, rely on computationally costly
encoding algorithms that are challenging, indeed often impossible, to implement
on embedded sensor systems. Recently, Compressive Sensing (CS) has provided
an efficient alternative to traditional transform coding, allowing the use of very
simple encoders and moving the computational burden to the decoder. The efficiency of CS crucially depends on well-designed sparse signal models as well as
provably correct decoding algorithms. While the literature describes several applications of CS to HSI using direct extensions of 2D image sparse models, few
works attempt to exploit the strong joint spatial and spectral correlations typical of HSI. We propose and analyze a new model based on the assumption that the
whole hyperspectral signal is composed of a linear combination of few sources, each
of which has a specific spectral signature, and that the spatial abundance maps of
these sources are themselves piecewise smooth and therefore efficiently encoded via
typical sparse models. We derive new sampling schemes exploiting this assumption and give theoretical lower bounds on the number of measurements required
to reconstruct the HSI and recover its source model parameters. This allows us
to segment HSI data into their source abundance maps directly from compressed
measurements. We also propose efficient optimization algorithms and perform
extensive experimentation on synthetic and real datasets, which reveals that our
approach can be used to encode HSI with far fewer measurements and less computational effort than traditional CS methods. Finally, we illustrate how our model
can be used for various other applications, for instance molecular spectroscopy or
functional brain data processing.
Figure 1: Abundance maps estimated from measuring only 3 percent of the total
data in a hyperspectral imaging application.
Joint work with Mohammad Golbabaee.
390
Nick Vannieuwenhoven
KU Leuven, BE
Parallel tensor-vector multiplication using blocking
Contributed Session CT3.3: Thursday, 18:00 - 18:30, CO3
Computing the product of a dense order-d tensor A with d − 1 vectors, i.e.,

v_d = (v_1, v_2, …, v_{d−1}, I)^T · A,
is the key or most costly operation in several tensor decomposition algorithms; it is
vital for computing an orthogonal Tucker decomposition (Tucker, 1966) using the
tensor-Krylov method (Savas and Eldén, 2013); for computing the CANDECOMP
/ PARAFAC (CP) decomposition (Carroll and Chang, 1970; Harshman, 1970)
using alternating least squares (ALS) algorithms; and for computing the largest
tensor singular value (Chang, Qi, and Zhou, 2010).
As we push the boundaries to tackle ever larger problems, limiting memory consumption and exploiting parallelism become unavoidable. In this presentation,
we develop a memory-efficient parallel implementation of the tensor-vector product well-suited to shared-memory architectures. Our approach is founded on two
crucial observations: first, explicitly computing unfoldings, or matricizations, of
the input tensor can be avoided completely; and, second, subdividing the tensor into subtensors and casting the familiar computations into block-form enables
data-level parallelism.
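The first observation can be illustrated with a few lines of NumPy; the sketch below contracts an order-d tensor with one vector per mode (except the mode that is kept) without ever forming an explicit unfolding. It is only an illustration under these assumptions, not the authors' blocked, parallel implementation.

```python
import numpy as np

def tensor_times_vectors(A, vectors, keep_mode):
    """Contract the order-d tensor A with one vector along every mode except
    keep_mode, without forming any explicit unfolding (matricization).
    vectors is a dict {mode: 1-D array of length A.shape[mode]}."""
    if keep_mode in vectors:
        raise ValueError("keep_mode must not carry a vector")
    T = A
    # Contract the trailing modes first so the remaining mode indices stay valid.
    for mode in sorted(vectors, reverse=True):
        T = np.tensordot(T, vectors[mode], axes=([mode], [0]))
    return T

# Tiny check with a random order-3 tensor: contract modes 0 and 1, keep mode 2,
# and compare against the naive einsum formulation.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 5, 6))
v0, v1 = rng.standard_normal(4), rng.standard_normal(5)
out = tensor_times_vectors(A, {0: v0, 1: v1}, keep_mode=2)
assert np.allclose(out, np.einsum("ijk,i,j->k", A, v0, v1))
```

The second observation then amounts to applying the same contractions to subtensors and accumulating the partial results, which exposes the data-level parallelism mentioned above.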
We illustrate the performance of the proposed method through two key algorithms: the ALS algorithm for computing a CP decomposition and the tensor-Krylov method for computing an orthogonal Tucker decomposition. The code
was implemented in C++ using Eigen and Intel Threading Building Blocks, and
our preliminary experiments indicate excellent sequential and good parallel performance. We also illustrate how the techniques covered in this presentation can
be utilized to improve the performance of the Tensor Toolbox v2.5’s ttv(T,v,-k)
routine by roughly one order of magnitude.
Joint work with N. Vanbaelen, K. Meerbergen, and R. Vandebril.
391
Maria Varygina
Institute of Computational Modeling SB RAS, RU
Numerical Modeling of Elastic Waves Propagation in Block Media with Thin Interlayers
Contributed Session CT1.9: Monday, 18:00 - 18:30, CO124
Several natural materials, such as rock, have a distinctly inhomogeneous block-hierarchical structure. The block structure appears on different scale levels, from the size of crystal grains to blocks of rock. Blocks are connected to each other by thin interlayers of rock with significantly weaker mechanical properties. Analysis of experimental data on wave propagation in layered media shows that the interlayers behave non-elastically even at small wave amplitudes. Models of the interlayer material of various levels of complexity, taking into account natural dissipation processes in the interlayers, are built based on the rheological method [1].
The numerical solution is based on the two-cyclic space-variable decomposition method in combination with monotone grid-characteristic schemes with balanced time steps in the layers and interlayers. The scheme in the layers does not introduce artificial energy dissipation and significantly reduces the smoothing of peaks in the numerical solution, with a corresponding refinement of the obtained results. Parallel algorithms are implemented as a suite of programs for supercomputers with graphics processing units using CUDA (Compute Unified Device Architecture) technology.
The characteristics of wave propagation processes in layered and block media related to the structural inhomogeneity of rocks were studied. Calculations were performed for planar waves induced by short and long Λ- and Π-impulses on the boundary of a layered medium, and for the Lamb problem of an instantaneous concentrated load on the surface of a half-space in the planar case. Fig. 1 and Fig. 2 show the dependence of the velocity vector on the spatial coordinate for a Λ-impulse load. An impulse of unit amplitude was applied on the left boundary of the computational domain, while the right boundary was fixed. The impulse duration is equal to the time an elastic wave needs to pass through two and a half layers.
The numerical results demonstrate a qualitative difference between the wave pattern in a layered medium and that in a homogeneous medium. At the initial stage this difference is revealed by the appearance of waves reflected from the interlayers, i.e. characteristic oscillations behind the loading wave front as it passes through an interlayer. Eventually a stationary wave pattern appears after multiple reflections behind the head wave front, i.e. the so-called pendulum wave discovered experimentally in [2, 3].
Fourier analysis of the seismograms of layer displacements allows one to identify the characteristic frequency of the pendulum wave, which is determined by the compliance of the interlayers and their thickness.
This work was supported by the Russian Foundation for Basic Research (grant
no. 11–01–00053) and the Complex Fundamental Research Program no. 18 "Algorithms and Software for Computational Systems of Superhigh Productivity" of
the Presidium of RAS.
392
References
[1] Varygina M.P., Pohabova M.A., Sadovskaya O.V., Sadovskii V.M. Numerical
algorithms for the analysis of elastic waves in block media with thin interlayers
// Numerical methods and programming. – 2011. – T. 12. (In Russian)
[2] Kurlenya M.V., Oparin V.N., Vostrikov V.I. On generation of elastic wave
packet under impulse load in block media. Pendulum waves // Reports of the
Academy of sciences USSR. 1993. T. 333 No. 4. PP. 3-13. (In Russian)
[3] Aleksandrova N.I., Sher E.N., Chernikova A.G. The influence of viscosity of interlayers on the propagation of low-frequency pendulum waves in block hierarchical media // Physical and technical problems of development of mineral resources. 2008. No. 3. PP. 3-13. (In Russian)
Figure 1: Velocity behind front wave of incident wave induced by Λ-impulse in
layered medium
Figure 2: Velocity behind front wave of reflected wave induced by Λ-impulse in
layered medium
393
Yuri Vassilevski
Institute of Numerical Mathematics, Russian Academy of Sciences, RU
A numerical approach to Newtonian and viscoplastic free surface flows using dynamic octree meshes
Minisymposium Session FREE: Monday, 14:30 - 15:00, CO2
We present an approach for numerical simulation of free surface flows of Newtonian and viscoplastic incompressible fluids. The approach is based on the level set
method for capturing free surface evolution and features compact finite difference
approximations of fluid and level set equations on locally refined and dynamically
adapted staggered octree grids. The discretization, the constitutive relations, the surface reconstruction, the evaluation of surface tension forces: these and other building blocks of the numerical method, which provide predictive and efficient simulations, will be discussed in the talk. In particular, we shall address a finite difference approximation of the advective terms on staggered grids which is a stable and low-dissipative alternative to semi-Lagrangian methods for treating the transport part of the equations. Numerical examples will demonstrate the performance of the approach for several benchmark and complex 3D flows of Newtonian and viscoplastic fluids with free surfaces.
References
[1] K.Nikitin, M.Olshanskii, K.Terekhov, Yu.Vassilevski. A numerical method for
the simulation of free surface flows of viscoplastic fluid in 3D. Journal of Computational Mathematics, 29(6), (2011), 605-622.
[2] M.Olshanskii, K.Terekhov, Yu.Vassilevski. An octree-based solver for the incompressible Navier-Stokes equations with enhanced stability and low dissipation. Computers and Fluids, to appear.
Joint work with M.Olshanskii (University of Houston, Moscow State University),
and K.Terekhov (Institute of Numerical Mathematics RAS).
394
Marco Verani
MOX-Department of Mathematics, Politecnico di Milano, IT
Mimetic finite differences for quasi-linear elliptic equations
Contributed Session CT2.5: Tuesday, 15:30 - 16:00, CO016
Nowadays, the mimetic finite difference (MFD) method has become a very popular numerical approach to successfully solve a wide range of problems. This is
undoubtedly connected to its great flexibility in dealing with very general polygonal meshes (see Figure 1 for an example) and its capability of preserving the
fundamental properties of the underlying physical and mathematical models.
In this talk, we approximate the solution of a quasilinear elliptic problem of monotone type by using the MFD method and we prove that the MFD approximate
solution converges, with optimal rate, to the exact solution in a mesh-dependent
energy norm. The resulting nonlinear discrete problem is then solved iteratively via
linearization by applying the Kacanov method. The convergence of the Kacanov
algorithm in the discrete mimetic framework is also proved. Several numerical
experiments confirm the theoretical analysis.
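To make the linearization step concrete, here is a minimal one-dimensional finite-difference sketch of a Kacanov (frozen-coefficient) iteration for a model quasilinear problem −(a(|u'|)u')' = f on (0,1) with homogeneous Dirichlet data; the coefficient, grid and right-hand side are illustrative choices, and the sketch does not use the mimetic finite difference discretization of the talk.

```python
import numpy as np

def kacanov_1d(n=100, f=lambda x: np.ones_like(x),
               a=lambda t: 1.0 + 1.0 / (1.0 + t), n_iter=30, tol=1e-10):
    """Kacanov iteration for -(a(|u'|) u')' = f on (0,1), u(0) = u(1) = 0,
    discretized with standard second-order finite differences.  In each step
    the coefficient is frozen at the previous iterate, so only a linear
    (tridiagonal) system has to be solved."""
    h = 1.0 / n
    x = np.linspace(0.0, 1.0, n + 1)
    u = np.zeros(n + 1)                     # initial guess
    rhs = f(x[1:-1])
    for _ in range(n_iter):
        k = a(np.abs(np.diff(u) / h))       # frozen coefficient at cell midpoints
        A = np.zeros((n - 1, n - 1))        # stiffness matrix for interior nodes
        for i in range(n - 1):
            A[i, i] = (k[i] + k[i + 1]) / h**2
            if i > 0:
                A[i, i - 1] = -k[i] / h**2
            if i < n - 2:
                A[i, i + 1] = -k[i + 1] / h**2
        u_new = np.zeros_like(u)
        u_new[1:-1] = np.linalg.solve(A, rhs)
        if np.max(np.abs(u_new - u)) < tol:
            u = u_new
            break
        u = u_new
    return x, u

x, u = kacanov_1d()
```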
Figure 1: Example of a polygonal (hexagonal) decomposition of the square Ω = (0, 1)^2.
Joint work with Paola F. Antonietti (Politecnico di Milano), and Nadia Bigoni
(Politecnico di Milano).
395
Karen Veroy-Grepl
AICES - RWTH Aachen, DE
On Synergies between the Reduced Basis Method, Proper Orthogonal Decomposition, and Balanced Truncation
Minisymposium Session ROMY: Thursday, 14:30 - 15:00, CO016
In this talk, we present a new method for constructing balanced reduced order
models for parametrized systems. The technique is based on synergies between
three commonly used model order reduction methods: (i) the Reduced Basis Method, which provides certified predictions of outputs of parametrized PDEs through Galerkin projection onto a space of solutions at selected parameter values; (ii) Proper Orthogonal Decomposition, which effectively derives reduced order models from the singular value decomposition of the snapshot matrix; and (iii)
Balanced Truncation, which constructs reduced order models that balance observability and controllability. The proposed method constructs the reduced order
model using the most essential aspects of the three methods: the greedy technique
and a posteriori error estimation of the Reduced Basis Method, the method of
snapshots from Proper Orthogonal Decomposition, and the balancing approach of
Balanced Truncation.
Joint work with Martin Grepl.
396
Martin Vetterli
EPFL/IC/LCAV, CH
Inverse Problems Regularized by Sparsity
Public Lecture: Tuesday, 16:30 - 17:30, Rolex Learning Center Auditorium
Sparsity as a modeling principle has been part of signal processing for a long time; for example, parametric methods are sparse models. Sparsity plays a key role in
non-linear approximation methods, in particular using wavelets and related constructions. And recently, compressed sensing and finite rate of innovation sampling
have shown how to sample sparse signals close to their sparsity levels.
In this talk, we first recall that signal processing lives on the edge of continuous- and discrete-time/space processing. That duality of the continuum versus the discrete is also inherent in inverse problems. We then review how sparsity can be
used in solving inverse problems. This can be done when the setting is naturally
sparse, e.g. in source localization, or for solutions that have low-dimensionality
in some basis. After an overview of essential techniques for sparse regularization,
we present several examples where concrete, real life inverse problems are solved
using sparsity ideas.
First, we answer the question “can one hear the shape of a room”, a classic inverse
problem from acoustics. We show a positive answer, and a constructive algorithm
to recover room shape from only a few room impulse responses.
Second, we address the problem of source localization in a graph. Assuming a disease or a rumor spreads on a social graph, can one find the source efficiently
with a small set of observers? A constructive and efficient algorithm is described,
together with several practical scenarios.
Third, we consider the question of sensor placement for monitoring and inversion
of diffusion processes. We present a solution for monitoring temperature using low
dimensional modeling and placing a small set of sensors.
The ideas of sparse, regularized inversion are finally applied to the problem of trying to recover the amount of nuclear release from the Fukushima nuclear accident.
We show that using a transport model and the very limited available measurements, we are able to correctly recover Xenon emission, while the Cesium release
remains a challenge.
397
Pedro Vilanova
KAUST, SA
Chernoff-based Hybrid Tau-leap
Contributed Session CT3.2: Thursday, 17:30 - 18:00, CO2
Markovian pure jump processes are used to model many phenomena, for example biochemical reactions at the molecular level, the dynamics of wireless communication networks, and the spread of epidemic diseases in small populations, among others. There exist algorithms, like the SSA by Gillespie or the Modified Next Reaction Method by Anderson, that simulate a single trajectory exactly, but they can be time consuming when many reactions take place during a short time interval. The approximate tau-leap method by Gillespie, on the other hand, can be used to reduce the computational time, but it introduces a time discretization error that may lead to non-physical values.
This talk presents a hybrid algorithm for simulating individual trajectories, which
adaptively switches between the SSA and the tau-leap method. The switching
strategy is based on the comparison of the expected inter-arrival time of the SSA
and an adaptive time step size derived from a Chernoff-type bound for estimating
the one-step exit probability. Since this bound is non-asymptotic we do not need
to make any distributional approximation for the tau-leap increments. This hybrid method allows one to control the global exit probability of a simulated trajectory, and to obtain accurate and computable estimates for the expected value of any smooth observable of the process with low computational work. We present numerical examples that confirm the theory and show the advantages of this approach over both the exact methods and tau-leap methods that use pre-leap checks based on Gaussian approximations for the increments. Finally, we will discuss possible extensions of this method.
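A minimal sketch of the switching idea for a single-species birth-death process could look as follows; the simple acceptance rule used here (leap only when many SSA events are expected within the candidate step) merely stands in for the Chernoff-type step-size bound of the talk, which is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)

def propensities(x, birth=1.0, death=0.05):
    """Birth-death process: X -> X+1 with rate birth, X -> X-1 with rate death*x."""
    return np.array([birth, death * x])

def hybrid_trajectory(x0=10, T=100.0, tau=0.5, switch_factor=5.0):
    """Simulate one trajectory, adaptively switching between exact SSA steps and
    tau-leap steps.  Tau-leap is used only when the expected SSA inter-arrival
    time 1/a0 is much smaller than tau (placeholder for the Chernoff criterion)."""
    t, x = 0.0, x0
    times, states = [t], [x]
    while t < T:
        a = propensities(x)
        a0 = a.sum()
        if a0 == 0.0:
            break
        if tau * a0 > switch_factor:           # many events expected: tau-leap step
            n_events = rng.poisson(a * tau)    # Poisson number of firings per channel
            x = max(x + n_events[0] - n_events[1], 0)   # crude guard against exit
            t += tau
        else:                                  # few events expected: exact SSA step
            t += rng.exponential(1.0 / a0)
            if rng.uniform() < a[0] / a0:
                x += 1
            else:
                x -= 1
        times.append(t)
        states.append(x)
    return np.array(times), np.array(states)

times, states = hybrid_trajectory()
```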
Joint work with Alvaro Moraes, and Raul Tempone.
398
Gilles Vilmart
ENS Rennes and INRIA Rennes, FR
Numerical homogenization methods for multiscale nonlinear elliptic problems of
nonmonotone type
Minisymposium Session MSMA: Monday, 15:00 - 15:30, CO3
We study the effect of numerical quadrature in finite element methods for a class
of nonlinear elliptic problems of nonmonotone type. This is a key ingredient to
analyze the so-called finite element heterogeneous multiscale method (FE-HMM)
applied to nonmonotone homogenization problems. We obtain optimal convergence results for the H^1 and L^2 norms in dimension d ≤ 3 and for a fully discrete
method taking into account both macro and micro discretizations. We also prove
for sufficiently fine meshes the uniqueness of the numerical solution and the convergence of the Newton method needed in the implementation. In addition, we
show that the coupling of the nonlinear multiscale method with the reduced basis technique (RB-FE-HMM) considerably improves the efficiency by drastically
reducing the number of degrees of freedom.
References
[1] A. Abdulle, Y. Bai, and G. Vilmart, Reduced basis finite element heterogeneous multiscale method for quasilinear homogenization problems, preprint
(2013), 26 pages.
[2] A. Abdulle, Y. Bai, and G. Vilmart, An offline-online homogenization strategy to solve quasilinear two-scale problems at the cost of one-scale problems,
preprint (2013), 13 pages.
[3] A. Abdulle and G. Vilmart, Fully discrete analysis of the finite element heterogeneous multiscale method for quasilinear elliptic homogenization problems, to
appear in Mathematics of Computation (2013), 21 pages.
[4] A. Abdulle and G. Vilmart, A priori error estimates for finite element methods with numerical quadrature for nonmonotone nonlinear elliptic problems,
Numer. Math. 121 (2012), 397-431.
[5] W. E, P. Ming and P. Zhang, Analysis of the heterogeneous multiscale method
for elliptic homogenization problems, J. Amer. Math. Soc. 18 (2005), no. 1,
121–156.
Joint work with Assyr Abdulle and Yun Bai.
399
Gilles Vilmart
ENS Rennes and INRIA Rennes, FR
Multi-revolution composition methods for highly oscillatory problems
Minisymposium Session ASHO: Tuesday, 10:30 - 11:00, CO123
We introduce a new class of multi-revolution composition methods (MRCM) for
the approximation of the N th-iterate of a given near-identity map. When applied
to the numerical integration of highly oscillatory systems of differential equations,
this numerical homogenization technique benefits from the properties of standard
composition methods: it is intrinsically geometric and well-suited for Hamiltonian
or divergence-free equations for instance. We prove error estimates with error constants that are independent of the oscillatory frequency. Numerical experiments,
in particular for the nonlinear Schrödinger equation, illustrate the theoretical results, as well as the efficiency and versatility of the methods.
Joint work with P. Chartier (Rennes), J. Makazaga, and A. Murua (San Sebastian).
400
Martin Vohralik
INRIA Paris-Rocquencourt, FR
Adaptive regularization, linearization, and algebraic solution in unsteady nonlinear
problems
Minisymposium Session STOP: Thursday, 15:30 - 16:00, CO1
We show how computable a posteriori error estimates can be obtained for two
model nonlinear unsteady problems, namely the Stefan problem and the two-phase
porous media flow problem. Regularization of the nonlinear functions, iterative
linearizations, and iterative solutions of the arising linear systems are typically involved in the numerical approximation procedure. We show how the corresponding
error components can be distinguished and estimated separately. A fully adaptive
algorithm, with adaptive choices of the regularization parameter, the number of
nonlinear and linear solver steps, the time step size, and the computational mesh,
is presented. Numerical experiments confirm tight error control and important
computational savings.
We present two examples for the two-phase flow in porous media. In the left part
of Figure 1, we plot our estimators of the different error components as a function
of GMRes iterations for a fixed time and Newton step. We see that our stopping criteria enable us to save a significant number of iterations with respect to the classical criterion requiring the relative algebraic residual to be smaller than 1e-13. In the right part of Figure 1, we track the same dependence with respect to the
Newton iterations. We compare our criteria with the classical one requiring the
difference between two consecutive pressure and saturation approximations to be
smaller than 1e-11. The overall gains achievable thanks to our approach are then
illustrated in Figure 2. In its left part, we plot the number of necessary Newton
iterations on each time step for both the adaptive and classical stopping criteria.
In its right part, the cumulative number of GMRes iterations is given as function
of time. From this last graph, we can conclude that in the adaptive approach
the number of cumulative GMRes iterations is approximately 12 times smaller
compared to that in the classical one. Details can be found in the references [1]
and [2].
[1] Di Pietro D. A., Vohralík M., and Yousef S. Adaptive regularization, linearization, and discretization and a posteriori error control for the two-phase Stefan
problem. HAL Preprint 00690862, submitted for publication, 2012.
[2] Vohralík M. and Wheeler M. F. A posteriori error estimates, stopping criteria,
and adaptivity for two-phase flows. HAL Preprint 00633594 v2, submitted for
publication, 2013.
401
Figure 1: Spatial, temporal, linearization, and algebraic estimators and their sum
as function of the GMRes iterations (left) and of the Newton iterations (right)
Figure 2: Number of Newton iterations on each time step (left) and cumulative
number of GMRes iterations as a function of time (right)
Joint work with D. A. Di Pietro, M. F. Wheeler, and S. Yousef.
402
Heinrich Voss
Hamburg University of Technology, DE
Variational Principles for Nonlinear Eigenvalue Problems
Minisymposium Session NEIG: Thursday, 10:30 - 11:30, CO2
Variational principles are powerful tools when studying the qualitative behavior
and numerical methods for linear self-adjoint operators. Bounds for eigenvalues,
comparison results, interlacing properties, and monotonicity of eigenvalues can be
proved easily with variational characterizations of eigenvalues, to name just a few.
If A is a self-adjoint operator on a Hilbert space H with domain of definition D
and λ1 ≤ λ2 ≤ . . . are the eigenvalues of A below the essential spectrum of A,
then they can be characterized by a minmax principle of Poincaré type

λ_n = min_{V ⊂ D, dim V = n} max_{x ∈ V, x ≠ 0} ⟨Ax, x⟩ / ⟨x, x⟩,

or by a maxmin principle of Courant–Fischer–Weyl type

λ_n = max_{V ⊂ H, dim V = n−1} min_{x ∈ D, x ⊥ V, x ≠ 0} ⟨Ax, x⟩ / ⟨x, x⟩.
In this talk we discuss generalizations of these variational principles to families of
linear operators depending continuously on an eigenparameter λ.
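For a finite-dimensional self-adjoint operator, i.e. a symmetric matrix, the Poincaré principle can be checked numerically in a few lines; the following NumPy sketch only illustrates the linear case that the talk generalizes, not the nonlinear eigenvalue setting itself.

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((8, 8))
A = (B + B.T) / 2                       # symmetric matrix = self-adjoint operator on R^8
lam, vecs = np.linalg.eigh(A)           # eigenvalues in ascending order

def max_rayleigh(V):
    """Largest Rayleigh quotient <Ax,x>/<x,x> over the span of the columns of V."""
    Q, _ = np.linalg.qr(V)              # orthonormal basis of the subspace
    return np.linalg.eigvalsh(Q.T @ A @ Q).max()

n = 3
# Poincare minmax: the minimum over n-dimensional subspaces equals lambda_n and is
# attained by the span of the first n eigenvectors ...
assert np.isclose(max_rayleigh(vecs[:, :n]), lam[n - 1])
# ... while any other n-dimensional subspace can only give a larger value.
for _ in range(100):
    assert max_rayleigh(rng.standard_normal((8, n))) >= lam[n - 1] - 1e-12
```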
References
H. Voss, B. Werner. A minimax principle for nonlinear eigenvalue problems
with applications to nonoverdamped systems. Math. Meth. Appl. Sci. 4, 415
– 424 (1982)
H. Voss. A maxmin principle for nonlinear eigenvalue problems with application to a rational spectral problem in fluid–solid vibration. Applications of
Mathematics 48, 607 – 622 (2003)
H. Voss. A minmax principle for nonlinear eigenproblems depending continuously on the eigenparameter. Numer. Lin. Algebra Appl. 16, 899 – 913
(2009)
M. Stammberger, H. Voss. Variational characterization of eigenvalues of a
nonsymmetric eigenvalue problem in fluid–solid vibrations. Submitted to
Applications of Mathematics
403
Benjamin Wacker
Institute for Numerical and Applied Mathematics, University of Göttingen, DE
A local projection stabilization method for finite element approximation of a magnetohydrodynamic model
Minisymposium Session MMHD: Thursday, 12:00 - 12:30, CO017
In this talk, we consider the equations of incompressible resistive magnetohydrodynamics. Based on a stabilized finite element formulation by S. Badia, R. Codina and R. Planas for the linearized equations [1], we propose a modification of this technique using a local projection stabilization finite element method for the approximation of this problem.
The introduced stabilization technique is then discussed through a thorough stability and convergence analysis of the problem formulation. We finally compare our numerical analysis with other approximations presented in the literature.
References
[1] S. Badia, R. Codina and R. Planas. On an unconditionally convergent stabilized finite element approximation of resistive magnetohydrodynamics, Journal of Computational Physics, 234:399-416, 2013.
Joint work with Gert Lube.
404
Shawn Walker
Louisiana State University, US
A New Mixed Formulation For a Sharp Interface Model of Stokes Flow and Moving
Contact Lines
Minisymposium Session GEOP: Wednesday, 11:00 - 11:30, CO122
Two phase fluid flows on substrates (i.e. wetting phenomena) are important in
many industrial processes, such as micro-fluidics and coating flows. These flows
include additional physical effects that occur near moving (three-phase) contact
lines. We present a new 2-D variational (saddle-point) formulation of a Stokesian
fluid with surface tension (see Falk & Walker in the context of Hele-Shaw flow)
that interacts with a rigid substrate. The model is derived by an Onsager type
principle using shape differential calculus (at the sharp-interface, front-tracking
level) and allows for moving contact lines and contact angle hysteresis through a
variational inequality. We prove the well-posedness of the time semi-discrete and
fully discrete (finite element) model and discuss error estimates. Simulation movies
will be presented to illustrate the method. We conclude with some discussion of
a 3-D version of the problem as well as future work on optimal control of these
types of flows.
405
Mirjam Walloth
Institute of Computational Science, University of Lugano, CH
An efficient and reliable residual-type a posteriori error estimator for the Signorini
problem
Contributed Session CT4.7: Friday, 09:20 - 09:50, CO122
Often, in the numerical simulation of real-world problems, e.g., those arising from mechanics or biomechanics, precise information about the regularity of the solution
cannot be obtained easily a priori. In fact, the solution may be more or less
regular in different regions of the computational domain and even singularities
may occur. In this case, increasing the number of degrees of freedom within or
close to a critical region of low regularity can improve the overall accuracy of the
numerically obtained approximation. The detection of such a critical region can
be made feasible by using a posteriori error estimators which do not rely on any
additional regularity assumptions. One of the most common a posteriori error
estimators is the standard residual estimator which is directly derived from the
equivalence of the norm of the error and the dual norm of the residual. For contact
problems this relation is disturbed due to the non-linearity. Thus, additional effort
is required to derive an a posteriori error estimator for contact problems.
Here, we present a new a posteriori error estimator for the linear finite element solution of the Signorini problem in linear elasticity [2]. Inspired by a posteriori error estimators for the closely related obstacle problem, see e.g. [4, 1, 3], the estimator is designed for controlling the H^1-error of the displacements and the H^{-1}-error of a suitable approximation of the Lagrange multiplier. The estimator reduces to the
standard residual estimator for linear elasticity, if no contact occurs. The estimator contributions addressing the nonlinearity are related to the contact stresses,
the complementarity condition, and the approximation of the gap function. Remarkably, the first two terms do not contribute in the case of so-called full-contact.
We prove reliability and efficiency of the estimator for two- and three-dimensional
simplicial meshes, ensuring the equivalence with the error up to oscillation terms.
Our theoretical findings are supported by intensive numerical studies. The adaptively refined grids and the relevance of the different error estimator contributions
are analyzed by means of different illustrative numerical experiments in 3D. In
our numerical studies, we quantitatively investigate the convergence of the error
estimator by comparing to the case of uniformly refined grids. Furthermore, for
selected examples in 2D and even in 3D where the contact stresses are known
analytically, we compare the numerically computed contact stresses on adaptively
refined grids with the exact contact stresses. Interestingly, although the proofs of
upper and lower bound are given for meshes of simplices, the numerical studies
show also very good performance of the new residual-type a posteriori error estimator for unstructured meshes consisting of hexahedra, tetrahedra, prisms, and
pyramids.
References:
[1] Fierro, F., Veeser, A.: A posteriori error estimators for regularized total
variation of characteristic functions. SIAM J. Numer. Anal. 41, 2032–2055
(2003)
[2] Krause, R., Veeser, A., Walloth, M.: An efficient and reliable residual-type
406
a posteriori error estimator for the Signorini problem. Preprint, Institute of
Computational Science, University of Lugano, 2012.
[3] Moon, K., Nochetto, R., von Petersdorff, T., Zhang, C.: A posteriori error
analysis for parabolic variational inequalities. M2AN Math. Model. Numer.
Anal. 41, 485–511 (2007)
[4] Veeser, A.: Efficient and reliable a posteriori error estimators for elliptic
obstacle problems. SIAM J. Numer. Anal. 39, 146–167 (2001)
Joint work with Rolf Krause, and Andreas Veeser.
407
Andreas Weinmann
Helmholtz Zentrum München and TU München, DE
Jump-sparse reconstruction by the minimization of Potts functionals
Minisymposium Session ACDA: Monday, 15:00 - 15:30, CO122
This talk is on our recent work concerning Potts and Blake-Zisserman functionals.
We start with L^1 Potts functionals

P_γ(u) = γ · ‖∇u‖_0 + ‖u − f‖_1.

Here f are given (univariate) data, ‖∇u‖_0 counts the number of jumps of u, and γ is a parameter controlling the trade-off between data fidelity and regularity.
We develop a fast algorithm for minimizing discrete L1 Potts functionals. Furthermore, we obtain convergence results for discrete Potts functionals and their
respective minimizers towards their continuous time counterparts. In addition, we
show a nice blind deconvolution property of L1 Potts functionals: Mildly blurred
jump-sparse signals are reconstructed by minimizing the functional.
In the second part of the talk we consider (inverse) Potts problems of the form

P̄_γ(u) = γ · ‖∇u‖_0 + ‖Au − f‖_p^p → min.

Here A is a not necessarily square matrix. We present an ADMM-based approach which works very well in practice. Furthermore, we consider a Douglas–Rachford-like splitting approach to the above inverse Potts problem for p = 2 and the (inverse) Blake–Zisserman problem

B̄_γ^{s,q}(u) = γ · ∑_i min(|u_i − u_{i−1}|^q, s^q) + ‖Au − f‖_2^2 → min.
Here s is a positive number and q ≥ 1. For the inverse Blake–Zisserman functionals F̄ = B̄_γ^{s,q} and the inverse Potts functionals F̄ = P̄_γ we consider the corresponding surrogate functional F̄(u, v) = F̄(u) − ‖Au − Av‖_2^2 + ‖u − v‖_2^2. The iteration u^{n+1} = argmin_u F̄(u, u^n) leads to

u^{n+1} = argmin_u γ‖∇u‖_0 + ‖u − A^*(Au^n − f)‖_2^2

in the Potts case. The iteration for the Blake–Zisserman case is obtained by replacing the regularity term ‖∇u‖_0 by the sum ∑_i min(|u_i − u_{i−1}|^q, s^q). This means that we have to solve (ordinary) Potts or Blake–Zisserman problems with A = id for data A^*(Au^n − f). This can be done fast by using dynamic programming. In contrast to a recent approach of M. Fornasier and R. Ward to the Blake–Zisserman problem, we work directly on the inverse Blake–Zisserman functionals. We show that for inverse Blake–Zisserman functionals as well as for inverse Potts functionals, the above iterative algorithm converges towards a local minimum of the respective functional.
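The ordinary Potts subproblem with A = id and an L^2 data term can indeed be solved exactly by dynamic programming; the short NumPy sketch below implements the classical O(n^2) recursion over the last jump location and only illustrates this inner step, not the authors' fast L^1 algorithm or the surrogate/ADMM iterations.

```python
import numpy as np

def potts_l2(f, gamma):
    """Exact minimizer of gamma * (#jumps of u) + ||u - f||_2^2 for 1-D data f,
    computed by dynamic programming over the position of the last jump."""
    f = np.asarray(f, dtype=float)
    n = f.size
    csum = np.concatenate(([0.0], np.cumsum(f)))
    csum2 = np.concatenate(([0.0], np.cumsum(f**2)))

    def seg_err(l, r):
        # L2 error of the best constant approximation of f[l:r] (r exclusive)
        s = csum[r] - csum[l]
        return csum2[r] - csum2[l] - s * s / (r - l)

    B = np.full(n + 1, np.inf)      # B[r]: optimal value for the prefix f[0:r]
    B[0] = -gamma                   # so that the first segment carries no jump penalty
    last = np.zeros(n + 1, dtype=int)
    for r in range(1, n + 1):
        for l in range(r):
            val = B[l] + gamma + seg_err(l, r)
            if val < B[r]:
                B[r], last[r] = val, l
    # backtrack the optimal partition and fill each segment with its mean
    u, r = np.empty(n), n
    while r > 0:
        l = last[r]
        u[l:r] = f[l:r].mean()
        r = l
    return u

# jump-sparse reconstruction of a noisy piecewise constant signal
rng = np.random.default_rng(0)
f = np.concatenate([np.zeros(50), 2.0 * np.ones(50)]) + 0.1 * rng.standard_normal(100)
u = potts_l2(f, gamma=1.0)
```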
Joint work with Laurent Demaret, and Martin Storath.
408
Steffen Weißer
Saarland University, DE
Challenges in BEM-based Finite Element Methods on general meshes
Contributed Session CT1.8: Monday, 18:30 - 19:00, CO123
In the field of numerical methods for partial differential equations there is an increasing interest in non-simplicial meshes. Several applications in solid mechanics, biomechanics as well as geological science show the need for general elements within a finite element simulation. Discontinuous Galerkin methods and mimetic finite difference methods are able to handle such kinds of meshes. Nevertheless,
like the virtual element method [1] overcome these difficulties.
Another new kind of conforming finite element method on general meshes has
been proposed in [2]. This method uses basis functions that fulfill the differential
equation locally. In the local problems constant material parameters and vanishing right hand side are prescribed. Due to this implicit construction, the basis
functions are applicable on polygonal and polyhedral elements, respectively. Let
Ω ⊂ R^2 be a polygonal domain and a ∈ L^∞(Ω), f ∈ L^2(Ω), g ∈ H^{1/2}(∂Ω). For the model problem

−div(a∇u) = f in Ω,    u = g on ∂Ω,

the lowest order basis functions ψ_z, which are dedicated to the nodes z ∈ N_h of the mesh K_h with elements K, are uniquely defined by

−Δψ_z = 0 in K for all K ∈ K_h,
ψ_z(x) = 1 if z = x and ψ_z(x) = 0 if z ≠ x ∈ N_h,
ψ_z linear on each edge of K.

If the material coefficient is approximated by a piecewise constant function, a ≈ a_K on K for K ∈ K_h, the standard bilinear form of the variational formulation can be rewritten by the use of Green's first identity over each element such that

a_Ω(ψ_z, ψ_x) = ∑_{K ∈ K_h} a_K ∫_{∂K} ψ_x (∂ψ_z / ∂n_K) ds.
Consequently, the integration is reduced to the boundaries of the elements where
the trace of the basis functions is known explicitly. The normal derivative can
be expressed by means of boundary integral operators. In the numerics these
operators are approximated by the use of boundary element methods. Therefore,
the global method is called BEM-based FEM. This strategy has been studied in
several articles concerning convergence [3] as well as residual error estimates for
adaptive mesh refinement [4], for example.
The aim of current research is to extend the ideas for the definition of trial functions
to three space dimensions such that the method can handle polyhedral meshes,
see Figure 1. In the case that the polyhedral elements have triangulated surfaces, straightforward generalizations already exist. But the challenging part is to manage the polygonal faces of the polyhedral elements directly.
Furthermore, the question of arbitrary order basis functions is addressed. Following the ideas of [3], an extended ansatz space V_h is defined which admits arbitrary
409
order convergence. With the help of interpolation operators on polygonal meshes the error estimate

‖u − u_h‖_{H^1(Ω)} ≤ c h^k |u|_{H^{k+1}(Ω)}

is proven for an exact solution u ∈ H^{k+1}(Ω) and its approximation u_h ∈ V_h.
Finally, all theoretical results are confirmed by several numerical experiments.
References
[1] L. Beirão da Veiga, F. Brezzi, A. Cangiani, G. Manzini, L. D. Marini,
A. Russo: Basic principles of virtual element methods. Mathematical Models
and Methods in Applied Sciences 23, 199, 2013
[2] D. Copeland, U. Langer, D. Pusch: From the boundary element domain decomposition methods to local Trefftz finite element methods on polyhedral
meshes. Domain Decomposition Methods in Science and Engineering XVIII,
315–322, 2009
[3] S. Rjasanow, S. Weißer: Higher order BEM-based FEM on polygonal meshes.
SIAM Journal on Numerical Analysis, 50(5):2379–2399, 2012
[4] S. Weißer: Residual error estimate for BEM-based FEM on polygonal meshes.
Numerische Mathematik, 118:765-788, 2011
Figure 1: Polyhedral mesh of the unit cube
Joint work with Prof. Dr. Sergej Rjasanow.
410
Garth Wells
University of Cambridge, GB
Domain-specific languages and code generation for solving PDEs using specialised
hardware
Minisymposium Session PARA: Monday, 16:00 - 16:30, CO016
The development and use of a domain-specific language coupled with code generation has proved to be very successful for creating high-level, high-performance
finite element solvers. The use of a domain-specific language allows problems to
be expressed compactly in near-mathematical notation, and facilitates the preservation of mathematical abstractions. The latter point is invaluable for automating
the creation of auxiliary problems, such as linearisations or adjoint equations. The
generation of low-level code from expressive, high-level input can offer performance
beyond what one could reasonably achieve using conventional programming techniques. Important in this respect is leveraging domain knowledge that cannot be
provided by a general purpose compiler.
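As a point of reference, the kind of near-mathematical input meant here looks roughly like the following Unified Form Language declaration of a Poisson problem; it is a generic illustration following the legacy UFL conventions of the FEniCS toolchain of that period, not an example taken from the talk.

```python
from ufl import (FiniteElement, triangle, TrialFunction, TestFunction,
                 Coefficient, inner, grad, dx)

# Piecewise linear Lagrange element on triangles
element = FiniteElement("Lagrange", triangle, 1)

u = TrialFunction(element)   # unknown
v = TestFunction(element)    # test function
f = Coefficient(element)     # right-hand side data

# Weak form of the Poisson problem: find u such that a(u, v) = L(v) for all v
a = inner(grad(u), grad(v)) * dx
L = f * v * dx
```

From such a declaration the form compiler generates low-level element kernels, and it is at that stage that the target-specific (e.g. GPU) code generation discussed in the following takes place.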
The generation of low-level code from expressive, high-level input has great appeal for specialised hardware, such as the now widespread co-processor technology.
Recent hardware shifts the burden onto the developer and demands a high level
of software expertise. To address this, recent and ongoing investigations into generating target-specific code for solving PDEs using the FEniCS Project toolchain
are presented. GPU code is generated from Unified Form Language input, and it
is shown how different strategies differ dramatically in performance depending on
the equation type and finite element type. To counter this, a formulation that is
parameterised over the equation and finite element type is presented. In this way,
a code generator can narrow the search space for efficient formulations and strategies. It also offers solver level shielding against future hardware and programming
model changes.
411
Thomas Wick
The University of Texas at Austin, ICES, US
A fluid-structure interaction framework for reactive flows in thin channels
Minisymposium Session NFSI: Thursday, 12:00 - 12:30, CO122
We study the reactive flow in a thin strip where the geometry changes take place
due to reactions. Specifically, we consider precipitation dissolution processes taking place at the lateral boundaries of the strip. The geometry changes depend
on the concentration of the solute in the bulk (trace of the concentration) which
makes the problem a free-moving boundary problem. The numerical computations are challenging in view of the nonlinearities in the description of the reaction
rates. In addition to this, the movement of the boundary depends on the unknown
concentration (and hence part of the solution) and the computation of the coupled model remains a delicate issue. Our aim is to develop appropriate numerical
techniques for the computation of the solutions of the coupled convection-diffusion
equation and equation describing the geometry changes. The key idea at this point
consists in using a fluid-structure interaction framework to formulate and to solve
the problem at hand. We use the arbitrary Lagrangian-Eulerian framework and a
monolithic solution algorithm for the numerical treatment. Temporal discretization is based on finite differences whereas spatial discretization makes use of a
Galerkin finite element scheme. The nonlinear problem is treated with Newton’s
method. The performance is demonstrated with the help of several interesting
numerical tests.
Joint work with Kundan Kumar, and Mary F. Wheeler.
412
Olof B. Widlund
Courant Institute, US
Two-level overlapping Schwarz methods for some saddle point problems
Minisymposium Session PSPP: Thursday, 10:30 - 11:30, CO3
About fifteen years ago, Axel Klawonn and Luca Pavarino explored the possibility
of using two-level overlapping Schwarz methods for a variety of saddle-point problems. It is the purpose of this contribution to provide a theoretical justification
for some of these results. Our work is inspired by earlier joint work with Clark
R. Dohrmann on almost incompressible elasticity solved by mixed finite elements
with discontinuous pressure approximations. A report on some recent numerical
experiments will also be given.
Joint work with Luca F. Pavarino.
413
Tobias Wiesner
Technische Universität München, DE
Algebraic multigrid (AMG) methods for saddle-point problems arising from mortar-based finite element discretizations
Minisymposium Session PSPP: Thursday, 11:30 - 12:00, CO3
The development of novel discretization schemes and solution algorithms for fully
nonlinear contact mechanics problems, i.e. including finite deformations, finite
frictional sliding and possibly nonlinear material behavior, has seen a great thrust
of research progress over the last decade. With regard to discretization schemes,
mortar finite element methods have proven to outperform traditional collocation
methods (e.g. node-to-segment) in terms of both robustness and accuracy. While
penalty and related methods remain a generally accepted and widely used choice
for constraint enforcement, the numerical efficiency of Lagrange multiplier methods
has been significantly improved in recent years.
In general, the Lagrange multiplier based formulation of contact mechanics problems leads to saddle-point type systems, with the additional Lagrange multiplier
degrees of freedom enforcing typical contact conditions (such as tied contact, unilateral contact and frictional sliding). Although there exist discretization strategies
such as the so-called dual Lagrange multiplier approach ([1],[2]), which allow for
simple algebraic condensation procedures and thus circumvent the saddle-point
structure of linear systems, it is still worth to deal with the more challenging
saddle-point type problems.
For the sake of simplicity, so-called mesh tying (or tied contact) problems are considered first ([3]). Here, a mortar finite element discretization generates algebraic
equations of the form

\begin{pmatrix}
K_{N_1 N_1} & K_{N_1 M} & 0 & 0 & 0 \\
K_{M N_1} & K_{M M} & 0 & 0 & -M^T \\
0 & 0 & K_{S S} & K_{S N_2} & D^T \\
0 & 0 & K_{N_2 S} & K_{N_2 N_2} & 0 \\
0 & -M & D & 0 & 0
\end{pmatrix}
\begin{pmatrix} d_{N_1} \\ d_M \\ d_S \\ d_{N_2} \\ \lambda \end{pmatrix}
=
\begin{pmatrix} f_{N_1} \\ f_M \\ f_S \\ f_{N_2} \\ 0 \end{pmatrix}

with d the displacement and f the force vector. Obviously, the Lagrange multipliers λ couple the two distinct blocks at the mesh tying interface, where M denotes the master-side and S the slave-side degrees of freedom with the corresponding mortar matrix blocks M and D.
The big advantage of the saddle-point formulation is the clean distinction between
the different physical variables (e.g. structural displacements) and constraint variables (Lagrange multipliers) both in the structure of the block matrix and in the
solution vector. This makes it rather easy to consider the underlying physics when developing special saddle-point preconditioners for these types of problems,
especially if the different physically and mathematically motivated variables are
formulated in different coordinate systems. So, in case of mesh tying and structural
contact problems the structural displacements are usually formulated in Cartesian
coordinates whereas the contact constraints are formulated in tangential and normal coordinates relative to the contact interface.
This talk is devoted to the design of robust Algebraic Multigrid (AMG) preconditioners that can be used within iterative solvers for linear systems arising from
contact and mesh tying problems in saddle-point formulation. Multigrid methods
are known to be among the best preconditioning techniques at least for symmetric
414
positive definite systems [4]. However, linear systems with saddle-point structure
are still challenging for multigrid methods and make special modifications necessary [5]. The idea is to use multigrid methods to build coarse level representations
of the fine-level problem which preserve the saddle-point structure. Then, on each
multigrid level basic Schur-complement based multigrid smoothers are applied.
The focus of this talk will be on aggregation strategies and multigrid transfer
operators for the Lagrange multipliers.
We compare different variants of transfer operators and Schur-complement based
multigrid smoothing methods by means of examples arising from mortar-based
finite element discretizations in contact mechanics.
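To give an impression of what a Schur-complement based smoother for such a block system can look like, the following NumPy sketch performs SIMPLE-type sweeps on a generic saddle-point system with blocks A and B; the diagonal approximation of A and the damping parameter are illustrative choices and not the smoother variants compared in the talk.

```python
import numpy as np

def simple_smoother(A, B, f, g, x, lam, sweeps=3, omega=0.8):
    """SIMPLE-type smoothing sweeps for the saddle-point system
        [A  B^T] [x  ]   [f]
        [B  0  ] [lam] = [g],
    using diag(A) to form an approximate Schur complement."""
    Dinv = 1.0 / np.diag(A)                  # cheap approximation of A^{-1}
    S = B @ (Dinv[:, None] * B.T)            # approximate Schur complement B diag(A)^{-1} B^T
    for _ in range(sweeps):
        # 1) displacement prediction with the current multiplier
        dx = Dinv * (f - A @ x - B.T @ lam)
        # 2) multiplier correction through the approximate Schur complement
        dlam = np.linalg.solve(S, B @ (x + dx) - g)
        # 3) damped update of both fields
        x = x + omega * (dx - Dinv * (B.T @ dlam))
        lam = lam + omega * dlam
    return x, lam

# tiny synthetic example: A symmetric positive definite, B with full row rank
rng = np.random.default_rng(0)
M = rng.standard_normal((20, 20))
A = M @ M.T + 20.0 * np.eye(20)
B = rng.standard_normal((5, 20))
x, lam = simple_smoother(A, B, f=rng.standard_normal(20), g=np.zeros(5),
                         x=np.zeros(20), lam=np.zeros(5))
```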
References
[1] Popp, A., Gitterle, M., Gee, M.W. and Wall, W.A.: "A dual mortar approach for 3D finite deformation contact with consistent linearization", International Journal for Numerical Methods in Engineering, 84, 543-571 (2010).
[2] Popp, A., Wohlmuth, B.I., Gee, M.W. and Wall, W.A.: "Dual quadratic mortar finite element methods for 3D finite deformation contact", SIAM Journal
on Scientific Computing, 34, B421-B446 (2012).
[3] Puso, M.A.: "A 3D mortar method for solid mechanics", International Journal
for Numerical Methods in Engineering, 59, 315-336 (2004)
[4] Vanek, P., Mandel, J. and Brezina, M.: "Algebraic Multigrid by Smoothed
Aggregation for Second and Fourth Order Elliptic problems", Computing, 56,
179-196 (1996)
[5] Adams, M.F.: "Algebraic multigrid methods for constrained linear systems
with applications to contact problems in solid mechanics", Numerical Linear
Algebra with Applications, 11(2-3), 141-153 (2004)
Joint work with A. Popp, W.A. Wall, and M.W. Gee.
415
Barbara Wohlmuth
Technische Universität München, DE
Interfaces, corners and point sources
Plenary Session: Monday, 09:00 - 09:50, Rolex Learning Center Auditorium
In this talk, we address the convergence of finite element approximations for cases
where the solution is locally smooth but possibly has globally reduced regularity.
Typical examples are transmission problems, domains with re-entrant corners,
heterogeneous coefficients and right-hand sides that are not in the dual space of H^1.
These situations occur quite often in the mathematical modelling of multi-physics
applications. As examples of interface problems we mention structure-acoustic interaction and the coupling between free flow and porous media equations. The
permeability in the Darcy equation of porous media models is often assumed to
be piecewise constant but involves highly heterogeneous coefficients. Non-convex
domains with re-entrant corners occur in the numerical simulation of technical
applications. Finally, dimension reduced partial differential equation systems play
an important role in the mathematical modelling of physical effects on different
scales, e.g., fractures of co-dimension one in porous media systems, networks of
co-dimension two. Although these simplified models seem to be very attractive
from the computational point of view, for coupled problems they result in a solution of reduced regularity. From the mathematical point of view, transmission
problems with piecewise smooth solutions or PDEs in a distributional sense with singular solutions arise. Standard remedies to handle these types of problems are graded meshes or enrichment; both techniques result in extra implementation work and computational cost.
Here we provide optimality results for interface quantities such as the flux and show
that globally no pollution occurs in case of point loads. Although the solution is
globally not in H^1, we observe on a sequence of uniformly refined meshes optimal L^2 a priori convergence on subdomains excluding the point sources.
In the case of heterogeneous coefficients or re-entrant corners, the situation is more
complex. There the well-known pollution effect is observed, and the convergence
can be extremely poor. This holds not only for the $L^2$ norm on subdomains excluding the cross-points and corners but also for other quantities of
interest such as stress intensity factors or eigenvalues. Here we introduce
a purely local energy correction function and modify the bilinear form locally.
We provide a multi-level algorithm to pre-compute the modification parameters,
together with existence and optimal a priori results. Numerical examples in 2D illustrate that
we can recover full optimality on uniform meshes for re-entrant corners,
heterogeneous coefficients and linear elasticity. As quantities of interest we select
the convergence of eigenvalues, the flux across an interface and the stress intensity
factor.
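To fix ideas, the energy correction can be sketched as follows (a schematic form in our own notation; the precise modification and its analysis are given in [2]): the standard bilinear form $a(\cdot,\cdot)$ is replaced by
\[
a_h(u_h, v_h) \;=\; a(u_h, v_h) \;-\; \gamma\, a_{\omega_h}(u_h, v_h),
\]
where $a_{\omega_h}$ denotes the restriction of $a(\cdot,\cdot)$ to a small patch $\omega_h$ of elements around the singular point (e.g., one layer of elements at the corner), and $\gamma$ is the correction parameter that has to be pre-computed, for instance by the multi-level algorithm mentioned above.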
[1] M. Melenk, H. Rezaijafari, B. Wohlmuth: Quasi-optimal a priori estimates
for fluxes in mixed finite element methods and applications to the Stokes–Darcy
coupling, IMA J. Numer. Anal., 2013 doi:10.1093/imanum/drs048
[2] H. Egger, U. Rüde, B. Wohlmuth: Energy-corrected finite element methods
for corner singularities, to appear in SIAM J. Numer. Anal.
[3] T. Köppl, B. Wohlmuth: Optimal a priori error estimates for an elliptic
problem with Dirac right-hand side, submitted 2013
416
[4] C. Waluga, B. Wohlmuth: Quasi-optimal trace error estimates and a posteriori error estimation for the interior penalty method, submitted 2012
Joint work with M. Melenk (TU Wien), H. Egger (TU Darmstadt), U. Rüde
(FAU) and with F. Benesch, T. Horger, M. Huber, T. Köppl, H. Rezaijafari, and
C. Waluga (TU München).
417
Winnifried Wollner
University of Hamburg, DE
Adjoint Consistent Gradient Computation with the Damped Crank-Nicolson Method
Minisymposium Session FEPD: Monday, 15:00 - 15:30, CO017
The talk is concerned with a damped version of the Crank-Nicolson (CN) method
for the solution of parabolic partial differential equations. As is well known,
the CN method needs to be damped in order to cope with irregular initial data,
owing to its lack of a smoothing property. Since the adjoint of the CN method is
again a CN method, shifted by half a time step, it is not surprising that a similar problem occurs for the adjoint time-stepping scheme. In this talk, we
derive an adjoint consistent damped CN scheme that ensures sufficient damping
of the dual problem. The necessity of these modifications is discussed in the context of
the dual-weighted residual (DWR) method for the adaptive solution
of the Black-Scholes equation. The consequences for optimization problems with
parabolic PDEs are also discussed.
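The abstract does not spell out the particular damping; one common realization (Rannacher-type start-up, replacing the first few CN steps by implicit Euler steps) is sketched below in Python for a spatially semi-discretized equation $u'(t) = A\,u(t)$ with a user-supplied sparse matrix A. This only illustrates the damping idea and is not the adjoint consistent scheme of the talk.

import numpy as np
from scipy.sparse import identity
from scipy.sparse.linalg import spsolve

def damped_cn(A, u0, dt, n_steps, n_damping=2):
    """Time-step u' = A u. The first n_damping steps use implicit Euler
    (which damps rough initial data); the remaining steps use Crank-Nicolson."""
    n = A.shape[0]
    I = identity(n, format="csc")
    u = u0.copy()
    for k in range(n_steps):
        if k < n_damping:
            # implicit Euler: (I - dt A) u^{k+1} = u^k
            u = spsolve(I - dt * A, u)
        else:
            # Crank-Nicolson: (I - dt/2 A) u^{k+1} = (I + dt/2 A) u^k
            u = spsolve(I - 0.5 * dt * A, (I + 0.5 * dt * A) @ u)
    return u

For example, A could be a standard finite difference discretization of the Laplacian; with n_damping = 0 one recovers the undamped CN scheme.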
Joint work with C. Goll, and R. Rannacher.
418
Olaf Wünsch
University of Kassel, Institute of Mechanics, DE
Numerical simulation of viscoelastic fluid flow in confined geometries
Minisymposium Session MANT: Tuesday, 11:00 - 11:30, CO017
The flow of highly viscous fluids in many technical apparatuses is dominated by
the influence of the walls. The fluid adheres to the wall, and when the Reynolds
number is small the shear stresses caused by the high viscosity transport the
information from the wall far into the fluid. Most of these technical geometries
possess symmetries such as centerlines. For Newtonian material, simulations can
exploit these geometrical symmetries to reduce the numerical cost: the computational
domain ends at the symmetry line and special boundary conditions are imposed
there.
For viscoelastic fluids this procedure is risky. Depending on the chosen
viscoelastic material model and its parameters, the symmetry of the velocity field
can be broken. An important quantity is the elongational viscosity and its behavior
at high elongation rates. If the critical value of the governing dimensionless number is exceeded, the flow becomes asymmetric. Experimental investigations in different
geometries are documented in the literature.
This paper deals with numerical simulation techniques for such viscoelastic fluid flows in confined geometries. The calculations are based on the balance of
momentum together with the continuity equation. The fluid is modeled
by a modified Maxwell-type differential equation for the stress tensor $\mathbf{T}$ with an
anisotropic molecular mobility tensor $\mathbf{Q}$,
\[
\mathbf{T} + \lambda_M\, \overset{\triangledown}{\mathbf{T}} + \mathbf{Q}(\mathbf{T})\,\mathbf{T}
\;=\; 2\,\eta_P\, \mathbf{D} + 2\,\lambda_N\, \eta_P\, \overset{\triangledown}{\mathbf{D}}.
\]
Here $\mathbf{D}$ is the rate-of-strain tensor, $\lambda_M$, $\lambda_N$, $\eta_P$ denote material parameters, and $\overset{\triangledown}{(\,\cdot\,)}$ stands for the special (objective) time derivatives used in the model.
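As an illustration only (this specific choice is our assumption and is not stated in the abstract): for a Giesekus-type fluid, as shown in Figure 2 below, the mobility tensor is commonly taken proportional to the stress itself,
\[
\mathbf{Q}(\mathbf{T}) \;=\; \frac{\alpha\,\lambda_M}{\eta_P}\,\mathbf{T},
\]
with an anisotropy (mobility) parameter $\alpha \in [0,1]$; $\alpha = 0$ recovers an isotropic Maxwell-type behavior.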
The numerical treatment of viscoelastic, incompressible fluid flow differs from
that of Newtonian flow [1]. In the latter case the differential equations are
elliptic, or for unsteady flow parabolic. For Maxwell-type equations
the classification is more involved and depends on the solution. For this reason,
stabilization techniques are necessary in order to obtain meaningful solutions.
Different methods are reported in the literature. In the viscous formulation, the
viscoelastic stress tensor is augmented by a Newtonian part in order to strengthen
the elliptic character. This is useful for low values of the characteristic dimensionless number, the Deborah number. Other methods, such as EVSS, DEVSS or the
log-conformation-tensor method, are more successful.
The set of differential equations is discretized by a finite volume method. We use
the open-source library OpenFOAM to develop a robust solution algorithm.
Numerical simulations are presented for different stabilization techniques and for
different viscoelastic models, with a focus on asymmetric fluid flows in symmetric geometries. The influence of an increasing Deborah number is shown exemplarily
in Figures 1 and 2. In the so-called cross-slot geometry the fluid flows in from the
left and right and out at the top and bottom. For a Newtonian fluid the flow is exactly
symmetric, in contrast to the viscoelastic fluid.
[1] A. Al-Baldawi, Berichte des Instituts für Mechanik, kassel university press
GmbH, Kassel, 2012
419
Figure 1: Symmetric velocity field of a Newtonian fluid in a cross-slot geometry
Figure 2: Asymmetric velocity field of a Giesekus fluid in a cross-slot geometry
Joint work with A. Al-Baldawi.
420
Huidong Yang
Postdoc, AT
Numerical Methods for Fluid-Structure Interaction Problems with a Mixed Elasticity Form in Hemodynamics
Minisymposium Session NFSI: Thursday, 14:30 - 15:00, CO122
In this talk, a nearly incompressible elasticity model coupled with an incompressible fluid model for fluid-structure interaction (FSI) problems in hemodynamics in a three-dimensional (3D) configuration is presented. A mixed
displacement-pressure formulation is employed for modeling the structure, which
overcomes a possible Poisson locking phenomenon. The fluid is modeled by the
incompressible Navier-Stokes equations. Implicit first-order methods are employed
for discretizing the fluid and structure sub-problems in time: a semi-implicit Euler scheme for the fluid and a Newmark-β scheme for the structure. A
least-squares finite element stabilization parameter for the elasticity formulation
is designed, depending on the time discretization parameters, the mesh discretization parameters and the
material coefficients. In this framework, an extension of the FSI
problems to two-layer composite vessels is investigated. Such vessels are assumed to have jumping material coefficients (e.g., density, Young's modulus
and Poisson ratio) and thicknesses that vary from one layer to the other. Numerical
experiments demonstrate the sensitivity of the FSI solutions with respect to the material
coefficients, the thicknesses of the two layers, and the time discretization parameters.
For solving the coupled FSI system, we employ a class of partitioned solvers which
can be interpreted as Gauss-Seidel iterations applied to a reduced system whose unknowns
are the fluid velocity and the structure displacement on the interface only. The performance of the algorithm, which relies on robust and efficient algebraic multigrid methods
for the fluid and structure sub-problems, is studied.
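A minimal Python sketch of such an interface Gauss-Seidel iteration is given below; the routines solve_fluid and solve_structure are hypothetical placeholders for the fluid and structure sub-problem solvers (which in the talk are themselves treated by algebraic multigrid), and no relaxation or acceleration is included.

import numpy as np

def interface_gauss_seidel(solve_fluid, solve_structure, d0,
                           max_iter=50, tol=1e-8):
    """Partitioned FSI iteration on interface unknowns.

    solve_fluid(d)     -> interface load given the interface displacement d
    solve_structure(f) -> interface displacement given the interface load f
    (Both callables are placeholders for the sub-problem solvers.)
    """
    d = d0.copy()
    for k in range(max_iter):
        f = solve_fluid(d)            # fluid sub-problem with current interface motion
        d_new = solve_structure(f)    # structure sub-problem with resulting load
        if np.linalg.norm(d_new - d) <= tol * max(np.linalg.norm(d_new), 1.0):
            return d_new, k + 1
        d = d_new
    return d, max_iter

In practice the interface update is usually relaxed or accelerated; the abstract does not specify which variant is used.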
421
Hamdullah Yücel
Max Planck Institute for Dynamics of Complex Technical Systems, DE
Distributed Optimal Control Problems Governed by Coupled Convection Dominated PDEs with Control Constraints
Contributed Session CT2.9: Tuesday, 15:30 - 16:00, CO124
Many real-life applications, such as fluid dynamic problems in the presence of
body forces, chemically reactive flows and electrochemical interaction flows, lead to
optimization problems governed by systems of convection-diffusion partial differential equations (PDEs) with nonlinear reaction mechanisms. Such problems are
strongly coupled, as inaccuracies in one unknown directly affect all other unknowns.
Prediction of these unknowns is very important for the safe and economical operation of biochemical and chemical engineering processes. In addition, when convection dominates diffusion, the solutions of these PDEs typically exhibit layers,
i.e., small regions where the solution has large gradients. Hence, special numerical
techniques are required which take the structure of the convection into account. The integration of discretization and optimization is important for the overall efficiency
of the solution process. Recently, discontinuous Galerkin (DG) methods have become
an alternative to finite difference, finite volume and continuous finite element
methods for solving wave-dominated problems such as convection-diffusion equations,
since they possess higher accuracy.
This talk focuses on an application of DG methods to distributed optimal
control problems governed by coupled convection-dominated PDEs with control
constraints. We use a residual-based a posteriori error estimator to reduce the
oscillations around boundary and/or interior layers. The matrix system generated by the Newton iterations is solved with an appropriate preconditioner.
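Such a residual-based estimator is typically used inside a solve-estimate-mark-refine loop. The following Python sketch, with purely hypothetical placeholder routines solve, estimate and refine, only illustrates that generic loop (with Dörfler-type bulk marking as one possible choice) and is not taken from the talk.

def adaptive_loop(mesh, solve, estimate, refine, tol=1e-4, theta=0.5, max_cycles=20):
    """Generic solve -> estimate -> mark -> refine cycle.
    solve(mesh)          -> discrete solution (state/adjoint/control)
    estimate(mesh, sol)  -> list of element-wise error indicators eta_K
    refine(mesh, marked) -> refined mesh
    """
    for cycle in range(max_cycles):
        sol = solve(mesh)
        eta = estimate(mesh, sol)
        total = sum(e**2 for e in eta) ** 0.5
        if total <= tol:
            return mesh, sol
        # Doerfler (bulk) marking: refine the cells carrying a theta-fraction
        # of the squared estimator
        order = sorted(range(len(eta)), key=lambda K: eta[K], reverse=True)
        marked, acc = [], 0.0
        for K in order:
            marked.append(K)
            acc += eta[K]**2
            if acc >= theta * total**2:
                break
        mesh = refine(mesh, marked)
    return mesh, sol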
Joint work with Martin Stoll, and Peter Benner.
422
Yongjin Zhang
Max Planck Institute for Dynamics of Complex Technical Systems, Magdeburg,
DE
Reduced-order modeling and ROM-based optimization of batch chromatography
Contributed Session CT4.4: Friday, 08:50 - 09:20, CO015
Batch chromatography, as a crucial separation and purification tool, is widely
employed in the food, fine chemical and pharmaceutical industries. The optimal operation of batch chromatography is of practical importance since it makes it possible to exploit
the full economic potential of the process and to reduce the separation cost. The
dynamic behavior of the chromatographic process is described by a complex system
of partial differential equations (PDEs),
\[
\left\{
\begin{aligned}
&\frac{\partial c_z}{\partial t} + \frac{1-\varepsilon}{\varepsilon}\,\frac{\partial q_z}{\partial t}
   = -\,v\,\frac{\partial c_z}{\partial x} + D_z\,\frac{\partial^2 c_z}{\partial x^2},
   && 0 < x < L,\\
&\frac{\partial q_z}{\partial t} = \kappa_z\,\bigl(q_z^{Eq} - q_z\bigr),
   && 0 \le x \le L,
\end{aligned}
\right.
\tag{1}
\]
where $c_z$, $q_z$ are the concentrations of the component $z$ ($z = A, B$) in the liquid and
solid phase, respectively, $v = \frac{Q}{A_c}$ the convection velocity, $Q$ the feed volumetric
flow-rate, $A_c$ the cross-sectional area of the column, $\varepsilon$ the column porosity, $t$ the
time coordinate, $x$ the axial coordinate along the column with the length $L$, $\kappa_z$
the mass-transfer coefficient, $D_z = \frac{vL}{Pe}$ the axial dispersion coefficient and $Pe$
the Péclet number. The adsorption equilibrium $q_z^{Eq}$ is described by isotherm
equations of bi-Langmuir type,
\[
q_z^{Eq} = f_z(c_A, c_B) := \left(\frac{H_{z1}}{1 + K_{A1} c_A + K_{B1} c_B}
  + \frac{H_{z2}}{1 + K_{A2} c_A + K_{B2} c_B}\right) c_z,
\tag{2}
\]
where $H_{zj}$ and $K_{zj}$ are the Henry constants and thermodynamic coefficients, respectively. The initial and boundary conditions are given as
\[
\left\{
\begin{aligned}
& c_z(t = 0, x) = 0, \quad q_z(t = 0, x) = 0, && 0 \le x \le L,\\
& \frac{\partial c_z}{\partial x}\Big|_{x=0}
   = \frac{v}{D_z}\,\bigl(c_z(t, x = 0) - c_z^{F}\,\chi_{[0,t_{in}]}(t)\bigr),\\
& \frac{\partial c_z}{\partial x}\Big|_{x=L} = 0,
\end{aligned}
\right.
\tag{3}
\]
where $c_z^{F}$ are the feed concentrations of component $z$, $t_{in}$ is the injection time,
and $\chi_{[0,t_{in}]}$ is the characteristic function,
\[
\chi_{[0,t_{in}]}(t) =
\begin{cases}
1, & \text{if } t \in [0, t_{in}],\\
0, & \text{otherwise.}
\end{cases}
\tag{4}
\]
Note that $Q$ and $t_{in}$ are often considered as the operating variables, denoted as
$\mu := (Q, t_{in})$, which play the role of parameters in this system. Consequently, the
system is nonlinear, time-dependent and has a non-affine parameter dependency.
The nonlinearity of the system is reflected by (2). To capture the system dynamics precisely, a large number of degrees of freedom must be introduced for the
discretization of the PDEs. Many efforts have been made for the optimization
of batch chromatography over the past several decades. They are usually based
on the finely discretized full order model (FOM). Such an expensive FOM must
be repeatedly solved during the optimization process. As a result, addressing an
optimization problem is often time-consuming due to such a multi-query context.
Parametric model order reduction (PMOR) is an efficient tool for reducing a large
parametric system to a small one, while preserving the dominant dynamics of
423
the FOM and the accuracy of the input-output relationship. The reduced basis
method is a robust PMOR technique and has been widely used in many applications. In this work, the reduced basis method [1, 2] is introduced and applied
to the simulation of batch chromatography. An adaptive technique is proposed
to speed up the generation of the reduced basis. The FOM is derived by using
the finite volume discretization, whereby the conservation property of the system
is preserved. The resulting reduced-order model (ROM) is used to get a rapid
evaluation of the input-output relationships for the underlying optimization. The
construction of the ROM is automatically managed by a greedy algorithm with
an inexpensive error estimator.
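The following Python sketch outlines such a greedy basis construction; the routines solve_fom and estimate_error are hypothetical placeholders (the actual FOM solver, error estimator and adaptive technique of this work are not reproduced here).

import numpy as np

def greedy_rb(training_set, solve_fom, estimate_error, tol=1e-4, max_basis=60):
    """Greedy reduced-basis generation.
    training_set          : list of parameter values mu
    solve_fom(mu)         -> full-order snapshot (1D numpy array)
    estimate_error(mu, B) -> cheap error estimate of the ROM built from basis B at mu
    """
    basis = []                                  # orthonormal reduced basis vectors
    mu = training_set[0]                        # arbitrary starting parameter
    while len(basis) < max_basis:
        snapshot = solve_fom(mu)
        # Gram-Schmidt orthogonalization against the current basis
        for b in basis:
            snapshot = snapshot - np.dot(b, snapshot) * b
        norm = np.linalg.norm(snapshot)
        if norm < 1e-12:                        # snapshot already (numerically) in the span
            break
        basis.append(snapshot / norm)
        # pick the parameter where the estimated ROM error is worst
        errors = [estimate_error(m, basis) for m in training_set]
        worst = int(np.argmax(errors))
        if errors[worst] <= tol:
            break
        mu = training_set[worst]
    return basis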
Numerical experiments are carried out to show the efficiency and reliability of the
ROM. Table 1 shows the average run-time of the simulation over 100 random sample points of $\mu$, and the maximal error defined as
$\mathrm{Max.\ error} = \max_{\mu \in P_{\mathrm{val}}} \|Y^F(\mu) - Y^R(\mu)\|$,
where $Y^F(\mu)$ and $Y^R(\mu)$ are the outputs evaluated with the FOM and the
ROM, respectively, at a given parameter $\mu$. The average run-time of the detailed simulation is reduced by a factor of 52. The optimization results are shown in Table
2. The optimal solution of the ROM-based optimization converges to that of the
FOM-based optimization. Furthermore, the run-time of the FOM-based
optimization is significantly reduced; the speedup factor is 64.
Table 1: Run-time comparison of the detailed simulation and the reduced simulation over a test set $P_{\mathrm{val}}$ with 100 random sample points.

  Methods                               | Max. error          | Average run-time [s]
  Detailed simulation (N = 1000)        | –                   | 69.5446
  Reduced simulation (N = 40, M = 61)   | 2.5 × 10^{-4}       | 1.3359

Table 2: Comparison of the optimization based on the ROM and the FOM.

  Methods          | Obj. (Pr) | Opt. solution (µ)   | Iterat. | Run-time [h]
  FOM-based Opt.   | 0.02032   | (0.07983, 1.05544)  | 211     | 8.5371
  ROM-based Opt.   | 0.02028   | (0.07982, 1.05247)  | 211     | 0.1332
References
[1] A. T. Patera and G. Rozza. Reduced basis approximation and a posteriori error
estimation for parametrized partial differential equations. Version 1.0, Copyright
MIT 2006, to appear in (tentative rubric) MIT Pappalardo Graduate Monographs in Mechanical Engineering.
[2] M. Drohmann, B. Haasdonk, and M. Ohlberger. Reduced basis approximation
for nonlinear parametrized evolution equations based on empirical operator interpolation. SIAM J. Sci. Comput., 34(2):937–969, 2012.
Joint work with Peter Benner, Lihong Feng, and Suzhou Li.
424
Alexander Zlotnik
National Research University Higher School of Economics, RU
The Crank-Nicolson scheme with splitting and discrete transparent boundary conditions for the Schrödinger equation on an infinite strip
Minisymposium Session TIME: Thursday, 12:00 - 12:30, CO015
The time-dependent Schrödinger equation with several variables is important in
quantum mechanics, atomic and nuclear physics, wave physics, nanotechnologies,
etc. Often it should be solved in unbounded space domains. In particular, the
generalized 2D time-dependent Schrödinger equation with variable coefficients on
a semi-infinite strip appears in the microscopic description of low-energy nuclear fission
dynamics.
Several approaches have been developed and studied to solve problems of this kind;
see, in particular, [1]. One of them exploits the so-called discrete transparent
boundary conditions (TBCs) at artificial boundaries [2]. Its advantages are the
complete absence of spurious reflections, reliable computational stability, clear
mathematical background and rigorous stability theory.
The Crank-Nicolson finite-difference scheme with the discrete TBCs in the case of
a strip or a semi-infinite strip was studied in detail in [3, 4, 5]. But the scheme is
implicit, so that a specific system of linear algebraic equations with complex coefficients
has to be solved at each time level. Efficient methods to solve such systems are by now
well developed for the real but not for the complex case.
The splitting technique is widely used to simplify the solution of the time-dependent
Schrödinger and related equations; see, in particular, [6]. We apply a Strang-type
splitting with respect to the potential to the Crank-Nicolson scheme with a rather
general approximate TBC in the form of a Dirichlet-to-Neumann map. For the
resulting method, we prove uniform-in-time $L^2$-stability under a condition on
an operator $S$ in the approximate TBC.
To construct the discrete TBC, we consider the splitting scheme on an infinite mesh
in the infinite strip. Its uniform-in-time $L^2$-stability together with the mass conservation law are proved. We find that the operator $S_{\mathrm{ref}}$ in the discrete TBC is the
same as for the original Crank-Nicolson scheme in [4], and it satisfies the above-mentioned condition, so that the uniform-in-time $L^2$-stability of the resulting method
is guaranteed. The operator $S_{\mathrm{ref}}$ is written in terms of a discrete convolution in
time and a discrete Fourier expansion in the direction $y$ perpendicular to the strip.
Due to the splitting, an efficient direct algorithm using the FFT in $y$ is developed to
implement the method with the discrete TBC for a general potential (while the other
coefficients are $y$-independent).
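A minimal 1D illustration of a Strang splitting in the potential combined with a Crank-Nicolson step for the free part is sketched below in Python (with periodic boundary conditions instead of discrete TBCs, purely for illustration; the discrete TBC and the 2D FFT-in-y algorithm of the talk are not reproduced here).

import numpy as np
from scipy.sparse import diags, identity
from scipy.sparse.linalg import splu

def strang_cn_step(psi, V, dx, dt):
    """One Strang step for i psi_t = -psi_xx + V(x) psi on a periodic 1D grid:
    half-step with the potential, Crank-Nicolson step for the Laplacian,
    half-step with the potential."""
    n = psi.size
    # second-order periodic finite-difference Laplacian
    lap = diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n), format="lil")
    lap[0, -1] = 1.0
    lap[-1, 0] = 1.0
    A = -lap.tocsc() / dx**2                      # discretization of -d^2/dx^2
    I = identity(n, format="csc", dtype=complex)

    psi = np.exp(-0.5j * dt * V) * psi            # half-step: potential
    lhs = splu((I + 0.5j * dt * A).tocsc())       # CN step for the free part
    psi = lhs.solve((I - 0.5j * dt * A) @ psi)
    psi = np.exp(-0.5j * dt * V) * psi            # half-step: potential
    return psi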
The corresponding numerical results on the tunnel effect for rectangular barriers are presented together with a practical error analysis in the $C$ and $L^2$ norms,
confirming the good error properties of the splitting scheme.
Notice that the results are rather easily generalized to the case of a multidimensional parallelepiped that is infinite or semi-infinite in one of the directions.
This study was carried out jointly with B. Ducomet (CEA, France) and I. Zlotnik
(MPEI, Russia). The results are partly presented in [7].
References
[1] X. Antoine, A. Arnold, C. Besse, M. Ehrhardt and A. Schädle, A review
of transparent and artificial boundary conditions techniques for linear and
425
nonlinear Schrödinger equations. Commun. Comp. Phys. 4 (4) (2008) 729-796.
[2] M. Ehrhardt and A. Arnold, Discrete transparent boundary conditions for
the Schrödinger equation. Riv. Mat. Univ. Parma. 6 (2001) 57-108.
[3] A. Arnold, M. Ehrhardt and I. Sofronov, Discrete transparent boundary conditions for the Schrödinger equation: fast calculations, approximation and
stability. Comm. Math. Sci. 1 (2003) 501-556.
[4] B. Ducomet and A. Zlotnik, On stability of the Crank-Nicolson scheme with
approximate transparent boundary conditions for the Schrödinger equation.
Part I. Comm. Math. Sci. 4 (2006) 741-766.
[5] B. Ducomet and A. Zlotnik, On stability of the Crank-Nicolson scheme with
approximate transparent boundary conditions for the Schrödinger equation.
Part II. Comm. Math. Sci. 5 (2007) 267-298.
[6] C. Lubich, From quantum to classical molecular dynamics. Reduced models
and numerical analysis. EMS: Zürich, 2008.
[7] B. Ducomet, A. Zlotnik and I. Zlotnik, The splitting in potential
Crank-Nicolson scheme with discrete transparent boundary conditions for
the Schrödinger equation on a semi-infinite strip. (2013), submitted.
http://arxiv.org/abs/1303.3471
426
Walter Zulehner
Johannes Kepler University Linz, AT
Operator Preconditioning for a Mixed Method of Biharmonic Problems on Polygonal Domains
Minisymposium Session CTNL: Tuesday, 11:30 - 12:00, CO015
The first and the second boundary value problems of the biharmonic operator are
simple model problems in elasticity for the bending of a clamped plate and of a simply supported plate, respectively. For these model problems, mixed second-order
formulations are considered which are equivalent to the original fourth-order formulation without additional assumptions, such as convexity, on the polygonal domain. Based on the mapping properties of the involved operators and of their discrete
counterparts resulting from a mixed finite element method, efficient preconditioners
are constructed and numerical experiments are presented.
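As background only (a textbook splitting written in our own notation; it is not claimed to be the precise mixed formulation of the talk): for the second boundary value problem, $\Delta^2 u = f$ in $\Omega$ with $u = \Delta u = 0$ on $\partial\Omega$, one may formally introduce $w = -\Delta u$ and solve two second-order problems,
\[
-\Delta w = f \ \text{ in } \Omega, \quad w = 0 \ \text{ on } \partial\Omega,
\qquad
-\Delta u = w \ \text{ in } \Omega, \quad u = 0 \ \text{ on } \partial\Omega.
\]
On non-convex polygonal domains this naive decoupling is in general not equivalent to the original fourth-order problem, which is precisely the issue the mixed formulations and preconditioners of the talk are designed to address.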
427
Index of Speakers
af Klinteberg, Ludvig, 5, 17
Aizinger, Vadym, 4, 18
Akman, Tuğba, 14, 19
Algarni, Said, 16, 20
Amsallem, David, 12, 22
Antil, Harbir, 10, 23
Arjmand, Doghonay, 8, 24
Artina, Marco, 7, 26
Augustin, Christoph, 5, 27
Avalishvili, Gia, 14, 28
Azijli, Iliass, 8, 30
Azzimonti, Laura, 8, 32
Aßmann, Ute, 4, 34
Codina, Ramon, 12, 79
Cohen, Albert, 10, 80
Colciago, Claudia, 13, 81
Crestetto, Anais, 10, 83
Crouseilles, Nicolas, 10, 84
D’Ambrosio, Raffaele, 5, 85
Damanik, Hogenrich, 7, 86
Danilov, Alexander, 4, 88
Davenport, Mark, 4, 89
de la Cruz, Raúl, 4, 90
Debrabant, Kristian, 3, 92
Dede, Luca, 10, 93
Dementyeva, Ekaterina, 14, 94
den Ouden, Dennis, 8, 96
Deparis, Simone, 3, 98
Despres, Bruno, 10, 99
Di Pietro, Daniele, 14, 100
Dimitriu, Gabriel, 14, 102
Dolgov, Sergey, 8, 103
Donatelli, Marco, 10, 104
Bachmayr, Markus, 10, 35
Badia, Santiago, 7, 37
Bai, Yun, 4, 38
Baker, Ruth, 7, 39
Ballani, Jonas, 7, 40
Bartels, Soeren, 7, 41
Basting, Steffen, 7, 42
Bause, Markus, 14, 43
Berger, Lorenz, 5, 45
Berrut, Jean-Paul, 5, 46
Billaud Friess, Marie, 8, 47
Blumenthal, Adrian, 16, 48
Bonelle, Jerome, 7, 49
Bonizzoni, Francesca, 14, 50
Börm, Steffen, 5, 52
Braack, Malte, 3, 53
Budac, Ondrej, 8, 54
Burman, Erik, 3, 55, 7, 56
Bustinza, Rommel, 8, 57
Ehler, Martin, 3, 105
Ehrlacher, Virginie, 7, 106, 4, 107
Einkemmer, Lukas, 12, 108
Elfverson, Daniel, 3, 110
Engblom, Stefan, 14, 112
Engwer, Christian, 3, 114
Ern, Alexandre, 13, 116
Falcó, Antonio, 4, 117
Feistauer, Miloslav, 14, 118
Fishelov, Dalia, 14, 119
Flueck, Michel, 13, 121
Freitag, Melina, 13, 122
Frolov, Maxim, 8, 123
Furmanek, Petr, 16, 124
Caboussat, Alexandre, 7, 58, 5, 59
Caiazzo, Alfonso, 7, 60
Cances, Eric, 3, 61
Cancès, Clément, 3, 62
Cancès, Eric, 7, 63
Capatina, Daniela, 13, 64, 10, 65
Cattaneo, Laura, 5, 66
Cecka, Cris, 3, 68
Cervone, Antonio, 3, 69
Chen, Xingyuan, 10, 70
Chen, Peng, 16, 72
Chinesta, Francisco, 12, 73
Chkifa, Moulay Abdellah, 14, 75
Christophe, Alexandra, 5, 76
Chrysafinos, Konstantinos, 3, 78
Gastaldi, Lucia, 7, 125
Gauckler, Ludwig, 13, 127
Gerbeau, Jean-Frédéric, 13, 128
Gergelits, Tomas, 14, 129
Ghattas, Omar, 12, 131
Giraud, Luc, 10, 132
Golbabaee, Mohammad, 3, 133
Gonzalez, Maria, 8, 134
Gorkem, Simsek, 8, 136
Grandchamp, Alexandre, 5, 138
Grandperrin, Gwenol, 13, 139
428
Greff, Isabelle, 5, 140
Gross, Sven, 7, 142
Grote, Marcus, 12, 144
Guglielmi, Nicola, 13, 146
Kleiss, Stefan, 5, 207
Knobloch, Petr, 16, 208
Kolev, Tzanio, 12, 209
Konshin, Igor, 5, 211
Kosík, Adam, 16, 213
Koskela, Antti, 13, 215
Krahmer, Felix, 3, 216
Kramer, Stephan, 13, 218, 7, 219
Kray, Marie, 14, 220
Kreiss, Gunilla, 5, 222
Krendl, Wolfgang, 16, 224
Kressner, Daniel, 12, 225
Kroll, Jochen, 10, 226
Krukier, Lev, 14, 227
Kucera, Vaclav, 8, 229
Kuzmin, Dmitri, 16, 230
Hachem, Elie, 7, 147
Hadrava, Martin, 8, 148
Hairer, Ernst, 3, 150
Haji-Ali, Abdul-Lateef, 16, 151
Harbrecht, Helmut, 10, 153
Hegland, Markus, 10, 154
Heine, Claus-Justus, 5, 155, 7, 156
Henning, Patrick, 7, 157
Herrero, Henar, 14, 158
Hess, Martin, 14, 160
Hesthaven, Jan, 12, 161
Heumann, Holger, 13, 163
Himpe, Christian, 13, 164
Hintermueller, Michael, 10, 165, 7, 166
Hochbruck, Marlis, 12, 167
Hoel, Haakon, 16, 168
Hoffman, Johan, 10, 170
Holman, Jiří, 16, 171
Huckle, Thomas, 10, 173
Lafitte, Pauline, 10, 231
Lakkis, Omar, 3, 232
Lang, Jens, 13, 233
Lassila, Toni, 12, 234
Le Maitre, Olivier, 12, 235
Lee, Sanghyun, 4, 236
Lee, Jeonghun, 16, 238
Lejon, Annelies, 5, 239
Lilienthal, Martin, 16, 241
Lim, Lek-Heng, 3, 242
Lin, Ping, 12, 243
Linke, Alexander, 8, 244
Long, Quan, 14, 245
Louda, Petr, 16, 246
Luce, Robert, 10, 248
Luh, Lin-Tian, 5, 249
Lukin, Vladimir, 5, 251
Icardi, Matteo, 8, 174
Idema, Reijer, 14, 176
Ishizuka, Hiroki, 5, 178
Jannelli, Alessandra, 8, 180
Janssen, Bärbel, 5, 182
Jaraczewski, Manuel, 14, 183
Jarlebring, Elias, 12, 185
Jiranek, Pavel, 7, 186
John, Lorenz, 16, 187
Jolivet, Pierre, 4, 188
Juntunen, Mika, 5, 189
Macedo, Francisco, 5, 253
Madhavan, Pravin, 8, 254
Maier, Immanuel, 16, 255
Makridakis, Charalambos, 4, 257
Mali, Olli, 14, 258
Matthies, Gunar, 8, 259
Mehl, Miriam, 12, 260
Meinecke, Lina, 14, 262
Melis, Ward, 16, 264
Michiels, Wim, 13, 266
Miedlar, Agnieszka, 13, 268
Migliorati, Giovanni, 5, 269
Miyajima, Shinya, 8, 270
Mula, Olga, 12, 272
Murata, Naofumi, 14, 274
Muslu, Gulcin Mihriye, 8, 275
Kadir, Ashraful, 16, 190
Kalise, Dante, 3, 191
Kamijo, Kenichi, 16, 192
Karasozen, Bulent, 14, 193
Kazeev, Vladimir, 10, 194
Keslerova, Radka, 7, 195, 16, 197
Kestler, Sebastian, 4, 199
Khoromskaia, Venera, 3, 200
Khoromskij, Boris, 8, 201
Kieri, Emil, 16, 202
Kirby, Michael, 14, 204
Kirchner, Alana, 3, 205
Klawonn, Axel, 13, 206
429
Sangalli, Giancarlo, 4, 337
Savostyanov, Dmitry, 8, 338
Scheichl, Robert, 10, 339
Schieweck, Friedhelm, 4, 340
Schillings, Claudia, 12, 341
Schnass, Karin, 3, 342
Schneider, Reinhold, 7, 343
Schratz, Katharina, 13, 344
Sepúlveda, Mauricio, 8, 345
Shapeev, Alexander, 3, 346
Sharma, Natasha, 13, 347
Sheng, Zhiqiang, 3, 348
Simian, Corina, 16, 349
Simoncini, Valeria, 10, 350
Skowera, Jonathan, 3, 351
Smears, Iain, 3, 352
Smetana, Kathrin, 4, 353
Smirnova, Alexandra, 13, 354
Stamm, Benjamin, 3, 356
Steinig, Simeon, 4, 357
Stenberg, Rolf, 10, 358
Stohrer, Christian, 8, 359
Stoll, Martin, 7, 361
Strakos, Zdenek, 10, 362
Sumitomo, Hiroaki, 5, 364
Szepessy, Anders, 16, 366
Nadir, Bayramov, 8, 276
Negri, Federico, 16, 277
Nguyen, Thi Trang, 8, 278
Nikitin, Kirill, 3, 280
Nore, Caroline, 13, 281
Ogita, Takeshi, 8, 282
Ohlberger, Mario, 12, 284
Ojala, Rikard, 8, 286
Olshanskii, Maxim, 10, 287, 13, 288
Ortner, Christoph, 3, 289
Ouazzi, Abderrahim, 8, 290
Ozaki, Katsuhisa, 8, 291
Papez, Jan, 5, 293
Pavarino, Luca, 12, 294
Pekmen, Bengisen, 5, 295
Pena, Juan Manuel, 8, 297
Perotto, Simona, 3, 298
Perugia, Ilaria, 10, 299
Peter, Steffen, 4, 300
Pfefferer, Johannes, 4, 301
Picasso, Marco, 10, 302
Pieper, Konstantin, 3, 303
Pořízková, Petra, 16, 304
Possanner, Stefan, 7, 305
Pousin, Jerome, 5, 306
Powell, Catherine, 7, 308
Prokop, Vladimír, 5, 309
Puscas, Maria Adela, 12, 310
Tamellini, Lorenzo, 14, 367
Tani, Mattia, 16, 369
Tempone, Raul, 12, 370
ten Thije Boonkkamp, Jan, 8, 371
Tesei, Francesco, 16, 372
Tews, Benjamin, 4, 374
Tezer-Sezgin, Münevver, 5, 375
Thalhammer, Mechthild, 7, 377
Tobiska, Lutz, 4, 378, 10, 379
Touma, Rony, 8, 380
Tricerri, Paolo, 13, 382
Tryoen, Julie, 8, 384
Turek, Stefan, 7, 386
Tyrtyshnikov, Eugene, 7, 387
Hong, Qingguo, 13, 311
Rademacher, Andreas, 16, 312
Reguly, Istvan, 4, 314
Reigstad, Gunhild Allard, 16, 316
Reimer, Knut, 16, 318
Repin, Sergey, 16, 319
Richter, Thomas, 12, 320
Rozgic, Marco, 14, 321
Rozza, Gianluigi, 13, 323
Rupp, Karl, 3, 325
Ruprecht, Daniel, 5, 327
Uschmajew, André, 3, 388
Van der Zee, Kristoffer, 4, 389
Vandergheynst, Pierre, 4, 390
Vannieuwenhoven, Nick, 14, 391
Varygina, Maria, 5, 392
Vassilevski, Yuri, 4, 394
Verani, Marco, 8, 395
Veroy-Grepl, Karen, 13, 396
Sadovskaya, Oxana, 14, 329
Sadovskii, Vladimir, 14, 331
Saffar Shamshirgar, Davood, 5, 333
Sahin, Mehmet, 7, 334
Samaey, Giovanni, 16, 335
Sandberg, Mattias, 16, 336
430
Vetterli, Martin, 8, 397
Vilanova, Pedro, 14, 398
Vilmart, Gilles, 4, 399, 7, 400
Vohralik, Martin, 13, 401
Voss, Heinrich, 12, 403
Widlund, Olof B., 12, 413
Wiesner, Tobias, 12, 414
Wohlmuth, Barbara, 3, 416
Wollner, Winnifried, 4, 418
Wünsch, Olaf, 7, 419
Wacker, Benjamin, 12, 404
Walker, Shawn, 10, 405
Walloth, Mirjam, 16, 406
Weinmann, Andreas, 4, 408
Weißer, Steffen, 5, 409
Wells, Garth, 4, 411
Wick, Thomas, 12, 412
Yang, Huidong, 13, 421
Yücel, Hamdullah, 8, 422
Zhang, Yongjin, 16, 423
Zlotnik, Alexander, 12, 425
Zulehner, Walter, 7, 427
431