Colloquium
We usually meet (with a few exceptions, please see below) on Fridays at 10:00 am (Eastern Time). If you are interested in joining, please fill out the Registration Form. For questions please contact Harbir Antil (hantil@gmu.edu).
Fall 2020
Date  Speaker  Affiliation  Title  

Friday, August 28, 2020 
Enrique Zuazua  University of Erlangen–Nuremberg (FAU)  Turnpike control and deep learning 

Abstract: The turnpike principle asserts that, over long time horizons, optimal control strategies are nearly of a steady-state nature. In this lecture we shall survey some recent results on this topic and present some of its consequences for deep supervised learning. This lecture is based in particular on recent joint work with C. Esteve, B. Geshkovski, and D. Pighin. arXiv 

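As a hedged illustration (not taken from the talk), the turnpike behavior the abstract describes can be seen in the simplest possible setting: a scalar discrete-time linear-quadratic problem, where the finite-horizon optimal trajectory spends most of a long horizon near the steady state. All parameter values below are illustrative.

```python
import numpy as np

# Toy turnpike illustration: minimize sum(x_k^2 + u_k^2) subject to
# x_{k+1} = a*x_k + b*u_k over a long horizon T.  The optimal
# trajectory approaches the steady state (here x = 0) mid-horizon.
a, b, T, x0 = 1.0, 1.0, 50, 5.0

# Backward Riccati recursion for the finite-horizon feedback gains.
P = np.zeros(T + 1)
K = np.zeros(T)
for k in range(T - 1, -1, -1):
    K[k] = a * b * P[k + 1] / (1.0 + b * b * P[k + 1])
    P[k] = 1.0 + a * a * P[k + 1] - a * b * P[k + 1] * K[k]

# Forward simulation under the optimal feedback u_k = -K[k] * x_k.
x = np.zeros(T + 1)
x[0] = x0
for k in range(T):
    x[k + 1] = (a - b * K[k]) * x[k]

# Mid-horizon, the state has essentially reached the turnpike x = 0.
print(abs(x[T // 2]))
```

The gains K[k] are nearly constant except close to the final time, which is exactly the turnpike structure: a transient arc, a long steady stretch, and a terminal arc.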

Friday, September 04, 2020 
Rainald Löhner  George Mason University  Modeling and Simulation of Viral Propagation in the Built Environment 

Abstract: This talk will begin by summarizing the mechanical characteristics of virus contaminants and their transmission via droplets and aerosols. The ordinary and partial differential equations describing the physics of these processes with high fidelity will be presented. We shall also describe appropriate numerical schemes to solve these problems. We will conclude the talk with several realistic examples of built environments, such as TSA queues and hospital rooms. DOI 


Friday, September 11, 2020 
Fioralba Cakoni  Rutgers University  Spectral Problems in Inverse Scattering for Inhomogeneous Media 

Abstract: The inverse scattering problem for inhomogeneous media amounts to inverting a locally compact nonlinear operator, thus presenting difficulties in arriving at a solution. Initial efforts to deal with the nonlinear and ill-posed nature of the inverse scattering problem focused on the use of nonlinear optimization methods. Although efficient in many situations, their use suffers from the need for strong a priori information in order to implement such an approach. In addition, recent advances in material science and nanostructure fabrication have introduced new exotic materials for which full reconstruction of the constitutive parameters from scattering data is challenging or even impossible. In order to circumvent these difficulties, a recent trend in inverse scattering theory has focused on the development of new methods in which the amount of a priori information needed is drastically reduced, but at the expense of obtaining only limited information about the scatterers. Such methods come under the general title of the qualitative approach in inverse scattering theory; they yield mathematically justified and computationally simple reconstruction algorithms by investigating properties of the linear scattering operator to decode nonlinear information about the scattering object. In this spirit, a possible approach is to exploit spectral properties of operators associated with scattering phenomena which carry essential information about the media. The identified eigenvalues must satisfy two important properties: 1) they can be determined from the scattering operator, and 2) they are related to geometrical and physical properties of the media in an understandable way. In this talk we will discuss some old and new eigenvalue problems arising in scattering theory for inhomogeneous media. 
We will present a twofold discussion: on the one hand relating the eigenvalues to the measurement operator (to address the first property), and on the other hand viewing them as the spectrum of appropriate (possibly non-self-adjoint) partial differential operators (to address the second property). Numerical examples will be presented to show what kind of information these eigenvalues, and more generally the qualitative approach, yield about the unknown inhomogeneity. 


Friday, September 18, 2020 
Shawn Walker  Louisiana State University  Mathematical Modeling and Numerics for Nematic Liquid Crystals 

Abstract: I start with an overview of nematic liquid crystals (LCs), including their basic physics, applications, and how they are modeled. In particular, I describe different models, such as Oseen-Frank, Landau-de Gennes, and the Ericksen model, as well as their numerical discretization, and I give the advantages and disadvantages of each model. For the rest of the talk, I will focus on Landau-de Gennes (LdG) and Ericksen. Next, I will highlight parts of the analysis of these models and how it relates to numerical analysis, with specific emphasis on finite element methods (FEMs) to compute energy minimizers; much of this work is joint with various co-authors, whom I will acknowledge. I will illustrate the methods we have developed by presenting numerical simulations in two and three dimensions, including non-orientable line fields (LdG model). Finally, I will conclude with some current problems in modeling and simulating LCs and an outlook to future directions. 


Friday, September 25, 2020 
Carola-Bibiane Schönlieb  University of Cambridge  Multitasking inverse problems: more together than alone 

Abstract: Inverse imaging problems in practice constitute a pipeline of tasks that starts with image reconstruction, involves registration and segmentation, and ends with a prediction task. The idea of multitasking inverse problems is to make use of the full information in the data at every step of this pipeline by jointly optimising for all tasks. While this is not a new idea in inverse problems, the ability of deep learning to capture complex prior information, paired with its computational efficiency, renders an all-in-one approach practically possible for the first time. 


Friday, October 02, 2020 
Drew P. Kouri  Sandia National Laboratories  Randomized Sketching for Low-Memory Dynamic Optimization 

Abstract: In this talk, we develop a novel limited-memory method to solve dynamic optimization problems. The memory requirements for such problems often present a major obstacle, particularly for problems with PDE constraints such as optimal flow control, full waveform inversion, and optical tomography. In these problems, PDE constraints uniquely determine the state of a physical system for a given control; the goal is to find the value of the control that minimizes an objective or cost functional. While the control is often low dimensional, the state is typically much more expensive to store. To reduce the memory requirements, we employ randomized matrix approximation to compress the state as it is generated. The compressed state is then used to compute approximate gradients and to apply the Hessian to vectors. The approximation error in these quantities is controlled by the target rank of the compressed state. This approximate first- and second-order information can readily be used in any optimization algorithm. As an example, we develop a sketched trust-region method that adaptively learns the target rank using a posteriori error information and provably converges to a stationary point of the original problem. To conclude, we apply our randomized compression to the optimal control of a linear elliptic PDE and the optimal control of fluid flow past a cylinder. 

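A minimal sketch of the compression idea, assuming a plain randomized range finder in place of the authors' method: the state trajectory is a tall matrix whose columns arrive one time step at a time, and only a small random sketch of it is kept in memory. The matrix sizes and the synthetic low-rank "trajectory" are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# The state trajectory of a discretized PDE is a tall matrix
# Y = [y_1, ..., y_T] that may be too large to store.  We accumulate
# a small sketch S = Y @ Omega as columns are generated, then recover
# a rank-r approximation of Y from the sketch.
n, T, r = 2000, 200, 10

# Synthetic low-rank trajectory (smooth dynamics are compressible).
U = rng.standard_normal((n, r))
V = rng.standard_normal((T, r))
Y = U @ V.T

Omega = rng.standard_normal((T, r + 5))  # random test matrix
S = np.zeros((n, r + 5))

# Stream the columns: only y_k and the sketch live in memory at once.
for k in range(T):
    y_k = Y[:, k]                     # in practice: one PDE time step
    S += np.outer(y_k, Omega[k, :])   # S accumulates Y @ Omega

# Orthonormal basis for the range of the sketch; Y ~= Q (Q^T Y).
# The full Y is used below only to verify the approximation error.
Q, _ = np.linalg.qr(S)
err = np.linalg.norm(Y - Q @ (Q.T @ Y)) / np.linalg.norm(Y)
print(err)
```

Because the synthetic trajectory has exact rank r and the sketch has a few extra columns of oversampling, the relative error is near machine precision; for genuine PDE states the error is controlled by the chosen target rank, as the abstract states.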

Friday, October 09, 2020 
Kevin Carlberg  Nonlinear model reduction: using machine learning to enable rapid simulation of extreme-scale physics models 

Abstract: Physics-based modeling and simulation has become indispensable across many applications in science and engineering, ranging from autonomous-vehicle control to designing new materials. However, achieving high predictive fidelity necessitates modeling at fine spatiotemporal resolution, which can lead to extreme-scale computational models whose simulations consume months on thousands of computing cores. This constitutes a formidable computational barrier: the cost of truly high-fidelity simulations renders them impractical for important time-critical applications (e.g., rapid design, control, real-time simulation) in engineering and science. In this talk, I will present several advances in the field of nonlinear model reduction that leverage machine-learning techniques, ranging from convolutional autoencoders to LSTM networks, to overcome this barrier. In particular, these methods produce low-dimensional counterparts to high-fidelity models, called reduced-order models (ROMs), that exhibit 1) accuracy, 2) low cost, 3) physical-property preservation, 4) guaranteed generalization performance, and 5) error quantification. 


Friday, October 16, 2020 
Noemi Petra  University of California, Merced  Optimal design of large-scale Bayesian linear inverse problems under reducible model uncertainty: good to know what you don't know 

Abstract: Optimal experimental design (OED) refers to the task of determining an experimental setup such that the measurements are most informative about the underlying parameters. This is particularly important in situations where experiments are costly or time-consuming and thus only a small number of measurements can be collected. In addition to the parameters estimated by an inverse problem, the governing mathematical models often involve simplifications, approximations, or modeling assumptions, resulting in additional uncertainty. These additional uncertainties must be taken into account in the experimental design process; failing to do so could result in suboptimal designs. In this talk, we consider optimal design of infinite-dimensional Bayesian linear inverse problems governed by uncertain forward models. In particular, we seek experimental designs that minimize the posterior uncertainty in the primary parameters while accounting for the uncertainty in secondary (nuisance) parameters. We accomplish this by deriving a marginalized A-optimality criterion and developing an efficient computational approach for its optimization. We illustrate our approach by estimating an uncertain time-dependent source in a contaminant transport model with an uncertain initial state as secondary uncertainty. Our results indicate that accounting for additional model uncertainty in the experimental design process is crucial. References: This presentation is based on the following paper https://arxiv.org/abs/1308.4084 and manuscript https://arxiv.org/abs/2006.11939. 

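As a toy, hypothetical analogue of the design criterion (not the infinite-dimensional, marginalized formulation of the talk), A-optimal design for a small linear Gaussian inverse problem y = G m + noise amounts to choosing sensor rows that minimize the trace of the posterior covariance. All sizes below are illustrative.

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)

# Pick k sensor rows of G minimizing the trace of the posterior
# covariance (G_s^T G_s / sigma^2 + I)^{-1}, assuming a unit Gaussian
# prior and noise variance sigma^2.
n_sensors, n_param, k, sigma2 = 8, 3, 2, 0.1
G = rng.standard_normal((n_sensors, n_param))

def a_criterion(rows):
    """Trace of the posterior covariance for the selected sensors."""
    Gs = G[list(rows), :]
    post_cov = np.linalg.inv(Gs.T @ Gs / sigma2 + np.eye(n_param))
    return np.trace(post_cov)

# Exhaustive search is feasible at this toy scale; at PDE scale one
# needs scalable estimators of the criterion instead.
best = min(itertools.combinations(range(n_sensors), k), key=a_criterion)
print(best, a_criterion(best))
```

The nuisance-parameter issue the abstract raises would enter here as extra columns of G that are marginalized out of the posterior covariance before taking the trace.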

Friday, October 23, 2020 
Boyan Lazarov  Lawrence Livermore National Laboratory  Large-scale topology optimization 

Abstract: Topology optimization has gained the status of being the preferred optimization tool in the mechanical, automotive, and aerospace industries. It has undergone tremendous development since its introduction in 1988, and nowadays it has spread to many other disciplines, such as acoustics, optics, and material design. The basic idea is to distribute material in a predefined domain by minimizing a selected objective and fulfilling a set of constraints. The procedure consists of repeated system analyses, gradient evaluation steps by adjoint sensitivity analysis, and design updates based on mathematical programming methods. Regularization techniques ensure the existence of a solution. The result of the topology optimization procedure is a bitmap image of the design. The ability of the method to modify every pixel/voxel results in design freedom unavailable by any other alternative approach. However, this freedom comes with the requirement of using the computational power of large parallel machines. Incorporating a model accounting for operating and manufacturing variations in the optimization process, together with the high contrast between the material phases, further increases the computational cost. Thus, this talk focuses on methods for reducing the computational complexity, ensuring manufacturability of the optimized design, and efficiently handling the high contrast of the material properties. The developments will be demonstrated on airplane wing design, compliant mechanisms, heat sinks, material microstructures for additive manufacturing, and photonic devices. 


Friday, October 30, 2020 
Martin J. Gander  University of Geneva  Seven Things I would have liked to know when starting to work on Domain Decomposition 

Abstract: It is not easy to start working in a new field of research. I will give a personal overview over seven things I would have liked to know when I started working on domain decomposition (DD) methods:




Friday, November 06, 2020 
Siddhartha Mishra  ETH Zürich  Deep Learning and Computations of PDEs 

Abstract: We present recent results on the use of deep learning techniques for computing different aspects of PDEs. The first part of the talk will be on novel supervised learning algorithms for efficient computation of parametric PDEs, with applications to uncertainty quantification and PDE-constrained optimization. The second part of the talk will focus on a recently proposed class of unsupervised learning algorithms, Physics-Informed Neural Networks (PINNs), and we describe their application to computing solutions of the forward problem for high-dimensional PDEs as well as of data assimilation inverse problems for PDEs. 

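A heavily simplified sketch of the PINN idea for -u'' = f on (0,1) with u(0) = u(1) = 0: minimize the PDE residual at collocation points plus a boundary penalty. A polynomial ansatz stands in for the neural network here so the minimization reduces to linear least squares; the setup and all parameters are illustrative, not from the talk.

```python
import numpy as np

# Manufactured problem: -u'' = pi^2 sin(pi x), exact solution sin(pi x).
f = lambda x: np.pi**2 * np.sin(np.pi * x)
deg, n_col, bc_weight = 11, 50, 100.0
xc = np.linspace(0.0, 1.0, n_col + 2)[1:-1]   # interior collocation points

# Residual rows enforce -sum_j c_j j(j-1) x^(j-2) = f(x) at each point.
A = np.zeros((n_col + 2, deg + 1))
rhs = np.zeros(n_col + 2)
for j in range(2, deg + 1):
    A[:n_col, j] = -j * (j - 1) * xc ** (j - 2)
rhs[:n_col] = f(xc)

# Penalized boundary rows: u(0) = 0 (only the constant term survives)
# and u(1) = 0 (all monomials equal 1 at x = 1).
A[n_col, 0] = bc_weight
A[n_col + 1, :] = bc_weight

c, *_ = np.linalg.lstsq(A, rhs, rcond=None)

# Evaluate the ansatz and compare with the exact solution.
xt = np.linspace(0.0, 1.0, 101)
u = sum(c[j] * xt**j for j in range(deg + 1))
print(np.max(np.abs(u - np.sin(np.pi * xt))))
```

An actual PINN replaces the polynomial with a neural network and the linear solve with gradient-based training on the same residual-plus-boundary loss, which is what makes the approach extend to high-dimensional PDEs.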

Friday, November 13, 2020 
Jianfeng Lu  Duke University  Solving Eigenvalue Problems in High Dimension 

Abstract: The leading eigenvalue problem of a differential operator arises in many scientific and engineering applications, in particular quantum many-body problems. Due to the curse of dimensionality, conventional algorithms become impractical because of their huge computational and memory complexity. In this talk, we will discuss some of our recent work on novel approaches for eigenvalue problems in high dimension, using techniques from randomized algorithms, coordinate methods, and deep learning. (Joint work with Jiequn Han, Yingzhou Li, Zhe Wang, and Mo Zhou.) 


Friday, November 20, 2020 
Ramnarayan Krishnamurthy  MathWorks  Hands-On Workshop: Deep Learning in MATLAB 

Abstract: Artificial intelligence techniques like deep learning are introducing automation to the products we build and the way we do business. These techniques can be used to solve complex problems related to images, signals, text, and controls. In this hands-on workshop, you will write code and use MATLAB Online to:
Register here 


Friday, November 27, 2020 
Thanksgiving Break  


Friday, December 04, 2020 
Rayanne Luke  University of Delaware  Parameter Identification for Tear Film Thinning and Breakup 

Abstract: Millions of Americans experience dry eye syndrome, a condition that decreases quality of vision and causes ocular discomfort. A phenomenon associated with dry eye syndrome is tear film breakup (TBU), or the formation of dry spots on the eye. The dynamics of the tear film can be studied using fluorescence imaging. Many parameters affecting tear film thickness and fluorescent intensity distributions within TBU are difficult to measure directly in vivo. We estimate breakup parameters by fitting computed results from thin-film fluid PDE models to experimental fluorescent intensity data gathered from normal subjects' tear films in vivo. Both evaporation and the Marangoni effect can cause breakup; the PDE models include these mechanisms in combination and separately. The parameters are determined by a nonlinear least squares minimization between computed and experimental fluorescent intensity, and they indicate the relative importance of each mechanism. Optimal values for computed breakup variables that cannot be measured in vivo fall near or within accepted experimental ranges for the general corneal region. Our results are a step toward characterizing the mechanisms that cause a wide range of breakup instances and toward helping medical professionals better understand tear film function and dry eye syndrome. 

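As a hypothetical miniature of the fitting step (the talk's forward models are thin-film fluid PDEs, not a closed-form curve), one can recover parameters of an intensity model by nonlinear least squares against data; here the model I(t) = I0 * exp(-k t), the synthetic data, and the Gauss-Newton solver are all illustrative stand-ins.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic "measured" fluorescent intensity from a known ground truth.
t = np.linspace(0.0, 10.0, 60)
true_I0, true_k = 1.0, 0.35
data = true_I0 * np.exp(-true_k * t) + 0.01 * rng.standard_normal(t.size)

def residual(p):
    """Model-minus-data misfit for parameters p = (I0, k)."""
    I0, k = p
    return I0 * np.exp(-k * t) - data

def jacobian(p):
    I0, k = p
    e = np.exp(-k * t)
    return np.column_stack([e, -I0 * t * e])

# Plain Gauss-Newton iteration on the misfit (a library solver such as
# a trust-region least-squares routine would normally be used).
p = np.array([0.8, 0.3])
for _ in range(15):
    J, r = jacobian(p), residual(p)
    p = p - np.linalg.solve(J.T @ J, J.T @ r)

print(p)
```

In the talk's setting the residual evaluation requires solving the thin-film PDE, so each iteration is far more expensive, but the structure of the minimization is the same.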

Stephan Wojtowytsch  Princeton University  

Summer 2020
Date  Speaker  Affiliation  Title  

Friday, August 07, 2020 
Marta D'Elia  Sandia National Laboratories  A unified theoretical and computational nonlocal framework: generalized vector calculus and machine-learned nonlocal models 

Abstract: Nonlocal models provide improved predictive capability thanks to their ability to capture effects that classical partial differential equations fail to capture. Among these effects are multiscale behavior (e.g., in fracture mechanics) and anomalous behavior such as super- and sub-diffusion. These models have become incredibly popular for a broad range of applications, including mechanics, subsurface flow, turbulence, heat conduction, and image processing. However, their improved accuracy comes at the price of many modeling and numerical challenges. In this talk I will first address the problem of connecting nonlocal and fractional calculus by developing a unified theoretical framework that enables the identification of a broad class of nonlocal models. Then, I will present two recently developed machine-learning techniques for nonlocal and fractional model learning. These physics-informed, data-driven tools allow for the reconstruction of model parameters or nonlocal kernels. Several numerical tests in one and two dimensions illustrate our theoretical findings and the robustness and accuracy of our approaches. 

Friday, July 31, 2020 
Eric Cyr  Sandia National Laboratories  A Layer-Parallel Approach for Training Deep Neural Networks 

Abstract: Deep neural networks are a powerful machine learning tool with the capacity to “learn” complex nonlinear relationships described by large data sets. Despite their success, training these models remains a challenging and computationally intensive undertaking. In this talk we will present a new layer-parallel training algorithm that exploits a multigrid scheme to accelerate both forward and backward propagation. Introducing a parallel decomposition between layers requires inexact propagation of the neural network. The multigrid method used in this approach stitches these subdomains together with sufficient accuracy to ensure rapid convergence. We demonstrate an order of magnitude wall-clock time speedup over the serial approach, opening a new avenue for parallelism that is complementary to existing approaches. Results for this talk can be found in [1,2]. We will also present related work concerning parallel-in-time optimization algorithms for PDE-constrained optimization. [1] S. Guenther, L. Ruthotto, J. B. Schroder, E. C. Cyr, N. R. Gauger, Layer-Parallel Training of Deep Residual Neural Networks, SIMODS, Vol. 2 (1), 2020. 
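The structural observation underlying layer-parallel training is that a residual network's forward pass is forward-Euler time stepping of an ODE dx/dt = f(x, θ(t)). The scalar tanh "layer" and step size below are illustrative choices, not the networks used in the talk; the serial loop shown is exactly what the multigrid scheme decomposes across processors.

```python
# A residual network's forward pass viewed as explicit Euler time stepping:
# each residual block performs x_{k+1} = x_k + dt * f(x_k, theta_k).
import math

def layer(x, w, b):
    # one illustrative scalar "layer" f(x, theta) with theta = (w, b)
    return math.tanh(w * x + b)

def resnet_forward(x, thetas, dt=0.1):
    for w, b in thetas:
        x = x + dt * layer(x, w, b)
    return x
```

A layer-parallel method would assign contiguous chunks of this loop to different processors, propagate them inexactly from guessed interface states, and use multigrid iterations to reconcile the chunks; the serial loop above is the baseline it accelerates.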

Friday, July 24, 2020 
Ratna Khatri  Naval Research Lab  Fractional Deep Neural Network via Constrained Optimization 

Abstract: In this talk, we will introduce a novel algorithmic framework for a deep neural network (DNN) which allows us to incorporate history (or memory) into the network. This DNN, called Fractional-DNN, can be viewed as a time-discretization of a fractional-in-time nonlinear ordinary differential equation (ODE). The learning problem then is a minimization problem subject to that fractional ODE as a constraint. We test our network on datasets for classification problems. The key advantages of the Fractional-DNN are a significant improvement to the vanishing gradient issue due to the memory effect, and better handling of nonsmooth data due to the network's ability to approximate nonsmooth functions. 
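One way to read "time-discretization of a fractional-in-time ODE": in a standard L1-type scheme for a Caputo derivative of order alpha in (0,1), each new state depends on the entire history of states, which is the memory effect the abstract describes. The scheme, weights, and scalar setting below are a generic sketch under that assumption, not necessarily the authors' exact discretization.

```python
# "Fractional" forward pass: solve D^alpha x = f(x) layer by layer with the
# L1 scheme, whose history weights b_j = (j+1)^(1-alpha) - j^(1-alpha) couple
# every new state to all previous states (the memory effect).
import math

def frac_forward(x0, fs, alpha=0.5, dt=0.1):
    g = math.gamma(2 - alpha)
    xs = [x0]  # full state history is retained
    for n, f in enumerate(fs, start=1):
        hist = sum(((j + 1) ** (1 - alpha) - j ** (1 - alpha))
                   * (xs[n - j] - xs[n - j - 1])
                   for j in range(1, n))
        xs.append(xs[-1] - hist + dt ** alpha * g * f(xs[-1]))
    return xs[-1]
```

As a sanity check, alpha = 1 makes every history weight vanish and the update collapses to the memoryless forward-Euler step of a standard residual network, while alpha < 1 produces a genuinely different, history-dependent trajectory.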

Birgul Koc  Virginia Tech  Data-Driven Variational Multiscale Reduced Order Models 

Abstract: We propose a new data-driven reduced order model (ROM) framework that centers around the hierarchical structure of the variational multiscale (VMS) methodology and utilizes data to increase the ROM accuracy at a modest computational cost. The VMS methodology is a natural fit for the hierarchical structure of the ROM basis: In the first step, we use the ROM projection to separate the scales into three categories: (i) resolved large scales, (ii) resolved small scales, and (iii) unresolved scales. In the second step, we explicitly identify the VMS-ROM closure terms, i.e., the terms representing the interactions among the three types of scales. In the third step, instead of the ad hoc modeling techniques used in VMS for standard numerical methods (e.g., finite elements), we use available data to model the VMS-ROM closure terms. Thus, instead of the phenomenological models used in VMS for standard numerical discretizations (e.g., eddy viscosity models), we utilize available data to construct new structural VMS-ROM closure models. Specifically, we build ROM operators (vectors, matrices, and tensors) that are closest to the true ROM closure terms evaluated with the available data. We test the new data-driven VMS-ROM in the numerical simulation of the 1D Burgers equation and the 2D flow past a circular cylinder. The numerical results show that the data-driven VMS-ROM is significantly more accurate than standard ROMs. 
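The "closest to the true closure terms" step has a simple least-squares structure that can be shown in miniature. In the talk the unknown closure operators are vectors, matrices, and tensors; the scalar version below, with illustrative names, only shows the shape of the fit: pick the operator minimizing the squared mismatch with closure values sampled from data.

```python
# Fit a scalar linear closure operator C to snapshot data: given ROM
# coefficient samples a_k and sampled "true" closure values tau_k, minimize
# sum_k (C * a_k - tau_k)^2, whose closed-form solution is the ratio below.
def fit_linear_closure(a_snapshots, tau_snapshots):
    num = sum(a * t for a, t in zip(a_snapshots, tau_snapshots))
    den = sum(a * a for a in a_snapshots)
    return num / den
```

When the sampled closure really is linear in the ROM coefficient, the fit recovers it exactly; with noisy snapshots it returns the least-squares best approximation.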

Friday, July 17, 2020 
Maziar Raissi  University of Colorado Boulder  Hidden Physics Models 

Abstract: A grand challenge with great opportunities is to develop a coherent framework that enables blending conservation laws, physical principles, and/or phenomenological behaviors expressed by differential equations with the vast data sets available in many fields of engineering, science, and technology. At the intersection of probabilistic machine learning, deep learning, and scientific computation, this work pursues the overall vision to establish promising new directions for harnessing the longstanding developments of classical methods in applied mathematics and mathematical physics to design learning machines with the ability to operate in complex domains without requiring large quantities of data. To materialize this vision, this work explores two complementary directions: (1) designing data-efficient learning machines capable of leveraging the underlying laws of physics, expressed by time-dependent and nonlinear differential equations, to extract patterns from high-dimensional data generated from experiments, and (2) designing novel numerical algorithms that can seamlessly blend equations and noisy multi-fidelity data, infer latent quantities of interest (e.g., the solution to a differential equation), and naturally quantify uncertainty in computations. 
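A toy version of "blending equations and data": a loss that sums a data misfit and the squared residual of a differential equation, here u'(t) + u(t) = 0 approximated by a forward difference. The function names, the test equation, and the finite-difference residual are illustrative choices, not the specific models of the talk.

```python
# Physics-informed style loss: data misfit plus squared equation residual for
# the ODE u'(t) + u(t) = 0, with u' approximated by a forward difference.
import math

def physics_informed_loss(u, ts, data, dt=1e-3):
    misfit = sum((u(t) - d) ** 2 for t, d in zip(ts, data))
    residual = sum(((u(t + dt) - u(t)) / dt + u(t)) ** 2 for t in ts)
    return misfit + residual
```

The exact solution u(t) = exp(-t) drives both terms to (nearly) zero, while a candidate that fits neither the data nor the equation is heavily penalized; in practice u would be a trainable network and this loss its training objective.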

Friday, July 10, 2020 
John Harlim  The Pennsylvania State University  Learning Missing Dynamics through Data 

Abstract: Recent success of machine learning has drawn tremendous interest in applied mathematics and scientific computations. In this talk, I will address the classical closure problem, which is also known as model error, missing dynamics, or reduced-order modeling in various communities. In particular, I will discuss a general framework to compensate for the model error. The proposed framework reformulates the model error problem into a supervised learning task to approximate very high-dimensional target functions, involving the Mori-Zwanzig representation of the projected dynamical systems. The connection to traditional parametric approaches will be clarified as specifying the appropriate hypothesis space for the target function. Theoretical convergence and numerical demonstrations on modeling problems arising from PDEs will be discussed. 

Friday, July 03, 2020 
no colloquium  
Friday, June 26, 2020 
Mahamadi Warma  George Mason University  Fractional PDEs and their controllability properties: What is so far known and what is still unknown? ... more ... less  

Abstract: In this talk, we are interested in fractional PDEs (elliptic, parabolic and hyperbolic) associated with the fractional Laplace operator. After introducing some real-life phenomena where these problems occur, we shall give a complete overview of the subject. The similarities and the differences of these fractional PDEs with the classical local PDEs will be discussed. Concerning the control theory of fractional PDEs, we will give a complete overview of the topic. More precisely, we will introduce the important results obtained so far and enumerate several related important problems that have not yet been investigated by the mathematics community. The talk will be delivered for a wide audience, avoiding unnecessary technicalities. 

Friday, June 19, 2020 
Thomas M. Surowiec  Philipps-Universität Marburg  Optimization of Elliptic PDEs with Uncertain Inputs: Basic Theory and Numerical Stability 

Abstract: Systems of partial differential equations subject to random parameters provide a natural way of incorporating noisy data or model uncertainty into a mathematical setting. The associated optimal decision-making problems, whose feasible sets are at least partially governed by the solutions of these random PDEs, are infinite-dimensional stochastic optimization problems. In order to obtain solutions that are resilient to the underlying uncertainty, a common approach is to use risk measures to model the user’s risk preference. The talk will be split into two main parts: Basic Theory and Numerical Stability. In the first part, we propose a minimal set of technical assumptions needed to prove existence of solutions and derive optimality conditions. For the second part of the talk, we consider a specific class of stochastic optimization problems motivated by the application to PDE-constrained optimization. In particular, we are interested in finding answers to such questions as: How do the solutions behave in the large-data limit? Can we derive statements on the rate of convergence as the sample size increases and the mesh size decreases? After reviewing several notions of probability metrics and their usage in stability analysis of stochastic optimization problems, we present qualitative and quantitative stability results. These results demonstrate the parametric dependence of the optimal values and optimal solutions with respect to changes in the underlying probability measure. These statements provide us with answers to the questions posed above for a class of risk-neutral PDE-constrained problems. 
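The risk-measure ingredient can be shown in miniature. The abstract does not name a specific risk measure; the Conditional Value-at-Risk (CVaR) sketched below is a common choice in this setting, estimated here by the simplest discrete rule: the mean of the worst (1 - beta) fraction of sampled costs.

```python
# Sample-average CVaR estimate: sort sampled costs from worst to best and
# average the top (1 - beta) fraction. This simple discrete estimator is an
# illustrative variant, not the talk's formulation.
def cvar(samples, beta=0.9):
    worst_first = sorted(samples, reverse=True)
    k = max(1, round((1 - beta) * len(samples)))
    return sum(worst_first[:k]) / k
```

Unlike the plain sample mean, this objective is driven entirely by the tail of the cost distribution, which is what makes the resulting controls resilient to unfavorable realizations of the random inputs.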

Friday, June 12, 2020 
Ira B. Schwartz  US Naval Research Laboratory  Fear in Networks: How social adaptation controls epidemic outbreaks 

Abstract: Disease control is of paramount importance in public health, with total eradication as the ultimate goal. Mathematical models of disease spread in populations are an important component in implementing effective vaccination and treatment campaigns. However, human behavior in response to an outbreak of disease has only recently been included in the modeling of epidemics on networks. In this talk, I will review some of the mathematical models and machinery used to describe the underlying dynamics of rare events in finite population disease models, which include human reactions on what are called adaptive networks. I will present a new model that includes a dynamical systems description of the force of the noise that drives the disease to extinction. Coupling the effective force of noise with vaccination as well as human behavior reveals how to best utilize stochastic disease-controlling resources such as vaccination and treatment programs. Finally, I will also present a general theory to derive the most probable paths to extinction for heterogeneous networks, which leads to a novel optimal control to extinction. This research has been supported by the Office of Naval Research, the Air Force Office of Scientific Research and the National Institutes of Health, and done primarily in collaboration with Jason Hindes, Brandon Lindley, and Leah Shaw. About the speaker: Trained and educated as both an applied mathematician (University of Maryland, Ph.D.) and physicist (University of Hartford, B.S.), Dr. Schwartz and his collaborators, postdoctoral fellows and students have impacted a diverse array of applications in the field of nonlinear science. Dr. Schwartz has over 120 refereed publications in areas such as physics, mathematics, biology and chemistry. 
The main underlying theme in the applications field has been the mathematical and numerical techniques of nonlinear dynamics and chaos, and most recently, nonlinear stochastic analysis and control of cooperative and networked dynamical systems. Dr. Schwartz has been written up several times in Science and Scientific American magazines, has given invited and plenary talks at international applied mathematics, physics, and engineering conferences, and he is one of the founding organizers of the biennial SIAM conference on Dynamical Systems. Several of his discoveries developed in nonlinear science are currently patented, including collaborative robots, synchronized coupled lasers, and chaos tracking and control, for which he was awarded the US Navy Tech Transfer award. Dr. Schwartz is an elected fellow of the American Physical Society and the current vice-chair of the SIAM Dynamical Systems Group. 

Friday, June 05, 2020 
Patrick O’Neil  BlackSky  Applications of Deep Learning to Large Scale Remote Sensing 

Abstract: With the proliferation of Earth imaging satellites, the rate at which satellite imagery is acquired has outpaced the ability to manually review the data. Therefore, it is critical to develop systems capable of autonomously monitoring the globe for change. At BlackSky, we use a host of deep learning models, deployed in Amazon Web Services, to process all images downlinked from our Globals constellation of imaging satellites. In this talk, we will discuss some of these models and challenges we face when building remote sensing machine learning models at scale. 

Friday, May 29, 2020 
Akwum Onwunta  University of Maryland, College Park  Fast solvers for optimal control problems constrained by PDEs with uncertain inputs 

Abstract: Optimization problems constrained by deterministic steady-state partial differential equations (PDEs) are computationally challenging. This is even more so if the constraints are deterministic unsteady PDEs, since one would then need to solve a system of PDEs coupled globally in time and space, and time-stepping methods quickly reach their limitations due to the enormous demand for storage [5]. Yet more challenging than the aforementioned are problems constrained by unsteady PDEs involving (countably many) parametric or uncertain inputs. A viable solution approach to optimization problems with stochastic constraints employs the spectral stochastic Galerkin finite element method (SGFEM). However, the SGFEM often leads to the so-called curse of dimensionality, in the sense that it results in prohibitively high-dimensional linear systems with tensor product structure [1, 2, 4]. Moreover, a typical model for an optimal control problem with stochastic inputs (OCPS) will usually be used for the quantification of the statistics of the system response, a task that could in turn result in additional enormous computational expense. It is worth pursuing computationally efficient ways to simulate OCPS using SGFEMs, since the Galerkin approximation provides a favorable framework for error estimation [3]. In this talk, we consider two prototypical model OCPS and discretize them with SGFEM. We exploit the underlying mathematical structure of the discretized systems at the heart of the optimization routine to derive and analyze low-rank iterative solvers and robust block-diagonal preconditioners for solving the resulting stochastic Galerkin systems. The developed solvers are quite efficient in reducing the temporal and storage requirements of the high-dimensional linear systems [1, 2]. Finally, we illustrate the effectiveness of our solvers with numerical experiments. 
Keywords: Stochastic Galerkin system, iterative methods, PDE-constrained optimization, saddle-point system, low-rank solution, preconditioning, Schur complement. References:
Akwum Onwunta is a postdoctoral research associate at the University of Maryland, College Park (UMCP). Before joining UMCP, he worked at the Max Planck Institute for Dynamics of Complex Technical Systems, Magdeburg, Germany, as a scientific researcher, and at Deutsche Bank, Frankfurt, as a Marie Curie research fellow / quantitative risk analyst. He holds a PhD in Mathematics from Otto von Guericke University, Magdeburg, Germany. 

Friday, May 22, 2020 
Jianghao Wang  MathWorks  Practical Deep Learning in the Classroom 

Abstract: Deep learning is quickly becoming embedded in everyday applications. It’s becoming essential for students to adopt this technology, almost regardless of what their future jobs are. We will highlight some of the mathematics needed to construct and understand deep learning solutions. About the speaker: Jianghao Wang is the deep learning academic liaison at MathWorks. In her role, Jianghao supports deep learning research and teaching in academia. Before joining MathWorks, Jianghao obtained her Ph.D. in Statistical Climatology from the University of Southern California and a B.S. in Applied Mathematics from Nankai University. 