Title: Multiscale Organization of Intelligent Active Matter

Speaker: Igor Aronson (Penn State)

Abstract: Active matter (collectives of self-propelled particles) spontaneously organizes into large-scale coherent swarms. Living organisms use communication and information processing to enhance their evolutionary competitiveness. This feature is mostly lacking in synthetic active matter, e.g., simple microrobots, while it could significantly improve their functionality and efficiency. Using a simple description of self-propelled interacting particles (the so-called Vicsek model) complemented by coupling to a signaling field, we have shown that chemical communication with decision-making at the level of individual agents enables multi-scale organization. In the case of agents with long-range communication, we considered the self-organization of coupled self-propelled nonlinear oscillators emitting acoustic waves. We discovered spontaneous assembly into localized droplets and collectively propagating snake- and larva-like solutions. These structures demonstrate collective navigation in heterogeneous environments and threat detection. Our results provide insights into the emerging functionality of synthetic non-equilibrium systems such as microrobotic swarms capable of processing information and making decisions.
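
The alignment rule underlying the Vicsek model mentioned above is short enough to state in code. Below is a minimal sketch of a plain 2D Vicsek update (particle number, speed, interaction radius, and noise level are assumed values; the signaling-field coupling and decision-making described in the talk are not included).

```python
# Minimal Vicsek-type update in 2D: align with neighbors, add noise, move.
import numpy as np

rng = np.random.default_rng(0)
N, L, v0, R, eta, steps = 200, 10.0, 0.03, 1.0, 0.2, 500  # assumed parameters

pos = rng.uniform(0, L, size=(N, 2))        # particle positions
theta = rng.uniform(-np.pi, np.pi, size=N)  # heading angles

for _ in range(steps):
    # pairwise displacements with periodic (minimum-image) boundary conditions
    d = pos[:, None, :] - pos[None, :, :]
    d -= L * np.round(d / L)
    neighbors = (d ** 2).sum(-1) < R ** 2
    # align each heading with the local average direction, plus angular noise
    mean_sin = (neighbors * np.sin(theta)[None, :]).sum(1)
    mean_cos = (neighbors * np.cos(theta)[None, :]).sum(1)
    theta = np.arctan2(mean_sin, mean_cos) + eta * rng.uniform(-np.pi, np.pi, N)
    # move at constant speed along the new heading
    pos = (pos + v0 * np.stack([np.cos(theta), np.sin(theta)], axis=1)) % L
```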

Title: Communication-Avoiding Algorithms for Linear Algebra, Machine Learning and Beyond

Speaker: James Demmel (UC Berkeley)

Abstract: Algorithms have two costs: arithmetic and communication, i.e. moving data between levels of a memory hierarchy or processors over a network. Communication costs (measured in time or energy per operation) greatly exceed arithmetic costs, so our goal is to design algorithms that minimize communication. We survey some known algorithms that communicate asymptotically less than their classical counterparts, for a variety of linear algebra and machine learning problems, often attaining lower bounds. We also discuss recent work on automating the design and implementation of these algorithms, starting from a simple specification as nested loops.
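
As a small illustration of the idea of trading communication for data reuse (not an algorithm from the talk), a blocked matrix multiplication reads each b-by-b block roughly n/b times instead of n times, cutting the words moved between slow and fast memory from about O(n^3) to O(n^3/b) once three blocks fit in fast memory.

```python
import numpy as np

def blocked_matmul(A, B, b=64):
    """C = A @ B computed block-by-block so each update touches ~3*b*b words."""
    n = A.shape[0]
    C = np.zeros((n, n))
    for i in range(0, n, b):
        for j in range(0, n, b):
            for k in range(0, n, b):
                # each small update works on blocks that fit in fast memory
                C[i:i+b, j:j+b] += A[i:i+b, k:k+b] @ B[k:k+b, j:j+b]
    return C

n = 256
A, B = np.random.rand(n, n), np.random.rand(n, n)
assert np.allclose(blocked_matmul(A, B), A @ B)
```

Choosing b proportional to the square root of the fast-memory size M brings the traffic down to the classical Omega(n^3 / sqrt(M)) communication lower bound for matrix multiplication, an example of the lower bounds mentioned in the abstract.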

Title: Numerical Approximation of Thermo–Mechanical Problems

Speaker: Noel Walkington (Carnegie Mellon University)

Abstract: Many problems of contemporary interest involve thermal effects and phase changes. Three-dimensional printing of metallic components and the release of greenhouse gases beneath thawing permafrost are prototypical examples. This talk will review how classical thermodynamics enters into the structural properties of the partial differential equations modeling some thermo–mechanical problems, and the challenges that arise in the numerical simulation of their solutions.

Title: Using operator preconditioning in the design of reduced basis methods

Speaker: Ludmil T Zikatanov (NSF)

Abstract: This work concerns using the preconditioned conjugate gradient method in Hilbert space to construct reduced bases and to efficiently compute rational approximations of functions of operators.
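
A minimal numerical sketch of the flavor of this construction (my illustration with plain, unpreconditioned CG on a small SPD matrix, not necessarily the speaker's algorithm): the CG iterates span a Krylov space that can serve as a reduced basis for the shifted solves that appear in rational approximations of operator functions.

```python
import numpy as np

def cg_snapshots(A, b, iters=25):
    """Plain CG on A x = b; returns the iterates x_k as snapshot columns."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    snaps = []
    for _ in range(iters):
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)
        x = x + alpha * p
        r_new = r - alpha * Ap
        beta = (r_new @ r_new) / (r @ r)
        p = r_new + beta * p
        r = r_new
        snaps.append(x.copy())
    return np.array(snaps).T

n = 200
A = np.diag(np.linspace(1.0, 100.0, n))    # assumed SPD test operator
b = np.ones(n)

V, _ = np.linalg.qr(cg_snapshots(A, b))    # orthonormal reduced basis
s = 3.0                                    # one shift of a rational approximation
Ar = V.T @ A @ V + s * np.eye(V.shape[1])
x_red = V @ np.linalg.solve(Ar, V.T @ b)   # Galerkin solve of (A + s I) x = b
x_ref = np.linalg.solve(A + s * np.eye(n), b)
print(np.linalg.norm(x_red - x_ref) / np.linalg.norm(x_ref))
```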

Title: Phase in Space: Spatio-temporal Cortical Dynamics

Speaker: Bard Ermentrout (U. Pittsburgh)

Abstract: The ability of neuroscientists to record large regions of the brain at high temporal resolution has demonstrated that neuronal oscillations are not synchronized, but rather organized into spatio-temporal patterns such as plane and rotating waves. In typical experiments phase gradients are computed from filtered local field potentials; thus, a natural mathematical framework is coupled phase equations. Indeed, when interactions in spatially distributed oscillatory media are “weak”, it is possible to reduce the dynamics to a system of phase equations for the phase $u(x,t)$: $u_t = w(x) + \int_D K(x-y)\, H[u(y,t)-u(x,t)]\, dy$, where $w(x)$ represents heterogeneities, $D$ is some one- or two-dimensional domain, $K(x)$ is a coupling kernel, and $H[u]$ is the phase-interaction function. In this talk, I will discuss the existence and stability of rotating waves when $D$ is an annulus. I will show that as the inner radius shrinks, rigid rotating waves lose existence through a saddle-node bifurcation, and this results in the birth of so-called chimeras. I will also describe some recent work on boundary effects and how they are sufficient to lead to target-like patterns even when $w(x)=0$. Finally, I will suggest some possible computational roles for waves in cognition. This work is joint with Yujie Ding, Andrea Welsh, and Yiqing Liu.
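
A minimal simulation of the phase equation above (with assumed choices: a one-dimensional ring instead of the annulus, an exponential kernel $K$, $H = \sin$, and $w(x) = 0$) can be written as a forward-Euler loop.

```python
import numpy as np

N, dt, steps = 256, 0.01, 5000
x = np.linspace(0, 2 * np.pi, N, endpoint=False)
dx = x[1] - x[0]
dist = np.minimum(np.abs(x), 2 * np.pi - np.abs(x))              # periodic distance
K = np.exp(-dist / 0.5)                                          # assumed coupling kernel
Kxy = K[(np.arange(N)[None, :] - np.arange(N)[:, None]) % N]     # K(x_i - x_j)
w = np.zeros(N)                                                  # homogeneous: w(x) = 0

u = x + 0.1 * np.random.default_rng(1).standard_normal(N)        # perturbed plane wave
for _ in range(steps):
    diff = u[None, :] - u[:, None]                               # u(y,t) - u(x,t)
    u += dt * (w + dx * (Kxy * np.sin(diff)).sum(axis=1))        # forward Euler step
```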

Title: The mean-field limit of systems of biological neurons

Speaker: P-E Jabin (Penn State)

Abstract: We investigate the mean-field limit of large networks of interacting biological neurons. The neurons are represented by so-called integrate-and-fire models that follow the membrane potential of each neuron and capture individual spikes. However, we do not assume any structure on the graph of interactions but consider instead any connection weights between neurons that obey a generic mean-field scaling. We are able to extend the concept of extended graphons, introduced in Jabin-Poyato-Soler, by introducing a novel notion of discrete observables in the system. This is a joint work with D. Zhou.
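
For concreteness, here is a minimal sketch (with assumed parameters) of a network of leaky integrate-and-fire neurons whose arbitrary positive connection weights carry the 1/N mean-field scaling referred to above.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, dt = 500, 2.0, 1e-3                                      # neurons, seconds, step
tau, v_thresh, v_reset, I_ext, J = 0.02, 1.0, 0.0, 1.2, 0.5    # assumed constants

W = J * rng.random((N, N)) / N          # arbitrary weights with mean-field 1/N scaling
v = rng.random(N)                       # membrane potentials
spike_counts = np.zeros(N)

for _ in range(int(T / dt)):
    spikes = v >= v_thresh              # neurons that fire this step
    v[spikes] = v_reset                 # reset after a spike
    spike_counts += spikes
    v += dt / tau * (I_ext - v) + W @ spikes   # leak + external drive + synaptic input

print("mean firing rate:", spike_counts.mean() / T, "Hz")
```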

Title: Learning Interaction Kernels in Interacting Particle- and Agent-based Systems

Speaker: Mauro Maggioni (Johns Hopkins University)

Abstract: We consider systems of interacting agents or particles, which are commonly used for modeling across the sciences. Oftentimes the laws of interaction between the agents are quite simple, for example they depend only on pairwise interactions, and only on pairwise distance in each interaction. We consider the following inference problem for a system of interacting particles or agents: given only observed trajectories of the agents in the system, can we learn what the laws of interactions are? We would like to do this without assuming any particular form for the interaction laws, i.e. they might be “any” function of pairwise distances. We consider this problem both in the mean-field limit (i.e. the number of particles going to infinity) and in the case of a finite number of agents, with an increasing number of observations, albeit in this talk we will mostly focus on the latter case. We cast this as an inverse problem, and present a solution in the simplest yet interesting case where the interaction is governed by an (unknown) function of pairwise distances. We discuss when this problem is well-posed, we construct estimators for the interaction kernels with provably good statistical and computational properties, and discuss extensions to second-order systems, more general interaction kernels, stochastic systems, and to the setting where the variables (e.g. pairwise distance) on which the interaction kernel depends are not known a priori. This is joint work with F. Lu, J. Feng, P. Martin, J. Miller, S. Tang and M. Zhong.
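
A toy version of the inference problem (my simplification: a first-order system in one dimension with a piecewise-constant basis for the kernel, not the estimators from the talk) already shows the structure: the observed velocities are linear in the unknown kernel values, so the kernel can be recovered by least squares.

```python
import numpy as np

rng = np.random.default_rng(0)
N, steps, dt = 30, 200, 0.01
phi_true = lambda r: np.exp(-r)                      # assumed ground-truth kernel

# simulate x_i' = (1/N) sum_j phi(|x_j - x_i|) (x_j - x_i) and record velocities
X = rng.uniform(0, 5, N)
snapshots = []
for _ in range(steps):
    d = X[None, :] - X[:, None]                      # d[i, j] = x_j - x_i
    v = (phi_true(np.abs(d)) * d).mean(axis=1)
    snapshots.append((X.copy(), v.copy()))
    X = X + dt * v

# regression: expand phi in piecewise-constant bins of pairwise distance
bins = np.linspace(0, 5, 21)
rows, rhs = [], []
for X, v in snapshots:
    d = X[None, :] - X[:, None]
    idx = np.digitize(np.abs(d), bins) - 1           # bin of each pairwise distance
    for i in range(N):
        row = np.zeros(len(bins) - 1)
        for j in range(N):
            if j != i and 0 <= idx[i, j] < len(row):
                row[idx[i, j]] += d[i, j] / N        # coefficient of phi on that bin
        rows.append(row); rhs.append(v[i])
phi_hat, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
# phi_hat[k] approximates phi_true at distances in bin k (where data exist)
```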

Title: Faces of entropy: Connecting Materials Science, Biology and Quantum Information Science

Speaker: Maria Emelianenko (George Mason University)

Abstract: Conceptually, entropy is often thought of as a measure of randomness in a system. But it comes in different flavors depending on the type of “network” this concept is applied to. This talk will focus on connections between thermodynamic, information-theoretic and graph-theoretic definitions of entropy and survey recent theoretical and applied developments. In particular, we will talk about: (1) the role of microstructure entropy in describing coarsening processes in materials and quantifying their mechanical performance, (2) relative entropy of biological networks and protein sequences, and (3) applications to quantum information science.
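
As a small, self-contained illustration of two of these flavors (my toy example, not the talk's data), the same formula -sum p log p can be applied to a grain-size histogram from a microstructure and to the spectrum of a graph Laplacian (the von Neumann graph entropy).

```python
import numpy as np

def shannon_entropy(p):
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

# (1) microstructure-flavored entropy: distribution of grain sizes
grain_sizes = np.random.default_rng(0).lognormal(mean=0.0, sigma=0.5, size=1000)
hist, _ = np.histogram(grain_sizes, bins=30)
print("grain-size entropy:", shannon_entropy(hist / hist.sum()))

# (3) graph entropy: Laplacian spectrum rescaled to a probability vector
A = (np.random.default_rng(1).random((20, 20)) < 0.3).astype(float)
A = np.triu(A, 1); A = A + A.T                      # random undirected graph
L = np.diag(A.sum(axis=1)) - A                      # combinatorial Laplacian
lam = np.linalg.eigvalsh(L)
print("von Neumann graph entropy:", shannon_entropy(lam / lam.sum()))
```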

Title: Weak Galerkin Finite Element Methods

Speaker: Junping Wang (NSF)

Abstract: The speaker will discuss a weak Galerkin finite element method for the Navier-Stokes equation. Motivating examples include the convection-dominated convection-diffusion equations and the Oseen equation with strong convection. The solution existence and uniqueness will also be discussed under certain conditions.

Title: Analysis of microswimmers: from one to many

Speaker: Laurel Ohm (UW-Madison)

Abstract: This talk will consider PDE questions about swimming at the microscopic level on two scales: (1) A single swimming filament and (2) A suspension of interacting swimmers. First, we consider a classical elastohydrodynamic model of an immersed inextensible filament undergoing planar motion in $\mathbb{R}^3$. We mention our recent PDE results and highlight how this analysis can help to better understand undulatory swimming at low Reynolds number. This includes the development of numerical methods to simulate inextensible swimmers in Newtonian and viscoelastic media. Second, we consider a kinetic model of rodlike microswimmers in Stokes flow and highlight the stabilizing effect of swimming on the uniform, isotropic equilibrium. In particular, we prove results on Landau damping, generalized Taylor dispersion, and enhanced dissipation due to swimming.

Title: Computational Mathematics Applied to Drug Dosing

Speaker: Helen Moore (University of Florida)

Abstract: A variety of computational mathematics methods are used throughout the drug development process. I will focus on methods used to determine drug dosing, both at the population level and patient-specific level. I will also say a few words about the need for more applied mathematics in the biopharma industry, and how mathematicians might find relevant job listings.

Title: Cryptography in the age of quantum computing

Speaker: Kirsten Eisentraeger (Penn State)

Abstract: Computational problems that can be solved exponentially faster on a quantum computer than on a classical computer have mostly been number theoretic. It turns out that some of these problems, like factoring and the discrete log problem, are also required to be computationally difficult for certain cryptosystems to be secure. Hence RSA and Elliptic Curve Cryptography, which are based on the hardness of these problems, are not secure against quantum computers. In this talk I will discuss some recently proposed cryptosystems that have been suggested as alternatives to RSA and Elliptic Curve Cryptography. These fall into two categories: lattice-based systems and systems based on supersingular isogenies. We will discuss their security, both classically and against quantum computers.
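
To give a flavor of the lattice-based category, here is a toy Regev-style encryption of a single bit with deliberately tiny, insecure parameters (purely illustrative, not any proposed standard): its security rests on the learning-with-errors problem rather than on factoring or discrete logarithms.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, q = 10, 20, 97                       # toy parameters, far too small to be secure

# key generation: b = A s + e (mod q) with small error e
s = rng.integers(0, q, n)                  # secret key
A = rng.integers(0, q, (m, n))
e = rng.integers(-1, 2, m)                 # small noise in {-1, 0, 1}
b = (A @ s + e) % q                        # public key is (A, b)

def encrypt(bit):
    r = rng.integers(0, 2, m)              # random 0/1 combination of the samples
    return (A.T @ r) % q, (b @ r + bit * (q // 2)) % q

def decrypt(u, v):
    d = (v - u @ s) % q                    # equals e.r + bit*(q//2) mod q
    return int(min(d, q - d) > q // 4)     # closer to q/2 means the bit was 1

assert all(decrypt(*encrypt(bit)) == bit for bit in [0, 1, 1, 0])
```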

Title: Deep neural network initialisation: Nonlinear activations' impact on the Gaussian process

Speaker: Jared Tanner (University of Oxford)

Abstract: Randomly initialised deep neural networks are known to generate a Gaussian process for their pre-activation intermediate layers. We will review this line of research with extensions to deep networks having structured random entries such as block-sparse or low-rank weight matrices. We will then discuss how the choice of nonlinear activations impacts the evolution of the Gaussian process. Specifically we will discuss why sparsifying nonlinear activations such as soft thresholding are unstable, we will show conditions to overcome such issues, and we will show how non-sparsifying activations can be improved to be more stable when acting on a data manifold. This work is joint with Michael Murray (UCLA), Vinayak Abrol (IIIT Delhi), Ilan Price (DeepMind), and from Oxford: Alireza Naderi, Thiziri Nait Saada, Nicholas Daultry Ball, Adam C. Jones, and Samuel C.H. Lam.
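
The first claim is easy to check numerically. The sketch below (assumed sizes, dense He-scaled Gaussian weights rather than the structured ensembles from the talk) collects one pre-activation unit of the final layer across many random initialisations and tests it for normality.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
width, depth, n_nets = 500, 4, 300           # assumed sizes
x = rng.standard_normal(width)               # one fixed input

samples = []
for _ in range(n_nets):
    h = x
    for _ in range(depth):
        # He-scaled Gaussian weights; the talk also treats structured ensembles
        W = rng.standard_normal((width, width)) * np.sqrt(2.0 / width)
        z = W @ h                            # pre-activation
        h = np.maximum(z, 0.0)               # ReLU
    samples.append(z[0])                     # one last-layer pre-activation unit

print(stats.shapiro(np.array(samples)))      # should not reject normality at large width
```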

Title: Deep Neural Networks and Finite Elements

Speaker: Jinchao Xu (Penn State/Kaust)

Abstract: In this talk, I will report a new joint work with Juncai He on the connection between finite element and deep neural network (DNN) functions. In our earlier works, we reported that any linear finite element function in any dimension can be expressed in terms of a DNN using the ReLU activation function. How to generalize this result to finite element functions of arbitrary order has been an open problem. In this talk, we will report a solution to this open problem. Namely, we will show that any finite element function of any order on very general grids in any dimension can be expressed in terms of a type of DNN using some appropriately chosen activation functions. Furthermore, we will show that our new DNN can generate finite element functions of arbitrary order and can also generate any global polynomials of arbitrary degrees.
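
The linear (first-order) case from the earlier works can be written down explicitly in one dimension: a P1 hat basis function is exactly a three-neuron ReLU network, so any linear finite element function is a shallow ReLU network (the uniform grid below is assumed only for brevity).

```python
import numpy as np

relu = lambda t: np.maximum(t, 0.0)

def hat(x, a, b, c):
    """P1 hat function with nodes a < b < c written with three ReLU units."""
    return (relu(x - a) / (b - a)
            - (1.0 / (b - a) + 1.0 / (c - b)) * relu(x - b)
            + relu(x - c) / (c - b))

# a linear FE function on the grid 0, 0.25, ..., 1 is a sum of hats times nodal values
grid = np.linspace(0, 1, 5)
nodal_values = np.sin(np.pi * grid)
x = np.linspace(0, 1, 401)
fe = sum(nodal_values[k] * hat(x, grid[k] - 0.25, grid[k], grid[k] + 0.25)
         for k in range(len(grid)))
# check: the ReLU representation reproduces the nodal values exactly
assert np.allclose(fe[::100], nodal_values)
```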

Title: Enhancing Accuracy in Deep Learning Using Random Matrix Theory

Speaker: Leonid Berlyand (Penn State)

Abstract: We discuss applications of random matrix theory (RMT) to the training of deep neural networks (DNNs). Our focus is on pruning of DNN parameters, guided by the Marchenko-Pastur spectral approach. Our numerical results show that this pruning leads to a drastic reduction of parameters while not reducing the accuracy of DNNs and CNNs. Moreover, pruning the fully connected DNNs actually increases the accuracy and decreases the variance for random initializations. We next show how these RMT techniques can be used to remove 20% of parameters from state-of-the-art DNNs such as ResNet and ViT while reducing accuracy by at most 2% and, in some instances, even increasing accuracy.

Finally, we provide a theoretical understanding of these results by proving the Pruning Theorem that establishes a rigorous relation between the accuracy of the pruned and non-pruned DNNs.

Joint work with E. Sandier (U. Paris 12), Y. Shmalo (PSU student) and L. Zhang (Jiao Tong U.)
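
The spectral pruning step can be sketched on a synthetic weight matrix (my illustration, not the paper's training pipeline): singular values lying inside the Marchenko-Pastur bulk are treated as noise and removed.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, rank = 512, 256, 10
signal = rng.standard_normal((n, rank)) @ rng.standard_normal((rank, m))
sigma = 1.0
W = signal + sigma * rng.standard_normal((n, m))   # "trained" weights = signal + noise

U, s, Vt = np.linalg.svd(W, full_matrices=False)
# MP bulk edge for an n x m matrix with i.i.d. entries of standard deviation sigma
bulk_edge = sigma * (np.sqrt(n) + np.sqrt(m))
keep = s > bulk_edge                               # keep only outliers above the bulk
W_pruned = (U[:, keep] * s[keep]) @ Vt[keep]
print("kept", keep.sum(), "of", len(s), "singular values")
print("relative error vs. signal:",
      np.linalg.norm(W_pruned - signal) / np.linalg.norm(signal))
```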

Title: Hybrid Thermodynamically Consistent Models for Fluid-Structure Interactions

Speaker: Qi Wang (University of South Carolina)

Abstract: We present an innovative computational framework for exploring the dynamics of interactions between fluid and solid particles/structures within a viscous fluid medium. This framework leverages the phase-field-embedding method. In this approach, each solid component, whether rigid or elastic, is defined by a volume-preserving phase field. The movement of both solid particles and the fluid is governed by a unified velocity within the fluid-solid ensemble for passive particles. Active particles, on the other hand, are driven not only by this unified velocity but also by their self-propelling velocities. To model the excluded-volume interactions among particles and between particles and boundaries, we employ repulsive potential forces at a coarser scale. These forces effectively account for the repulsion and collision effects. For active solid particles, the drag exerted by the surrounding fluid is assumed to be proportional to their self-propelling velocity, accounting for the resistance they encounter while moving through the fluid. Rigid particles maintain their structural integrity through the enforcement of a zero velocity gradient tensor within their spatial domains, necessitating the introduction of a constraining stress tensor. Elastic particles, in contrast, are governed by a quasi-linear constitutive equation that describes the elastic stress within their respective domains. This allows us to model the deformation of elastic particles accurately. We track the motion of these solid particles by monitoring the dynamics of their centers of mass. This approach enables us to develop a hybrid, thermodynamically consistent hydrodynamic model for both rigid and elastic particles. This model adheres to the generalized Onsager principle and is applicable across the entire computational domain. To solve this thermodynamically consistent model for elastic particles numerically, we develop a structure-preserving numerical algorithm. In the limit of infinite elastic modulus, this algorithm reduces to the model for rigid particles. Finally, to validate the effectiveness, accuracy, and stability of our proposed scheme and to showcase the capabilities of our computational framework, we have conducted several numerical experiments.
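
One ingredient of the framework, the volume-preserving phase field, can be illustrated in isolation (a highly simplified sketch with assumed parameters, a single particle, and no hydrodynamics): a Lagrange multiplier keeps the particle's volume fixed while the diffuse interface relaxes.

```python
import numpy as np

N, dt, steps, eps, M = 128, 2.5e-4, 4000, 0.03, 1.0     # assumed parameters
dx = 1.0 / N
x = np.linspace(0, 1, N, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
# phase field: ~1 inside a disk of radius 0.2, ~0 outside, smooth interface
phi = 0.5 * (1 + np.tanh((0.2 - np.sqrt((X - 0.5)**2 + (Y - 0.5)**2)) / eps))
vol0 = phi.mean()

def lap(f):  # periodic five-point Laplacian
    return (np.roll(f, 1, 0) + np.roll(f, -1, 0)
            + np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4 * f) / dx**2

for _ in range(steps):
    dW = 2 * phi * (1 - phi) * (1 - 2 * phi)            # derivative of phi^2 (1-phi)^2
    mu = dW / eps - eps * lap(phi)                      # chemical potential
    phi = phi - dt * M * (mu - mu.mean())               # volume-preserving Allen-Cahn step

print("volume drift:", abs(phi.mean() - vol0))          # conserved up to round-off
```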

Title: Personalized modeling of glioblastoma using physics-informed neural networks

Speaker: John Lowengrub (UC Irvine)

Abstract: Predicting the infiltration of glioblastoma (GBM) from medical MRI scans is crucial for understanding tumor growth dynamics and designing personalized radiotherapy treatment plans. Mathematical models of GBM growth can complement the data in the prediction of spatial distributions of tumor cells. However, this requires estimating patient-specific parameters of the model from clinical data, which is a challenging inverse problem due to limited temporal data and the limited time between imaging and diagnosis. We propose a method that uses Physics-Informed Neural Networks (PINNs) to estimate patient-specific parameters of a reaction-diffusion PDE model of GBM growth from a single 3D structural MRI snapshot. PINNs embed both the data and the PDE into a loss function, thus integrating theory and data. Key innovations include the identification and estimation of characteristic non-dimensional parameters, a pre-training step that utilizes the non-dimensional parameters, and a fine-tuning step to determine the patient-specific parameters. Additionally, the diffuse-domain method is employed to handle the complex brain geometry within the PINN framework. Our method is validated on both synthetic and patient datasets, and shows promise for real-time parametric inference in the clinical setting for personalized GBM treatment.
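
A schematic of the PINN loss (my 1D Fisher-KPP toy with a synthetic "snapshot" and assumed constants; the talk works with 3D MRI data and a diffuse-domain brain geometry) shows how the data misfit and the PDE residual are combined while the unknown diffusivity and proliferation rate are trained alongside the network.

```python
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(
    torch.nn.Linear(2, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 1))
log_D = torch.zeros(1, requires_grad=True)       # unknown (log) diffusivity
log_rho = torch.zeros(1, requires_grad=True)     # unknown (log) proliferation rate
opt = torch.optim.Adam(list(net.parameters()) + [log_D, log_rho], lr=1e-3)

x_data = torch.linspace(-1, 1, 50).unsqueeze(1)
u_data = torch.exp(-10 * x_data**2)              # synthetic "MRI snapshot" at t = 1
t_data = torch.ones_like(x_data)

for step in range(2000):
    # data misfit at the single snapshot
    u_pred = net(torch.cat([t_data, x_data], dim=1))
    loss_data = ((u_pred - u_data) ** 2).mean()

    # PDE residual u_t - D u_xx - rho u (1 - u) at random collocation points
    tc = torch.rand(200, 1, requires_grad=True)
    xc = (2 * torch.rand(200, 1) - 1).requires_grad_(True)
    u = net(torch.cat([tc, xc], dim=1))
    u_t = torch.autograd.grad(u, tc, torch.ones_like(u), create_graph=True)[0]
    u_x = torch.autograd.grad(u, xc, torch.ones_like(u), create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x, xc, torch.ones_like(u_x), create_graph=True)[0]
    residual = u_t - torch.exp(log_D) * u_xx - torch.exp(log_rho) * u * (1 - u)

    loss = loss_data + (residual ** 2).mean()    # data + physics in one loss
    opt.zero_grad(); loss.backward(); opt.step()
```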

 
