Bay Area Scientific Computing Day Saturday, December 3, 2016
BASCD is an annual one-day meeting focused on fostering interactions and collaborations between researchers in the fields of scientific computing and computational science and engineering from the San Francisco Bay Area. The event gives junior researchers a venue to present their work to the local community, and gives the Bay Area scientific computing and computational science and engineering communities at large an opportunity to exchange views on today’s multidisciplinary computational challenges and state-of-the-art developments.
The event this year will be organized by the Institute for Computational and Mathematical Engineering (ICME) at Stanford.
Schedule
BASCD 2016 will be held on Saturday, December 3, 2016. The preliminary schedule is as follows.
Time | Event | Speaker |
---|---|---|
09:00-09:20 | Registration | |
09:20-09:30 | Welcome Remarks | |
09:30-09:55 | On the hyperbolicity of Grad's moment system in gas kinetic theory | Yuwei Fan |
09:55-10:20 | Serial Femtosecond Crystallography at the Linac Coherent Light Source | Chun Hong Yoon |
10:20-10:45 | Deep Learning for Turbulence Modeling | Julia Ling |
10:45-11:00 | Coffee Break | |
11:00-11:25 | A Hybrid Adaptive Compressible/Low-Mach-Number Method | Emmanuel Motheau |
11:25-11:50 | Fast Randomized Methods for Data-Intensive Convex Optimization | Mert Pilanci |
11:50-12:15 | The Landscape of Empirical Risk for Non-convex Losses | Song Mei |
12:15-13:30 | Lunch Break | |
13:30-13:55 | A Convex Relaxation Framework for Strategic Bidding in Electricity Markets | Mahdi Ghamkhari |
13:55-14:20 | Rational Krylov Methods for Solving Nonlinear Eigenvalue Problems | Roel Van Beeumen |
14:20-14:45 | Adaptively Compressed Polarizability Operator for Accelerating Large Scale Ab Initio Phonon Calculations | Ze Xu |
14:45-15:00 | Coffee Break | |
15:00-15:25 | Low Rank Tensor Approximation of Multivariate Functions for Quantum Chemistry Applications | Prashant Rai |
15:25-15:50 | Massively Parallel Ab Initio Plasma Simulations Using the Particle-in-cell Method | Paulo Alves |
15:50-16:15 | Performance Advantages of Using a Burst Buffer for Scientific Workflows | Andrey Ovsyannikov |
All talks will be 20 minutes with 5 minutes for Q&A.
Speakers
Yuwei Fan
Math, Stanford
Title: On the hyperbolicity of Grad's moment system in gas kinetic theory
Abstract: In gas kinetic theory, it is well known that Grad's moment system is not globally hyperbolic. Müller et al. (1998) pointed out that Grad's 13-moment system fails to be globally hyperbolic even for 1D flow. In this talk, we show that in the 3D case, for each equilibrium state, no neighborhood of that state is contained in the hyperbolicity region. This observation offers an explanation for why the well-known Grad moment system has not succeeded over the past 60 years.
Slides: PDF
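For background, the hyperbolicity at issue is the standard notion for first-order quasi-linear systems (a textbook definition, not part of the speaker's result):

```latex
% A first-order quasi-linear system in 3D,
\[
  \partial_t u + \sum_{i=1}^{3} A_i(u)\,\partial_{x_i} u = 0,
\]
% is hyperbolic at a state u if, for every unit direction n in R^3, the matrix
% A(n,u) = n_1 A_1(u) + n_2 A_2(u) + n_3 A_3(u) is real diagonalizable; it is
% globally hyperbolic if this holds at every admissible state u.
```

The talk's claim is that for Grad's system in 3D this condition fails arbitrarily close to every equilibrium state.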
Chun Hong Yoon
Data Analysis Group, SLAC/LCLS
Title: Serial Femtosecond Crystallography at the Linac Coherent Light Source
Abstract: Serial femtosecond crystallography (SFX) is a diffractive imaging technique for determining the molecular structure of proteins with a femtosecond laser pulse (10⁻¹⁵ seconds) to outrun the onset of radiation damage. Structural information about biological macromolecules near the atomic scale provides important insight into the functions of these molecules and enables better drug design. To date, scientists have relied on a known structure of a similar protein to solve the phase problem. We have recently demonstrated de novo phasing (i.e. without a model of a similar structure) of a selenobiotinyl-streptavidin structure at the Linac Coherent Light Source (LCLS). I will talk about our scientific results and the computational challenges we face at the LCLS.
Slides: PDF
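For background on the phasing discussion, the crystallographic phase problem has a compact textbook statement (general background, not specific to this work):

```latex
% A diffraction experiment measures intensities, not complex amplitudes:
\[
  I(\mathbf{q}) \propto |F(\mathbf{q})|^2,
  \qquad
  F(\mathbf{q}) = \sum_j f_j\, e^{2\pi i\,\mathbf{q}\cdot\mathbf{r}_j},
\]
% so the phases of the structure factors F are lost and must be recovered,
% either from a known similar structure or, as in this talk, de novo.
```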
Julia Ling
Sandia National Labs
Title: Deep Learning for Turbulence Modeling
Abstract: Turbulence models are broadly used in industry for aerodynamic simulations, heat transfer simulations, and engine simulations. However, because these simulations are often inaccurate, they are not always useful in the design cycle. The turbulence models in these simulations are thirty to forty years old and were originally based on sparse experimental data and strong assumptions. With the increasing availability of large, high-fidelity data sets, there is now the possibility of using deep learning to provide a more accurate turbulence model. This talk will discuss how to apply machine learning to a physical system: how to embed domain knowledge and constraints, and how to provide the subject matter experts with feedback.
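To make the "embed domain knowledge and constraints" point concrete, here is a minimal, hypothetical sketch (not the speaker's model): feeding a network only rotation-invariant inputs makes the learned closure rotation-invariant by construction.

```python
# Hypothetical illustration: an untrained two-layer network whose inputs are
# invariants of a symmetric, traceless tensor S (e.g. a nondimensionalized
# strain-rate tensor), so its output cannot depend on the frame orientation.
import numpy as np

def invariants(S):
    """Two independent invariants of a symmetric, traceless 3x3 tensor."""
    return np.array([np.trace(S @ S), np.trace(S @ S @ S)])

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 2)), np.zeros(8)   # hidden layer (untrained)
W2, b2 = rng.normal(size=(1, 8)), np.zeros(1)   # output layer (untrained)

def model(S):
    h = np.tanh(W1 @ invariants(S) + b1)
    return W2 @ h + b2

S = rng.normal(size=(3, 3))
S = 0.5 * (S + S.T)
S -= (np.trace(S) / 3.0) * np.eye(3)            # make S traceless
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))    # a random orthogonal matrix
print(model(S), model(Q @ S @ Q.T))             # equal: rotation-invariant
```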
Emmanuel Motheau
CRD, Lawrence Berkeley National Lab
Title: A Hybrid Adaptive Compressible/Low-Mach-Number Method
Abstract: Flows in which the primary features of interest do not rely on high-frequency acoustic effects, but in which long-wavelength acoustics play a nontrivial role, present a computational challenge. Integrating the entire domain with low Mach number methods would remove all acoustic wave propagation, while integrating the entire domain with the fully compressible equations would be prohibitively expensive due to the CFL time step constraint. For example, thermoacoustic instabilities might require fine resolution of the fluid/chemistry interaction but not of acoustic effects, yet one does not want to neglect the long-wavelength wave propagation and its interaction with the larger domain. This talk will present a new hybrid algorithm developed to address this type of phenomenon. In the new approach, the fully compressible Navier-Stokes equations are solved on the entire domain, while their low Mach number counterparts are solved on a subregion of the domain with higher spatial resolution. The coarser acoustic grid communicates inhomogeneous divergence constraints to the finer low Mach number grid, so that the low Mach number method accounts for the long-wavelength acoustics. We will demonstrate the effectiveness of the new method on practical cases such as the aeroacoustics generated by vortex formation in an unstable low-Mach-number mixing layer.
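The cost argument can be made concrete with the standard explicit time-step restrictions (background arithmetic, not taken from the talk): a fully compressible solver must resolve the acoustic speed c, while a low Mach number solver needs only the advective speed, relaxing the time step by roughly a factor of 1/M.

```latex
\[
  \Delta t_{\mathrm{comp}} \le \mathrm{CFL}\,\frac{\Delta x}{|u| + c},
  \qquad
  \Delta t_{\mathrm{lowM}} \le \mathrm{CFL}\,\frac{\Delta x}{|u|},
  \qquad
  \frac{\Delta t_{\mathrm{lowM}}}{\Delta t_{\mathrm{comp}}}
  = 1 + \frac{1}{M} \sim \frac{1}{M},
  \quad M = \frac{|u|}{c} \ll 1 .
\]
```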
Mert Pilanci
Math+X, Stanford
Title: Fast Randomized Methods for Data-Intensive Convex Optimization
Abstract: With the advent of massive data sets, statistical learning and information processing techniques are expected to enable unprecedented possibilities for better decision making. However, existing algorithms for mathematical optimization, the core component of these techniques, often cannot scale to the full extent of the available data. This talk focuses on random projection methods for general convex optimization and statistical estimation problems as a way to address this challenge. First, I'll describe the theoretical relation between the complexity and the optimality of solutions. Then, I'll provide a general information-theoretic lower bound for any method based on random projection, which, surprisingly, shows that the most widely used form of random projection is in fact statistically sub-optimal. Finally, I will present a novel method that iteratively refines the solutions to achieve statistical optimality, and generalize it to optimizing arbitrary convex functions of a large data matrix. The proposed method, called the Newton Sketch, is a faster randomized version of the well-known Newton's method with computational complexity linear in the input data. The Newton Sketch enables solving large-scale optimization and statistical inference problems orders of magnitude faster than existing methods.
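As a flavor of the approach, here is a minimal, hypothetical sketch of a sketched Newton step for least squares: the gradient is exact, but the Hessian is formed from a random projection of the data. The actual Newton Sketch algorithm, step-size rules, and sketch choices are in the speaker's work; a Gaussian sketch is used below only for clarity (fast transforms or sparse sketches provide the real speedup).

```python
# Minimal sketched-Newton iteration for least squares min_x ||Ax - b||^2 / 2.
import numpy as np

rng = np.random.default_rng(0)
n, d, m = 10000, 50, 400                      # samples, parameters, sketch size
A = rng.normal(size=(n, d))
b = A @ rng.normal(size=d) + 0.1 * rng.normal(size=n)

x = np.zeros(d)
for _ in range(10):
    grad = A.T @ (A @ x - b)                  # exact gradient
    S = rng.normal(size=(m, n)) / np.sqrt(m)  # fresh Gaussian sketch each step
    SA = S @ A                                # sketched Hessian square root
    x -= np.linalg.solve(SA.T @ SA, grad)     # approximate Newton step (d x d solve)

print(np.linalg.norm(A.T @ (A @ x - b)))      # gradient norm after 10 iterations
```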
Song Mei
ICME, Stanford
Title: The Landscape of Empirical Risk for Non-convex Losses
Abstract: Most high-dimensional estimation and prediction methods minimize an empirical risk written as a sum of losses associated with each data point. This work focuses on the case of non-convex losses, which is practically important but still poorly understood. Classical empirical process theory implies uniform convergence of the empirical risk to the population risk. While uniform convergence implies consistency of the resulting M-estimator, it does not ensure that the latter can be computed efficiently. Empirical risk minimization is often carried out via first-order algorithms such as gradient descent and its generalizations. To capture the complexity of computing M-estimators, we propose to study the landscape of the empirical risk, namely its stationary points and their properties. We establish uniform convergence of the gradient and Hessian of the empirical risk to their population counterparts as soon as the number of samples exceeds the number of unknown parameters (modulo logarithmic factors). We demonstrate that, in several examples, this result implies a complete characterization of the landscape of the empirical risk and of the convergence properties of descent algorithms.
Slides: PDF
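Schematically, writing the empirical and population risks over a parameter set Θ ⊂ R^d, the abstract's result takes the following form (constants and regularity conditions as in the speaker's work):

```latex
\[
  \hat{R}_n(\theta) = \frac{1}{n}\sum_{i=1}^{n} \ell(\theta; X_i),
  \qquad
  R(\theta) = \mathbb{E}\,\ell(\theta; X),
\]
\[
  \sup_{\theta \in \Theta} \bigl\|\nabla \hat{R}_n(\theta) - \nabla R(\theta)\bigr\|
  \;\text{and}\;
  \sup_{\theta \in \Theta} \bigl\|\nabla^2 \hat{R}_n(\theta) - \nabla^2 R(\theta)\bigr\|_{\mathrm{op}}
  \;\longrightarrow\; 0
  \quad\text{once}\quad n \gtrsim d \cdot \mathrm{polylog}(d).
\]
```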
Mahdi Ghamkhari
CS, UC Davis
Title: A Convex Relaxation Framework for Strategic Bidding in Electricity Markets
Abstract: The state-of-the-art approach for solving the strategic bidding problem in electricity markets is to solve a mixed-integer linear program (MILP). However, as the size of the power network, the scheduling horizon, and the uncertainty in participants’ bids increase, the time needed to solve the MILP grows exponentially. In this talk, we present a convex relaxation framework to reformulate the strategic bidding problem as a convex optimization problem. The proposed framework is based on Schmüdgen's Positivstellensatz in semi-algebraic geometry. For the sake of efficiency, we heavily customize and relax the polynomials in the Positivstellensatz. Further, to compensate for the loss of optimality caused by the relaxations, we also propose a heuristic algorithm that, when combined with the above relaxation technique, yields both near-optimal solutions and computational efficiency. Extensive numerical studies on the IEEE 30-bus power network show that our approach gives a close-to-optimal bidding solution in a computation time that increases only linearly as the scheduling horizon or the uncertainty grows.
Slides: PDF
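For background (the textbook statement, not the speaker's customized relaxation): Schmüdgen's Positivstellensatz certifies positivity of a polynomial p on a compact semi-algebraic set K = {x : g_1(x) ≥ 0, ..., g_m(x) ≥ 0} via sums of squares, which is what opens the door to convex (semidefinite) relaxations of polynomial problems.

```latex
% Schmudgen (1991): if p > 0 on the compact set K, then
\[
  p \;=\; \sum_{\alpha \in \{0,1\}^m} \sigma_\alpha\, g_1^{\alpha_1} \cdots g_m^{\alpha_m},
  \qquad \sigma_\alpha \ \text{sums of squares of polynomials;}
\]
% bounding the degrees of the sigma_alpha turns the search for such a
% certificate into a semidefinite, hence convex, feasibility problem.
```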
Roel Van Beeumen
CRD, Lawrence Berkeley National Laboratory
Title: Rational Krylov Methods for Solving Nonlinear Eigenvalue Problems
Abstract: We present an overview of rational Krylov methods for solving the nonlinear eigenvalue problem (NLEP): A(λ)x = 0. For many years, linearization has been used to solve polynomial eigenvalue problems. For the general nonlinear case, A(λ) can first be approximated by a (rational) matrix polynomial, after which a convenient linearization is used. The major disadvantage of linearization-based methods is that the memory and orthogonalization costs grow with the iteration count and are, in general, proportional to the degree of the polynomial. We therefore introduce a new framework of Compact Rational Krylov (CORK) methods, which maximally exploits the structure of the linearization pencils. In this way, the extra memory and orthogonalization costs due to the linearization of the original eigenvalue problem are negligible for large-scale problems.
Slides: PDF
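As background on the linearization step (the classical companion construction, not the CORK method itself), the sketch below converts a quadratic eigenvalue problem into a linear pencil of twice the size; the matrices are hypothetical random examples. This degree-proportional growth in problem size is exactly the cost that CORK's compact representation avoids.

```python
# Companion linearization of (lam^2 M + lam C + K) x = 0 into A z = lam B z
# with z = [x; lam x].
import numpy as np
from scipy.linalg import eig

rng = np.random.default_rng(0)
n = 4
M, C, K = (rng.normal(size=(n, n)) for _ in range(3))

Z, I = np.zeros((n, n)), np.eye(n)
A = np.block([[Z, I], [-K, -C]])
B = np.block([[I, Z], [Z, M]])
lam = eig(A, B, right=False)                 # 2n eigenvalues of the pencil

# Verify: at each eigenvalue, lam^2 M + lam C + K is (numerically) singular.
worst = max(np.linalg.svd(l**2 * M + l * C + K, compute_uv=False)[-1] for l in lam)
print(worst)                                 # largest of the smallest singular values ~ 0
```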
Ze Xu
Math, UC Berkeley
Title: Adaptively Compressed Polarizability Operator for Accelerating Large Scale Ab Initio Phonon Calculations
Abstract: Phonon calculations based on first-principles electronic structure theory, such as Kohn-Sham density functional theory, have wide applications in physics, chemistry, and materials science. The computational cost of first-principles phonon calculations typically scales steeply, as O(N_e^4), where N_e is the number of electrons in the system. In this talk, we introduce a new method that reduces the computational complexity of computing the full dynamical matrix, and hence the phonon spectrum, to O(N_e^3). The key to achieving this is to compress the polarizability operator adaptively with respect to the perturbation of the potential due to the change of the atomic configuration. The adaptively compressed polarizability operator (ACP) allows accurate computation of the phonon spectrum. The reduction of complexity only weakly depends on the size of the band gap, and the method is applicable to insulators as well as to semiconductors with small band gaps. Furthermore, the performance of the ACP formulation can be improved by splitting the polarizability into a near-singular component that is statically compressed and a smooth component that is adaptively compressed. The split representation maintains the O(N_e^3) complexity and accelerates nearly all components of the ACP formulation, including Chebyshev interpolation of energy levels, iterative solution of the Sternheimer equations, and convergence of the Dyson equations. We demonstrate the effectiveness of our method using one-dimensional and two-dimensional model problems in insulating and metallic regimes, as well as its accuracy for real molecules and solids.
Slides: PDF
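For context, the object being computed is the standard mass-weighted dynamical matrix, whose eigenvalues are the squared phonon frequencies (background notation, not the ACP construction itself):

```latex
\[
  D_{I\alpha,\,J\beta}
  = \frac{1}{\sqrt{M_I M_J}}\,
    \frac{\partial^2 E}{\partial R_{I\alpha}\,\partial R_{J\beta}},
  \qquad
  D\,e_k = \omega_k^2\, e_k,
\]
% where E is the total energy, R_{I\alpha} the \alpha-th coordinate of atom I,
% M_I its mass, and \omega_k the phonon frequencies.
```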
Prashant Rai
Sandia National Labs
Title: Low Rank Tensor Approximation of Multivariate Functions for Quantum Chemistry Applications
Abstract: Tensor approximation methods are known to deal successfully with high-dimensional objects by mitigating the curse of dimensionality (i.e., the exponential growth of the number of parameters with dimension). In the context of high-dimensional function approximation, the coefficients of a multivariate function on approximation bases can be stored in a d-dimensional array, i.e., a coefficient tensor (d being the dimension of the function). A low-rank decomposition of this tensor can then be seen as a corresponding low-rank approximation of the function. Exploiting low-rank structure, when it exists, requires estimating a number of parameters that scales only linearly with dimension. This presentation will introduce a tensor-based interpretation of high-dimensional functions, followed by decompositions in several low-rank formats. It will then introduce algorithms to construct these approximations, and will conclude with applications to real-life examples ranging from uncertainty propagation to quantum chemistry.
Slides: PDF
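To illustrate the storage argument, here is a minimal, hypothetical example: a separable trivariate function has a rank-1 coefficient tensor on a tensor-product grid, so three vectors of length n replace an n³ array. Practical problems use higher ranks and richer formats (e.g. canonical, Tucker, tensor train).

```python
# Rank-1 (CP) representation of the separable function f(x,y,z) = sin(x)cos(y)exp(z).
import numpy as np

n = 50
x = np.linspace(0.0, 1.0, n)
u, v, w = np.sin(x), np.cos(x), np.exp(x)   # univariate factor vectors

full = np.einsum("i,j,k->ijk", u, v, w)     # the full n^3 coefficient tensor
print(full.size, 3 * n)                     # 125000 stored values vs. 150

# Any entry is recovered from the factors alone, without the full tensor.
i, j, k = 7, 21, 40
print(full[i, j, k], u[i] * v[j] * w[k])    # identical
```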
Paulo Alves
SLAC National Accelerator Laboratory
Title: Massively Parallel Ab Initio Plasma Simulations Using the Particle-in-cell Method
Abstract: Plasmas make up over 99% of the visible Universe. Their dynamics govern the evolution of stars and galaxies, as well as the acceleration of the most energetic particles in the Universe. In the laboratory, plasmas offer a platform for compact radiation and particle sources, and hold the potential of providing an unlimited energy supply via controlled nuclear fusion. Numerical simulations play a central role in plasma physics research, providing insight into the complex and highly nonlinear nature of plasma dynamics. Here, we will present the particle-in-cell (PIC) simulation method, which provides a first-principles, ab initio description of plasma systems. This method follows the detailed trajectories of charged plasma particles as they move according to the self-consistent electric and magnetic fields that the particles themselves collectively produce. The detailed microphysics captured by this simulation method is essential to understanding highly relativistic plasmas and their interaction with strong electromagnetic fields. This particle-based simulation method, however, is computationally very demanding, requiring enormous computational resources to model laboratory and astrophysically relevant systems. In order to harness the full potential of the world’s leading supercomputers, massively parallel PIC codes must employ sophisticated optimization strategies, including dynamic load balancing and specialized algorithms for particular supercomputing architectures (e.g. BlueGene/Q, GPUs). Here, we will discuss the main features of our highly optimized, massively parallel PIC code, OSIRIS, which has demonstrated 80% strong-scaling efficiency on up to 1.6 million cores of Sequoia at Lawrence Livermore National Laboratory. We will end by discussing some of the most outstanding scientific challenges in plasma physics research that PIC codes have the potential to address.
Slides: PDF
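To illustrate the deposit/solve/gather/push cycle the abstract describes, here is a minimal 1D electrostatic PIC sketch in normalized units (all numbers hypothetical; production codes such as OSIRIS are electromagnetic, relativistic, and vastly more sophisticated):

```python
# 1D electrostatic PIC: electrons over a fixed neutralizing ion background,
# periodic domain, normalized so the plasma frequency is 1.
import numpy as np

ng, npart, L, dt = 64, 10000, 2 * np.pi, 0.1
dx = L / ng
rng = np.random.default_rng(0)
xp = rng.uniform(0, L, npart)                 # particle positions
vp = rng.normal(0.0, 0.1, npart)              # particle velocities

k = 2 * np.pi * np.fft.fftfreq(ng, d=dx)      # grid wavenumbers
k[0] = 1.0                                    # placeholder; mean mode zeroed below

for step in range(100):
    # 1) Deposit: nearest-grid-point weighting of the electron density.
    idx = np.rint(xp / dx).astype(int) % ng
    ne = np.bincount(idx, minlength=ng) * L / (npart * dx)   # mean density 1
    rho = 1.0 - ne                            # net charge incl. ion background
    # 2) Field solve: Laplacian(phi) = -rho via FFT, then E = -d(phi)/dx.
    phi_hat = np.fft.fft(rho) / k**2
    phi_hat[0] = 0.0
    E = np.real(np.fft.ifft(-1j * k * phi_hat))
    # 3) Gather fields at particles and push (leapfrog, electrons: q/m = -1).
    vp += -E[idx] * dt
    xp = (xp + vp * dt) % L
```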
Andrey Ovsyannikov
NESAP, Lawrence Berkeley National Laboratory
Title: Performance Advantages of Using a Burst Buffer for Scientific Workflows
Abstract: Emerging exascale systems have the ability to accelerate time-to-discovery for scientific workflows. However, as these workflows become more complex, the data they generate has grown at an unprecedented rate, making I/O a challenging constraint. To address this problem, advanced memory hierarchies, such as burst buffers, have been proposed as intermediate layers between the compute nodes and the parallel file system. In this talk, we use the Cray DataWarp burst buffer, coupled with in-transit processing mechanisms, to demonstrate the advantages of advanced memory hierarchies for traditional coupled scientific workflows. We consider the Chombo-Crunch package for modeling subsurface flows, coupled with the VisIt visualization and analysis software and ancillary post-processing.
Slides: PDF
Registration
While BASCD is a free event, registration is required to attend. Please register by the deadline: Monday, November 21, 2016.
Location
BASCD 2016 will take place on the Stanford campus in the Jen-Hsun Huang Engineering Center in the Science and Engineering Quad.
Address
Huang Engineering Center, Mackenzie Room (3rd floor)
475 Via Ortega
Stanford, CA 94305
Other information
- Stanford Google Map
- Visiting the Stanford campus
- Visitor parking (parking is generally free on Saturday, please check posted signs)
- Parking and circulation map