
Mathematics & Statistics: Current Awareness & Alerts

Guide to research for advanced undergraduates, graduate students, and mathematicians


Search Alerts

You can browse new articles indexed in our databases; for some databases, you can also create and save searches and receive alerts when new references matching your searches are added.

  • e-CMP - Electronic Notification Service for Current Mathematical Publications (for AMS members)
    Emails bibliographic entries from each CMP issue, usually every three weeks, in your choice of up to three Mathematics Subject Classifications.
  • MathSciNet free awareness tools:
    Current Publications - search by MR Subject Classification to browse books, proceedings, and articles reviewed or indexed within the past 1 to 6 months.
    Current Journals - shows indexed journal issues week by week for the past seven calendar weeks.
  • Web of Science
    Set up a Web of Knowledge profile to create and receive search alerts and cited reference alerts. This profile also works with INSPEC alerts.

Journal Article/Table-of-Contents Alerts

Create profiles to receive email alerts as new articles become available.

Recent SIAM Review Articles

  • Risk-Adaptive Approaches to Stochastic Optimization: A Survey (Feb 6, 2025)
    SIAM Review, Volume 67, Issue 1, Page 3-70, March 2025.
    Abstract. Uncertainty is prevalent in engineering design and data-driven problems and, more broadly, in decision making. Due to inherent risk-averseness and ambiguity about assumptions, it is common to address uncertainty by formulating and solving conservative optimization models expressed using measures of risk and related concepts. We survey the rapid development of risk measures over the last quarter century. From their beginning in financial engineering, we recount their spread to nearly all areas of engineering and applied mathematics. Solidly rooted in convex analysis, risk measures furnish a general framework for handling uncertainty with significant computational and theoretical advantages. We describe the key facts, list several concrete algorithms, and provide an extensive list of references for further reading. The survey recalls connections with utility theory and distributionally robust optimization, points to emerging application areas such as fair machine learning, and defines measures of reliability.
  • Neighborhood Watch in Mechanics: Nonlocal Models and Convolution (Feb 6, 2025)
    SIAM Review, Volume 67, Issue 1, Page 176-193, March 2025.
    Abstract. This paper is intended to serve as a low-hurdle introduction to nonlocality for graduate students and researchers with an engineering mechanics or physics background who did not have a formal introduction to the underlying mathematical basis. We depart from simple examples motivated by structural mechanics to form a physical intuition and demonstrate nonlocality using concepts familiar to most engineers. We then show how concepts of nonlocality are at the core of one of the most active current research fields in applied mechanics, namely, in phase-field modeling of fracture. From a mathematical perspective, these developments rest on the concept of convolution in both its discrete and its continuous forms. The previous mechanical examples may thus serve as an intuitive explanation of what convolution implies from a physical perspective. In the supplementary material we highlight a broader range of applications of the concepts of nonlocality and convolution in other branches of science and engineering by generalizing from the examples explained in detail in the main body of the article.
  • The Troublesome Kernel: On Hallucinations, No Free Lunches, and the Accuracy-Stability Tradeoff in Inverse Problems (Feb 6, 2025)
    SIAM Review, Volume 67, Issue 1, Page 73-104, March 2025.
    Abstract. Methods inspired by artificial intelligence (AI) are starting to fundamentally change computational science and engineering through breakthrough performance on challenging problems. However, the reliability and trustworthiness of such techniques are a major concern. In inverse problems in imaging, the focus of this paper, there is increasing empirical evidence that methods may suffer from hallucinations, i.e., false, but realistic-looking artifacts; instability, i.e., sensitivity to perturbations in the data; and unpredictable generalization, i.e., excellent performance on some images, but significant deterioration on others. This paper provides a theoretical foundation for these phenomena. We give mathematical explanations for how and when such effects arise in arbitrary reconstruction methods, with several of our results taking the form of “no free lunch” theorems. Specifically, we show that (i) methods that overperform on a single image can wrongly transfer details from one image to another, creating a hallucination; (ii) methods that overperform on two or more images can hallucinate or be unstable; (iii) optimizing the accuracy-stability tradeoff is generally difficult; (iv) hallucinations and instabilities, if they occur, are not rare events and may be encouraged by standard training; and (v) it may be impossible to construct optimal reconstruction maps for certain problems. Our results trace these effects to the kernel of the forward operator whenever it is nontrivial, but also apply to the case when the forward operator is ill-conditioned. Based on these insights, our work aims to spur research into new ways to develop robust and reliable AI-based methods for inverse problems in imaging.
  • Graph Neural Networks and Applied Linear Algebra (Feb 6, 2025)
    SIAM Review, Volume 67, Issue 1, Page 141-175, March 2025.
    Abstract. Sparse matrix computations are ubiquitous in scientific computing. Given the recent interest in scientific machine learning, it is natural to ask how sparse matrix computations can leverage neural networks (NNs). Unfortunately, multilayer perceptron (MLP) NNs are typically not natural for either graph or sparse matrix computations. The issue lies with the fact that MLPs require fixed-sized inputs, while scientific applications generally generate sparse matrices with arbitrary dimensions and a wide range of different nonzero patterns (or matrix graph vertex interconnections). While convolutional NNs could possibly address matrix graphs where all vertices have the same number of nearest neighbors, a more general approach is needed for arbitrary sparse matrices, e.g., those arising from discretized partial differential equations on unstructured meshes. Graph neural networks (GNNs) are one such approach suitable to sparse matrices. The key idea is to define aggregation functions (e.g., summations) that operate on variable-size input data to produce data of a fixed output size so that MLPs can be applied. The goal of this paper is to provide an introduction to GNNs for a numerical linear algebra audience. Concrete GNN examples are provided to illustrate how many common linear algebra tasks can be accomplished using GNNs. We focus on iterative and multigrid methods that employ computational kernels such as matrix-vector products, interpolation, relaxation methods, and strength-of-connection measures. Our GNN examples include cases where parameters are determined a priori as well as cases where parameters must be learned. The intent of this paper is to help computational scientists understand how GNNs can be used to adapt machine learning concepts to computational tasks associated with sparse matrices. It is hoped that this understanding will further stimulate data-driven extensions of classical sparse linear algebra tasks.