Terry Rockafellar, Department of Mathematics, University of Washington–Seattle
The strong variational sufficient condition identifies isolated locally optimal solutions with properties that can be beneficial in computations. Tilt stability is known to be one of those properties, but there are new insights now. In a very general setting, strong variational sufficiency corresponds to the combination of tilt stability and continuous dependence of the local solution on the multiplier-inducing parameters in the problem formulation. This falls short of full stability, in which the parametric dependence is Lipschitz continuous, but on the other hand suggests a platform from which full stability might be understood and characterized in different ways than before.
Zhaosong Lu, Department of Industrial and Systems Engineering, University of Minnesota
We consider stochastic and finite-sum optimization problems with deterministic constraints. Existing methods typically focus on finding an approximate stochastic solution for which the expected constraint violations and optimality conditions meet a prescribed accuracy. However, such an approximate solution may still exhibit significant constraint violations. To address this issue, we propose variance-reduced first-order methods that treat the objective and constraints differently. Under suitable assumptions, our proposed methods achieve stronger approximate stochastic solutions, with complexity guarantees, that satisfy the constraints more reliably than existing methods. This is joint work with Sanyou Mei (HKUST) and Yifeng Xiao (UMN).
Trang Nguyen, Department of Mathematics and Statistics, South Dakota State University
This talk is devoted to advanced optimal control problems for discontinuous constrained differential inclusions of the sweeping type that incorporate the duration of the dynamic process into the optimization. Such problems are challenging and under-investigated in control theory while being highly important for various applications. To attack them, we use the method of discrete approximation married with advanced tools of variational analysis, which enables us to derive necessary optimality conditions for the original problem with a wide range of applications, including nanoparticle dynamics, marine surface vehicles, and robotics models. Key contributions also include the development of numerical algorithms to solve these optimization problems, using the Python dynamic optimization library GEKKO to simulate solutions to the posed robotics problems for any fixed number of robots.
Bryce Alan Christopherson, Department of Mathematics & Statistics, University of North Dakota
In 2019, Frankle and Carbin proposed an intriguing conjecture known as the lottery ticket hypothesis, which supposes that large random neural networks usually contain a ‘lucky’ sub-network that is ‘just as good’ as the larger network. A stronger form of the conjecture was later proven, revealing that pruning allows for universal approximation in the same fashion as training. It has remained an open question how to reliably find such subnetworks. In this talk, we show that the standard edge-popup algorithm of Ramanujan et al. is enough to do this in sufficiently sparse networks and, based on some observations from this, provide a variant of the edge-popup algorithm that allows for strong lottery ticket extraction in any network.
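As background, the core idea of score-based pruning behind edge-popup can be sketched in a few lines (a toy illustration under assumed details, not the authors' implementation): every edge carries a trainable score, the forward pass keeps only the top-k scored edges while the random weights stay frozen, and the scores are updated with a straight-through rule.

```python
import random

random.seed(0)

def top_k_mask(scores, k):
    """Keep the k highest-scoring edges (edge-popup trains which edges
    survive, never the weights themselves)."""
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    keep = set(order[:k])
    return [1.0 if i in keep else 0.0 for i in range(len(scores))]

def forward(x, weights, mask):
    """Masked linear neuron: only surviving edges contribute."""
    return sum(w * m * xi for w, m, xi in zip(weights, mask, x))

# Frozen random weights; only the scores are trained.
n = 8
weights = [random.uniform(-1, 1) for _ in range(n)]
scores = [random.uniform(0, 1) for _ in range(n)]
target_fn = lambda x: x[0] - 0.5 * x[3]      # hypothetical toy target

lr, k = 0.1, 3
for _ in range(200):
    x = [random.uniform(-1, 1) for _ in range(n)]
    mask = top_k_mask(scores, k)
    err = forward(x, weights, mask) - target_fn(x)
    # Straight-through update: every score moves as if its edge were
    # active (the mask is ignored in the backward pass).
    for i in range(n):
        scores[i] -= lr * err * weights[i] * x[i]

print(top_k_mask(scores, k))
```

The frozen `weights` never change; only the ranking induced by `scores` does, which is exactly the sense in which a "lucky" subnetwork is extracted rather than trained.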
Thanh Phat Vo, Department of Mathematics & Statistics, University of North Dakota
Tilt stability describes the property of a local minimizer of an objective function that remains stable under small linear perturbations. In this talk, we review the concept and present illustrative examples, then characterize tilt-stable local minimizers using tools from generalized differentiation and Moreau–Yosida regularization. We then examine the consequences of tilt stability for the convergence analysis of a range of first- and second-order numerical algorithms, and conclude with applications to practical models, including nonconvex least squares regression, Student’s t-regression with an ℓ0-penalty, and image restoration problems, which demonstrate the efficiency of the proposed methods.
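For reference, tilt stability admits the following standard formulation from the literature (due to Poliquin and Rockafellar; the talk's precise setting may differ):

```latex
\textit{Definition.} A point $\bar{x}$ is a \emph{tilt-stable} local
minimizer of $f$ if there exists $\gamma > 0$ such that the mapping
\[
  M_\gamma(v) := \operatorname*{arg\,min}_{\|x - \bar{x}\| \le \gamma}
    \bigl\{ f(x) - \langle v, x \rangle \bigr\}
\]
is single-valued and Lipschitz continuous on a neighborhood of $v = 0$
with $M_\gamma(0) = \bar{x}$.
```

The "tilt" is the linear term $\langle v, x\rangle$: stability means the perturbed minimizer moves in a controlled, Lipschitz way as the tilt varies.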
Nguyen-Truc-Dao Nguyen, Department of Mathematics and Statistics, San Diego State University
This talk is devoted to combining model predictive control (MPC) and deep learning methods, specifically neural networks, to solve high-dimensional optimization and control problems. MPC is a popular method for real-life process control in various fields, but its computational requirements can often become a bottleneck. In contrast, deep learning algorithms have shown effectiveness in approximating high-dimensional systems and solving reinforcement learning problems. By leveraging the strengths of both MPC and neural networks, we aim to improve the efficiency of solving MPC problems. The talk also discusses the optimal control problem in MPC and how it can be divided into smaller time horizons to reduce computational costs. Additionally, we focus on enhancing MPC through two approaches: a machine learning-based feedback controller and a machine learning-enhanced planner, which involve implementing neural networks and iLQR. Overall, this talk provides insights into the potential of combining MPC and deep learning methods to tackle complex control problems across various fields, with applications to robotics.
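The receding-horizon loop at the heart of MPC can be sketched on a hypothetical scalar linear system (a generic toy, not the speaker's formulation): at every step a finite-horizon cost is minimized, only the first control is applied, and the horizon shifts forward; this repeated re-solving is the computational bottleneck that learned controllers aim to amortize.

```python
import itertools

def simulate(x, controls, a=1.0, b=1.0):
    """Roll the hypothetical scalar dynamics x' = a*x + b*u forward."""
    traj = []
    for u in controls:
        x = a * x + b * u
        traj.append((x, u))
    return traj

def horizon_cost(x, controls, q=1.0, r=0.1):
    """Quadratic stage cost q*x^2 + r*u^2 summed over the horizon."""
    return sum(q * xs**2 + r * u**2 for xs, u in simulate(x, controls))

def mpc_step(x, horizon=3, grid=(-1.0, -0.5, 0.0, 0.5, 1.0)):
    """Minimize the finite-horizon cost by brute force over a control
    grid and return only the first control (receding-horizon principle)."""
    best = min(itertools.product(grid, repeat=horizon),
               key=lambda us: horizon_cost(x, us))
    return best[0]

# Closed loop: re-solve at every step, apply the first control, shift.
x = 2.0
for _ in range(10):
    u = mpc_step(x)
    x = x + u
print(round(x, 3))   # the controller drives the state to 0.0
```

Real MPC replaces the brute-force grid search with structured solvers (QP, iLQR); the cost of that inner optimization at every control step is what motivates learned feedback controllers and planners.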
Sina Kazemdehbashi, Department of Industrial and Systems Engineering, Wayne State University
Natural and human-made disasters can cause severe devastation and claim thousands of lives worldwide. Therefore, developing efficient methods for disaster response and management is a critical task for relief teams. One of the most essential components of effective response is the rapid collection of information about affected areas, damages, and victims. More data translates into better coordination, faster rescue operations, and ultimately, more lives saved. However, in some disasters, such as earthquakes, the communication infrastructure is often partially or completely destroyed, making it extremely difficult for victims to send distress signals and for rescue teams to locate and assist them in time. Unmanned Aerial Vehicles (UAVs) have emerged as valuable tools in such scenarios. In particular, a fleet of UAVs can be dispatched from a mobile station to the affected area to facilitate data collection and establish temporary communication networks. Nevertheless, real-world deployment of UAVs faces several challenges, with adverse weather conditions, especially wind, being among the most significant. To address this, we develop a novel mathematical framework to determine the optimal location of a mobile UAV station while explicitly accounting for the heterogeneity of the UAVs and the effect of wind. Our approach extends classical single-facility location problems by incorporating heterogeneous dynamic sets that represent varying UAV speeds. In particular, we generalize the Fermat-Torricelli and Sylvester problems to introduce the Sylvester-Fermat-Torricelli (SFT) problem, which captures complex factors such as wind influence, UAV heterogeneity, and back-and-forth motion within a unified framework. The proposed framework enhances the practicality of UAV-based disaster response planning by accounting for real-world factors such as wind and UAV heterogeneity. Experimental results demonstrate that it can reduce wasted operational time by up to 84%, making post-disaster missions significantly more efficient and effective.
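The classical Fermat-Torricelli problem that the SFT model generalizes seeks a point minimizing the sum of distances to given targets; a standard numerical approach is Weiszfeld's fixed-point iteration, shown below as background (the speaker's SFT framework with wind and heterogeneous UAVs is not reproduced here).

```python
import math

def weiszfeld(points, iters=200, eps=1e-12):
    """Weiszfeld iteration for the Fermat-Torricelli (geometric median)
    point: repeatedly re-average the targets with inverse-distance
    weights, starting from the centroid."""
    x = sum(p[0] for p in points) / len(points)
    y = sum(p[1] for p in points) / len(points)
    for _ in range(iters):
        wsum = wx = wy = 0.0
        for px, py in points:
            d = math.hypot(x - px, y - py)
            w = 1.0 / max(d, eps)      # guard against landing on a target
            wsum += w
            wx += w * px
            wy += w * py
        x, y = wx / wsum, wy / wsum
    return x, y

# Equilateral triangle of targets: the optimum is its center.
station = weiszfeld([(0.0, 0.0), (2.0, 0.0), (1.0, math.sqrt(3))])
print(tuple(round(c, 4) for c in station))
```

Incorporating wind and per-UAV speeds, as the SFT problem does, changes the distance term for each target, so the symmetric closed-form answer above no longer applies, which is what motivates the generalized framework.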
Rafal Goebel, Department of Mathematics and Statistics, Loyola University Chicago
For the purpose of this talk, a hybrid algorithm may represent a numerical, perhaps stochastic, simulation of a hybrid dynamical system: a system that blends differential equations or inclusions with difference equations and inclusions, together with constraints on the resulting “flows” and “jumps.” Such a hybrid dynamical system may result from combining a continuous-time gradient or subdifferential flow with instantaneous resets and switching, which are usually modeled as discrete-time dynamics. Of interest is the asymptotic behavior of such algorithms, for example recovering the asymptotic behavior of solutions to the underlying hybrid system, which may be related to convergence to minimizers. Tools like chain transitivity, Conley’s decomposition, and a total Lyapunov function are extended from continuous-time and discrete-time dynamics to the hybrid setting, and applied to hybrid algorithms. Basic tools from variational analysis facilitate these extensions. This is joint work with A.R. Teel and R.G. Sanfelice.
Khoa Vu, Department of Mathematics, Wayne State University
The talk discusses second-order necessary and sufficient optimality conditions for local minimizers in rather general classes of non-smooth unconstrained and constrained optimization problems in finite-dimensional spaces. The established conditions are expressed in terms of second-order subdifferentials of lower semicontinuous functions and mainly concern prox-regular objectives that cover a large territory in nonsmooth optimization and its applications. Our tools are based on the machinery of variational analysis and second-order generalized differentiation. The obtained general results are applied to problems of nonlinear programming, where the derived second-order optimality conditions are new even for problems with twice continuously differentiable data, being expressed there in terms of the classical Hessian matrices.
Sri Lalitha Nuthulapati, Department of Computer Science, North Dakota State University
Early detection of Chronic Kidney Disease (CKD) is critical for preventing irreversible damage, yet traditional diagnostic approaches often fail to identify asymptomatic cases. We propose LMSA-Net, a hybrid deep learning framework combining Quanvolutional Neural Networks (QuCNet), Levenberg–Marquardt Networks (LMNet), and Shuffle Attention (SA-Net) to achieve accurate CKD detection and personalized lifestyle recommendations. QuCNet captures high-dimensional local feature interactions, LMNet efficiently optimizes network weights, and SA-Net models spatial and channel-wise dependencies for richer representations. The model leverages feature normalization, data augmentation, and bootstrapping to enhance generalization on the UCI CKD dataset. LMSA-Net predicts disease presence with 96.56% accuracy, 96.95% sensitivity, and 97.87% specificity, while generating actionable, patient-specific guidance on diet, exercise, and behavior modifications. Our framework bridges quantum-inspired convolutions, attention mechanisms, and classical optimization, offering a scalable AI-driven approach for preventive CKD management.
Tuyen Tran, Department of Mathematics and Statistics, Loyola University Chicago
In this talk, we first investigate fundamental qualitative properties of the generalized multi-source Weber problem formulated using the Minkowski gauge function. Then, we apply Nesterov’s smoothing and the adaptive Boosted Difference of Convex functions Algorithm (BDCA) to solve both the unconstrained and constrained versions of the generalized multi-source Weber problem. These algorithms are tested in Matlab with real and artificial data sets. We conduct a comprehensive evaluation of the new algorithms and provide insights into their efficiency.
Yuan Gao, Department of Computer Science, Mathematics, Physics and Statistics, University of British Columbia Okanagan
A celebrated result in convex analysis says that a set has a potential if and only if it is cyclically monotone. This characterization can be generalized to any finite-valued kernel c(·, ·), without requiring linearity or continuity. However, the equivalence fails if the kernel is allowed to take infinite values. In this talk, we explore potentials for such infinite-valued kernels under the assumption of c-path boundedness, a stronger condition than cyclical monotonicity. We start with a general existence theorem for potentials, requiring no topological assumptions on the spaces or the kernel. We then turn to separable metric spaces and kernels that are continuous on their domain, where c-path boundedness and the existence of a potential coincide. Finally, we introduce the notion of c-path bounded extension and use it to prove the existence of potentials for a special class of kernels on R^2.
Babatunde Aluko, Department of Statistics, University of Kentucky
The International Epidemiology Databases to Evaluate AIDS (IeDEA) is a global research consortium that provides extensive HIV/AIDS data worldwide. In this study, we propose multistate models (MSMs) to characterize HIV progression across clinical stages while addressing data complexities, including interval-censored and clustered event history data, and we propose a Stochastic Expectation-Maximization (Stochastic EM) algorithm to reduce computational intensity. We use simulation to evaluate the performance of the proposed methods and apply them to Central-Africa IeDEA data to evaluate the impact of the World Health Organization’s 2015 Treat-All Policy.
Henry Wolkowicz, Department of Combinatorics and Optimization, University of Waterloo - Canada
(Joint work with: Woosuk L. Jung, David Torregrosa-Belen.)
Preconditioning is essential in iterative methods for solving linear systems. It is also the implicit objective in updating approximations of Jacobians in optimization methods, e.g., in quasi-Newton methods. We study a nonclassical matrix condition number, the omega condition number (omega for short): the ratio of the arithmetic and geometric means of the singular values, rather than of the largest and smallest as in the classical kappa condition number. The simple functions in omega allow one to exploit first-order optimality conditions. We use this fact to derive explicit formulae for (i) omega-optimal low-rank updating of generalized Jacobians arising in the context of nonsmooth Newton methods; and (ii) omega-optimal preconditioners of special structure for iterative methods for linear systems. In the latter context, we analyze the benefits of omega for (a) improving the clustering of eigenvalues; (b) reducing the number of iterations; and (c) estimating the actual condition of a linear system. Moreover, we show strong theoretical connections between the omega-optimal preconditioners and incomplete Cholesky factorizations, and highlight the misleading effects arising from the inverse invariance of kappa. Our results confirm the efficacy of using the omega condition number compared to the kappa condition number.
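As a small numerical illustration (a toy sketch, not the authors' code), omega and kappa can be computed directly from the singular values; for a hypothetical diagonal matrix these are just the absolute diagonal entries.

```python
import math

def omega(singular_values):
    """omega = arithmetic mean / geometric mean of the singular values;
    equals 1 exactly when all singular values coincide."""
    n = len(singular_values)
    arith = sum(singular_values) / n
    geom = math.exp(sum(math.log(s) for s in singular_values) / n)
    return arith / geom

def kappa(singular_values):
    """Classical condition number: largest over smallest singular value."""
    return max(singular_values) / min(singular_values)

# Singular values of the hypothetical diagonal matrix diag(8, 4, 2, 1).
sv = [8.0, 4.0, 2.0, 1.0]
print(round(omega(sv), 4), round(kappa(sv), 4))
```

Note how omega aggregates the whole spectrum (clustering of the singular values lowers it) while kappa sees only the two extremes, which is the contrast the talk exploits.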
Ju Sun, Department of Computer Science & Engineering, University of Minnesota
Despite the sweeping success and impact of deep learning in numerous domains, imposing explicit constraints is relatively new but increasingly pressing in deep learning (DL), driven by, for example, trustworthy AI that performs robust optimization over complex perturbation sets and scientific and engineering applications that require respect for physical laws and constraints. In this talk, we will (1) survey DL problems with nontrivial constraints across science, engineering, and medicine, (2) highlight the NCVX computing framework we have recently built, which provides deterministic solvers to solve constrained DL problems, and (3) invite the optimization community to solve the stochastic constrained DL problems.
Mai Quynh Nghi Nguyen, Department of Mathematics, Wayne State University
In this talk, we investigate a hybrid model combining a parabolic differential equation and a parabolic hemivariational inequality (a so-called differential hemivariational inequality of parabolic–parabolic type) in general infinite-dimensional spaces, which includes a history-dependent operator. The solvability of the initial value problems, as well as of the periodic problems, for the hemivariational inequality and the differential hemivariational inequality is proved. As an application, we study a contact problem with normal compliance driven by a history-dependent dynamical system.
Abdulwasiu Ibrahim, College of Public Health, Kent State University
The COVID-19 pandemic caused large-scale disruptions in employment and household income in the United States, with significant potential consequences for population mental health. From a quantitative perspective, these disruptions can be modeled as exogenous shocks to a coupled socioeconomic–psychosocial system. This study quantifies the relationship between pandemic-related employment loss, income change, and serious psychological distress, while examining subgroup differences across demographic strata. Using data from 4,772 adults in the 2019 and 2021 waves of the Panel Study of Income Dynamics, we measured serious psychological distress with the Kessler-6 (K6) scale and categorized income change as decreased, stable, or increased. Employment loss was defined as being employed in 2019 but not in 2021. Covariates included sex, race/ethnicity, baseline age group, and pre-pandemic income level. Survey-weighted logistic regression models were applied to account for the complex design of the dataset. Employment loss was associated with more than twice the odds of serious psychological distress (OR = 2.26, 95% CI: 1.29–3.96), while income change was not statistically significant. Adults aged 30–59 and 60+ reported lower odds of distress compared to those under 30, and an interaction effect indicated that Hispanic adults had lower relative odds of distress following employment loss compared to White adults. No significant interactions were observed for sex or pre-pandemic income tertile. These findings highlight the psychosocial role of work beyond financial considerations, reveal demographic variation in responses to employment disruption, and emphasize the need for culturally responsive and equitable mental health interventions.
Wilkreffy Santos, Department of Mathematics, Wayne State University
In this work, we have introduced an inexact approach to the Boosted Difference of Convex Functions Algorithm (BDCA) for solving nonconvex and nondifferentiable problems involving the difference of two convex functions (DC functions). Specifically, when the first DC component is differentiable and the second may be nondifferentiable, BDCA utilizes the solution from the subproblem of the DC Algorithm (DCA) to define a descent direction for the objective function. A monotone linesearch is then performed to find a new point that improves the objective function relative to the subproblem solution. This approach enhances the performance of DCA. However, if the first DC component is nondifferentiable, the BDCA direction may become an ascent direction, rendering the monotone linesearch ineffective. To address this, we have proposed an Inexact nonmonotone Boosted Difference of Convex Algorithm (InmBDCA). This algorithm incorporates two main features of inexactness: first, the subproblem is solved approximately, allowing for a controlled relative error tolerance in defining the linesearch direction; second, an inexact nonmonotone linesearch scheme is used to determine the step size for the next iteration. Under suitable assumptions, we have demonstrated that InmBDCA is well-defined, with any accumulation point of the sequence generated by InmBDCA being a critical point of the problem. We also have provided iteration-complexity bounds for the algorithm. To complement the theoretical development, we have included numerical illustrations aimed at verifying that the inexact solutions obtained by BDCA and nmBDCA satisfy the conditions required by InmBDCA.
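The descent-direction idea behind BDCA can be sketched on a smooth one-dimensional DC toy problem (an illustrative example with both components differentiable and an assumed sufficient-decrease linesearch; the inexact nonmonotone scheme of InmBDCA is not reproduced here): the DCA subproblem linearizes the second component, and BDCA then line-searches along the direction from the current iterate toward the subproblem solution.

```python
def g(x): return x**4          # first convex DC component
def h(x): return x**2          # second convex DC component
def f(x): return g(x) - h(x)   # DC objective, minimized at +/- 1/sqrt(2)

def dca_subproblem(xk):
    """Linearize h at xk and minimize g(x) - h'(xk)*x, i.e. solve
    4x^3 = 2*xk, which has a closed form for this toy example."""
    s = 0.5 * xk
    return abs(s) ** (1.0 / 3.0) * (1 if s >= 0 else -1)

def bdca_step(xk, lam0=1.0, beta=0.5, alpha=0.1):
    y = dca_subproblem(xk)
    d = y - xk                 # BDCA direction: from xk toward (and past) y
    lam = lam0
    # Backtrack until a sufficient-decrease condition holds.
    while lam > 1e-12 and f(y + lam * d) > f(y) - alpha * lam**2 * d**2:
        lam *= beta
    return y + lam * d if f(y + lam * d) <= f(y) else y

x = 2.0
for _ in range(50):
    x = bdca_step(x)
print(round(x, 6), round(f(x), 6))
```

The extra step past the DCA point `y` is what "boosts" DCA: the iterate is allowed to overshoot along the descent direction whenever that further reduces the objective.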
Thi Phung, Department of Mathematics, Wayne State University
This study extends the work of Cao et al. (2025) by investigating a new class of optimal control problems governed by discontinuous and nonconvex sweeping processes with variable time horizons. In contrast to the approximation approach employed in the earlier work, where optimal solutions were obtained via strongly convergent sequences, the present research establishes, in a rigorous analytical framework, necessary optimality conditions for sweeping processes. These conditions are derived through advanced techniques in variational analysis and generalized differentiation. The derived conditions are subsequently applied to construct optimal solutions for a class of motion models of practical significance. Furthermore, an algorithm is developed to compute such solutions effectively, thereby bridging theoretical results with computational implementation.
Sandeep Singhal, School of Medicine & Health Sciences, University of North Dakota
Artificial intelligence (AI) is reshaping cancer research by enabling faster, more precise, and deeper understanding of tumor biology. Digital pathology, where traditional microscope slides are transformed into high resolution images, creates a powerful foundation to apply AI for both clinical care and scientific discovery. My research combines machine learning with multi-omics integration to analyze these images alongside molecular and clinical data. By training advanced models to detect subtle cellular patterns, tissue structures, and biomarkers, we can identify early predictors of treatment response, reveal diverse prognostic markers, and link imaging features to key biological pathways. For the scientific community, this work delivers AI driven pipelines that bridge pathology, multi-omics, and clinical outcomes. For patients, it supports earlier and more accurate diagnosis, guides personalized treatment decisions, reduces the risk of ineffective therapies, and promotes more equitable cancer care. Ultimately, this research advances precision oncology by using digital pathology as a central platform to integrate AI, biology, and medicine.
Hongda Li, Department of Computer Science, Mathematics, Physics and Statistics, University of British Columbia Okanagan
This talk is devoted to the study of accelerated proximal gradient methods where the sequence that controls the momentum term does not follow Nesterov’s rule. We propose a relaxed weak accelerated proximal gradient (R-WAPG) method, a generic algorithm that unifies the convergence results for strongly convex and convex problems where the extrapolation constant is characterized by a sequence that is much weaker than Nesterov’s rule. Our R-WAPG provides a unified framework for several notable Euclidean variants of FISTA and verifies their convergence. In addition, we provide the convergence rate of the strongly convex objective with a constant momentum term. Without using the idea of restarting, we also reformulate R-WAPG as “Free R-WAPG” so that it does not require any parameter. Exploratory numerical experiments were conducted to show its competitive advantages.
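For orientation, here is a minimal FISTA-style accelerated proximal gradient loop on a toy separable lasso instance, using Nesterov's classical t-sequence as the baseline that R-WAPG relaxes (a generic sketch; the R-WAPG momentum sequences from the talk are not reproduced):

```python
def grad(x, a, b):
    """Gradient of the smooth part 0.5*sum(a_i*(x_i - b_i)^2)."""
    return [ai * (xi - bi) for ai, xi, bi in zip(a, x, b)]

def soft_threshold(v, t):
    """Prox of t*|.|: shrink each coordinate toward zero by t."""
    return [max(abs(vi) - t, 0.0) * (1 if vi > 0 else -1) for vi in v]

def fista(a, b, mu, steps=300):
    """Accelerated proximal gradient for
    f(x) = 0.5*sum(a_i*(x_i - b_i)^2) + mu*sum(|x_i|)."""
    L = max(a)                       # Lipschitz constant of the smooth part
    x = y = [0.0] * len(a)
    t = 1.0
    for _ in range(steps):
        g = grad(y, a, b)
        x_new = soft_threshold([yi - gi / L for yi, gi in zip(y, g)], mu / L)
        t_new = 0.5 * (1.0 + (1.0 + 4.0 * t * t) ** 0.5)
        # Extrapolation (momentum) step controlled by the t-sequence;
        # R-WAPG replaces this rule with a much weaker sequence.
        y = [xn + ((t - 1.0) / t_new) * (xn - xo)
             for xn, xo in zip(x_new, x)]
        x, t = x_new, t_new
    return x

sol = fista(a=[1.0, 4.0], b=[3.0, 0.2], mu=1.0)
print([round(s, 4) for s in sol])   # closed-form solution is [2.0, 0.0]
```

Everything except the `t_new`/extrapolation line is shared across the FISTA family; the talk's contribution concerns precisely how much that line can be weakened while preserving convergence.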
Artem Novozhilov, Department of Mathematics, North Dakota State University
In biology, as in many other sciences, there have been multiple attempts to formulate the basic regularities of population dynamics in the form of extreme principles. Arguably the best known of these is the Fundamental Theorem of Natural Selection of R. Fisher, who stated that the rate of increase in the mean fitness of a population is equal to its additive genetic variance in fitness at that time. From a modeling perspective, it is possible to view the extreme principles as optimization criteria that govern the evolution of a given dynamical system. We are therefore faced with the problem of finding, for a given optimization criterion, a reasonable mathematical model of a population community, that is, an extremum point in the space of all admissible dynamical systems. In this talk we will consider a special case of this general optimization problem applied to the so-called permanent replicator and Lotka–Volterra systems and will propose a heuristic algorithm to solve such problems. The algorithm is given by a sequence of steps, at each of which a linear programming problem is solved. This is joint work with A.S. Bratus and S. Drozhzhin (Transport University, Russia).
Daniel Tuyisenge, Dr. Bing Zhang Department of Statistics College of Arts & Sciences, University of Kentucky
Ensuring data quality is essential for producing accurate and reliable federal statistics. A common approach is to use ratio edits, which are widely employed in economic and establishment surveys to identify records with implausible relationships between variables. In multivariate settings, traditional methods often rely on Mahalanobis distance; however, this approach can perform poorly in high dimensions and does not provide interpretable bounds for each variable. To address these limitations, this work presents a framework for multivariate ratio edits based on parametric and nonparametric tolerance intervals (TIs), which ensure a specified proportion of inliers with a given confidence level. Parametric methods construct rectangular and simultaneous multivariate TIs under the multivariate normal model, using trimming to reduce the influence of outliers. In contrast, nonparametric methods employ Tukey depth and Statistically Equivalent Blocks to obtain distribution-free tolerance regions. Monte Carlo simulations are conducted to evaluate Type I/II error rates and volume efficiency across contamination regimes, dimensions, and TI parameters. Results indicate that rectangular central TIs excel under mild contamination, while simultaneous central TIs improve robustness under heavier contamination. Overall, these TI-based edits are interpretable, reproducible, computationally efficient, and readily integrated into federal data processing pipelines, thereby enhancing quality assurance.
Vera Zeidan, Department of Mathematics, Michigan State University
Consider the following optimal control problem (P) involving a perturbed sweeping process (D) with joint endpoint constraint set S, governed by a moving, prox-regular, and possibly unbounded sweeping set C(t) defined as the intersection of finitely many sublevel sets of C^{1,1} real-valued functions (ψ_i(t, x))_{i=1}^r:

(P)  minimize J(x(0), x(T))
     over (x, u) ∈ W^{1,1}([0, T], R^n) × U subject to

(D)  ẋ(t) ∈ f(t, x(t), u(t)) − N_{C(t)}(x(t))  for a.e. t ∈ [0, T],
     (x(0), x(T)) ∈ S.
Sweeping processes, introduced in the 1970s by Moreau, have recently appeared in new applications, such as the mobile robot model, the pedestrian traffic flow model, the crowd model for emergency evacuation, etc. Our general model incorporates different submodels, including certain second-order sweeping processes, a class of integro-differential sweeping processes, evolution variational inequalities (EVI), and dynamical variational inequalities (DVI). The discontinuity and unboundedness of the normal cone N_{C(t)} in (D) render all the known results in optimal control over standard differential inclusions inapplicable, and hence addressing (P) requires new ideas. Discrete-time approximations, pioneered by Mordukhovich et al., and the continuous-time exponential penalty approximation, introduced by de Pinho et al., have been instrumental in deriving optimality conditions for variants of (P). Assuming Gr C(·) is compact, the case r = 1 (i.e., C(t) is generated by one smooth function ψ(t, x)) was treated by de Pinho et al. to derive the Pontryagin-type maximum principle via the method of exponential penalty approximation; that is, (D) is approximated by

(D_{γ_k})  ẋ(t) = f(t, x(t), u(t)) − γ_k e^{γ_k ψ(t, x(t))} ∇_x ψ(t, x(t)),  a.e. t ∈ [0, T],

for which C(t) is shown to be invariant, a property fundamental for this method, as it disposes of the inherent state constraints in (D). However, for r > 1, they observed that the invariance of C(t) is a major obstacle unless severe assumptions on the corners of C(t) are imposed. The goal of this talk is to demonstrate for r > 1 that a modified approach of this method leads to the maximum principle without imposing the severe assumptions on the corners of C(t) or the compactness of Gr C(·). We show that the invariance of C(t) itself is not necessary, but that of a carefully chosen approximation of C(t) from its interior, C_{γ_k}(t), would suffice.
This construction also allows us to phrase the result in terms of subdifferentials smaller than the known ones. The set C_{γ_k}(t) is the sublevel set of a function ψ_{γ_k} that approximates ψ := max_{i=1,...,r} ψ_i and is constructed by applying the operator (1/γ_k) log ∘ exp to the “vecmax” function ψ.
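The log-exponential smoothing of the vecmax function can be illustrated numerically (a generic log-sum-exp sketch under the natural reading of the operator (1/γ_k) log ∘ exp; the precise ψ_{γ_k} construction is as in the talk): the smooth surrogate always lies above the true max and approaches it from above as γ grows.

```python
import math

def smooth_max(values, gamma):
    """Log-exponential approximation of max: (1/gamma)*log(sum exp(gamma*v)).
    Lies within log(r)/gamma above the true max of r values."""
    m = max(values)                      # shift for numerical stability
    s = sum(math.exp(gamma * (v - m)) for v in values)
    return m + math.log(s) / gamma

psi = [0.3, -1.2, 0.9]                   # hypothetical values psi_i(t, x)
for gamma in (1.0, 10.0, 100.0):
    print(gamma, round(smooth_max(psi, gamma), 6))
```

Because the surrogate overestimates the max, its sublevel sets approximate the constraint set from its interior, which is consistent with the role played by C_{γ_k}(t) above.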
Boris Mordukhovich, Department of Mathematics, Wayne State University
(Based on joint work with G. Bento, T. Mota and Yu. Nesterov.)
This talk develops a novel convergence analysis of a generic class of descent methods in nonsmooth and nonconvex optimization under several versions of the Polyak-Lojasiewicz-Kurdyka (PLK) properties. Along with other results, we prove the finite termination of generic algorithms under the PLK property with lower exponents. Specifications are given to convergence rates of some particular algorithms, including inexact reduced gradient methods and the boosted algorithm in DC programming. It is revealed, e.g., that the lower exponent PLK property in the DC framework is incompatible with the gradient Lipschitz continuity for the plus function around a local minimizer. On the other hand, we show that the above inconsistency observation may fail if the Lipschitz continuity is replaced by merely the gradient continuity.
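For orientation, the classical Polyak-Lojasiewicz inequality (the textbook special case with exponent 1/2, stated from the literature; the lower-exponent PLK versions studied in the talk generalize it) reads:

```latex
\[
  f(x) - \inf f \;\le\; \frac{1}{2\mu}\,\|\nabla f(x)\|^2
  \qquad \text{for all } x \text{ near } \bar{x},
\]
```

where $\mu > 0$; a small gradient then forces near-optimality, which is the mechanism underlying the convergence (and, for lower exponents, finite-termination) results above.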
Nghia Vo, Department of Mathematics and Statistics, Oakland University
Recovering a low-complexity signal from its noisy observations by regularization methods is a cornerstone of inverse problems and compressed sensing. Stable recovery ensures that the original signal can be approximated linearly by optimal solutions of the corresponding Morozov or Tikhonov regularized optimization problems. In this talk, we propose new characterizations for stable recovery in finite-dimensional spaces, uncovering the role of nonsmooth second-order information. These insights enable a deeper understanding of stable recovery and their practical implications. As a consequence, we apply our theory to derive new sufficient conditions for stable recovery of the analysis group sparsity problems, including the group sparsity and isotropic total variation problems. Numerical experiments on these two problems give favorable results about using our conditions to test stable recovery.
Maria Alfonseca-Cubero, Department of Mathematics, North Dakota State University
This is joint work with D. Ryabogin, A. Stancu and V. Yaskin.
Given a convex body K, we consider a collection of hyperplanes which satisfy one (or more) of the following conditions:
∗ Condition (V): All hyperplanes cut off the same constant volume from K
∗ Condition (A): All hyperplane sections of K have equal area
∗ Condition (I): All hyperplane sections of K have equal moments of inertia
∗ Condition (H): All hyperplanes are at the same distance from the origin.
Ulam’s floating body problem asks whether conditions (V, I) imply that K must be the Euclidean ball, but a counterexample has recently been found. Therefore, we consider three conditions, (V, I, H) or (V, A, H), or else two conditions from the list with an additional normalization hypothesis on K, and give some positive results for these cases.
Dhanushka Wijesinghe, Department of Electrical and Computer Engineering, North Dakota State University
This work introduces a compact neural network model designed for real-time sleep stage classification using EEG data from a single frontal channel (Fp1–Fp2). The architecture is optimized for use in portable, low-power devices and incorporates temporal context through a memory-inspired mechanism. Specifically, we enhance a standard feedforward network by integrating a transition vector that reflects the temporal progression of sleep stages, computed by applying a learned transition matrix to the softmax output of the previous epoch. Temporal characteristics of EEG signals were examined using autocovariance analysis across stages in the MASS SS3 dataset, revealing sustained correlations in N3 and REM, which justified the use of memory-based modeling. We trained two variants: a baseline model using only EEG features, and an extended version that includes the memory component. During prediction, a confidence threshold of 0.7 was applied to select the more reliable output, reducing misclassifications while allowing uncertain epochs to remain unassigned. On the MASS SS3 dataset, the final combined model achieved an accuracy of 84.0% and a Cohen’s kappa of 0.75, with only 12.8% of epochs rejected due to low confidence. This strategy notably improved detection of REM and wake states, both vital for clinical relevance, without increasing model complexity. These results support the feasibility of efficient, single-channel sleep monitoring solutions suitable for deployment in wearable EEG systems.
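The transition-plus-threshold prediction step described above can be sketched as follows; the combination rule, the matrix values, and the selection logic are illustrative assumptions, not the authors' exact implementation.

```python
# Hypothetical sketch of the memory-augmented prediction step; all numbers,
# the combination rule, and the rejection rule are illustrative assumptions,
# not the authors' exact implementation.
def predict_with_memory(softmax_curr, softmax_prev, transition, threshold=0.7):
    """Blend the current softmax output with a transition-propagated prior from
    the previous epoch, then leave the epoch unassigned (None) if the winning
    class confidence falls below the threshold."""
    n = len(softmax_curr)
    # Propagate the previous epoch's distribution through the learned transition matrix.
    prior = [sum(transition[i][j] * softmax_prev[j] for j in range(n)) for i in range(n)]
    # One plausible selection rule: keep whichever distribution is more confident.
    combined = softmax_curr if max(softmax_curr) >= max(prior) else prior
    best = max(range(n), key=lambda i: combined[i])
    return best if combined[best] >= threshold else None  # None = rejected epoch
```

Rejected (`None`) epochs correspond to the roughly 12.8% of low-confidence epochs the abstract reports as left unassigned.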
Javier I. Madariaga, Department of Mathematics, North Carolina State University
We propose a geometric framework to design randomly relaxed and stochastically perturbed algorithms to address a wide range of problems in nonlinear analysis and optimization in Hilbert spaces. Building on this framework, we develop a stochastic monotone operator splitting method based on a saddle operator. The talk will present both the theoretical foundations [1] and its application to monotone inclusions [2].
[1] P. L. Combettes and J. I. Madariaga, A geometric framework for stochastic iterations, arXiv preprint, 2025. https://arxiv.org/pdf/2504.02761
[2] P. L. Combettes and J. I. Madariaga, Randomly activated block iterative saddle projective splitting for monotone inclusions, submitted, 2025.
Quoc Le, Department of Mathematics and Applied Mathematical Sciences, University of Rhode Island
The talk is concerned with stochastic approximation algorithms. Our main effort focuses on recently developed set-valued stochastic approximation methods. We begin with a brief introduction to stochastic approximation and then recall recent results. The rest of the talk concentrates on applications of stochastic approximation to set-valued problems.
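As single-valued background for the set-valued methods in the talk, the classical Robbins–Monro scheme finds a root of a function h from noisy evaluations via x_{n+1} = x_n − a_n (h(x_n) + ξ_n) with diminishing steps. A minimal sketch (the target h(x) = x − 2 and noise level are arbitrary choices for illustration):

```python
import random

# A textbook Robbins-Monro iteration, shown only as single-valued background
# for the set-valued methods in the talk: find the root of h(x) = x - 2 from
# noisy evaluations, with diminishing step sizes a_n = 1/n.
def robbins_monro(h_noisy, x0, n_iters, seed=0):
    rng = random.Random(seed)
    x = x0
    for n in range(1, n_iters + 1):
        x -= (1.0 / n) * h_noisy(x, rng)  # x_{n+1} = x_n - a_n * (h(x_n) + noise)
    return x

# Noisy oracle for h(x) = x - 2; the Gaussian noise level 0.1 is arbitrary.
x_star = robbins_monro(lambda x, rng: (x - 2.0) + rng.gauss(0.0, 0.1), 0.0, 5000)
```

The iterates drift toward the root x = 2 despite never observing h exactly; set-valued stochastic approximation replaces h by a set-valued mapping.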
Moruf O. Disu, Department of Statistics and Department of Public Health, North Dakota State University
There has been tremendous research on the consolidation of health and medical data over the years, and researchers continue to explore efficient ways of consolidating and validating fragmented health and medical data to improve the services provided and to better support downstream analyses. The two main approaches to record linkage are deterministic and probabilistic linkage. Probabilistic linkage has advantages over the deterministic approach; however, subjective decisions must be made when it is applied. To evaluate our proposed method, data were generated with 20% noise injected into the source dataset. We show an approach to selecting the optimal matching score for declaring a match.
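Probabilistic linkage commonly scores candidate record pairs with Fellegi–Sunter log-likelihood-ratio weights; the toy function below illustrates that standard scoring rule. The m- and u-probabilities are hypothetical, and this is not the talk's specific score-selection method.

```python
import math

# Illustrative Fellegi-Sunter-style match score, the standard probabilistic-
# linkage scoring rule; the m- and u-probabilities here are hypothetical and
# this is not the talk's specific score-selection procedure.
def match_score(agreements, m_probs, u_probs):
    """Sum log-likelihood-ratio weights over comparison fields:
    agreement on field i contributes log2(m_i / u_i), disagreement
    contributes log2((1 - m_i) / (1 - u_i))."""
    score = 0.0
    for agree, m, u in zip(agreements, m_probs, u_probs):
        score += math.log2(m / u) if agree else math.log2((1 - m) / (1 - u))
    return score
```

The "subjective decision" the abstract refers to is the choice of cutoff on this score above which a pair is declared a match.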
Delaney Rager, Department of Mathematics and Statistics, Loyola University Chicago
Chaos Game Optimization (CGO) is a metaheuristic algorithm proposed by Azizi and Talatahari in 2020 for solving optimization problems. Traditionally, chaos games are random, iterative processes that form fractals when played. CGO is based on principles of chaos games, taking the configuration of fractals and their self-similarity into account. This game-theory-based formulation is novel for solving multidimensional objective functions and has displayed promising results. This project implements a corrected and improved version of the CGO algorithm and tests it on functions of varying dimensions. It also introduces new and more practical stopping criteria to address the limitations of traditional ones, such as relative error and a fixed number of iterations. Ultimately, this work provides a robust and efficient framework for applying CGO to complex optimization problems, demonstrating its enhanced performance through automated analysis of objective functions.
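For readers unfamiliar with chaos games, the classical example (jump halfway toward a randomly chosen triangle vertex; the visited points trace out the Sierpinski fractal) can be sketched as follows. This is the inspiration behind CGO, not the CGO algorithm itself.

```python
import random

# The classical chaos game that inspired CGO (illustration only, not the CGO
# algorithm itself): repeatedly jump halfway toward a randomly chosen vertex
# of a triangle; the visited points trace out the Sierpinski fractal.
def chaos_game(vertices, n_iters, seed=0):
    rng = random.Random(seed)
    x, y = vertices[0]  # start at a vertex so all iterates stay in the hull
    points = []
    for _ in range(n_iters):
        vx, vy = rng.choice(vertices)
        x, y = (x + vx) / 2.0, (y + vy) / 2.0  # move halfway toward the vertex
        points.append((x, y))
    return points
```

CGO replaces the fixed vertices with candidate solutions and the halfway jump with weighted moves toward them, turning the fractal-generating process into a search heuristic.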
Diego J. Cornejo, Department of Mathematics, North Carolina State University
We first present the concept of resolvent composition, a monotonicity-preserving operation recently introduced in [1] to combine a monotone operator with a bounded linear operator. Unlike standard operations, this approach leads to an explicit resolvent. When the monotone operator is a subdifferential, the resolvent composition corresponds to the subdifferential of a function that has been used in image recovery and machine learning applications [2]. We discuss several theoretical properties of resolvent compositions, with a focus on their asymptotic behavior.
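For background, the resolvent of a monotone operator A and its connection to the proximity operator (both standard definitions, not specific to [1]) are:

```latex
J_{A} \;=\; (\mathrm{Id} + A)^{-1},
\qquad
\operatorname{prox}_{f} \;=\; \bigl(\mathrm{Id} + \partial f\bigr)^{-1} \;=\; J_{\partial f}.
```

The point of resolvent composition is that, unlike most operations combining A with a linear operator, the resulting operator still has an explicitly computable resolvent.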
[1] P. L. Combettes, Resolvent and proximal compositions, Set-Valued Var. Anal., vol. 31, art. 22, 29 pp., 2023.
[2] P. L. Combettes and D. J. Cornejo, Signal recovery with proximal comixtures, Proc. Europ. Signal Process. Conf., pp. 2637–2641, Lyon, France, August 26–30, 2024.
Kayode Ayinde, School of Mathematics and Statistics, Northwest Missouri State University
Cluster formation in data analysis remains a crucial challenge, as existing methods struggle to maintain accuracy and robustness across datasets of varying dimensionality. Traditional methods, such as k-means and hierarchical clustering, are challenged in high-dimensional spaces, while their performance in lower dimensions may be sensitive to data heterogeneity. To overcome these limitations, we introduce the Voting-based Multi-candidate Representative Average (VOMORA), a novel clustering framework that optimizes cluster formation through a voting mechanism for identifying the most representative averages. We evaluated VOMORA on two benchmark datasets: the Iris dataset (150 observations, 4 features), representing low-dimensional data, and a Kaggle dataset (230 instances, 537 features), representing high-dimensional data. In both cases, VOMORA produces clusters strongly aligned with the true class labels and demonstrates higher accuracy and stability than standard clustering algorithms. The method is particularly robust in high-dimensional scenarios, where conventional techniques often degrade, while maintaining excellent performance in simpler, low-dimensional settings. By leveraging voting-based optimization of representative averages, VOMORA demonstrates versatility, flexibility, and robustness beyond traditional clustering approaches. It also enhances interpretability, reduces sensitivity to dimensionality, and achieves reliable cluster formation across diverse data contexts, making it a valuable tool not only for data mining but also for pattern recognition and machine learning.
Anh Vu Nguyen, Department of Mathematics, Wayne State University
The talk is devoted to single-objective and multiobjective optimization problems involving the ℓ0-norm function, which is nonconvex and nondifferentiable. Our motivation comes from proton beam therapy models in cancer research. The developed approach uses subdifferential tools of variational analysis and the Gerstewitz (Tammer) scalarization function in multiobjective optimization. Based on this machinery, we propose several algorithms of subgradient type and conduct their convergence analysis. The obtained results are illustrated by numerical examples, which reveal some characteristic features of the proposed algorithms and their interplay with gradient descent.
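One standard computational fact about the ℓ0 function, given here as background rather than as the talk's algorithm: the proximal mapping of λ‖·‖₀ is componentwise hard thresholding.

```python
import math

# Background fact, not the talk's algorithm: the proximal mapping of
# lam * ||x||_0 is componentwise hard thresholding, which keeps an entry
# x_i exactly when x_i**2 / 2 > lam, i.e. |x_i| > sqrt(2 * lam).
def prox_l0(x, lam):
    t = math.sqrt(2.0 * lam)
    return [xi if abs(xi) > t else 0.0 for xi in x]
```

This discontinuous thresholding behavior reflects the nonconvexity that the subdifferential machinery above is designed to handle.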
Anh Le, Department of Civil, Construction and Environmental Engineering, North Dakota State University
Measurement of river bathymetry is critical for many civil engineering applications. However, direct measurement methods are typically costly and challenging in remote locations, especially under flooding. Recent advances in deep learning have shown promise for bathymetric inversion from remote sensing data, yet these approaches are limited to site-specific locations. In this study, a novel bathymetry inversion method (SKM-PINN) using Unmanned Aerial Vehicle (UAV) data is developed with Physics-Informed Neural Networks (PINNs). Our methodology employs the Shiono-Knight Model (SKM) form of the shallow water equations (SWEs), which is embedded in the loss function of the neural network to infer water depth from the surface velocity field. The approach transforms the two-dimensional SWEs into a one-dimensional ordinary differential equation, which leads to fast convergence during neural network training. This addresses the most challenging issue of applying PINNs to bathymetry inversion in natural streams, where the flow field can be highly complex. SKM-PINN is validated on both synthetic and field measurement data, achieving root mean square errors below 0.15 m across multiple cross-sections. The results show that SKM-PINN is highly reliable and robust in predicting water depth in both pools and riffles, using minimal information from UAV measurements. This work provides a cost-effective framework for high-resolution bathymetric mapping using readily available UAV technology.
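Schematically, embedding the physics in the loss takes the generic PINN form below, with u_θ the modeled surface velocity, h_θ the predicted depth, and R_SKM the residual of the Shiono-Knight equation (a generic PINN structure; the talk's specific residual and weighting are not reproduced here):

```latex
\mathcal{L}(\theta)\;=\;
\underbrace{\frac{1}{N}\sum_{i=1}^{N}\bigl\|u_{\theta}(x_i)-u^{\mathrm{obs}}_{i}\bigr\|^{2}}_{\text{data misfit (UAV surface velocity)}}
\;+\;\lambda\,
\underbrace{\frac{1}{M}\sum_{j=1}^{M}\bigl|\mathcal{R}_{\mathrm{SKM}}\bigl[u_{\theta},h_{\theta}\bigr](x_j)\bigr|^{2}}_{\text{physics residual (SKM equation)}} .
```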
11:20 - 11:35
Adewale F. Lukman, Department of Mathematics & Statistics, University of North Dakota
This talk introduces the Broken Adaptive Liu-Type (BALT) estimator, a novel penalized regression method for accurate estimation and variable selection in high-dimensional settings. BALT combines adaptive shrinkage with a broken weighting mechanism, enabling differential regularization across parameter subspaces to achieve both sparsity and stability. By extending the Liu-type framework, BALT simultaneously controls multicollinearity and shrinks negligible coefficients toward zero while selectively retaining relevant predictors. We establish its oracle property and grouping effect under general non-orthogonal designs, providing strong theoretical guarantees. Simulation studies show that BALT consistently attains lower prediction error, improved estimation accuracy, and more parsimonious models than leading alternatives, including the Lasso, Elastic Net, Ridge, Broken Adaptive Ridge (BAR), and ℓ0-based procedures. Applications to prostate cancer, diabetes, and riboflavin gene expression datasets further demonstrate BALT’s superior performance and interpretability. Overall, BALT delivers a new, flexible, and computationally efficient tool for robust sparse modeling in complex high-dimensional regression problems.
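For context, the classical Liu estimator that Liu-type methods generalize is usually stated as (standard form; BALT's own penalty is not given in the abstract):

```latex
\hat{\beta}_{d} \;=\; \bigl(X^{\top}X + I_{p}\bigr)^{-1}\bigl(X^{\top}y + d\,\hat{\beta}_{\mathrm{OLS}}\bigr),
\qquad 0 < d < 1,
```

where the biasing parameter d trades bias against the variance inflation caused by multicollinearity in X⊤X.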
11:35 - 11:50
Manu Manu, Department of Biology, University of North Dakota
Cell-fate decisions during development are controlled by densely interconnected gene regulatory networks (GRNs) consisting of many genes. Inferring and predictively modeling these GRNs is crucial for understanding development and other physiological processes. Gene circuits, coupled differential equations that represent gene product synthesis with a switch-like function, provide a biologically realistic framework for modeling the time evolution of gene expression. The interactions between genes in a GRN are inferred by fitting the gene circuit model to gene expression time series data. Conventional approaches, such as Parallel Lam Simulated Annealing, rely on solving the ODEs and yield excellent fits to data but are computationally very expensive. We present Fast Inference of Gene Regulation, a novel classification-based inference approach, that is significantly faster than simulated annealing. The agreement of inferred GRNs with independent empirical data and the predictive ability of the gene circuits will also be discussed.
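The gene circuit formalism referred to above is typically written in the following standard form from the literature (symbols assumed here: v_a is the product concentration of gene a, and g is a sigmoid switch function):

```latex
\frac{dv_a}{dt} \;=\; R_a\, g\!\Bigl(\sum_{b} T_{ab}\, v_b + h_a\Bigr) \;-\; \lambda_a\, v_a ,
```

where R_a is the maximum synthesis rate, T_ab encodes the regulatory effect of gene b on gene a, h_a is a threshold/bias term, and λ_a is the decay rate; inference amounts to fitting T, R, h, and λ to expression time series.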