AI tools and agents are reshaping how researchers work, from proving theorems to training neural networks. Yet for many, it remains unclear how these tools fit into everyday research practice. This paper is a practical guide to AI-assisted research in mathematics and machine learning: We discuss how researchers can use modern AI systems productively, where these systems help most, and what kinds of guardrails are needed to use them responsibly. It is organized into three parts: (I) a five-level taxonomy of AI integration, (II) an open-source framework that, through a set of methodological rules formulated as agent prompts, turns CLI coding agents (e.g., Claude Code, Codex CLI, OpenCode) into autonomous research assistants, and (III) case studies from deep learning and mathematics. The framework runs inside a sandboxed container, works with any frontier LLM through existing CLI agents, is simple enough to install and use within minutes, and scales from personal-laptop prototyping to multi-node, multi-GPU experimentation across compute clusters. In practice, our longest autonomous session ran for over 20 hours, dispatching independent experiments across multiple nodes without human intervention. We stress that our framework is not intended to replace the researcher in the loop, but to augment them.
We present a constructive lower bound of Ω(1/√ε) on the iteration complexity of Frank-Wolfe (FW) when both the objective and the constraint set are smooth and strongly convex, showing that the known uniform O(1/√ε) guarantees in this regime are tight. It is known that under additional assumptions on the position of the optimizer, FW can converge linearly. However, it remained unclear whether strong convexity of the set can yield rates uniformly faster than O(1/√ε), i.e., irrespective of the position of the optimizer. To investigate this question, we focus on a simple yet representative problem class: minimizing a strongly convex quadratic over the Euclidean unit ball, with the optimizer on the boundary. We analyze the dynamics of FW for this problem in detail and develop a novel computational approach to constructing worst-case FW trajectories, which is of independent interest. Guided by these constructions, we develop an analytical proof establishing the lower bound.
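To make the setting concrete, here is a minimal Python sketch of vanilla FW on this problem class: over the Euclidean unit ball, the linear minimization oracle is simply -g/‖g‖ for gradient g. The quadratic f(x) = ½xᵀAx - bᵀx and the random instance below are our illustration, not the paper's worst-case construction.

```python
import numpy as np

def frank_wolfe_ball(A, b, steps=1000):
    """Vanilla FW for min_x 0.5*x^T A x - b^T x over the Euclidean unit ball."""
    x = np.zeros(A.shape[0])
    gaps = []
    for t in range(steps):
        g = A @ x - b                    # gradient of the quadratic
        norm_g = np.linalg.norm(g)
        if norm_g < 1e-12:               # optimizer in the interior; done
            break
        v = -g / norm_g                  # LMO over the unit ball
        gaps.append(g @ (x - v))         # FW duality gap
        gamma = 2.0 / (t + 2)            # standard open-loop step size
        x = (1 - gamma) * x + gamma * v
    return x, gaps

# Illustrative instance: strongly convex quadratic; a large b typically
# pushes the unconstrained optimum outside the ball, so the constrained
# optimizer lies on the boundary.
rng = np.random.default_rng(0)
d = 20
M = rng.standard_normal((d, d))
A = M @ M.T + np.eye(d)                  # positive definite Hessian
b = 10 * rng.standard_normal(d)
x, gaps = frank_wolfe_ball(A, b)
```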
We investigate the extent to which an LLM’s hidden-state geometry can be recovered from its behavior in psycholinguistic experiments. Across eight instruction-tuned transformer models, we run two experimental paradigms, similarity-based forced choice and free association, over a shared 5,000-word vocabulary, collecting 17.5M+ trials to build behavior-based similarity matrices. Using representational similarity analysis, we compare behavioral geometries to layerwise hidden-state similarity and benchmark against FastText, BERT, and cross-model consensus. We find that forced-choice behavior aligns substantially more with hidden-state geometry than free association does. In a held-out-words regression, behavioral similarity (especially forced choice) predicts unseen hidden-state similarities beyond lexical baselines and cross-model consensus, indicating that behavior-only measurements retain recoverable information about internal semantic geometry. Finally, we discuss implications for the ability of behavioral tasks to uncover hidden cognitive states.
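As a sketch of the core comparison, representational similarity analysis here reduces to correlating the off-diagonal entries of two word-by-word similarity matrices. The function below is a minimal illustration (names are ours), assuming both matrices are indexed by the same shared vocabulary.

```python
import numpy as np
from scipy.stats import spearmanr

def rsa_score(S_behavior, S_hidden):
    """Spearman correlation between the upper triangles of two
    word-by-word similarity matrices over the same vocabulary."""
    iu = np.triu_indices_from(S_behavior, k=1)  # off-diagonal pairs only
    rho, _ = spearmanr(S_behavior[iu], S_hidden[iu])
    return rho
```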
The resource requirements of neural networks can be significantly reduced through pruning: the removal of seemingly less important parameters. However, for LLMs, full retraining to recover pruning-induced performance degradation is often prohibitive, and classical approaches such as magnitude pruning are suboptimal on Transformers. State-of-the-art methods hence solve a layer-wise mask selection problem: finding a pruning mask that minimizes the per-layer pruning error on a small set of calibration data. Exactly solving this problem is computationally infeasible due to its combinatorial nature and the size of the search space, and existing approaches rely on approximations or heuristics. We demonstrate that the mask selection problem can be made drastically more tractable at LLM scale. To that end, we decouple the rows by enforcing equal sparsity levels per row. This allows us to derive optimal 1-swaps (exchanging one kept and one pruned weight) that are efficiently computable via the Gram matrix. We propose a simple 1-swap algorithm that warmstarts from any pruning mask, runs efficiently on GPUs at LLM scale, and is essentially hyperparameter-free. Our approach reduces the per-layer pruning error by up to 60% over Wanda and consistently improves perplexity and zero-shot accuracy across state-of-the-art GPT architectures.
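To illustrate the mechanics, the following is a simplified per-row sketch of evaluating 1-swaps via the Gram matrix G = XᵀX of the calibration data. It assumes pruned weights are simply zeroed (no reoptimization of kept weights), so the per-row error is dᵀGd with d = (1 - mask) ⊙ w; the double loop is purely illustrative and far from the GPU-efficient algorithm in the paper.

```python
import numpy as np

def best_one_swap(G, w, mask):
    """Find and apply the best error-decreasing 1-swap for one row.

    Simplified error model: err(mask) = d^T G d with d = (1 - mask) * w.
    Swapping kept i with pruned j changes the error by
      2*(w_i*(Gd)_i - w_j*(Gd)_j) + w_i^2*G_ii + w_j^2*G_jj - 2*w_i*w_j*G_ij.
    """
    d = (1 - mask) * w                      # pruned-away part of the row
    Gd = G @ d
    best_delta, best_i, best_j = 0.0, None, None
    for i in np.flatnonzero(mask):          # candidates to prune
        for j in np.flatnonzero(1 - mask):  # candidates to keep
            delta = (2 * (w[i] * Gd[i] - w[j] * Gd[j])
                     + w[i] ** 2 * G[i, i] + w[j] ** 2 * G[j, j]
                     - 2 * w[i] * w[j] * G[i, j])
            if delta < best_delta:
                best_delta, best_i, best_j = delta, i, j
    if best_i is not None:                  # apply the best improving swap
        mask[best_i], mask[best_j] = 0, 1
    return mask, best_delta
```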
Pruning is a common technique to reduce the compute and storage requirements of Neural Networks. While conventional approaches typically retrain the model to recover pruning-induced performance degradation, state-of-the-art Large Language Model (LLM) pruning methods operate layer-wise, minimizing the per-layer pruning error on a small calibration dataset to avoid full retraining, which is considered computationally prohibitive for LLMs. However, finding the optimal pruning mask is a hard combinatorial problem, and solving it to optimality is intractable. Existing methods hence rely on greedy heuristics that ignore the weight interactions in the pruning objective. In this work, we instead consider the convex relaxation of these combinatorial constraints and solve the resulting problem using the Frank-Wolfe (FW) algorithm. Our method drastically reduces the per-layer pruning error, outperforms strong baselines on state-of-the-art GPT architectures, and remains memory-efficient. We provide theoretical justification by showing that, thanks to the convergence guarantees of the FW algorithm, rounding the relaxed solution to integrality yields an approximate solution to the original combinatorial problem.
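A minimal sketch of the relaxation-plus-FW template described above: relax binary masks to m ∈ [0,1]^n with at most k kept weights, for which the linear minimization oracle just activates the most negative gradient coordinates. The callback `grad_fn` (standing in for the gradient of the per-layer pruning error) and all names are illustrative assumptions, and the relaxed solution must still be rounded to a binary mask.

```python
import numpy as np

def lmo_mask(c, k):
    """LMO over {m in [0,1]^n : sum(m) <= k} for the linear objective c^T m:
    set m = 1 on the (at most) k most negative coordinates of c."""
    v = np.zeros_like(c)
    idx = np.argsort(c)[:k]
    v[idx[c[idx] < 0]] = 1.0
    return v

def frank_wolfe_relaxed_mask(grad_fn, n, k, steps=200):
    """Vanilla FW on the relaxed mask-selection problem (illustrative)."""
    m = np.full(n, k / n)                # feasible starting point
    for t in range(steps):
        v = lmo_mask(grad_fn(m), k)
        gamma = 2.0 / (t + 2)
        m = (1 - gamma) * m + gamma * v
    return m                             # round to {0,1} afterwards
```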
We introduce a Riemannian optimistic online learning algorithm for Hadamard manifolds based on inexact implicit updates. Unlike prior work, our method can handle in-manifold constraints and matches the best known regret bounds in the Euclidean setting with no dependence on geometric constants such as the minimum curvature. Building on this, we develop algorithms for g-convex, g-concave smooth min-max problems on Hadamard manifolds. Notably, one of our methods is the first to nearly match the gradient oracle complexity of the corresponding Euclidean lower bound.
Post-training pruning substantially reduces inference costs, but without adapting the remaining weights it often causes severe quality degradation. For LLMs, such retraining is commonly considered impractical due to large computational costs, motivating increasingly sophisticated pruning criteria that compensate by selecting better sparsity patterns. In this work, we revisit post-pruning adaptation and study local reconstruction: adapting only a small pruned submodel at a time using a small calibration set by matching the intermediate activations of the dense model. We conduct a large-scale study across model families and scales (up to 72B parameters) and establish three central results. First, local reconstruction is an effective adaptation mechanism for LLMs, matching post-pruning parameter-efficient fine-tuning (PEFT) while using over an order of magnitude less data and compute. Second, we identify a broad "free lunch" regime in reconstruction granularity: across a wide range of submodel sizes, final quality remains essentially unchanged, allowing the granularity to be chosen based on memory constraints. Finally, with reconstruction, the pruning criterion becomes less critical: performance gaps between sophisticated methods and simple baselines shrink with model size, making simple methods competitive again. Collectively, our results challenge the prevailing narrative that post-pruning adaptation is impractical for LLMs.
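For intuition, local reconstruction reduces, in the simplest linear-layer case, to a least-squares fit: choose the kept weights so the pruned output matches the dense output on the calibration activations. The single-row sketch below is our illustration under that simplification; the paper reconstructs whole submodels by matching intermediate activations.

```python
import numpy as np

def reconstruct_row(X, w_dense, keep):
    """Least-squares reconstruction of one pruned row.

    X:        calibration activations, shape (n_samples, n_features)
    w_dense:  dense weight row, shape (n_features,)
    keep:     indices of the weights kept by the pruning mask
    """
    y = X @ w_dense                          # dense output to match
    w_keep, *_ = np.linalg.lstsq(X[:, keep], y, rcond=None)
    w = np.zeros_like(w_dense)
    w[keep] = w_keep                         # adapted kept weights, rest zero
    return w
```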
In this work, we study optimization problems of the form min_x max_y f(x, y), where f(x, y) is defined on a product Riemannian manifold and is strongly geodesically convex (g-convex) in x and strongly g-concave in y. We design accelerated methods for the case where f is smooth and the manifolds are Hadamard. To that end, we introduce new g-convex optimization results of independent interest: we show global linear convergence for metric-projected Riemannian gradient descent and improve existing accelerated methods by reducing geometric constants. Additionally, we complete the analysis of two previous works on the Riemannian min-max case by removing the assumption that the iterates stay in a pre-specified compact set.
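For reference, the metric-projected Riemannian gradient descent step analyzed here has the standard form below (notation is ours: exp is the exponential map, d the geodesic distance, and P_X the metric projection onto the g-convex feasible set X).

```latex
x_{k+1} = P_{\mathcal{X}}\big(\exp_{x_k}(-\eta \,\operatorname{grad} f(x_k))\big),
\qquad
P_{\mathcal{X}}(y) = \operatorname*{arg\,min}_{x \in \mathcal{X}} d(x, y).
```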
Federated Learning (FL) algorithms using Knowledge Distillation (KD) have received increasing attention due to their favorable properties with respect to privacy, non-i.i.d. data, and communication cost. These methods depart from transmitting model parameters and instead communicate information about a learning task by sharing predictions on a public dataset. In this work, we study the performance of such approaches in the byzantine setting, where a subset of the clients act adversarially, aiming to disrupt the learning process. We show that KD-based FL algorithms are remarkably resilient and analyze how byzantine clients can influence the learning process. Based on these insights, we introduce two new byzantine attacks and demonstrate their ability to break existing byzantine-resilient methods. Additionally, we propose a novel defence method that enhances the byzantine resilience of KD-based FL algorithms. Finally, we provide a general framework to obfuscate attacks, making them significantly harder to detect and thereby more effective. Our findings serve as a building block for the analysis of byzantine FL, contributing new attacks and new defence mechanisms that further advance the robustness of KD-based FL algorithms.
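To make the communication pattern concrete: in KD-based FL the server aggregates client predictions on the public dataset, so a generic robust-aggregation baseline replaces the mean with a coordinate-wise median. This sketch is our illustration of that baseline, not the defence proposed in the paper.

```python
import numpy as np

def robust_aggregate(client_logits):
    """Coordinate-wise median of client predictions on the public dataset,
    a generic byzantine-robust baseline (not the paper's defence).

    client_logits: shape (n_clients, n_public_samples, n_classes).
    """
    return np.median(client_logits, axis=0)
```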
In this work, we analyze two of the most fundamental algorithms in geodesically convex optimization: Riemannian gradient descent and (possibly inexact) Riemannian proximal point. We quantify their rates of convergence and produce different variants with several trade-offs. Crucially, we show that the iterates naturally stay in a ball around an optimizer whose radius depends on the initial distance and, in some cases, on the curvature. In contrast, except for limited cases, previous works bounded the maximum distance between iterates and an optimizer only by assumption, leading to incomplete analyses and unquantified rates. We also provide an implementable inexact proximal point algorithm that yields new results on min-max problems, and we prove several new useful properties of Riemannian proximal methods: they work in the presence of positive curvature, the proximal operator does not move points away from any optimizer, and we quantify the smoothness of its induced Moreau envelope. Further, we explore settings beyond our theory with empirical tests.
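For reference, the Riemannian proximal operator and its Moreau envelope discussed above take the standard form below, where d is the geodesic distance on the manifold M.

```latex
\operatorname{prox}_{\lambda f}(x) \in \operatorname*{arg\,min}_{y \in M}
  \Big\{ f(y) + \tfrac{1}{2\lambda}\, d(x, y)^2 \Big\},
\qquad
f_{\lambda}(x) = \min_{y \in M} \Big\{ f(y) + \tfrac{1}{2\lambda}\, d(x, y)^2 \Big\}.
```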
Several learning problems involve solving min-max problems, e.g., empirical distributionally robust learning or learning with non-standard aggregated losses. More specifically, these are convex-linear problems where the minimization is carried out over the model parameters and the maximization over the empirical distribution of the training-set indexes, with the feasible set being the simplex or a subset of it. To design efficient methods, we let an online learning algorithm play against a (combinatorial) bandit algorithm. We argue that the efficiency of such approaches critically depends on the structure of the feasible set, and we propose two properties that facilitate the design of efficient algorithms. We focus on a specific family of sets encompassing various learning applications and provide high-probability convergence guarantees to the minimax values.
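A minimal sketch of the game described above: the minimization player runs online gradient descent on the model parameters while the maximization player runs EXP3 over the n training-set indexes (simplex feasible set only; exploration mixing and boundedness details are omitted). The callbacks `loss_i` and `grad_i`, returning the i-th sample's loss and its gradient, are assumptions for illustration.

```python
import numpy as np

def ogd_vs_exp3(loss_i, grad_i, x0, n, T, eta_x=0.1, eta_p=0.01):
    """Min player: OGD on parameters. Max player: EXP3 over indexes.
    loss_i(x, i) and grad_i(x, i) are assumed user-supplied callbacks."""
    x = x0.copy()
    logw = np.zeros(n)                   # EXP3 log-weights over indexes
    rng = np.random.default_rng(0)
    for _ in range(T):
        p = np.exp(logw - logw.max())
        p /= p.sum()                     # current sampling distribution
        i = rng.choice(n, p=p)           # bandit player samples an index
        x = x - eta_x * grad_i(x, i)     # min player descends on that loss
        logw[i] += eta_p * loss_i(x, i) / p[i]  # importance-weighted reward
    return x, p
```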
Linear bandit algorithms enjoy pseudo-regret bounds on compact convex action sets, and two types of structural assumptions are known to yield better bounds: when the action set is the simplex or an ℓ_p ball, bandit algorithms with improved pseudo-regret bounds exist. Here, we derive bandit algorithms for some strongly convex sets beyond ℓ_p balls that enjoy improved pseudo-regret bounds, answering an open question. Interestingly, when the action set is uniformly convex but not necessarily strongly convex, we obtain pseudo-regret bounds with a dimension dependency smaller than the standard rate. However, this comes at the expense of asymptotic rates that vary between the optimal and the linear rate.