Fields: Machine Learning, Statistical Physics, Information Theory, Neuroscience
Grokking — the phenomenon where a neural network suddenly transitions from memorisation to generalisation after a long plateau — exhibits sharp, non-analytic changes in the effective dimensionality of...
Fields: Machine Learning, Statistical Physics, Condensed Matter Physics
The renormalization group (RG) in statistical physics is a systematic procedure for integrating out short-scale degrees of freedom while preserving long-wavelength behavior, flowing toward fixed point...
Fields: Astronomy, Machine Learning, Space Physics
Speculative analogy (to be empirically validated): Neural-operator surrogates for coupled plasma dynamics can be integrated into sequential data-assimilation loops similarly to reduced-order forecast ...
Fields: Biology, Computer Science, Immunology, Machine Learning
The adaptive immune system's negative selection process (deleting T-cells that recognize self-antigens in the thymus) is computationally equivalent to one-class classification and anomaly detection; t...
Fields: Biology, Machine Learning, Systems Biology
Speculative analogy (to be empirically validated): Message passing over learned gene graphs can act as a computational analogue to mechanistic regulatory propagation assumptions used in perturbation-r...
Fields: Biology, Mathematics, Evolutionary Biology, Game Theory, Population Genetics, Machine Learning
The replicator equation, derived independently in evolutionary biology, game theory, and learning theory, is: ẋᵢ = xᵢ (fᵢ(x) - f̄(x)) where xᵢ is the frequency of strategy i, fᵢ(x) = Σⱼ aᵢⱼ xⱼ is ...
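The dynamics in the entry above can be simulated directly. A minimal NumPy sketch with a hypothetical 2×2 payoff matrix (Euler integration; values are illustrative, not from the source):

```python
import numpy as np

def replicator_step(x, A, dt=0.01):
    """One Euler step of dx_i/dt = x_i (f_i(x) - fbar(x)) with f(x) = A x."""
    f = A @ x          # fitness of each strategy
    fbar = x @ f       # mean population fitness
    return x + dt * x * (f - fbar)

# Hypothetical payoff matrix in which strategy 0 strictly dominates strategy 1.
A = np.array([[2.0, 2.0],
              [1.0, 1.0]])
x = np.array([0.5, 0.5])
for _ in range(2000):
    x = replicator_step(x, A)
# The dominated strategy's frequency decays toward 0; the simplex is preserved.
```

Note the simplex is invariant by construction: the increments sum to zero because Σᵢ xᵢ(fᵢ − f̄) = f̄ − f̄ = 0.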
Fields: Biostatistics, Machine Learning, Medicine
Speculative analogy (to be empirically validated): Monte Carlo dropout predictive uncertainty can inform adaptive stopping boundaries similarly to posterior predictive criteria in Bayesian trial monit...
Fields: Chemistry, Machine Learning, Materials Science
Speculative analogy (to be empirically validated): VAE latent manifolds can compress catalyst structural descriptors into smooth generative coordinates that support guided exploration of activity-sele...
Fields: Climate Science, Machine Learning, Statistics
Speculative analogy (to be empirically validated): Reverse-diffusion sampling can act as a controllable stochastic refinement operator analogous to ensemble post-processing used to downscale and debia...
Fields: Cognitive Science, Physics, Neuroscience, Machine Learning, Thermodynamics, Theoretical Biology
Friston (2010) proposed that all biological self-organisation can be understood as the minimisation of variational free energy F, where: F = E_q[log q(s)] − E_q[log p(s,o)] = KL[q(s) || p(s|o)]...
Fields: Machine Learning, Neuroscience, Computational Neuroscience
Attention weights are a_ij = softmax_j(q_i · k_j / √d): nonnegative, sum-to-one over j for fixed i, resembling a divisive normalization across locations/channels after an expansive nonlinearity (exp)....
Fields: Computer Science, Neuroscience, Cognitive Science, Machine Learning, Computational Neuroscience
The transformer attention mechanism (Vaswani et al. 2017): Attention(Q, K, V) = softmax(QKᵀ / √d_k) V operates on queries Q, keys K, and values V. Each output position attends to all input positio...
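The formula above is small enough to verify numerically. A minimal sketch (shapes and data are illustrative) showing that each output position is a convex combination of value vectors:

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    w = np.exp(scores)
    w /= w.sum(axis=-1, keepdims=True)            # each row sums to one
    return w @ V, w

rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))   # 4 query positions, d_k = 8
K = rng.normal(size=(6, 8))   # 6 key positions
V = rng.normal(size=(6, 3))   # values of dimension 3
out, weights = attention(Q, K, V)
# weights is 4x6, nonnegative, rows sum to 1; out is 4x3.
```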
Fields: Machine Learning, Statistical Physics, Computer Science, Information Theory
Energy-based models assign low energy to plausible configurations; training shapes the energy landscape so that data lie in wells. Contrastive objectives such as InfoNCE reweight logits of positive ve...
Fields: Computer Science, Theoretical Machine Learning, Statistics, Statistical Physics, Information Theory
PAC (Probably Approximately Correct) learning theory (Valiant 1984) provides a mathematical framework for when a learning algorithm can generalise from training data to unseen examples. A concept clas...
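For a finite hypothesis class in the realizable setting, a standard textbook sample-complexity bound (assumed here; the source entry is truncated before stating one) is m ≥ (1/ε)(ln|H| + ln(1/δ)). A one-function sketch:

```python
import numpy as np

def pac_sample_bound(H_size, eps, delta):
    """Samples sufficient for error <= eps with prob >= 1 - delta
    (finite class, realizable case): ceil((ln|H| + ln(1/delta)) / eps)."""
    return int(np.ceil((np.log(H_size) + np.log(1 / delta)) / eps))

# e.g. a class of 2^20 hypotheses, 5% error, 99% confidence
m = pac_sample_bound(H_size=2**20, eps=0.05, delta=0.01)
```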
Fields: Computer Science, Statistics, Machine Learning, Computational Physics
Parallel tempering mitigates trapping in rugged posterior landscapes by swapping chains across temperature levels. The method is established in molecular simulation and increasingly relevant for Bayes...
Fields: Statistics, Computer Science, Machine Learning, Applied Mathematics
Ordinary least squares minimizes squared error; adding an L2 penalty pulls coefficients toward zero, stabilizing ill-conditioned designs by trading bias for variance. Equivalently, with Gaussian likel...
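The bias-variance trade described above can be seen in the closed form. A sketch (synthetic near-collinear design, illustrative values) where the L2 penalty stabilizes an ill-conditioned fit:

```python
import numpy as np

def ridge(X, y, lam):
    """Closed-form ridge: (X^T X + lam I)^{-1} X^T y.
    With Gaussian likelihood, this is the MAP estimate under a Gaussian prior."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 5))
X[:, 1] = X[:, 0] + 1e-6 * rng.normal(size=50)   # near-collinear columns
y = X @ np.array([1.0, 1.0, 0.0, 0.0, 0.0]) + 0.1 * rng.normal(size=50)

b_ols = ridge(X, y, 0.0)   # lam = 0 recovers ordinary least squares
b_l2 = ridge(X, y, 1.0)    # penalized coefficients stay bounded
```

With lam = 0 the near-singular Gram matrix lets noise blow the collinear coefficients up; the penalty shrinks them back at the cost of a small bias.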
Fields: Critical Care, Machine Learning, Stochastic Processes
Speculative analogy (to be empirically validated): neural CDEs translate irregularly sampled physiologic streams into continuous control paths, mirroring how rough-path summaries preserve temporal sig...
Fields: Computer Science, Mathematics, Machine Learning
Graph convolutional networks perform convolution in the spectral domain of the graph Laplacian; filters are polynomials of eigenvalues (spectral filters), and message passing is equivalent to diffusio...
Fields: Computer Science, Mathematics, Dynamical Systems, Machine Learning
Neural ordinary differential equations (Chen et al. 2018) define network depth as continuous time in an ODE system dh/dt = f(h,t,theta); the network learns a vector field whose flow map transforms inp...
Fields: Ecology, Machine Learning, Agriculture
Speculative analogy (to be empirically validated): Transformer attention over multi-scale canopy imagery can act as a surrogate for agronomic context integration used to infer emergent crop stress pat...
Fields: Economics, Machine Learning, Statistics
Speculative analogy (to be empirically validated): Causal forests can operationalize localized elasticity estimation similarly to structural policy analyses that segment populations by marginal respon...
Fields: Engineering, Machine Learning, Power Systems
Speculative analogy (to be empirically validated): Graph-transformer attention can approximate contingency ranking functions similarly to fast security-assessment heuristics derived from network sensi...
Fields: Engineering, Mathematics, Optimization, Convex Analysis, Machine Learning
Gradient descent x_{t+1} = x_t - η∇f(x_t) converges at rate O(1/t) for L-smooth convex f (Lipschitz gradient, ‖∇f(x)-∇f(y)‖ ≤ L‖x-y‖) and at rate O(exp(-μt/L)) for μ-strongly convex f (where μ = σ_min...
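The linear (geometric) rate for the strongly convex case is easy to observe on a quadratic. A minimal sketch with an assumed diagonal quadratic giving μ = 1, L = 10:

```python
import numpy as np

def gradient_descent(grad, x0, eta, steps):
    """Iterates x_{t+1} = x_t - eta * grad(x_t)."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x = x - eta * grad(x)
    return x

# f(x) = 0.5 x^T A x with eigenvalues mu = 1, L = 10; eta = 1/L.
A = np.diag([1.0, 10.0])
grad = lambda x: A @ x
x = gradient_descent(grad, [1.0, 1.0], eta=0.1, steps=200)
# Each step shrinks the slow coordinate by (1 - mu/L) = 0.9,
# so the error after t steps is ~0.9^t, i.e. O(exp(-mu t / L)).
```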
Fields: Epidemiology, Machine Learning, Distributed Systems
Speculative analogy (to be empirically validated): FedAvg-style decentralized optimization can combine geographically distributed surveillance models while preserving local governance constraints and ...
Fields: Geophysics, Geostatistics, Statistics, Machine Learning, Spatial Analysis
Kriging (Krige 1951, formalised by Matheron 1963) is the minimum-variance linear unbiased estimator for spatially correlated data: Ẑ(x₀) = Σᵢ λᵢZ(xᵢ), where the optimal weights λᵢ are determined by so...
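The optimal weights λᵢ come from a small linear system. A sketch of ordinary kriging in 1D, assuming (for illustration only) an exponential covariance model:

```python
import numpy as np

def ordinary_kriging_weights(X, x0, cov):
    """Solve the ordinary-kriging system: C lambda + mu 1 = c0, sum(lambda) = 1.
    The Lagrange multiplier mu enforces unbiasedness."""
    n = len(X)
    C = cov(np.abs(X[:, None] - X[None, :]))   # covariances between data sites
    c0 = cov(np.abs(X - x0))                   # covariances to the target site
    M = np.zeros((n + 1, n + 1))
    M[:n, :n] = C
    M[:n, n] = 1.0
    M[n, :n] = 1.0
    rhs = np.append(c0, 1.0)
    sol = np.linalg.solve(M, rhs)
    return sol[:n]                             # the lambda weights

cov = lambda h: np.exp(-h)        # assumed exponential covariance model
X = np.array([0.0, 1.0, 3.0])     # data locations
w = ordinary_kriging_weights(X, x0=0.9, cov=cov)
# Weights sum to one; the site nearest the target receives the largest weight.
```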
Fields: Geoscience, Machine Learning, Remote Sensing
Speculative analogy (to be empirically validated): encoder-decoder skip architectures developed for biomedical segmentation transfer to flood delineation by preserving fine boundary detail while integ...
Fields: Immunology, Machine Learning, Bioinformatics
Speculative analogy (to be empirically validated): Large-scale protein sequence pretraining may transfer contextual representations to TCR-antigen binding tasks similarly to repertoire-level priors us...
Fields: Infectious Disease, Machine Learning, Structural Biology
Speculative analogy (to be empirically validated): masked-autoencoder pretraining on molecular imagery can learn reconstruction priors that improve low-SNR cryo-EM downstream tasks without requiring e...
Fields: Information Theory, Computational Linguistics, Machine Learning
Shannon–McMillan–Breiman asymptotic equipartition implies typical sequences carry ~nh bits per n symbols for ergodic processes with entropy rate h. Neural language models minimize average negative log...
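The AEP statement above can be checked empirically for the simplest ergodic process, an i.i.d. Bernoulli source: per-symbol negative log-likelihood under the true model concentrates on the entropy rate h. A sketch with illustrative parameters:

```python
import numpy as np

# IID binary source with p = 0.8; entropy rate h = H(0.8) bits/symbol.
p = 0.8
h = -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

rng = np.random.default_rng(4)
seq = rng.random(100_000) < p    # sample a long (typical) sequence

# Average negative log-likelihood per symbol under the true model.
nll = -np.mean(np.where(seq, np.log2(p), np.log2(1 - p)))
# By the AEP, nll -> h as the sequence length grows.
```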
Fields: Materials Science, Machine Learning, Chemistry
Speculative analogy (to be empirically validated): Bayesian-optimization acquisition policies can function as adaptive design rules analogous to sequential alloy-screening heuristics in autonomous mat...
Fields: Mathematics, Quantum Physics, Neuroscience, Machine Learning, Computational Neuroscience
Tensor networks (TN) are graphical representations of high-dimensional arrays in which each tensor is a node and contractions between shared indices are edges. Matrix product states (MPS) represent a ...
Fields: Mathematics, Approximation Theory, Computer Science, Machine Learning
Universal approximation theorem (Cybenko 1989, Hornik et al. 1989): a feedforward neural network with one hidden layer and sufficient neurons can approximate any continuous function on a compact domai...
Fields: Mathematics, Computer Science, Signal Processing, Machine Learning
The convolution theorem states that convolution becomes pointwise multiplication in the Fourier domain (with appropriate boundary conditions). CNNs implement spatial convolution with learned kernels, ...
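The theorem is directly checkable with circular (periodic) boundary conditions, where it holds exactly: direct circular convolution matches pointwise multiplication of FFTs. A minimal sketch:

```python
import numpy as np

def circular_conv(x, k):
    """Direct circular convolution (periodic boundary assumed)."""
    n = len(x)
    return np.array([sum(x[(i - j) % n] * k[j] for j in range(n))
                     for i in range(n)])

rng = np.random.default_rng(2)
x, k = rng.normal(size=8), rng.normal(size=8)
direct = circular_conv(x, k)
via_fft = np.fft.ifft(np.fft.fft(x) * np.fft.fft(k)).real
# direct and via_fft agree to floating-point precision.
```

CNNs use small spatial kernels with non-periodic padding, so the correspondence there is approximate, but the underlying algebra is the same.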
Fields: Statistics, Machine Learning, Computer Science
The bridge makes the frequentist penalty/Bayesian prior equivalence explicit for model selection under correlated designs. It is useful for calibrating regularization paths, but posterior uncertainty ...
Fields: Machine Learning, Combinatorics, Computer Science
Message-passing graph neural networks (MPGNNs) are at most as powerful as the 1-Weisfeiler-Lehman (1-WL) color refinement algorithm: two graphs that 1-WL cannot distinguish will be assigned identical ...
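The 1-WL ceiling can be demonstrated in a few lines: color refinement assigns identical color multisets to the 6-cycle and to two disjoint triangles (non-isomorphic, but both 2-regular), so no MPGNN can tell them apart either. A sketch:

```python
def wl_colors(adj, rounds=3):
    """1-WL color refinement; returns the sorted multiset of final colors.
    adj maps each vertex to its neighbor list."""
    colors = {v: 0 for v in adj}
    for _ in range(rounds):
        # New signature: own color plus the multiset of neighbor colors.
        sig = {v: (colors[v], tuple(sorted(colors[u] for u in adj[v])))
               for v in adj}
        palette = {s: i for i, s in enumerate(sorted(set(sig.values())))}
        colors = {v: palette[sig[v]] for v in adj}
    return sorted(colors.values())

hexagon = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}        # C6
triangles = {0: [1, 2], 1: [0, 2], 2: [0, 1],                      # 2 x C3
             3: [4, 5], 4: [3, 5], 5: [3, 4]}
# Both graphs are 2-regular, so 1-WL never refines past a single color.
```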
Fields: Mathematics, Computer Science, Machine Learning, Linear Algebra
A deep neural network f(x) = σ(W_L · σ(W_{L-1} · ... · σ(W_1 x))) is architecturally a composition of linear maps (weight matrices Wᵢ ∈ ℝ^{n×m}) and pointwise nonlinearities. Backpropagation computes ...
Fields: Mathematics, Computer Science, Machine Learning
The bridge is pedagogical and formal at the level of density theorems: both results say an expressive algebra or network family can approximate continuous functions on compact domains. It does not imp...
Fields: Mathematics, Computer Science, Machine Learning
Kantorovich duality expresses W₁ as a supremum over 1-Lipschitz test functions; empirical WGAN critics approximate this supremum with neural nets, and gradient-penalty variants (Gulrajani et al.) dire...
Fields: Mathematics, Engineering, Computer Science, Machine Learning
Convex optimization: minimize f(x) subject to x in C (convex set). The Lagrangian L(x,lambda,mu) = f(x) + lambda^T h(x) + mu^T g(x) and the dual function d(lambda,mu) = inf_x L(x,lambda,mu) satisfy strong duality (pri...
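Strong duality can be verified by hand on a toy problem (chosen here for illustration; not from the source): minimize x² subject to 1 − x ≤ 0. The dual is obtained in closed form and its maximum matches the primal optimum:

```python
import numpy as np

# Toy problem: minimize f(x) = x^2 subject to 1 - x <= 0.
# Lagrangian: L(x, mu) = x^2 + mu * (1 - x), mu >= 0.
# inf over x is attained at x = mu / 2, giving the dual d(mu) = mu - mu^2 / 4.
mus = np.linspace(0.0, 4.0, 401)
dual_vals = mus - mus**2 / 4
dual_opt = dual_vals.max()        # maximized at mu* = 2, value 1

primal_opt = 1.0                  # x* = 1 is feasible and optimal (f(1) = 1)
gap = primal_opt - dual_opt       # strong duality: zero duality gap
```

Slater's condition holds (e.g. x = 2 is strictly feasible), which is what guarantees the zero gap for this convex problem.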
Fields: Mathematics, Game Theory, Evolutionary Biology, Machine Learning, Economics
Maynard Smith & Price (1973) showed that natural selection on heritable strategies tends toward evolutionarily stable strategies (ESS); every ESS is a Nash equilibrium (though not conversely) of the payoff game defined by fi...
Fields: Mathematics, Structural Biology, Medical Imaging, Machine Learning
Cryo-EM particle images sample continuous conformational variation; Laplacian eigenmaps provide a mathematically grounded coordinate system for this manifold. The bridge is strong but still partly spe...
Fields: Medical Imaging, Machine Learning, Inverse Problems
Speculative analogy (to be empirically validated): DDPM score fields can act as learned regularizers in MRI inverse problems, replacing hand-crafted priors while preserving fidelity constraints from s...
Fields: Medicine, Machine Learning, Health Informatics
Speculative analogy (to be empirically validated): self-attention can unify sparse longitudinal clinical events into context-aware risk representations similarly to flexible sequence transduction in l...
Fields: Network Science, Infectious Disease, Machine Learning
Speculative analogy (to be empirically validated): graph convolutional message passing can infer latent transmission linkage structure by integrating mobility, genomic, and contact-network signals und...
Fields: Neuroscience, Computer Science, Machine Learning
Literature alignment at the objective level—CPC trains representations to predict latent summaries across temporal or view splits using contrastive classification; speculative analogy for biology—brai...
Fields: Neuroscience, Computer Science, Machine Learning
Conceptual bridge (not a literal neural isomorphism): both traditions trade fidelity of retained information against complexity or redundancy constraints; speculative analogy for practice—IB-style obj...
Fields: Computational Neuroscience, Electrical Engineering, Neuromorphic Computing, Machine Learning
Biological neural computation uses action potentials (spikes): discrete, all-or-nothing pulses of ~100 mV amplitude and ~1 ms duration. Neurons transmit information via: 1. RATE CODING: firing rate r(...
Fields: Neuroscience, Mathematics, Statistical Mechanics, Machine Learning, Neural Networks, Memory Theory
Hopfield networks (1982): N binary neurons sᵢ ∈ {-1,+1} with symmetric weights Wᵢⱼ = (1/N)Σ_μ ξᵢ^μ ξⱼ^μ (Hebb rule) and dynamics sᵢ(t+1) = sgn(Σⱼ Wᵢⱼsⱼ(t)). Energy E = -½Σᵢⱼ Wᵢⱼsᵢsⱼ decreases monotonica...
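The Hebb rule, the sign dynamics, and the energy descent above fit in a short sketch (pattern count and sizes are illustrative; asynchronous updates are used because they guarantee non-increasing energy):

```python
import numpy as np

def hebb_weights(patterns):
    """Hebb rule W_ij = (1/N) sum_mu xi_i^mu xi_j^mu, zero diagonal."""
    N = patterns.shape[1]
    W = patterns.T @ patterns / N
    np.fill_diagonal(W, 0.0)
    return W

def energy(W, s):
    return -0.5 * s @ W @ s

rng = np.random.default_rng(3)
patterns = rng.choice([-1, 1], size=(2, 64)).astype(float)  # 2 patterns, N=64
W = hebb_weights(patterns)

s = patterns[0].copy()
s[:5] *= -1                      # corrupt 5 of 64 bits
e0 = energy(W, s)
for _ in range(3):               # asynchronous sign updates: energy descent
    for i in range(len(s)):
        s[i] = 1.0 if W[i] @ s >= 0 else -1.0
e1 = energy(W, s)
overlap = float(s @ patterns[0]) / len(s)   # +1 means exact recall
```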
Fields: Systems Neuroscience, Signal Processing, Machine Learning, Dimensionality Reduction, Computational Neuroscience
Modern Neuropixels probes record from 384–960 electrodes simultaneously, capturing spikes from hundreds of neurons. Spike sorting — attributing voltage deflections to individual neurons — proceeds as:...
Fields: Neuroscience, Statistical Mechanics, Machine Learning, Computational Neuroscience
Long short-term memory networks (Hochreiter & Schmidhuber 1997, 96 k citations) solve the vanishing gradient problem via gating mechanisms that selectively control information flow through time. Stati...
Fields: Neuroscience, Statistics, Signal Processing, Machine Learning, Electrophysiology
EXTRACELLULAR RECORDING MIXING MODEL: A recording electrode at position x measures a weighted sum of spike waveforms from N nearby neurons: y(t) = Σᵢ Aᵢ · sᵢ(t) + noise where Aᵢ = mixing matrix en...
Fields: Numerical Analysis, Physics, Scientific Machine Learning
Literature-backed methodology (SINDy family): sparse regression across candidate libraries can recover dynamical terms when noise and collinearity are controlled; speculative analogy for sparse sensin...
Fields: Oceanography, Machine Learning, Fluid Dynamics
Speculative analogy (to be empirically validated): Spectral neural surrogates can emulate energy-transfer dynamics across scales similarly to reduced spectral ocean models used for submesoscale foreca...
Fields: Pharmacology, Machine Learning, Dynamical Systems
Speculative analogy (to be empirically validated): continuous-time latent dynamics learned by neural ordinary differential equations can serve as constrained surrogates for compartmental PK models whe...
Fields: Philosophy Of Science, Information Theory, Mathematics, Statistics, Machine Learning
Kolmogorov (1965) defined the complexity K(x) of a string x as the length (in bits) of the shortest program on a universal Turing machine U that outputs x and halts. Solomonoff (1964) independently de...
Fields: Statistical Physics, Machine Learning, Information Theory
Deep neural networks undergo a series of phenomena that are strikingly described by the language of statistical physics phase transitions: 1. **Grokking (Power et al. 2022)**: a model trains to 100% t...
Fields: Physics, Computer Science, Machine Learning
Pedagogical bridge (widely discussed, contested as literal identification): layerwise feature transformations resemble iterative coarse-graining because both discard microscopic degrees of freedom whi...
Fields: Physics, Computer Science, Machine Learning
Established modeling correspondence: RBMs define bipartite energy functions whose Gibbs distribution parallels Boltzmann weights on interacting latent-visible spins up to representation choices; specu...
Fields: Statistical Physics, Neuroscience, Machine Learning
The Hopfield (1982) model of associative memory shares the mathematical structure of the Sherrington-Kirkpatrick spin glass: neuron states map to spins, Hebbian synaptic weights play the role of exchange couplings, and st...
Fields: Computer Science, Physics, Statistical Mechanics, Machine Learning
Variational Bayesian inference minimizes the variational free energy F = E[log q] - E[log p] (equivalent to maximizing the ELBO), which has the same mathematical form as the Helmholtz free energy F = U - TS in statisti...
Fields: Physics, Condensed Matter Physics, Computational Neuroscience, Machine Learning, Statistical Mechanics
The Hopfield network (1982) defines an energy function for a network of N binary neurons sᵢ ∈ {-1, +1} with symmetric weights Wᵢⱼ: E = -½ Σᵢ≠ⱼ Wᵢⱼ sᵢ sⱼ This is formally identical to the Ising spi...
Fields: Public Health, Machine Learning, Epidemiology
Speculative analogy (to be empirically validated): Learned surrogates of expensive agent-based epidemic simulations can support policy search similarly to reduced-form intervention response surfaces i...
Fields: Radiology, Machine Learning, Pathology
Speculative analogy (to be empirically validated): residual blocks that stabilize very deep optimization can also stabilize representation transfer under histopathology stain variability when coupled ...
Fields: Seismology, Machine Learning, Geophysics
Speculative analogy (to be empirically validated): Physics-informed neural-operator constraints can regularize aftershock field forecasts analogously to stress-transfer priors in statistical seismolog...
Fields: Machine Learning, Social Science, Mathematics, Law And Policy, Statistics
Algorithmic fairness seeks criteria that trained classifiers should satisfy to avoid discrimination. Three prominent criteria conflict when base rates differ across groups: (1) demographic parity P(Ŷ=...
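The core incompatibility can be shown with arithmetic alone: a perfectly accurate classifier's selection rate in each group equals that group's base rate, so demographic parity fails whenever base rates differ. A toy sketch (numbers are illustrative):

```python
# Hypothetical base rates P(Y=1 | group) for two groups.
base_rate = {"A": 0.5, "B": 0.2}

# A perfect classifier predicts Yhat = Y, so its selection rate
# P(Yhat=1 | group) equals the group's base rate exactly.
selection_rate = dict(base_rate)

# Demographic parity requires equal selection rates across groups;
# here the gap is the base-rate difference itself.
parity_gap = abs(selection_rate["A"] - selection_rate["B"])
```

The same classifier trivially satisfies equalized odds (TPR = 1, FPR = 0 in both groups), illustrating that the criteria pull in different directions when base rates differ.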
Fields: Statistics, Bayesian Inference, Physics, Statistical Mechanics, Machine Learning
The partition function in statistical mechanics Z = Σ_x exp(-E(x)/kT) normalizes the Boltzmann distribution P(x) = exp(-E(x)/kT)/Z over all configurations x. In Bayesian inference, the posterior P(θ|d...
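The normalization role of Z is easiest to see on a discrete toy energy landscape (values illustrative; kT = 1 assumed):

```python
import numpy as np

# Boltzmann distribution over four configurations with energies E (kT = 1).
E = np.array([0.0, 1.0, 2.0, 5.0])
Z = np.exp(-E).sum()          # partition function; plays the role of the
P = np.exp(-E) / Z            # Bayesian evidence in the posterior analogy
# Lower energy <-> higher probability; Z makes the probabilities sum to one.
```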
Fields: Systems Biology, Machine Learning, Statistics
Speculative analogy (to be empirically validated): contrastive objectives that maximize agreement between paired views can align transcriptomic, epigenomic, and proteomic profiles into shared latent c...
Fields: Virology, Machine Learning, Evolutionary Biology
Speculative analogy (to be empirically validated): Protein language-model likelihoods can serve as soft constraints on viable mutational trajectories similarly to fitness-landscape priors used in vira...
Generated 2026-05-10 · USDR Dashboard