Build Big or Build Smart: Examining Scale and Domain Knowledge in Machine Learning for Fundamental Physics

25 August - 19 September 2025

Lukas Heinrich, Michael Kagan, Margarita Osadchy, Tobias Golling, Siddharth Mishra-Sharma

While Artificial Intelligence (AI) and Machine Learning (ML) are becoming an integral part of data analysis in fundamental physics domains such as cosmology, astrophysics, nuclear physics, and particle physics, much of the work has focused on locally replacing traditional physics-driven algorithms with ML counterparts to achieve improved performance at the level of individual tasks. Yet there are no large-scale ML systems in fundamental physics, and there has not yet been an “AlphaFold” moment leading to orders-of-magnitude performance gains. Rather, developments in fundamental physics have largely been applications of ML techniques developed in the ML community. Such a local deployment of ML, however, may not fully exploit the capabilities of modern large-scale ML systems. A central question now is whether large-scale ML systems are possible in the context of physics analyses and what they would look like. Two paradigms have come into view for how to move the use of AI/ML in fundamental physics forward:

• Build Bigger: adapt the developments in training large models in natural language processing (e.g. ChatGPT) and computer vision (e.g. DALL-E), as well as models operating over multiple modalities (e.g. ImageBind), to fundamental physics

• Build Smarter: embed physics knowledge and physically motivated inductive biases directly into ML models, while enabling large-scale end-to-end optimization

The former concept follows the “Foundation Models” paradigm: developing large models, pretrained in a self-supervised manner on massive datasets, that can then be adapted to a wide array of downstream tasks. This approach underlies much of the recent state of the art, including ChatGPT, DALL-E, and Stable Diffusion. Key questions here are how to develop appropriate self-supervised training tasks and evaluation metrics, and how to maintain physics interpretability and robustness to domain shift.

The latter concept focuses on how domain knowledge can be used to guide model architecture and training development, e.g. through symmetries, constraints, or differentiable programming, so that models make physically reliable and robust predictions by construction whilst reducing the need for large datasets. A major concern is the complexity of creating such a hybrid physics–ML system, whose advantage may wane in the limit of sufficient training data.

The two approaches promise to expand the science reach of data-intensive domains, and thoroughly exploring both directions may yet reveal a more powerful solution through their combination, e.g. via simulation-based inference approaches. However, research on both frontiers has only started recently, and most production-level scientific pipelines are not yet compatible with these techniques. Their widespread adoption would represent a deep change in the scientific software landscape, with unique challenges for each of the two paradigms.

This program will provide the opportunity to invite leading researchers from Computer Science and Fundamental Physics, across industry and academia, to develop new research directions, new applications, and new strategies for integrating these paradigms into existing scientific workflows. The MIAPbP format is particularly well suited, as the exploration of the research questions below requires the development and testing of prototypes, as well as space for extended discussion between domain experts and computer scientists, in order to lay the groundwork for these research directions. By deliberately focussing on technological aspects, we provide an opportunity for a genuinely interdisciplinary exchange, embracing particle physics, astrophysics, computational neuroscience, and other areas.

This program follows from the highly successful “Differentiable and Probabilistic Programming in Fundamental Physics” workshop at MIAPbP in 2023, which connected researchers across disciplines and opened a variety of new collaborations and research directions. The 2023 workshop demonstrated that this format provides the appropriate environment for the development of new research directions. This program will continue the discussion of differentiable programming, exploring the developments since the 2023 workshop, whilst also addressing how these techniques compare to, or can be combined with, the novel paradigm of large foundation models.
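
To make the two paradigms concrete, two minimal sketches follow. The first illustrates the “Build Bigger” route with a masked-modelling pretraining objective over particle-level inputs, in the spirit of self-supervised foundation-model training; the architecture, feature set, and masking fraction are illustrative assumptions, not a reference to any existing physics pipeline.

```python
import torch
import torch.nn as nn

class MaskedEncoder(nn.Module):
    """Hypothetical masked-modelling pretraining: hide a random subset
    of per-particle feature vectors and learn to reconstruct them."""
    def __init__(self, n_features=4, d_model=64, n_heads=4, n_layers=2):
        super().__init__()
        self.embed = nn.Linear(n_features, d_model)
        self.mask_token = nn.Parameter(torch.zeros(d_model))
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, n_features)

    def forward(self, x, mask_frac=0.15):
        # x: (batch, n_particles, n_features), e.g. (pt, eta, phi, E)
        h = self.embed(x)
        mask = torch.rand(x.shape[:2], device=x.device) < mask_frac
        h[mask] = self.mask_token          # replace masked particles
        recon = self.head(self.encoder(h))
        # self-supervised loss: reconstruct only the masked entries
        return ((recon - x)[mask] ** 2).mean()

# one illustrative pretraining step on random stand-in data
model = MaskedEncoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss = model(torch.randn(32, 20, 4))
loss.backward(); opt.step()
```

The second sketch illustrates the “Build Smarter” route: by feeding a network only Lorentz-invariant pairwise products of four-momenta, its predictions are frame-independent by construction, regardless of how much training data is available. Again, this is a toy construction rather than any published architecture.

```python
import torch
import torch.nn as nn

def pairwise_invariants(p):
    """Lorentz-invariant products p_i . p_j with metric (+,-,-,-);
    p: (batch, n_particles, 4) four-momenta ordered as (E, px, py, pz)."""
    metric = torch.tensor([1.0, -1.0, -1.0, -1.0], device=p.device)
    return torch.einsum('bik,k,bjk->bij', p, metric, p)

class InvariantClassifier(nn.Module):
    """Toy physics-informed model: consuming only invariants guarantees
    Lorentz invariance of the output by construction."""
    def __init__(self, n_particles=8, hidden=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(n_particles * n_particles, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, p):
        return self.mlp(pairwise_invariants(p).flatten(1))

score = InvariantClassifier()(torch.randn(32, 8, 4))  # frame-independent scores
```

The trade-off discussed above is visible even at this scale: the invariant model is robust and data-efficient by design, but its hand-chosen features bound its expressivity, whereas the pretrained encoder can in principle learn richer structure given enough data.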

• Are very large Foundation-style ML models desirable and possible in Fundamental Science?

• Can we quantify the relative merit of scale vs clever architecture? What are the trade-offs at play, if any, in fundamental science applications?

• What self-supervised tasks are most effective for fundamental physics applications?

• Do we observe typical properties of large-scale ML models on physics datasets, such as interpolation in the overparametrized regime and neural scaling laws (see the scaling-law sketch after this list)?

• Where is physics- and symmetry-driven structure important or indispensable, and where are data-driven learned modules preferable?

• How would learned high-dimensional representations be calibrated when a direct physical interpretation of them is difficult?

• How transferable could large models be (e.g. can we fine-tune a single large model across experimental setups; see the fine-tuning sketch after this list)?

• How would foundation models trained on supervised and self-supervised tasks differ in performance? How does this depend on task complexity?

• How do we exploit the multi-modality of modern experiments, with many sub-detectors measuring the same event? Can a common embedding be found, similar to text and speech or text and image (see the contrastive sketch after this list)?

• How can the inherently multi-modal nature of many scientific observations (e.g., spectra and images of astronomical objects) be exploited under the foundation model paradigm?

• What datasets will be required for foundation model training? Can existing datasets be used as they are?

• What enhancements do we need to simulators (e.g. augmentation capabilities, partial resimulation) to create such datasets?

• What domain-specific benchmarks and evaluation metrics can be used for scientific foundation models?

• If end-to-end large physics models are possible, how can we maintain an understanding of the physics and of the learned representations, and work towards explainable AI?
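
To make the scaling-law question above concrete: one common diagnostic is to fit held-out loss versus dataset (or model) size with a power-law ansatz L(N) = a·N^(−α) + L∞ and inspect the fitted exponent. The sketch below does this with SciPy on synthetic numbers, which stand in for real measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law(n, a, alpha, l_inf):
    # neural-scaling-law ansatz: L(N) = a * N**(-alpha) + L_inf
    return a * n ** (-alpha) + l_inf

# synthetic (dataset size, validation loss) points, for illustration only
n = np.array([1e4, 3e4, 1e5, 3e5, 1e6])
val_loss = np.array([0.92, 0.71, 0.55, 0.44, 0.37])

(a, alpha, l_inf), _ = curve_fit(power_law, n, val_loss, p0=[10.0, 0.3, 0.1])
print(f"fitted exponent alpha = {alpha:.2f}, irreducible loss = {l_inf:.2f}")
```

The transferability question is often probed with the standard fine-tuning pattern: freeze a pretrained backbone and retrain only a small, setup-specific head. The backbone below is a random stand-in for a genuinely pretrained model.

```python
import torch
import torch.nn as nn

# stand-in backbone; in practice this would be loaded from pretraining
backbone = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 64))
for param in backbone.parameters():
    param.requires_grad = False          # shared representation stays fixed
head = nn.Linear(64, 3)                  # e.g. 3 classes in the new setup

opt = torch.optim.Adam(head.parameters(), lr=1e-3)
x, y = torch.randn(32, 4), torch.randint(0, 3, (32,))
loss = nn.functional.cross_entropy(head(backbone(x)), y)
loss.backward(); opt.step()
```

Finally, the common-embedding question maps naturally onto the symmetric contrastive objective used to align text and images in CLIP. A minimal version, assuming paired embeddings of the same events from two hypothetical sub-detector encoders:

```python
import torch
import torch.nn.functional as F

def clip_style_loss(z_a, z_b, temperature=0.07):
    """Symmetric contrastive loss aligning two detector views of the
    same events; row i of z_a and z_b describe the same event."""
    z_a = F.normalize(z_a, dim=-1)
    z_b = F.normalize(z_b, dim=-1)
    logits = z_a @ z_b.t() / temperature     # (batch, batch) similarities
    targets = torch.arange(len(z_a))         # matching event on the diagonal
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.t(), targets))

# e.g. 32 events embedded by tracker and calorimeter encoders
loss = clip_style_loss(torch.randn(32, 128), torch.randn(32, 128))
```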