My research interests span several themes, centered mainly on applying scientific machine learning (SciML) to nonlinear dynamics. Across these directions, I focus on developing methodologies that integrate theory-driven modeling, data-driven inference, and computational tools to better understand and predict complex phenomena.
Learning Chaotic Dynamics
My primary interest lies in using SciML frameworks to learn, represent, and analyze chaotic systems. Here, “learning” is meant in a broad sense: it includes discovering governing equations, reconstructing chaotic attractors from data, and enabling accurate short-term forecasting. My work investigates how modern machine learning architectures (such as neural differential equations, operator-learning models, and reservoir computing) can enhance classical techniques in nonlinear dynamics. The objective is to identify representations that remain robust to noise, sparse data, and model uncertainty, while offering new insights into the structure of chaotic behavior.
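As a minimal illustration of the reservoir-computing approach mentioned above, the sketch below trains an echo state network for one-step forecasting of a chaotic series generated by the logistic map. This is a generic, self-contained example: the architecture sizes, spectral radius, and ridge penalty are illustrative choices, not taken from any specific package or from my own pipelines.

```python
import numpy as np

rng = np.random.default_rng(0)

# Chaotic data from the logistic map, x_{t+1} = r * x_t * (1 - x_t).
r, n = 3.9, 2000
x = np.empty(n)
x[0] = 0.5
for t in range(n - 1):
    x[t + 1] = r * x[t] * (1 - x[t])

# Echo state network: a fixed random reservoir; only the readout is trained.
n_res = 200
W_in = rng.uniform(-0.5, 0.5, n_res)
W = rng.normal(size=(n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # scale spectral radius below 1

# Drive the reservoir with the input series and record its states.
states = np.zeros((n, n_res))
s = np.zeros(n_res)
for t in range(n - 1):
    s = np.tanh(W @ s + W_in * x[t])
    states[t + 1] = s  # state after seeing x[t], used to predict x[t+1]

# Ridge-regression readout mapping reservoir state -> next value.
warmup, lam = 100, 1e-6
S, y = states[warmup:], x[warmup:]
W_out = np.linalg.solve(S.T @ S + lam * np.eye(n_res), S.T @ y)

# One-step prediction error, normalized by the series' spread.
pred = S @ W_out
nrmse = np.sqrt(np.mean((pred - y) ** 2)) / np.std(y)
print(f"one-step NRMSE: {nrmse:.4f}")
```

The key design choice, and the reason reservoir computing is attractive for chaotic systems, is that the recurrent weights stay fixed: training reduces to a single linear solve for the readout.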
Modeling Nonlinear Time-Series with Application to Earth Sciences
Nonlinear time series arise in climate science, geophysics, ecology, and many other domains where complex phenomena such as feedbacks, memory, and long-range interactions shape system evolution. Building on my PhD work on extreme vegetation responses to climate variability, I study how advanced ML methods can be used to characterize and ultimately forecast real-world environmental processes. This includes developing models capable of capturing nonstationarity, regime shifts, and extreme events, and evaluating model transferability across spatial and temporal scales. My goal is to contribute tools that help improve environmental prediction, risk assessment, and our general understanding of Earth-system dynamics.
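To make the notion of a regime shift concrete, the following toy sketch flags an abrupt variance change in a synthetic series using a rolling-window statistic. The series, window length, and threshold rule are all invented for illustration; real environmental applications would involve far more careful detection methods.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic series with an abrupt regime shift at t = 500:
# a quiet low-variance regime followed by a volatile one.
n, shift = 1000, 500
x = np.concatenate([
    rng.normal(0.0, 0.5, shift),      # quiet regime
    rng.normal(0.0, 2.0, n - shift),  # volatile regime
])

# Rolling standard deviation as a simple regime indicator.
win = 50
roll_std = np.array([x[t - win:t].std() for t in range(win, n)])

# Flag the first time the rolling std exceeds a threshold
# calibrated on the early (assumed stationary) part of the series.
baseline = roll_std[: shift // 2]
threshold = baseline.mean() + 5 * baseline.std()
crossings = np.nonzero(roll_std > threshold)[0]
detected = crossings[0] + win  # convert back to the original time index
print(f"regime shift detected near t = {detected}")
```

The detection lags the true change point slightly, since the rolling window must first accumulate enough post-shift samples; this lag-versus-sensitivity trade-off is exactly what makes nonstationary series hard to model.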
Scientific Software for Machine Learning
A substantial portion of my work involves creating and maintaining scientific software to support ML research on dynamical systems. I emphasize reproducibility, interoperability, and high-performance computing workflows so that methods developed in research settings can be applied reliably to real-world scientific problems. I am very active in the Julia community, having contributed to several packages such as ReservoirComputing.jl, RecurrentLayers.jl, LuxRecurrentLayers.jl, SpectralIndices.jl, and CellularAutomata.jl. I also work in Python, where I have developed torchrecurrent, a collection of recurrent neural networks built on PyTorch.
Improving Learning Techniques
I also investigate methodological foundations for improving how ML models learn from complex, structured, and often limited scientific data. My key interest is understanding how inductive biases can be incorporated to produce models that generalize better, are more interpretable, and remain stable under perturbations. This direction runs counter to current trends in ML: instead of building larger and more powerful models, I try to reduce models to the simplest components that still produce accurate predictions. By doing so, I hope to uncover which components are essential for learning specific dynamical regimes.
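One way to make an inductive bias explicit is sparsity: assume the dynamics are governed by only a few terms from a candidate library, in the spirit of sparse equation discovery (SINDy-style sequential thresholded least squares). The sketch below recovers the two active terms of the logistic map from noisy data; the library, threshold, and noise level are illustrative choices, not a description of my actual methods.

```python
import numpy as np

rng = np.random.default_rng(2)

# Data from the logistic map x_{t+1} = 3.9 x_t - 3.9 x_t^2,
# lightly corrupted with observation noise.
n = 500
x = np.empty(n)
x[0] = 0.5
for t in range(n - 1):
    x[t + 1] = 3.9 * x[t] * (1 - x[t])
y = x[1:] + rng.normal(0.0, 1e-3, n - 1)

# Candidate library of polynomial terms: [1, x, x^2, x^3].
X = np.column_stack([np.ones(n - 1), x[:-1], x[:-1] ** 2, x[:-1] ** 3])

# Sequential thresholded least squares: fit, zero out small
# coefficients, refit on the survivors, repeat.
coef, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
for _ in range(10):
    small = np.abs(coef) < 0.1
    coef[small] = 0.0
    big = ~small
    coef[big], _, _, _ = np.linalg.lstsq(X[:, big], y, rcond=None)

print("recovered coefficients [1, x, x^2, x^3]:", np.round(coef, 2))
```

Only the x and x^2 terms survive the thresholding, with coefficients close to +3.9 and -3.9: the sparsity bias yields a model that is both minimal and directly interpretable as the governing equation.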