Student Projects in the LAS Group
We offer semester projects as well as Bachelor's and Master's theses in our group. Depending on your preference, there are opportunities to work on theory, methods, and applications. M.Sc. projects at LAS often result in publications at leading conferences.
A list of topics for which we are actively recruiting students is given below. If you don't see a project below that fits well but are interested in the kind of research our lab does, feel free to reach out anyway. To learn more about the research done in the group, visit our recent publications. You can also learn more about the research of individual group members.
If you are interested in working with us, you can send an application by clicking the button below. Make sure that your email includes a résumé, a recent transcript of records, and your intended start date. We also highly recommend that you mention the projects you are interested in, the members of the group with whom you would like to work, or recent publications by the group that are relevant to your interests.
If you are a Bachelor's or Master's student but not enrolled at ETH, please see the opportunity listed under Applications for Summer Research Fellowships.
Current Topics
The detailed project proposals can be downloaded only from the ETH domain.
Continual Safe Adaptation in Multi-agent Domains
How can drivers from Zurich safely adapt to the driving culture in New York?
Keywords: Multi-agent systems, Reinforcement Learning, Constraints
Improving the Reasoning Abilities of Large Language Models
We offer various topics aimed at improving LLMs' reasoning abilities.
People
- Jonas Hübotter
- Ido Hakimi
Keywords: Large Language Models, Active Learning, Meta Learning, Computational Efficiency
Learning World Models for Legged Locomotion
Learn robust, structured models for policy learning.
Keywords: Reinforcement Learning, Curriculum Learning, Active Learning, Open-ended Learning
Pushing the Limit of Quadruped Running Speed with Autonomous Curriculum Learning
Keywords: Reinforcement Learning, Curriculum Learning, Active Learning, Open-ended Learning
Myopic Behavior in Goal-reaching Reinforcement Learning
Allow a goal-reaching policy to be as greedy as it can afford to be.
Keywords: Reinforcement Learning, Optimization
De Novo Molecular Design via Diffusion Bandit Optimization
Discover promising molecules via novel algorithms merging diffusion models and bandit optimization.
Keywords: Molecular Design, Bandit Optimization, Diffusion Models, Generative Models
Online Safe Locomotion Learning in the Wild
Run reinforcement learning on real robots.
Keywords: Online Reinforcement Learning, Safety, Robot learning
Autonomous Curriculum Learning for Increasingly Challenging Tasks
Proposing problems at the same time as they are being solved.
Keywords: Curriculum Learning, Open-ended learning, Robot learning
Humanoid Locomotion Learning and Finetuning from Human Feedback
Learn and finetune robotic motions with sequence-conditioned reward models from human feedback.
Keywords: Reinforcement learning from human feedback, Self-supervised Learning, Robot learning
Safe guaranteed domain exploration with autonomous robots
Develop and deploy algorithms for safe exploration with non-linear dynamics and unknown objectives. (A small illustrative sketch follows this entry.)
Keywords: Gaussian Processes, Active learning, Bayesian optimization, Model predictive control
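For readers unfamiliar with the tools named in the keywords, here is a minimal sketch of a safe Bayesian optimization loop with a Gaussian process surrogate on a synthetic 1-D objective. It is purely illustrative and not taken from the project proposal; the toy objective, kernel hyperparameters, confidence parameter, and safety threshold are all assumptions made for this sketch.

```python
# Minimal illustrative sketch: safe Bayesian optimization with a GP surrogate.
# All quantities (toy objective, kernel hyperparameters, safety threshold)
# are assumptions for illustration, not taken from the project proposal.
import numpy as np

def rbf_kernel(a, b, lengthscale=0.2, variance=1.0):
    """Squared-exponential kernel between two sets of 1-D inputs."""
    d = a[:, None] - b[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

def gp_posterior(x_train, y_train, x_test, noise=1e-2):
    """Posterior mean and standard deviation of a zero-mean GP."""
    k_tt = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    k_ts = rbf_kernel(x_train, x_test)
    k_ss = rbf_kernel(x_test, x_test)
    solve = np.linalg.solve(k_tt, k_ts)
    mean = solve.T @ y_train
    cov = k_ss - k_ts.T @ solve
    return mean, np.sqrt(np.clip(np.diag(cov), 1e-12, None))

def objective(x):
    """Unknown reward; in practice an expensive real-world experiment."""
    return np.sin(3 * x) + 0.5 * np.cos(7 * x)

rng = np.random.default_rng(0)
candidates = np.linspace(0.0, 2.0, 200)   # discretised domain
safety_threshold = -0.5                   # values must stay above this level
x_obs = np.array([1.0])                   # known safe seed point
y_obs = objective(x_obs) + 0.01 * rng.standard_normal(1)

for t in range(20):
    mean, std = gp_posterior(x_obs, y_obs, candidates)
    beta = 2.0                              # confidence-width parameter
    lcb, ucb = mean - beta * std, mean + beta * std
    safe = lcb >= safety_threshold          # keep only plausibly safe candidates
    if not np.any(safe):
        break
    # GP-UCB step: among the safe set, query the most optimistic candidate.
    x_next = candidates[safe][np.argmax(ucb[safe])]
    y_next = objective(np.array([x_next])) + 0.01 * rng.standard_normal(1)
    x_obs = np.append(x_obs, x_next)
    y_obs = np.append(y_obs, y_next)

print("best observed value:", y_obs.max())
```

A real safe-exploration system on hardware additionally has to reason about the robot's dynamics and reachability, which is where model predictive control (also listed in the keywords) comes in.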
Non-Convex Reinforcement Learning via Submodular Optimization
Develop algorithms for decision-making beyond the limitations of classic reinforcement learning. (A toy submodular-maximization sketch follows this entry.)
Keywords: Reinforcement learning, Non-Markovian Rewards, Statistics, Optimization
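As background for this topic, the sketch below shows the classic greedy algorithm for maximizing a monotone submodular function under a cardinality constraint, using a toy coverage objective. The ground set, coverage sets, and budget are made up for illustration; the project's algorithms are not implied to look like this.

```python
# Illustrative sketch: greedy maximization of a monotone submodular function
# (here a simple coverage objective) under a cardinality constraint.
# The data and budget are toy assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(0)
n_items, n_elements, budget = 30, 200, 5

# Each item "covers" a random subset of elements; the number of distinct
# elements covered is a canonical monotone submodular objective.
covers = [set(rng.choice(n_elements, size=20, replace=False)) for _ in range(n_items)]

def coverage(selected):
    """Number of distinct elements covered by the selected items."""
    return len(set().union(*[covers[i] for i in selected])) if selected else 0

selected = []
for _ in range(budget):
    # Greedy step: pick the item with the largest marginal gain.
    gains = [coverage(selected + [i]) - coverage(selected) for i in range(n_items)]
    best = int(np.argmax(gains))
    if gains[best] <= 0:
        break
    selected.append(best)

print("selected items:", selected, "coverage:", coverage(selected))
```

Roughly speaking, non-additive objectives of this kind are what the project considers in place of the classic cumulative (additive) reward.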
Online Fair Classification for Sequential Data
We aim to design online fair classifiers for sequential data.
Keywords: Fairness, Online Classification
Generalization for Meta-Learning and Personalized Federated Learning
We aim to find the correct formulation for many important problems in meta-learning and to design algorithms that solve it.
Keywords: Generalization, Meta-Learning, Personalized Federated Learning
Bayesian Optimization with Privacy
Develop private algorithms for Bayesian Optimization and show that privacy does not come for free: it takes a toll on performance.
Keywords: Bayesian Optimization, Differential Privacy, Meta-Learning
Optimized Sampling and Reconstruction in NMR Spectroscopy
Optimize a state-of-the-art NMR machine with data-driven methods.
People
- Nicolas Schmidt [ZHAW]
- Mojmír Mutný
Keywords: active learning, experiment design, real-world applications, neural networks
Point Processes for Species Modelling with Active Learning Citizen Science
Modelling species habitats using point processes and active learning via citizen science.
Keywords: active learning, experiment design, real-world applications, neural networks
Automating Biology with ML: Guiding Generative Modelling for Improved Protein Design
Keywords: generative modelling, proteins, enzymes, active learning, experiment design, real-world applications
Human kernels – querying similarity
What are the right assumptions to make before using ML? Sometimes we don't know what we know. Can machines help us?
Keywords: active learning, experiment design, real-world applications, neural networks
Graph Neural Optimization for Molecular Design
Employing Graph Neural Networks, develop a scalable Bayesian optimization algorithm which generates valid molecules with desirable profiles. [At capacity for Spring semester 2024.]
People
- Miles Wang-Henderson
- Parnian Kassraie
- Ilija Bogunovic
Keywords: Molecular Design, Energy-based Generative Models, Bayesian Optimization, Graph Neural Networks, Methodology, Applied
Machine Learning for Population Dynamics
Design and model spatio-temporal population dynamics using recent techniques in optimal transport and machine learning, with a focus on applications in single-cell biology. (A toy optimal transport sketch follows this entry.)
Keywords: optimal transport, spatio-temporal dynamics, partial and stochastic differential equations
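As a toy illustration of the optimal transport building block named in the keywords, the sketch below computes an entropy-regularized transport plan between two synthetic point clouds (standing in for a population observed at two time points) via Sinkhorn iterations. The data, regularization strength, and iteration count are arbitrary choices for illustration; the project itself concerns learning spatio-temporal dynamics, not this static coupling.

```python
# Illustrative sketch: entropy-regularized optimal transport via Sinkhorn
# iterations between two toy point clouds. All data and parameters are
# made-up assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=(50, 2))        # population at time t
y = rng.normal(1.0, 1.0, size=(60, 2))        # population at time t + 1
a = np.full(len(x), 1.0 / len(x))             # uniform source weights
b = np.full(len(y), 1.0 / len(y))             # uniform target weights

# Squared Euclidean cost between every pair of points.
cost = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)

eps = 0.5                                     # entropic regularization strength
K = np.exp(-cost / eps)                       # Gibbs kernel
u = np.ones(len(x))
for _ in range(500):                          # Sinkhorn fixed-point iterations
    v = b / (K.T @ u)
    u = a / (K @ v)

plan = u[:, None] * K * v[None, :]            # entropic transport plan
print("max marginal error:", float(np.abs(plan.sum(0) - b).max()))
print("entropic transport cost:", float((plan * cost).sum()))
```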
Structured Exploration in Large-Scale Sequential Decision-Making
How can we leverage structure for efficient exploration? How can we scale these techniques in the context of deep learning and large data?
Keywords: exploration, information-directed sampling, reinforcement learning, active learning
Confident Estimation via Online Convex Optimization
How can we make confident predictions using online convex optimization?
Keywords: online convex optimization, confidence sets, frequentist and Bayesian statistics
Applications of Machine Learning for Choosing Crop Varieties
Learning crop variety selection and management policies from data.
Keywords: applied, uncertainty quantification, active learning, reinforcement learning, remote sensing
Assimilation of crop growth models with remote sensing
Monitoring staple crops with satellite data.
Keywords: applied, remote sensing, sustainable agriculture
Machine Learning for Converter Control
Algorithms for control of power electronics converters, in collaboration with Hitachi Energy.
Keywords: applied, reinforcement learning, control
General Areas
We offer projects in several general areas.
- Probabilistic Approaches (Gaussian processes, Bayesian Deep Learning)
- Discrete Optimization in ML
- Online learning
- Large-Scale Machine Learning
- Causality
- Active Learning
- Bayesian Optimization
- Reinforcement Learning
- Meta Learning
- Learning Theory
Examples of Previous Master's Theses
Lifelong Bandit Optimization: No Prior and No Regret
Awarded ETH Medal. Felix Schur with Jonas Rothfuss and Parnian Kassraie. UAI 2023. [paper]
BaCaDI: Bayesian Causal Discovery with Unknown Interventions
Alex Hägele with Jonas Rothfuss and Lars Lorch. AISTATS 2023. [paper]
MARS: Meta-Learning as Score Matching in the Function Space
Kruno Lehman with Jonas Rothfuss. ICLR 2023. [paper]
Neural Contextual Bandits without Regret
Parnian Kassraie with Andreas Krause. AISTATS 2022. [paper]
Near-Optimal Multi-Perturbation Experimental Design for Causal Structure Learning
Scott Sussex with Andreas Krause and Caroline Uhler. NeurIPS 2021. [paper] [blog]
DiBS: Differentiable Bayesian Structure Learning
Awarded ETH Medal. Lars Lorch with Jonas Rothfuss. NeurIPS 2021. [paper] [blog]
PopSkipJump: Decision-Based Attack for Probabilistic Classifiers
Noman Ahmed Sheikh with Carl-Johann Simon-Gabriel. ICML 2021. [paper]
Icons on this page are by www.flaticon.com.