Dear all,
I am looking to hire a PhD student for research in AI/ML at the University of Bath, UK. The research will focus on developing large-scale ML systems that are resource-efficient in terms of computation, number of GPUs required, GPU memory, and energy consumption. Within the larger context of sparse training [1] and low-precision representation [2], recent work addressing these challenges for machine learning in large output spaces with commodity GPUs is of particular interest for the PhD topic [3,4,5]. More details on our recent work can be found on my homepage:
https://sites.google.com/site/rohitbabbar/Home

Motivated applicants with a Bachelor's/Master's degree in Computer Science (or related programmes such as Applied Mathematics and Physics), excellent programming skills in Python and PyTorch, and strong foundations in applied mathematics (linear algebra and probabilistic analysis) are particularly welcome.
You will be part of the Computer Science department at the University of Bath, in the historic city of Bath. Located near Bristol, Bath is also reachable from London in about an hour, with train connections every 30 minutes. You will have the opportunity to collaborate with leading researchers in the field and to present your research at scientific conferences.
Application process: As a first step, it is strongly recommended that you email your CV to me at rb2608_AT_bath.ac.uk by 10th November; earlier is better. We can then have an initial discussion/interview and use that as a basis to finalise the full application, which is due by mid-December.
The detailed description and application process can also be found at:
https://www.findaphd.com/phds/project/faculty-of-science-ursa-phd-project-energy-efficient-and-affordable-training-of-large-scale-deep-learning-models/?p175755

However, as suggested above, it is advisable to contact me first, so that the required details can be filled in for the final application.
References:
[1] Dynamic Sparse Training with Structured Sparsity, ICLR 2024
[2] Mixed Precision Training, ICLR 2018
[3] Towards Memory-Efficient Training for Extremely Large Output Spaces -- Learning with 500k Labels on a Single Commodity GPU, ECML 2023
[4] Renée: End-to-end training of Extreme Classification Models, MLSys 2023
[5] Navigating Extremes: Dynamic Sparsity in Large Output Spaces, NeurIPS 2024 (to appear)