Category: Programming

Deep Reinforcement Learning for Games

Hey, I’m Ryan Cross. For my Computer Science MEng project, I applied Deep Reinforcement Learning to the video game StarCraft II, aiming to replicate some of the work that DeepMind had done at the time.

As part of this project, I needed to train a reinforcement learning model over thousands of games. It quickly became apparent that training my models on my own computer was entirely infeasible, despite it being a fairly powerful gaming machine. I could only run two copies of the game at once, which was nowhere near enough when some of my tests needed 50,000+ runs. Worse still, because of how my model was set up, the fewer instances I ran at once, the more slowly my code would converge.

It was around this time that my supervisor, Dr Matteo Leonetti, pointed out that the University had some advanced computing facilities (ARC) I could use. Even better, they had a large number of GPUs, which greatly accelerate machine learning and were perfect for running StarCraft II on.

After getting an account, I set about getting my code running on ARC3. I quickly ran into an issue where StarCraft II refused to run on ARC3 at all. After a quick Google to check it was nothing I could fix easily, I had a chat with Martin Callaghan about how to get the code running. It turned out that, thanks to the way the ARC HPC clusters are set up, getting my code running was as simple as adding a few lines to a script and building myself a Singularity container. This was pretty surprising: I thought getting a game to run on a supercomputer was going to be a giant pain; instead, it turned out to be quite easy!
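
For anyone curious what that looks like, here is a minimal sketch of a Singularity definition file for this kind of setup. The base image, package names and paths are my assumptions for illustration, not the exact recipe used in the project:

    Bootstrap: docker
    From: ubuntu:18.04

    %post
        # Install Python and the ML / StarCraft II tooling
        # (pysc2 and tensorflow-gpu are assumptions about the stack used)
        apt-get update && apt-get install -y python3 python3-pip unzip
        pip3 install pysc2 tensorflow-gpu
        # Download and unpack the headless StarCraft II Linux package
        # into /opt/StarCraftII here

    %environment
        export SC2PATH=/opt/StarCraftII

Building it is then a single command on a machine where you have root (file names are illustrative):

    sudo singularity build starcraft.sif starcraft.def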

The container actually ended up coming in handy much later too. When I was handing my project over, I could simply ask whoever took it on to run a single command, or just give them my container, and they had my entire environment ready to test my code. No more “I can’t run it because I only have Python 2.7”, just the same environment everywhere.
Better for me, and better for reproducibility!
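
That single command might look something like this (the container and script names are hypothetical; the --nv flag passes the host’s NVIDIA GPU drivers through to the container):

    singularity exec --nv starcraft.sif python3 run_experiment.py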

Once I’d got that all set up, running my experiments was easy. I’d fire off a test in the morning, leave it running for 8 hours playing 32 games at once, and check my results when I got in. I managed to get all the results I needed very quickly, which would simply have been infeasible without ARC3 and the GPUs it has. Getting results for a test took 30 minutes instead of multiple hours, meaning I could make changes and write up results much more quickly.
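
A typical job submission looked roughly like the sketch below. The scheduler directives, resource names, module names and script flags are assumptions about ARC3’s configuration rather than the exact script I used:

    #$ -cwd                   # run from the current directory
    #$ -l h_rt=08:00:00       # 8 hour runtime limit
    #$ -l coproc_p100=1       # request a GPU (exact resource name depends on the cluster)

    module load singularity
    singularity exec --nv starcraft.sif python3 run_experiment.py --instances 32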

Later, I started to transition my code over to help out on a PhD project, utilising transfer learning to improve my results. At this point I had models that were bigger than most PCs’ RAM, and yet ARC3 was training them happily. With how ubiquitous machine learning is becoming, it’s great to have University resources that are both easy to use and extremely powerful.

The Carpentries and Research Computing

I’m pleased to announce that we’ve renewed our membership of The Carpentries for another year.

For those of you who don’t know what ‘The Carpentries’ are, they (we) are an international community of volunteer instructors, trainers, member organisations and staff who develop curricula and teach coding and data science skills to researchers worldwide.

We’re pleased to be able to support the aims of The Carpentries and, in conjunction with other UK partner organisations (especially our friends at the Software Sustainability Institute), help the wider UK research community develop their skills.

Here at Leeds, we organise and run two- and three-day workshops as part of our training programme. We have a new group of instructors on board, so do keep an eye on the training calendar for upcoming workshops. We run workshops using R, Python and, occasionally, MATLAB.

In conjunction with our colleagues at the University of Huddersfield, we’ve also attracted BBSRC STARS funding to run another set of workshops. You’ll find more information about this at the Next Generation Biologists website.

In previous years we have run several workshops in conjunction with our colleagues in the School of Earth and Environment, funded by NERC ATSC awards.

If you’re interested in helping at a workshop, becoming a future instructor, or simply finding out more about the content of a typical workshop, then please get in touch.

The Julia Programming language and JuliaCon 2018

Julia is a relatively new, free and open-source scientific programming language that has come out of MIT. I first played with it in 2012, back in the days when it didn’t even have release numbers, just GitHub hashes, and it has come a long way since then! In my mind, it is what a language would look like if your two primary design parameters were ‘easy to use for newbies’ and ‘maximally JITable’; this is almost certainly a gross oversimplification, but it doesn’t seem to offend some of the people who helped create the language. Another way to think about it is ‘as easy to write as Python or MATLAB, but with speed on par with C or Fortran’.

I attended Julia’s annual conference, JuliaCon, last week, along with around 350 other delegates from almost every corner of the research computing world. While there, I gave a talk on ‘The Rise of the Research Software Engineer’. This was the first time I’d ever had one of my talks recorded, and you can see the result below.

All of the conference talks are available at https://www.youtube.com/user/JuliaLanguage/videos. If you’d like to get a flavour of what Julia can do for your computational research, a couple of the JuliaCon 2018 tutorials I’d recommend are below:

An Introduction to Julia

Machine learning with Julia

JuliaCon 2018 marked an important milestone for the language, as version 1.0 was released during the conference, so now is a fantastic time to try out the language for the first time. You can install it on your own machine from https://julialang.org/downloads/, and we’ve also installed it on ARC3. You can make it available to your ARC3 session using the following module command:
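
The exact module name or version string on ARC3 may vary, but it will be along the lines of:

    module load julia    # exact module name/version on ARC3 may differ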