Deep Reinforcement Learning for Games

Hey, I’m Ryan Cross, and for my Computer Science MEng project I applied Deep Reinforcement Learning to the video game StarCraft II, replicating some of the work that DeepMind had done at the time.

As part of this project, I needed to train a reinforcement learning model over thousands of games. It quickly became apparent that training my models on my own computer was entirely infeasible, despite it being a fairly powerful gaming machine. I could only run 2 copies of the game at once, which was nowhere near enough when some of my tests needed 50,000+ runs. Worse still, because of how my model was set up, the fewer instances I ran at once, the slower my code would converge.

It was around this time that my supervisor, Dr Matteo Leonetti, pointed out that the University had some advanced computing facilities (ARC) I could use. Even better, there were a large number of GPUs there, which greatly accelerate machine learning and were perfect for running StarCraft II on.

After getting an account, I set about getting my code running on ARC3. I quickly ran into an issue where StarCraft II refused to run there. After a quick Google to check it was nothing I could fix easily, I had a chat with Martin Callaghan about getting the code running in any way possible. It turned out that, due to the setup of the ARC HPC clusters, getting my code running was as simple as adding a few lines to a script and building myself a Singularity container. This was pretty surprising: I thought that getting a game to run on a supercomputer was going to be a giant pain; instead, it turned out to be quite easy!
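
For anyone wanting to do something similar, the overall workflow looked roughly like the sketch below. This is illustrative rather than my exact recipe: the definition file contents, image name and script name are all made up for the example.

    # sc2.def -- example Singularity definition file (contents are illustrative)
    #   Bootstrap: docker
    #   From: ubuntu:16.04
    #   %post
    #       apt-get update && apt-get install -y python3 python3-pip
    #       pip3 install pysc2   # DeepMind's StarCraft II Learning Environment
    #       # the StarCraft II Linux client itself is downloaded and installed separately

    # Build the container image (needs root, so typically done on your own machine)
    sudo singularity build sc2.simg sc2.def

    # Run the training code inside the container on the cluster
    singularity exec sc2.simg python3 train_agent.py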

The container actually ended up coming in handy much later too. When I was handing my project over, I could simply ask people to run a single command, or just give them my container, and they had my entire environment ready to test my code. No more “I can’t run it because I only have Python 2.7”, just the same environment everywhere. Better for me, and better for reproducibility!

Once I’d got all that set up, running my experiments was easy. I’d fire off a test in the morning, leave it running for 8 hours playing 32 games at once, and check my results when I got in. I got all the results I needed very quickly, which would simply have been infeasible without ARC3 and the GPUs it has. Getting results for tests was taking 30 minutes instead of multiple hours, meaning I could make changes and write up results much quicker.
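
To give a sense of what “firing off a test” meant in practice, a batch job looked something like the sketch below. The scheduler directives and GPU resource name are indicative only (check the ARC documentation for the exact flags), and train_agent.py and its --parallel-envs option are hypothetical stand-ins for my actual code.

    #!/bin/bash
    # Indicative ARC3 (SGE) batch script -- resource names below are assumptions
    # Run from the current directory and export the environment:
    #$ -cwd
    #$ -V
    # 8 hour wall clock limit:
    #$ -l h_rt=08:00:00
    # Request one GPU (exact resource name may differ):
    #$ -l coproc_p100=1

    # train_agent.py and --parallel-envs are hypothetical placeholders for my code
    singularity exec sc2.simg python3 train_agent.py --parallel-envs 32

    # submitted with: qsub train_job.sh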

Later, I started to transition my code over to help out on a PhD project, utilising transfer learning to improve my results. At this point I had models that were bigger than most PCs’ RAM, and yet ARC3 was training them happily. With how ubiquitous machine learning is becoming, it’s great to have University resources that are both easy to use and extremely powerful.

Moving some home directories around

Over the summer we had to do some back-end work to make users’ lives slightly better, by replacing the servers the home directories were served from (let’s call them the UFAs) with something a bit newer and shinier (let’s call this the SAN). There were a few good reasons for this; the hardware was 13 years old, for a start. We also had to do some consolidation work to tidy up home directories from users who were never going to return to the institution, and we needed a consistent policy on home directory creation.

Historically we’ve had really good service from the UFAs, with great bandwidth and throughput, and we’ve always said that a replacement service needs to at least match what we’ve had in the past. That’s the basis of all the hardware replacement we do in HPC: whatever we put in has to provide at least as good a service as what it replaces. So we did some testing, to make sure we knew what the replacement matrix should look like.

When this project started, back in 2014, initial testing wasn’t good; in fact the performance of even a simple single-node, single-threaded dd if=/dev/zero of=<file> bs=1M count=1024 test was considerably worse than on the UFAs. However, with a newer SAN and the right mix of NFS mount options and underlying filesystems – work carried out by the servers and storage team, who did an excellent job – we were able to get an improvement on some standard tasks, like extracting a standard application.
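
For reference, the quick-and-dirty tests were along these lines (the file names here are placeholders, and the OpenFOAM tarball stands in for “a standard application”):

    # Simple single-threaded write test: stream 1 GiB of zeroes to the filesystem under test
    dd if=/dev/zero of=testfile bs=1M count=1024

    # "Extract a standard application", timed (tarball name is a placeholder)
    time tar -xzf OpenFOAM.tar.gz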

Fig 1 (bar chart): how long does it take to extract OpenFOAM?

You’ll see that the time taken to extract a file was considerably larger on the replacement service. We found some interesting things along the way. With the NFS sync option, the server makes sure a file has actually been committed to disk before it sends the acknowledgement that it’s got the data; with async, it acknowledges first and writes afterwards. In these cases turning async on made everything go much quicker, at the risk of losing data if the server failed at just the wrong moment. We felt that risk was worth taking, as the situation where that would happen is most unlikely and we do have resilient backups. Single-threaded performance was equivalent, and although multithreaded performance was not an improvement, it was equivalent to or better than writing to local NFS storage.
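
For the curious, sync/async is controlled on the NFS server’s export (and can also be influenced by client mount options). A sketch of the server side looks like this; the path and host pattern are placeholders, not our real configuration:

    # /etc/exports on the NFS server (placeholder values)
    # 'async' lets the server acknowledge a write before the data reaches stable storage;
    # 'sync' makes it wait, which is safer but slower.
    /export/home    *.example.ac.uk(rw,async,no_subtree_check)

    # apply the change without restarting the NFS server
    exportfs -ra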

There’s also an interesting quirk relating to XFS being a 64-bit filesystem; a 32-bit application might be told to use an inode number that’s bigger than it knows how to handle, which results in an I/O error. We needed to do a quick bit of work to make sure there weren’t that many 32-bit applications still in use (there are some, but not many, and we have a solution for users who might be affected by this – if you are, get in touch).
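
One common workaround, shown here purely as an illustration and not necessarily the solution we settled on, is to mount the affected XFS filesystem so that new inodes stay within the 32-bit range:

    # inode64 is the default on large XFS filesystems; inode32 keeps newly allocated
    # inode numbers within the range that old 32-bit applications can handle.
    mount -o remount,inode32 /export/home    # path is a placeholder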

In the end a lot of hours were spent on the discovery phase of this project. Then, as we entered the test phase (Mark and I started using the new home directory server about a month before everybody else), we found a few issues that needed sorting, especially with file locking and firewalls. Once that was sorted there was a bunch of scripting to do so that human error was minimised (one of the nice things about being an HPC systems admin is that you very quickly learn how to programmatically do the same task multiple times), and we needed to tidy up the user creation processes – some of which have been around since the early 00s. The error catching and “unusual circumstances” routines, as you’d expect, made up the bulk of that scripting!
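
To give a flavour of the scripting involved, a heavily simplified sketch of a migration loop might look like this; the paths, file names and rsync options are illustrative, and the real scripts contained far more error catching than is shown here:

    #!/bin/bash
    # Simplified home directory migration sketch -- all paths are placeholders
    while read -r user; do
        src="/old_ufa/${user}"
        dest="/new_san/${user}"
        if [ ! -d "${src}" ]; then
            echo "WARNING: no home directory found for ${user}, skipping" >&2
            continue
        fi
        # -a preserves permissions/times, -H hard links, -A ACLs, -X extended attributes
        rsync -aHAX --numeric-ids "${src}/" "${dest}/" \
            || echo "ERROR: rsync failed for ${user}" >&2
    done < users_to_migrate.txt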

We’ve gone from 29 different home directory filesystems to three; performance is about the same, and quotas are larger. We’ve done all the tidying-up work that means future migrations will go more smoothly, and although there was a bit of disruption for everybody, it was all over quickly and relatively painlessly for the users (which is the most important thing). We are still keeping an eye on things, too.

Huge thanks are due to everybody in the wider IT Service who helped out.

The Carpentries and Research Computing

I’m pleased to announce that we’ve renewed our membership of The Carpentries for another year.

For those of you that don’t know what ‘The Carpentries’ are, they (we) are an international organisation of volunteer instructors, trainers, organisations and staff who develop curricula and teach coding and data science skills to researchers worldwide.

We’re pleased to be able to support the aims of the Carpentries and, in conjunction with other UK partner organisations (and especially our friends at the Software Sustainability Institute), help the wider UK research community develop their skills.

Here at Leeds, we organise and run two and three-day workshops as part of our training programme. We have a new group of instructors on board, so do keep an eye on the training calendar for upcoming workshops. We run workshops using R, Python and occasionally Matlab.

In conjunction with our colleagues at the University of Huddersfield, we’ve also attracted BBSRC STARS funding to run another set of workshops. You’ll find more information about this on the Next Generation Biologists website.

In previous years we have run a number of workshops in conjunction with our colleagues in the School of Earth and Environment, funded by NERC ATSC awards.

If you’re interested in finding out more, perhaps as a helper at a workshop or a future instructor, or you’d like to know more about the content of a typical workshop, then please get in touch.

The Julia Programming language and JuliaCon 2018

Julia is a relatively new, free and open-source scientific programming language that has come out of MIT. I first played with it in 2012, back in the days when it didn’t even have release numbers, just GitHub hashes, and it has come a long way since then! In my mind, I think of it as what a language would look like if your two primary design parameters were ‘easy to use for newbies’ and ‘maximally JITable’. This is almost certainly a gross oversimplification, but it doesn’t seem to offend some of the people who helped create the language. Another way to think about it is ‘as easy to write as Python or MATLAB, but with speed on par with C or Fortran’.

I attended Julia’s annual conference, JuliaCon, last week along with around 350 other delegates from almost every corner of the Research Computing world. While there, I gave a talk on ‘The Rise of the Research Software Engineer’. This was the first time I’d ever had one of my talks recorded, and you can see the result below.

All of the conference talks are available at https://www.youtube.com/user/JuliaLanguage/videos. If you’d like to get a flavour of what Julia can do for your computational research, a couple of the JuliaCon 2018 tutorials I’d recommend are below:

An Introduction to Julia

Machine learning with Julia

JuliaCon 2018 marked an important milestone for the language, with the release of version 1.0, so now is a fantastic time to try it out for the first time. You can install it on your own machine from https://julialang.org/downloads/ and we’ve also installed it on ARC3. You can make it available in your ARC3 session using the module command shown below.
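
(The module name and version below are an assumption; module avail julia will list what’s actually installed.)

    module avail julia    # list the Julia versions installed on ARC3
    module load julia     # load the default version into your current session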

Programming for Evolutionary Biology conference

I thought I’d update you on a conference I’m presenting at in September: Programming for Evolutionary Biology 2018, taking place in Buttermere, in the beautiful Lake District, from September 2nd to 6th 2018.

It’s organised by our colleagues (and BBSRC STARS partners) at the University of Huddersfield, Jarek Bryk (@jarekbryk), Maria Luisa Martin Cerezo and Marina Soares da Silva.

PEB’s aim is to bring together scientists broadly interested in applying bioinformatic tools to answer evolutionary and ecological questions.

Unlike other conferences featuring mostly talks and poster sessions, it aims to serve as a platform for discussing common programming pitfalls encountered during research and features workshops to further develop participants’ bioinformatic skills.

It’s a fantastic programme, so please check it out. The organisers are still able to accept applications, so if you are interested then get in touch.

My session is on Cloud Computing. We’ll be looking at how to use Cloud services for genomics analyses, including setting up our own server in the Cloud, storing and managing our data, and an introduction to the Cloud Genomics services: Microsoft Genomics and Google Genomics.

Summer training workshops

The summer months are a great opportunity to catch up on training and development and we’ve got two workshops coming up at the University of Leeds in July.

To keep up to date with our training workshops, keep an eye on our training pages.  Courses for the new academic year will be advertised in late August.

On to the workshops:

Software Carpentry with Python

Dates: July 9th and 10th 2018

Software Carpentry aims to help researchers get their work done in less time and with less pain by teaching them basic research computing skills. This hands-on workshop will cover basic concepts and tools, including program design, version control, data management, and task automation. Participants will be encouraged to help one another and to apply what they have learned to their own research problems.

In this workshop, you will learn the basics of Python, the Linux command line shell, and version control using Git and GitHub.

Workshop website: https://arctraining.github.io/2018-07-09-leeds/
Booking: https://ti.to/university-of-leeds-research-computing/software-carpentry

Instructors for this workshop are: Martin Callaghan, Harriet Peel and James O’Neill (all University of Leeds)

If you prefer R to Python, there’ll be an R based Data Carpentry workshop coming up in August, just before the start of the new academic year.

HPC Carpentry

Dates: July 25th and 26th 2018

This workshop is run in conjunction with our colleagues at EPCC (Edinburgh University) through ARCHER training. It’s an introductory workshop and will use one of the new EPSRC-funded ‘Tier 2’ clusters, Cirrus, rather than our local HPC facilities.

This course is aimed at researchers who have little or no experience of using high performance or high throughput computing but are interested in learning how it could help their research, how they could use it and how it provides additional performance. You need to have previous experience of working with the Unix shell.

You don’t need to have any previous knowledge of the tools that will be presented at the workshop.

After completing this course, participants will:

  • Understand motivations for using HPC in research
  • Understand how HPC systems are put together to achieve performance and how they differ from desktops/laptops
  • Know how to connect to remote HPC systems and transfer data
  • Know how to use a scheduler to work on a shared system
  • Be able to use software modules to access different HPC software
  • Be able to work effectively on a remote shared resource

Workshop website: http://www.archer.ac.uk/training/courses/2018/07/hpc-carpentry-leeds/index.php
Booking: http://www.archer.ac.uk/training/registration/

Instructors for this workshop are: Andy Turner (EPCC), Martin Callaghan (University of Leeds) and Chris Bording (IBM Research, Hartree)