Moving some home directories around

Over the summer we had to do some back-end work to make users’ lives slightly better, by replacing the servers the home directories were served from (let’s call them the UFAs) with something a bit newer and shinier (let’s call this the SAN). There were a few good reasons for this: the hardware was 13 years old, for a start. We also had to do some consolidation work to tidy up home directories belonging to users who were never going to return to the institution, and we needed a consistent policy on home directory creation.

Historically we’ve had really good service from the UFAs, with great bandwidth and throughput, and we’ve always said that a replacement service needs to at least match what we’ve had in the past. That’s the basis of all the hardware replacement we do in HPC; whatever we put in has to provide at least as good a service as what it replaces. So we did some testing, to make sure we knew what the replacement matrix should look like.

When this project started, back in 2014, initial testing wasn’t good; in fact the performance of even a simple dd if=/dev/zero of=<file> bs=1M count=1024 style test on a single node, single threaded, was considerably worse than on the UFAs. However, with a newer SAN and the right mix of NFS mount options and underlying filesystems – work carried out by the servers and storage team, who did an excellent job – we were able to get an improvement on some standard tasks, like extracting a standard application.
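
If you want to run the same sort of check yourself, something along these lines is a reasonable starting point (the target path is just a placeholder for wherever the filesystem under test is mounted; conv=fdatasync makes dd flush before reporting, so the figure reflects the storage rather than the client’s page cache):

    # Single-threaded streaming write of 1 GiB to the filesystem under test.
    dd if=/dev/zero of=/path/to/homedir/ddtest bs=1M count=1024 conv=fdatasync
    rm /path/to/homedir/ddtest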

[Fig 1 (bar chart) – how long does it take to extract OpenFOAM?]

You’ll see that the time taken to extract a file was initially considerably larger on the replacement service. We found some interesting things; with a synchronous NFS export, the server makes sure that a write has actually been committed to disk before it sends the acknowledgement that it’s got the data, whereas an asynchronous (async) export acknowledges first and commits later. In these cases turning async on made everything go much quicker, at the risk of data being lost if the server failed at the wrong moment – however we felt that risk was worth taking, as the situation where that would occur is most unlikely and we do have resilient backups. Single-threaded performance was equivalent, and although multithreaded performance was not an improvement, it was equivalent to or better than writing to the local NFS storage.
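
For the curious, the async/sync choice is an export-side option. A minimal sketch of an /etc/exports entry is below; the path and client pattern are made up for illustration, not our real configuration:

    # /etc/exports – 'async' lets the server acknowledge writes before they
    # reach stable storage (faster, with a small risk of loss on a crash);
    # the default 'sync' is the safe choice.
    /export/home  *.cluster.example.ac.uk(rw,async,no_subtree_check)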

There’s also an interesting quirk relating to XFS being a 64-bit filesystem; a 32-bit application might be handed an inode number bigger than it knows how to handle, which results in an IO error. We needed to do a quick bit of work to make sure there weren’t that many 32-bit applications still in use (there are some, but not many, and we have a solution for users who might be affected by this – if you are, get in touch).
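
If you want to check whether a particular directory tree would trip this up, a one-liner like the sketch below will flag files whose inode numbers don’t fit in 32 bits (the path is a placeholder); XFS also has an inode32 mount option that keeps new inodes within the 32-bit range, which is one way round the problem:

    # List files whose inode number exceeds the 32-bit limit (2^32 - 1);
    # these are the ones legacy 32-bit binaries can fail to stat.
    find /path/to/homedir -printf '%i\t%p\n' | awk -F'\t' '$1 > 4294967295 { print $2 }'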

In the end a lot of hours were spent on the discovery phase of this project. As we entered the test phase (Mark & I started using the new home directory server about a month before everybody else) we found a few issues that needed sorting, especially with file locking and firewalls. Once that was sorted there was a bunch of scripting to be done so that human error was minimised (one of the nice things about being an HPC systems administrator is that you very quickly learn how to do the same task programmatically multiple times), and we needed to tidy up the user creation processes – some of which had been around since the early 00s. The error catching and “unusual circumstances” routines – as you’d expect – made up the bulk of that scripting!
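
To give a flavour of the kind of thing that gets scripted, here’s a deliberately simplified, hypothetical sketch; the paths, user list and rsync options stand in for the real scripts, which carried far more error handling:

    # Copy each user's home directory from the old service to the new one,
    # preserving permissions, hard links, ACLs and extended attributes.
    while read -r user; do
        src="/old/home/${user}"
        dst="/new/home/${user}"
        if [ ! -d "${src}" ]; then
            echo "WARN: no source directory for ${user}" >&2
            continue
        fi
        mkdir -p "${dst}"
        rsync -aHAX --numeric-ids "${src}/" "${dst}/" \
            || echo "ERROR: rsync failed for ${user}" >&2
    done < users.txt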

We’ve gone from 29 different home directory filesystems to three; performance is about the same, and quotas are larger. We’ve done all the tidying-up work that means future migrations will go more smoothly, and although there was a bit of disruption for everybody it was all over quickly and relatively painlessly for the users (which is the most important thing). We are still keeping an eye on things, too.

Huge thanks are due to everybody in the wider IT Service who helped out.

Summer training workshops

The summer months are a great opportunity to catch up on training and development and we’ve got two workshops coming up at the University of Leeds in July.

To keep up to date with our training workshops, keep an eye on our training pages. Courses for the new academic year will be advertised in late August.

On to the workshops:

Software Carpentry with Python

Dates: July 9th and 10th 2018

Software Carpentry aims to help researchers get their work done in less time and with less pain by teaching them basic research computing skills. This hands-on workshop will cover basic concepts and tools, including program design, version control, data management, and task automation. Participants will be encouraged to help one another and to apply what they have learned to their own research problems.

In this workshop, you will learn the basics of Python, the Linux command line shell, and version control using Git and GitHub.

Workshop website: https://arctraining.github.io/2018-07-09-leeds/
Booking: https://ti.to/university-of-leeds-research-computing/software-carpentry

Instructors for this workshop are: Martin Callaghan, Harriet Peel and James O’Neill (all University of Leeds)

If you prefer R to Python, there’ll be an R-based Data Carpentry workshop coming up in August, just before the start of the new academic year.

HPC Carpentry

Dates: July 25th and 26th 2018

This workshop is run in conjunction with our colleagues at EPCC (University of Edinburgh) through ARCHER training. It’s an introductory workshop and will use one of the new EPSRC-funded ‘Tier 2’ clusters, Cirrus, rather than our local HPC facilities.

This course is aimed at researchers who have little or no experience of using high-performance or high-throughput computing but are interested in learning how it could help their research, how they could use it, and how it provides additional performance. You need to have previous experience of working with the Unix shell.

You don’t need to have any previous knowledge of the tools that will be presented at the workshop.

After completing this course, participants will:

  • Understand motivations for using HPC in research
  • Understand how HPC systems are put together to achieve performance and how they differ from desktops/laptops
  • Know how to connect to remote HPC systems and transfer data
  • Know how to use a scheduler to work on a shared system
  • Be able to use software modules to access different HPC software (see the sketch after this list)
  • Be able to work effectively on a remote shared resource
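
As a taster of the scheduler and module points above, a generic batch workflow looks something like the sketch below (module names and scheduler directives vary from system to system, and the directives Cirrus expects may well differ):

    # Make a piece of software available via environment modules,
    # then submit a minimal batch job to the scheduler.
    module avail                 # see what's installed on the system
    module load gcc              # load a module into the current session

    cat > hello.sh <<'EOF'
    #!/bin/bash
    #SBATCH --job-name=hello
    #SBATCH --time=00:05:00
    echo "Hello from $(hostname)"
    EOF
    sbatch hello.sh              # Slurm-style; PBS-based systems use qsub instead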

Workshop website: http://www.archer.ac.uk/training/courses/2018/07/hpc-carpentry-leeds/index.php
Booking: http://www.archer.ac.uk/training/registration/

Instructors for this workshop are: Andy Turner (EPCC), Martin Callaghan (University of Leeds) and Chris Bording (IBM Research, Hartree)

University of Birmingham meeting: Visualisation Workshop

The University of Birmingham hosted a Remote Visualisation Workshop earlier this week, with an interesting variety of folk talking about a wide range of topics.

Despite the disparity of the topics – ranging from using consumer-grade Virtual Reality equipment to visualise and manipulate Molecular Dynamics simulation data (only 600 pounds! It was fun watching someone try to tie a knot in a long string peptide!), through alternative methods of interacting with HPC machines such as Jupyter notebooks and JupyterHub, and the pros and cons of various technical means of achieving or improving remote graphical access, to an overview of how visualisation fits into the scientific method – it was clear that they all had the same aim:

Reducing the friction involved in using a computer and working with its results.

The length of the list of technologies covered by the end of the day was pretty impressive, and I’d say that the award for best-name-of-technology-I’d-not-heard-of-before goes to Apache Guacamole. Technology with the most promise might go to Jupyter notebooks / JupyterHub – not because I think it will replace SSH access (the interface bears a strong resemblance to the Mathematica I was using in the 1990s), but because of the way it provides an alternative method focused on being able to develop, collaborate and present work.

The slides for my bit are here.

Introduction to reproducible research workflows in Python

Are you using Python for research purposes or data analysis? Are you interested in learning how to make your computational workflows more reproducible?
   
ARC (Advanced Research Computing) at Leeds is offering a one-day introductory workshop on reproducible workflows with Python on the 14th of June 2018, running from 10:00 to 16:00.

⚠ Please note that this is not a course for learning Python; it is aimed at Python users who want to learn how to introduce reproducibility practices into their data analysis workflows.

More details can be found at https://arc.leeds.ac.uk/training/spc-1-introduction-to-reproducible-workflows-in-python/

Registration link: https://ti.to/university-of-leeds-research-computing/spc-1-introduction-to-reproducible-workflows-in-python

Open to all staff and students at The University of Leeds

University of Surrey meeting: What is Research Software Engineering?

I recently attended an event at The University of Surrey called What is Research Software Engineering? Along with many institutions, Surrey are considering some sort of central Research Software Engineering function, so they invited a few people from the national community to give talks on the topic.

The first talk was by Simon Hettrick, deputy director of the Software Sustainability Institute, who presented the history of research software engineers. We’ve been around forever, of course, but we didn’t rally around a name until 2012. Simon’s talk is essential viewing for anyone who has never heard of the RSE movement, and he is more than happy to come to a university near you.

My talk was called So you wanna build an RSE group?, in which I discussed the history of, and lessons learned from, building the RSE group at The University of Sheffield. Developing this talk was a useful reflection for me, since it allowed me to consider what should be in the strategy for the RSE group that we will be building at Leeds (hiring soon! Join the national RSE group to ensure that you are notified of all RSE career opportunities).

Next up was our host, Evren Imre, who gave a very open and frank talk about his personal experiences of working as a vision researcher, a role in which much of his time was spent developing software.

Finally, we heard from Christian Kroos. In his talk, Here be dragons, he presented some considerations on research software design, based on his experiences developing research software both as a researcher and as a research software developer.