Friday, August 29, 2014

Fair Tree Slurm Fairshare Algorithm

That's right.  Levi Morrison and I created a second Slurm fairshare algorithm, Fair Tree.  Our first algorithm, LEVEL_BASED, was accepted into Slurm and became available in 14.11.0pre3 about one month ago.  Fair Tree was accepted into Slurm in time for 14.11 and replaced LEVEL_BASED.

When given the same inputs, both algorithms produce effectively equivalent outputs.  The objective of both algorithms is the same:  If accounts A and B are siblings and A has a higher fairshare factor than B, all children of A will have higher fairshare factors than all children of B.
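The sibling-ordering property can be sketched as a depth-first ranking: order siblings at each level, then rank an entire subtree before moving on to the next sibling. This is an illustrative sketch only, not the actual Slurm implementation; the node fields and the shares/usage ratio used to order siblings are assumptions for the example.

```python
# Hypothetical sketch (not the actual Slurm code) of the ordering
# property: every descendant of a better-ranked account outranks
# every descendant of a worse-ranked sibling account.

class Node:
    def __init__(self, name, shares, usage, children=None):
        self.name = name
        self.shares = shares        # allocated shares (assumed field)
        self.usage = usage          # accumulated usage (assumed field)
        self.children = children or []

def rank_tree(node, ordered):
    """Sort siblings best-first, then recurse, so a whole subtree is
    ranked before any child of a worse-ranked sibling is considered."""
    for child in sorted(node.children,
                        key=lambda c: c.shares / c.usage,
                        reverse=True):
        if child.children:
            rank_tree(child, ordered)
        else:
            ordered.append(child.name)
    return ordered

root = Node("root", 1, 1, [
    Node("A", 50, 10, [Node("a1", 1, 5), Node("a2", 1, 1)]),
    Node("B", 50, 40, [Node("b1", 1, 1)]),
])

# A outranks B (50/10 > 50/40), so both of A's users precede B's user,
# even though a1's own usage is worse than b1's.
print(rank_tree(root, []))   # ['a2', 'a1', 'b1']
```

Note that `a1` lands ahead of `b1` purely because its parent account A outranks B, which is exactly the guarantee described above.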

So why bother writing a new algorithm three months after the first one if the first algorithm successfully solved the same problems?

Friday, June 20, 2014

LEVEL_BASED Slurm Prioritization Algorithm

Levi Morrison and I have co-authored a new prioritization mechanism for Slurm called LEVEL_BASED.  To see why it is necessary, please see my other post about the problems with algorithms that existed at the time of its creation.

DEPRECATED:  This has been deprecated by our new algorithm, Fair Tree.  Yes, we really did replace this algorithm within a few months even though it worked great for us.  See the post about Fair Tree for details.


Problems with Existing Slurm Prioritization Methods

UPDATE: LEVEL_BASED was replaced by Fair Tree, an even better algorithm that we created.

Levi Morrison and I have co-authored a new prioritization mechanism for Slurm called LEVEL_BASED.  In order to understand why LEVEL_BASED is necessary, I have chosen to write this post about our issues with the existing options and a separate post about LEVEL_BASED.  If you just want to see information about LEVEL_BASED, see the post LEVEL_BASED Slurm Prioritization Algorithm.

We want users from an account that has higher usage to have lower priority than users from an account with lower usage.  There is no existing algorithm that consistently does this.
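A toy calculation shows how this can go wrong. The formula below is a deliberately simplified blended-usage fairshare, an assumption for illustration only and not the exact math of any Slurm plugin; the weighting and numbers are made up.

```python
# Toy illustration (simplified formula, NOT Slurm's actual math) of the
# problem: blending a user's own usage with the account's usage can let
# a user in a heavily used account outrank a user in a lightly used one.

def fairshare(shares, usage):
    # Classic exponential fair-share form: higher usage => lower factor.
    return 2 ** (-usage / shares)

# Account A has used far more of the cluster than account B.
acct_usage = {"A": 0.8, "B": 0.2}

def effective_usage(acct, user_usage, weight=0.5):
    # Illustrative blend of account-level and user-level usage.
    return weight * acct_usage[acct] + (1 - weight) * user_usage

# Idle user in heavy account A vs. busy user in light account B:
f_a = fairshare(0.5, effective_usage("A", 0.0))   # user ran nothing
f_b = fairshare(0.5, effective_usage("B", 0.9))   # user ran a lot

print(f_a > f_b)  # True: A's user outranks B's user despite A's high usage
```

Under this kind of blending, the idle user in the heavy account beats the busy user in the light account, which is the opposite of the account-level ordering we want.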

Thursday, May 22, 2014

Job Script Generator for Slurm and PBS published on GitHub

We published version 2.0 of our batch job script generator on GitHub.  It is a JavaScript library (LGPLv3) that allows users to learn Slurm and PBS syntax by testing various inputs in an easy-to-understand manner.  Links: git repo, demo, other GitHub projects of ours.

Thursday, April 17, 2014

Scheduler Limit: Remaining Cputime Per User/Account

Update May 9, 2014:  Added a link to our GrpCPURunMins Visualizer

I have discussed this with several interested people recently so it's time for me to write it up.  When running an HPC batch job scheduling system such as Slurm, Moab, Maui, or LSF, there are many ways to configure user limits.  Some of the easiest limits to understand are on the number of jobs a user can run or the maximum cores or nodes that they can use.  We have used a different limit for several years now that is worth sharing.

No one likes to see a cluster that is 60% utilized while users sit in the queue, unable to run because they are hitting a core count limit.  At the other extreme, a site with no user limits lets one lucky user fill the cluster with $MAX_WALLTIME-day jobs during a brief lull in usage; users who submit jobs five minutes later must then wait up to $MAX_WALLTIME days for that user's jobs to finish.  The typical fix is to limit the core or node count per user/account, but we use a limit that vastly improves the situation.
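The idea behind a remaining-cputime limit of this kind (GrpCPURunMins in Slurm) is to cap the total core-minutes *still outstanding* across an account's running jobs, rather than the cores in use. The sketch below is a hedged illustration of that admission test; the job fields, time units, and numbers are assumptions, not Slurm internals.

```python
# Sketch of a GrpCPURunMins-style limit: cap the sum of
# cores * remaining-minutes over running jobs.  Many short jobs or a
# few long ones fit under the cap, but many long ones do not.

def remaining_core_minutes(jobs, now):
    """Total cores * minutes-left across currently running jobs."""
    return sum(j["cores"] * max(j["end"] - now, 0) for j in jobs)

def can_start(jobs, new_job, now, limit):
    """Admit a new job only if the outstanding core-minutes of the
    running jobs plus the new job's full request fit under the limit."""
    requested = new_job["cores"] * (new_job["end"] - now)
    return remaining_core_minutes(jobs, now) + requested <= limit

running = [
    {"cores": 16, "end": 600},   # 16 cores, 600 minutes left at t=0
    {"cores": 8,  "end": 120},
]
limit = 20_000  # core-minutes the account may have outstanding (made up)

long_job  = {"cores": 32, "end": 1440}   # 32 cores for a full day
short_job = {"cores": 32, "end": 60}     # 32 cores for one hour

print(can_start(running, long_job, 0, limit))    # False: 10560 + 46080 > 20000
print(can_start(running, short_job, 0, limit))   # True:  10560 + 1920 <= 20000
```

A useful consequence: as running jobs burn down their walltime, the outstanding total shrinks on its own, so queued jobs gradually become eligible instead of the account being hard-capped at a fixed core count.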