Thursday, October 17, 2013

User Fencing Tools (UFT) on github

I just published a set of scripts, programs, config file examples, etc. that I wrote for use at BYU but that should be useful to other HPC sites.  I couldn't think of a better name for it, so I called it the User Fencing Tools (UFT).  It is available in our github repo at https://github.com/BYUHPC/uft.

The tools control users on HPC login nodes and compute nodes in various ways, using cgroups, namespaces, and cputime limits to ensure that users don't negatively affect each other's work.  We limit memory, CPU, disk, and cputime for users.
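
As a rough sketch of the cgroups and limits piece (the user name and numbers below are made up for illustration; the actual configs are in the repo):

    # create a per-user cgroup and cap its memory (example values)
    cgcreate -g memory,cpu:users/alice
    cgset -r memory.limit_in_bytes=8G users/alice
    cgset -r cpu.shares=100 users/alice

    # /etc/cgrules.conf: cgred (cgrulesengd) then moves alice's processes
    # into her group automatically, regardless of how they were launched
    alice    memory,cpu    users/alice

    # /etc/security/limits.conf: hard per-process cputime cap, in minutes
    *    hard    cpu    600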

UFT also has examples of how to control ssh-launched processes on compute nodes.  With Torque you can account for those processes but you can't control them (just like normal).  SLURM will have accounting and resource enforcement for these in 13.12 (Dec. 2013).

Wednesday, August 21, 2013

IPMI over LAN vulnerability and some BMC "features"

I don't want to pull away credit or page views from Dan Farmer's great work, but this needs more exposure...
For those of you who manage servers with IPMI over LAN enabled, there is a very severe vulnerability that may allow anyone full root access to your iLO/iDRAC/IMM/ILOM/whatever (aka BMC).  This is independent of the OS, though once the BMC is rooted the attacker can take over the OS the same way they would if they had physical access.  They can control power, boot settings, serial over LAN, BIOS settings (via serial), and KVM, and can even read/write arbitrary system memory.
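
If you're not sure whether IPMI over LAN is even enabled on a box, a quick in-band check looks something like this (channel 1 is just a common default; your BMC may use a different channel number):

    # load the in-band IPMI drivers, then dump the LAN settings for channel 1
    modprobe ipmi_si ipmi_devintf
    ipmitool lan print 1
    # a routable IP address here generally means the BMC is reachable over the network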

For those of you who do not have IPMI over LAN enabled, there may be some stuff that affects you too...

Wednesday, July 24, 2013

Server Room and Three Phase Power for Systems Administrators

There doesn't seem to be much educational material about server room power that is comprehensible to systems administrators.  I don't think there is a "typical" sysadmin type out there, but I'm guessing that most have had little to no formal training on server room power.  Three-phase power can seem like black magic and lots of incorrect assumptions get made, so I decided to write this post.  Hopefully it will be useful to some sysadmins out there.
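
To give a taste of the math involved (the circuit below is hypothetical, with numbers picked only for illustration):

    P  =  √3 × V(line-to-line) × I
       ≈  1.73 × 208 V × 30 A
       ≈  10.8 kW    (before the usual 80% continuous-load derating)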

Tuesday, July 16, 2013

Per-user /tmp and /dev/shm directories

Updated Oct 7, 2013: Tons of updates
Updated March 19, 2014: The recommended configuration has been in production for months now and works great

I recently discovered a great feature in Linux that allows for per-process namespaces (aka polyinstantiation).  Different processes on the same machine can have different views of a filesystem, such as where /tmp and /dev/shm are.  You can easily make it so that each user on a shared system has a different /tmp that, to each of them, really looks like (and is) /tmp.  This isn't done by setting an environment variable; it redefines mount points on a per-process basis such that each user's processes use their own directory as /tmp.
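
The feature is exposed through pam_namespace; a minimal sketch looks like this (paths and options here are illustrative, not necessarily the recommended production config mentioned above):

    # /etc/security/namespace.conf: per-user instances of /tmp and /dev/shm
    # (the *-inst parent directories must already exist, mode 000, owned by root)
    /tmp       /tmp/tmp-inst/       user    root
    /dev/shm   /dev/shm/shm-inst/   user    root

    # /etc/pam.d/sshd (and other login services): enable it for new sessions
    session    required    pam_namespace.so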

Wednesday, July 10, 2013

Installing a Xeon Phi (MIC) Card in a Dell PowerEdge R720

We got an early release of Dell's Phi installation kit with installation instructions that weren't all that great (to say the least). Dell told me that they are working on better instructions.  In case you're confused, here you go.

A few things to note:
  • We have dual 95W CPUs.  These instructions might be different (correct?) for higher-wattage CPUs (larger heat sinks, different plastic baffles?).
  • The extra heat sinks are for the CPUs, not the Phi.  Our 95W CPUs did not need them.
  • The 2.5" and 3.5" mounting brackets are not necessary in our configuration.
  • We used a different bracket that was provided.
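
Once everything is reassembled, a quick generic sanity check (not part of Dell's kit) that the host sees the card:

    # the Phi shows up as a PCI co-processor device
    lspci | grep -i co-processor
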
Here are some pictures of what it should look like:

Tuesday, May 14, 2013

RHEL 6.2 - Linux Kernel Problem?

We experienced several problems when we upgraded to Red Hat Enterprise Linux 6.2 from CentOS 5.4.  A user of ours started reporting slowness on some of his larger HPC jobs.  We looked at tons of things, then started noticing that one or more nodes would start swapping for no apparent reason.  His job would only use about 60-70% of the memory on each node, but some nodes would inexplicably swap (diagnosed with vmstat).  I talked to people at other universities and HPC sites and verified that a similar problem was occurring on their RHEL 6.2 installations.
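
For reference, the swapping itself is easy to spot with something like this on an affected node (the 5-second interval is arbitrary):

    vmstat 5
    # nonzero values in the "si"/"so" (swap in/out) columns while the job is
    # nowhere near the node's total memory is the symptom we were chasing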