15 Aug

/work Filesystem Outage Has Been Fixed

Early today, Boqueron’s /work filesystem had a hiccup and went offline for a few hours.  This caused some odd behavior, including failed sign-ins through certain software, such as SCP clients.  We’ve fixed the issue, but because jobs run out of /work, jobs that were still running when /work went offline were eventually killed.  Everything should be fine now, and you should be able to resubmit your jobs without problems.  No data loss should have occurred as a result of this error.

Please remember that per HPCf Usage Policies, data in /work is not backed up.  Always make sure to move important data off of /work and into a more permanent storage location, such as a computer in your laboratory or a personal workstation.

As always, if you have any other issues or questions, please let us know at help(at)hpcf.upr.edu.

01 Jun

Boqueron Unscheduled Maintenance During the Morning of June 1, 2016

As you may have noticed, Boqueron underwent unscheduled, urgent maintenance earlier today.  This maintenance unfortunately killed all running and pending jobs in the process.

What happened?

An issue in the way Slurm (the queue manager) interacted with the cluster manager software we use on Boqueron (Bright CM) was causing Slurm’s configuration to be rewritten spontaneously whenever certain actions were taken in the cluster.  Every time this happened, jobs got killed.  Today, we contacted Bright support, and they were kind enough to help us out through a live screen-sharing session.  The changes they had to make required rewriting the Slurm configuration, much like those other times, so earlier today jobs were killed as well.

Is the issue resolved?

Yes.  The issue had been bugging us for a few weeks, but it should now be completely resolved.

Can I submit jobs again?  Won’t they get killed again?

Yes, you may submit jobs again; and no, they should not get killed again.  No system is perfect, but the solution we arrived at today with Bright support should result in continuous, stable queue operation under normal, day-to-day circumstances.

But I’m afraid they’ll get killed again!

You shouldn’t be afraid.  We recognize that the recent shakiness of the queues may have caused user confidence to drop, but again, the core issue should now be resolved and we expect stable times for our cluster. *knocks on wood*

We do apologize for the inconvenience this has created.  For any further questions or comments, feel free to contact us at help(at)hpcf.upr.edu.

18 May

Notice: Boqueron Maintenance Completed – May 18, 2016

We have completed the maintenance work on Boqueron that had been scheduled for today.  Core sharing should now be disabled.  Users may now resubmit jobs to Boqueron.  Everything should be working fine, but we encourage users to submit at least one test job first to ensure everything is working correctly.  Please report any issues or questions to help(at)hpcf.upr.edu.

10 May

Notice of Boqueron Scheduled Maintenance – Wed May 18, 2016

Boqueron will be undergoing scheduled maintenance on Wednesday, May 18, 2016.  Effective immediately, any job whose requested walltime extends past 12:00 am on May 18 will not be allocated resources until maintenance is over.  We have reserved a maintenance window starting at 12:00 am and ending at 1:00 pm the same day, though we anticipate the maintenance operations will take much less time than that.  We will email another notice when maintenance is done so that you may submit jobs again.
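Concretely, a job can still start before the window only if its requested walltime ends before the window opens.  Below is a hypothetical sketch of that check, with dates hardcoded for illustration and GNU date assumed; Slurm performs the equivalent check automatically against the maintenance reservation:

```shell
# A job fits before the maintenance window only if
# (submission time + requested walltime) <= start of the window.
maint_start=$(date -u -d "2016-05-18 00:00" +%s)   # window opens 12:00 am May 18
now=$(date -u -d "2016-05-10 09:00" +%s)           # pretend submission time
walltime_hours=48                                  # e.g. requested via --time=48:00:00

job_end=$(( now + walltime_hours * 3600 ))
if [ "$job_end" -le "$maint_start" ]; then
    echo "job can start before the maintenance window"
else
    echo "job will be held until maintenance ends"
fi
```

In practice this means that shortening a job’s requested time limit may let it run before the window instead of waiting until maintenance is over.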

Reason for Maintenance Window

As some of you may have noticed, Boqueron is currently set up in a way that allows a single compute core to run multiple jobs at once.  That is to say, a user’s job currently does not actually reserve compute cores; the cores are shared among various jobs.  This is not the intended behavior of Boqueron.  Not only does this hurt jobs’ performance, but it also places a heavy load on the compute nodes.

To fix this, Slurm (the resource manager) must be switched off and reconfigured.  Switching off Slurm would kill any jobs running at that moment, so we need a maintenance window to ensure that no jobs are running when it happens.
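For context, disabling core sharing in Slurm generally comes down to its consumable-resource settings.  The fragment below is only an illustrative sketch of the kind of slurm.conf change involved, not Boqueron’s actual configuration:

```
# Illustrative slurm.conf fragment (not Boqueron's real configuration):
SelectType=select/cons_res        # schedule consumable resources...
SelectTypeParameters=CR_Core      # ...with individual cores as the unit
# Partition-level setting that keeps two jobs off the same core:
PartitionName=batch Nodes=ALL Default=YES OverSubscribe=NO
```

With settings like these, each core is allocated to exactly one job at a time, which is what produces the queueing trade-off described in the next section.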

What effect will this change have on future jobs?

After the change, we anticipate that all user jobs will run much faster than they do right now.  There’s a small trade-off, though: since cores will now be reserved, we’ll start seeing jobs actually waiting in line to be allocated.  So far, because of the core sharing, most jobs run almost as soon as they are submitted (but with considerably degraded performance).  We anticipate this will change: jobs will actually have to wait before being allocated, but they will run much faster once they are.

As always, if you have any questions, feel free to contact us at help(at)hpcf.upr.edu.

16 Mar

Boqueron – All Users Have Been Added, plus VASP and Gaussian 09

The build of our new cluster Boqueron has progressed well in the past few weeks, and we are pleased to announce that all currently-registered users have finally been added to Boqueron.  If you believe that you were registered to receive an account but have not received one yet, please write to help(at)hpcf.upr.edu so that we can help you out.

Additionally, Gaussian 09 and VASP 5 are now available on our cluster for those users authorized by the programs’ respective licensors to use them.  If you need any help getting them set up, please let us know.  VASP in particular is still being tested on Boqueron, since some users have run into errors that appear to be known issues with VASP.

So if you will be using VASP (which you may only do if we have received prior written authorization from VASP that you may do so), please keep in mind that we’re still doing a test drive, and please report any errors to us so that we can get VASP running well as soon as possible.