The HPCf Helpdesk is experiencing unexpected downtime. We are currently hard at work fixing the issue, but until everything has been resolved, we will not be able to check the email address firstname.lastname@example.org for incoming requests. Until the maintenance work has finished, please direct any questions, requests, comments, or issues to the temporary address email@example.com. We are sorry for the inconvenience, and we thank you for your patience and continued support of the HPCf.
The staff of the data center where Boqueron is hosted will be carrying out electrical work next week that will impact Boqueron. To accommodate this work and to protect as many jobs as possible from being killed mid-run, we have opened a maintenance window starting at 7:30 am on Monday, January 30th and ending at 12:00 pm on Tuesday, January 31st. Any newly submitted jobs that cannot complete before the maintenance window begins will be held in the queue until the window ends.
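In practice, under Slurm a newly submitted job can still start before the window if its requested wall-time limit is short enough for the job to finish in time. A minimal sketch; the script name and time limit below are placeholders, not Boqueron-specific values:

    # Request a 4-hour wall-time limit; if the job can finish before
    # 7:30 am Monday, Slurm can still schedule it ahead of the window.
    sbatch --time=04:00:00 my_job.sh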
During this window, the Boqueron login node will continue to operate, and the /home and /work file systems should remain available, so you will still be able to access your files. The worker nodes, however, will be powered down.
Jobs that are currently running will be allowed to continue, but any that are still running when the maintenance window begins will unfortunately be killed.
We realize this announcement comes on short notice, but please understand that HPCf staff were only notified of this electrical work yesterday afternoon.
We apologize for any inconvenience you may experience from this maintenance window, and we thank you for your cooperation. As always, if you have any questions or comments, please send them to firstname.lastname@example.org.
An unexpected outage in the data center where Boqueron is hosted took down 60 of Boqueron's nodes over the weekend. All jobs running on those nodes were killed. We have since brought the nodes back online, and they should now be working as usual.
To make sure the entire cluster has recovered from the outage, the 20 nodes that remained up during the outage (nodes 41-60) will be rebooted at a later time, so they will remain closed to new jobs until then (they will be listed as either "draining" or "drained" by Slurm). The jobs currently running on those nodes will be allowed to finish uninterrupted.
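If you would like to see which nodes are affected, sinfo (a standard Slurm command) shows each node's state; the output format string below is just one convenient choice:

    # List every node with its state; nodes shown as "draining" or
    # "drained" will not accept new jobs.
    sinfo -N -o "%N %T"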
The outage did not affect Boqueron's login node or its /home and /work file systems. Other HPCf services, such as our website, were affected, however; they are also back online, and you should not experience any issues when using them.
We are currently in talks with the administrators at the data center to assess the situation and prevent it from occurring again.
As always, if you have any questions, problems, or comments, you may write to us at email@example.com.
Earlier today, Boqueron's /work file system had a hiccup and went offline for a few hours. This caused some odd behavior, including an inability to sign in through certain software, such as SCP clients. We have fixed the issue, but because jobs run out of /work, the jobs that were still running when /work went offline were eventually killed. Everything should be fine now, and you should be able to resubmit your jobs without problems. No data loss should have occurred as a result of this error.
Please remember that, per the HPCf Usage Policies, data in /work is not backed up. Always move important data off /work and into a more permanent storage location, such as a computer in your laboratory or a personal workstation.
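For example, rsync over SSH works well for this kind of backup. A sketch only; the username, hostname, and paths below are placeholders, not actual Boqueron values:

    # Run from a machine in your lab: copy results out of /work into a
    # local backup directory. -a preserves permissions and timestamps,
    # -v prints each file as it is transferred.
    rsync -av myuser@boqueron.example.org:/work/myuser/results/ ~/results_backup/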
As always, if you have any other issues or questions, please let us know at firstname.lastname@example.org.
As you may have noticed, Boqueron underwent unscheduled, urgent maintenance earlier today. This maintenance unfortunately killed all running and pending jobs in the process.
An issue in the way Slurm (the queue manager) interacts with the cluster management software we use on Boqueron (Bright CM) was causing Slurm's configuration to be rewritten spontaneously whenever certain actions were taken in the cluster. Every time this happened, jobs were killed. Today we contacted Bright support, and they were kind enough to help us through a live screen-sharing session. The changes they made required the Slurm configuration to be rewritten, just like those earlier incidents, and so jobs were killed earlier today as well.
Is the issue resolved?
Yes. The issue had been bugging us for a few weeks, but it should now be completely resolved.
Can I submit jobs again? Won’t they get killed again?
Yes, you may submit jobs again; and no, they should not get killed again. No system is perfect, but the solution we arrived at today with Bright support should result in continuous, stable queue operation under normal, day-to-day circumstances.
But I’m afraid they’ll get killed again!
You shouldn't be. We recognize that the recent shakiness of the queues may have caused user confidence to drop, but again, the core issue should now be resolved, and we expect stable times ahead for our cluster. *knocks on wood*
We do apologize for the inconvenience this has created. For any further questions or comments, feel free to contact us at email@example.com.
We have completed the maintenance work on Boqueron that had been scheduled for today. Core sharing should now be disabled. Users may now resubmit jobs to Boqueron. Everything should be working fine, but we encourage users to submit at least one test job first to ensure everything is working correctly. Please report any issues or questions to firstname.lastname@example.org.
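A minimal test job might look like the following (the job name and time limit are arbitrary; adjust to taste):

    #!/bin/bash
    #SBATCH --job-name=test
    #SBATCH --ntasks=1
    #SBATCH --time=00:05:00
    # Print the node we landed on and the CPUs we were given, to confirm
    # that scheduling and core reservation behave as expected.
    hostname
    echo "CPUs allocated: $SLURM_CPUS_ON_NODE"

Save it as test.sh, submit it with sbatch test.sh, and once it completes, check the output file (slurm-<jobid>.out by default).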
Boqueron will be undergoing scheduled maintenance on Wednesday, May 18th, 2016. Effective immediately, any job that cannot complete its run by 12:00 am on May 18th will not be allocated until maintenance is over. We have reserved a maintenance window starting at 12:00 am and ending at 1:00 pm that same day, though we anticipate the maintenance will take much less time than that. We will email another notice when maintenance is done so that you may submit jobs again.
Reason for Maintenance Window
As some of you may have noticed, Boqueron is currently set up in a way that allows a single compute core to run multiple jobs at once. That is to say, a user's job does not actually reserve compute cores; instead, cores are shared among various jobs. This is not the intended behavior of Boqueron. Not only does this hurt job performance, it also places a heavy load on the compute nodes.
To fix this, Slurm (the resource manager) must be switched off and reconfigured. Switching off Slurm would kill any jobs that are running at the moment, so we need to create a maintenance window to ensure that no jobs are killed in the process.
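For the curious: in Slurm, this kind of change typically comes down to the node selection plugin. The fragment below is purely illustrative (Boqueron's actual configuration has not been published); it shows the usual way to make allocated cores exclusive to a single job:

    # slurm.conf (illustrative fragment only)
    # select/cons_res with CR_Core tracks allocations per core, so a
    # core handed to one job is not shared with another.
    SelectType=select/cons_res
    SelectTypeParameters=CR_Core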
What effect will this change have on future jobs?
After the change, we anticipate that all user jobs will run much faster than they do right now. There is a small trade-off, though: since cores will now be reserved, we will start seeing jobs actually waiting in line to be allocated. Until now, because of the core sharing, most jobs have run almost as soon as they were submitted (but with considerably degraded performance). We anticipate this will change: jobs may have to wait before being allocated, but they will run much faster once they are.
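Once this change is in place, you can keep an eye on where your jobs stand with squeue, e.g.:

    # Show your jobs with their state (PD = pending, R = running) and,
    # for pending jobs, the reason they are waiting.
    squeue -u $USER -o "%i %t %r"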
As always, if you have any questions, feel free to contact us at email@example.com.
The HPCf mail server is currently undergoing emergency maintenance as of 02/16/15 @ 14:00 PR time. Please follow our site for further updates.
==UPDATE 02/16/2015 @ 16:50==
The HPCf mail server is no longer under emergency maintenance, but it is still under observation. Please report any problems to help(at)hpcf.upr.edu.