Posts Tagged ‘maintenance’

The Art of Scaling

Thursday, April 19th, 2012

Note: this is a purely anecdotal post about our struggles with some performance bottlenecks over the last few months. If you're not interested in such background information, feel free to skip it.

You might have noticed that since about January 2012 using our file and mail servers hasn't been as smooth as usual. This posting will give you some background information concerning the challenges we encountered and why it took so long to fix them. Let's begin with the file server.

Way back in the day (i.e. five years ago), when the total file server data volume at D-PHYS was about 10 TB, we used individual file servers to store this data. When one server was full, we got a bigger one, copied all the data over, and life was good for another year or two. Today, the file server data volume (home and group shares) is above 150 TB and growing fast, so this strategy no longer works: individual servers don't scale, and copying this amount of data alone takes weeks. That's why in 2009 we started migrating the 'many individual servers' setup to a SAN architecture in which the file servers are just huge hard drives (iSCSI over Infiniband, for the technically inclined) connected to a frontend server that manages space allocation and the file system. The same is true for the backup infrastructure, where the data volume is even bigger.
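For the technically curious, the building blocks of such a setup look roughly like this. This is a hedged sketch, not our actual configuration: the portal address, target IQN and volume names are made up, and the post doesn't say which volume manager the frontend uses (LVM is assumed here).

```shell
# On the frontend server: discover and attach a storage box exported via iSCSI.
# 10.0.0.11 and the IQN below are placeholder values.
iscsiadm -m discovery -t sendtargets -p 10.0.0.11
iscsiadm -m node -T iqn.2010-12.ch.ethz.phys:store01 -p 10.0.0.11 --login

# Pool the new block device into the existing volume group and grow the
# logical volume holding the shares; a new storage box becomes extra
# capacity without copying any existing data.
pvcreate /dev/sdb
vgextend vg_shares /dev/sdb
lvextend -L +20T /dev/vg_shares/groups
```

The point of the architecture is in the last three lines: adding a server no longer means migrating data, just extending the pool.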

This new setup had to be developed, tested and put in place as seamlessly and unobtrusively as possible while ensuring data access at all times (apart from single hour-long migrations). The SAN architecture was implemented for Astro in December 2010 and has been running beautifully ever since. In 2011 we laid the groundwork to adopt this system for the rest of D-PHYS's home and group shares and after a long and thorough testing period the rollout happened on January 5, 2012. Unfortunately, that's when things got ugly.

At first, we noticed some exotic file access problems on 32-bit workstations. It took us some time to understand that the underlying issue was an incompatibility with the new filesystem, which uses 64-bit addresses for its data blocks. As a consequence we had to replace the filesystem of the home shares. Independently, we ran into serious I/O issues with the installed operating system, so we had to upgrade the kernel of the frontend server and move the home directories onto a dedicated server. In parallel, we had to incorporate some huge chunks of group data while always making sure that nightly backups were available. All this necessitated a few more migrations until we finally achieved a stable system on March 28.
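As an illustration of this class of problem (our guess at the mechanism, since the post doesn't name the exact failure): on filesystems that hand out 64-bit inode numbers, a legacy 32-bit stat() call fails with EOVERFLOW for any file whose inode number doesn't fit in 32 bits. A one-liner like the following, with /mnt/home as a placeholder mount point, lists the files an unpatched 32-bit client could no longer open:

```shell
# Print files whose inode numbers exceed 32 bits; these are the ones a
# legacy 32-bit stat() would reject with EOVERFLOW. /mnt/home is a placeholder.
find /mnt/home -xdev -printf '%i %p\n' | awk '$1 > 4294967295 { print $2 }'
```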

The upshot: what we had hoped to be a fast and easy migration turned out to cause a lot of problems and take much longer than anticipated, but now we have a stable and solid setup that will scale up to hundreds or even thousands of TB of data.
See live volume management and usage graphs for our file servers.

As for the mail server, matters are to some extent related and partly just coincidental in time. The IMAP server needs access to the home directories and hence also suffered when their performance was impaired. But even after the file server issues were solved, we still saw isolated load peaks on the IMAP server that prevented our users from working with their email. Again, we put a lot of time and effort into finding the cause. As of April 13 we're back to good performance, and we arrive at the following conclusions:

Particular issues:

  • a silently failing hard disk in the mail server RAID seems to have impaired performance
  • CPU load of the individual virtual machines on the mail server was not distributed optimally across the available CPU cores
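Assuming a libvirt-based hypervisor (the post doesn't name the virtualization stack), pinning guest vCPUs to distinct host cores is one way to even out such load; the domain name and core numbers below are made up:

```shell
# Inspect the current vCPU-to-core placement of a guest
virsh vcpuinfo mailstore

# Pin vCPU 0 of the guest to host core 2 and vCPU 1 to core 3,
# so that two busy guests no longer compete for the same cores
virsh vcpupin mailstore 0 2
virsh vcpupin mailstore 1 3
```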

General mail server load:

  • while incoming mail volume hasn't increased much, outgoing mail volume has grown by 50% in the last year alone
  • more and more sophisticated spam requires more thorough virus and spam scanning, increasing the load on the mail server
  • our users have amassed 1.1 TB of mail storage (up from 400 GB in January 2010), all of which needs to be accessed and organized

Bottom line:

We'd like to thank you for your patience during the last four months and apologize for any inconvenience you had to endure. In all likelihood the systems will be a lot more stable in the future, but of course we're constantly working to ensure the D-PHYS IT infrastructure can keep up with the fast-growing demand for disk space (the data volume has tripled in the last year alone). We've learned a lot, and we'll put it to good use.

Mail Server Maintenance Downtime this Evening

Friday, April 13th, 2012

For some hardware and other maintenance work, we have scheduled a downtime of our mail server for this evening (Friday, April 13, 2012) after 6pm.

The downtime will likely take less than one hour. During the downtime you will neither be able to access your mails on the server nor to send mails via our server. Mails sent to the Dept. of Physics won't get lost, but will be delayed.

Migration of Home Directories

Friday, March 16th, 2012

In order to restructure the filesystem of the home directories, we have scheduled a migration for

Wednesday, 28. March 2012, starting at 17:00 and lasting for several hours.

During this time the home directories (winhome, machome, unixhome), the mail services and some websites will not be available.

To protect you from losing or corrupting any of your files, we strongly recommend you close all open files on the home directories before the migration.

Since we have switched to generic names for our services, the home directories will remain accessible in the same way after the migration; you don't have to change anything.

Update 19:40 Migration completed. Every test we could think of passed. Please let us know if you find any remaining issues. Thanks for your patience.

Short maintenance downtime on Sun, Feb 12

Friday, February 10th, 2012

Yesterday's outage was traced to a flaky voltage controller on one of our RAID adapters. We have scheduled a short maintenance downtime for

Sunday, Feb 12, around 13:00

in order to replace the faulty controller. Most services will be affected.

Update 14:57 Cleanup took a bit longer than expected, but now all systems are back up again.

File server migration

Tuesday, January 24th, 2012

In order to solve our recent file server problems, we have scheduled another migration for

Wednesday, January 25, starting at 17:00 and lasting for several hours.

During this time, you will not have access to your home or group directories, and also email will only work intermittently. Please stop all running jobs and log out prior to the migration.

Update 20:30 Migration completed. Every test we could think of passed. Please let us know if you find any remaining issues. Thanks for your patience.

Emergency file server migration

Thursday, January 12th, 2012

On Jan 5, after weeks of thorough planning and rigorous testing, we migrated the home directories and group shares to our new SAN system. Soon afterwards, the first phone calls started coming in. The initial problem was very exotic and affected very few people (which is why we had no chance of detecting it during the testing period), but the action we took to address it unfortunately caused a cascade of consecutive faults that led to the instabilities you have had to endure for a week now, and for which we are truly sorry. We now know how to fix the underlying problem, but we cannot operate on the running server. That's why we have to schedule an

emergency file server migration on Sat, Jan 14, starting at 07:00 and probably lasting well into the afternoon.

During this time, you will not have access to your home or group directories, and also email will only work intermittently. Please stop all running jobs and log out before Saturday morning.

We apologize for the suboptimal performance since Jan 5. You have every right to expect better, but this caught us completely off guard. Thank you for your understanding.

Update, Sat 14:15: mounts and email are up and running again. The problem on 32-bit machines still persists, but we have an idea how to fix it on Monday.

Update, Fri 20.01: we (and hence you) are still suffering from severe stability problems on the file server. We are hard at work and now have a plan that we really hope will solve the problems. There will be another migration sometime next week. We're truly sorry for the inconvenience you have to endure.

Network Interruption Today from 7pm to 8pm

Wednesday, December 7th, 2011

Today, the 7th of December 2011, around 7pm, there will be a complete network interruption in the whole Department of Physics for about one hour. The central ETH IT Services (“Informatikdienste”) will replace the hardware of the core router to the HPx network zone (includes the HIT building).

Wireless LAN should not be affected, but as the servers will be offline too, you won't have access to files or mails on the servers, i.e. don't expect to be able to work during the network downtime. The technicians will reconnect the servers first, so access to the servers from outside the Department or via WLAN will be restored before 8pm. Workstations and printers will get network access back afterwards.

plompy terminal server: unscheduled maintenance reboot

Friday, November 4th, 2011

Today at 13:00 we had to reboot plompy, one of our LTSP terminal servers. We're sorry for any inconvenience. In order to prevent those terminal servers from clogging up memory in the future, we hereby announce the following maintenance reboot schedule:

plompy: first Sunday of each month, 02:00 am
plimpy: third Sunday of each month, 02:00 am

There will always be one functioning terminal server, and upon login your thin client will choose the right one. Please don't forget to save your data when leaving for the weekend before those two Sunday dates (that's always a good idea anyway).
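A schedule like this can be expressed as a root cron job. The following is a hypothetical sketch (the post doesn't say how the reboots are actually triggered); since cron has no "first Sunday" field, the job runs every Sunday and checks the day of the month itself:

```shell
# Root crontab on plompy: reboot at 02:00 on the first Sunday of the month.
# Day-of-month <= 7 on a Sunday means it is the first Sunday.
# (% must be escaped as \% inside a crontab line.)
0 2 * * 0  [ "$(date +\%d)" -le 7 ] && /sbin/shutdown -r now
```

On plimpy the test would be day 15 to 21, matching the third Sunday.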

Mail Server Upgrade on Wednesday, 5th of October

Monday, October 3rd, 2011

On Wednesday, 5th of October 2011, starting at 16:30, we will upgrade the operating system on several servers of our mail server cluster. This will result in temporary unavailability of most e-mail related services we provide: sending and receiving e-mails, mailing lists and webmail access.

Due to the maintenance, e-mails may be delayed and arrive a few hours later than usual.

Update, 21:55: The upgrades of the incoming and IMAP/webmail/mailing list servers were successful. Everything is back to normal.

New SSL and HTTPS certificates for many ISG D-PHYS services

Thursday, September 30th, 2010

In the past, all HTTPS-secured web sites hosted or provided by us used certificates issued by ourselves. This caused unsettling warnings in most browsers, as users had to manually add the root certificate of our certification authority (CA) to their web browser.

To allow SSL certificates other than those signed by ourselves, namely certificates automatically accepted by all browsers, but also community-backed CACert certificates, issued by ETH ID, we will change the configuration of our web server zwoelfi this evening. This may cause some short interruptions on some of the hosted sites, but they should not last long.

Some of these web sites will already receive new SSL certificates issued by QuoVadis (accepted by nearly all browsers by default) this evening.

Update, Friday, 1. Oct. 2010, 21:00: Due to several unexpected issues with the new QuoVadis certificate, the web server is for now running with the old ISG-signed SSL certificate on all virtual hosts again.

Update, Thursday, 7. Oct. 2010, 23:00: Most of the issues with the new QuoVadis certificate have been solved, and all virtual hosts planned for the QuoVadis SSL certificate are using it again.
