MLG Software : Fileserver


General

Our group 'owns' several disks on CGA.MEDIA.MIT.EDU, a central fileserver (an alpha, in fact) operated by systems programming (syspro). CGA is located in E15-040 - it's about the third machine from the left in the row of fileservers. We share it with the following groups: Interactive Cinema, Physics & Media, and Advanced Human Interface.

We have four disks, as in this table:

  Path   Type      Size    Notes
  /ti    RZ59      1.3 G   parent partition; home directories
  /re    RZ59      1.3 G   /usr/local partitions
  /mi    RZ59      1.3 G   large projects
  /ut    ST11200   4.0 G   new overflow (includes UROP and class student homes)

Before we had /ut, our fourth disk was an RZ57 mounted as /so; since the disk racks were full, we had to swap /so out when we scored /ut. /so used to hold sound files, and its final contents were copied to /ut/so. I mention this because there are still /so mount points everywhere.

Paths

On Unix machines, these filesystems get mounted via the automounter, a process that watches for accesses to certain directories and makes a temporary NFS mount at those directories as necessary. On all the MLG machines there are symbolic links from the paths given above (/ti etc.) to the necessary automount directories; on other machines at the Media Lab, they are accessible through the lab-wide path /mas/bvg (so our disks are /mas/bvg/ti etc.). On the fileserver itself they actually live under /bvg, but you shouldn't normally have any cause to look there.
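
As a rough illustration of how the same disk is reached from different vantage points (the link target and mount behaviour in the comments are illustrative; the details depend on the automounter configuration):

    # On an MLG machine, /ti is a symbolic link into the automount space:
    ls -ld /ti            # shows something like:  /ti -> /mas/bvg/ti
    # From any other Media Lab machine, use the lab-wide path directly;
    # the automounter NFS-mounts cga:/bvg/ti behind it on demand:
    ls /mas/bvg/ti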

About which files belong on which disk

Basically, it doesn't really matter, but seeing as we have actual separate physical disks, there's this nagging feeling that we should try and use them for different things. My ideal setup would be to have the large 4 G disk as /ti, then have everyone's home directories there, and it would never fill up (ha!). But I couldn't actually figure out how to replace a partition with a new disk while keeping the same name (the disks are mounted using AdvFS, the journalled Advanced File System, which is apparently very advantageous, but means that it is very far from clear how to actually mount them or move them).
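
For reference, here is a minimal sketch of how an AdvFS fileset is created and mounted on a Digital Unix machine; the device, domain and fileset names below are made up, and this is not how our disks were actually configured:

    mkfdmn /dev/rz10c bvg_dmn            # create an AdvFS file domain on a raw disk
    mkfset bvg_dmn ti                    # create a fileset within that domain
    mount -t advfs bvg_dmn#ti /bvg/ti    # mount it; note the domain#fileset syntax

The point is that what gets mounted is a domain#fileset pair rather than a plain disk partition, which is part of why renaming or shuffling the mounts is less obvious than with ordinary Unix filesystems.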

Having said that, here are the recommended usages as they stand:

/ti
The preferred mount point; thus even though system-type files live on /re, it is preferred practice to access them as /ti/local/iris/bin rather than /re/local/iris/bin. This would allow us to change the symbolic link, /ti/local, at some later date (e.g. if we wanted to move the files from /re to /ut) and nothing would break.

/ti also has the master directories for other central functions such as public ftp (/ti/ftp), web pages (/ti/http) and various other central filestructures that never really took off.

/re
System files; thus on our Unix workstations, /usr/local is actually a link to /re/local/$MACHTYPE (via /ti/local/$MACHTYPE), where $MACHTYPE is one of ds, iris, alpha or common. Usually, the shell environment variable MACHTYPE is set to the result of /mas/bin/common/machtype or /usr/local/bin/machtype (see the sketch after these disk descriptions).

/mi
Our original overflow disk; it contains user files in root-level directories named after their owners.

/ut
Our new overflow disk; it contains the soundfiles formerly on /so and the home directories of UROPs (in /ut/urop) and of students in Barry's class (in /ut/class). Presumably, other users will put their project files here at the root level.
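
To make the /re indirection concrete, here is a hedged sketch of how the links resolve and how MACHTYPE might get set in a login script; the machine type 'iris' is just an example, and the dotfile logic is illustrative rather than what our setup actually contains:

    # On an iris workstation the links resolve roughly as:
    #   /usr/local -> /ti/local/iris,   /ti/local -> /re/local
    # so /usr/local ends up pointing at /re/local/iris.
    # A login script could set MACHTYPE along these lines:
    if [ -x /mas/bin/common/machtype ]; then
        MACHTYPE=`/mas/bin/common/machtype`
    else
        MACHTYPE=`/usr/local/bin/machtype`
    fi
    export MACHTYPE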

Access from Macintoshes

CGA runs CAP, the Columbia AppleTalk Package, which serves Unix filesystems to Macs, so you should be able to access our filesystems via the Chooser. However, CAP relies on Unix accounts, so whether this works depends on whether you have an account in CGA's /etc/passwd (which in general mere mortals do not), and it can get quite ugly beyond that.

Backups

It used to be that syspro would look after making regular backups of the central fileservers. This has been decidedly patchy of late, although I believe it still goes on. I also run incremental backups each night that I remember. Check out the script /ti/u/dpwe/bvg-bu1, which is run as root on eno to back up onto the local DAT drive on Kew (it has to run on an alpha because of the weird AdvFS dump command); it makes a level 1 dump of all four fileserver disks as well as the local disks on SOUND and KEW. The scripts kew-bu0 and sound-bu0 (in the same place) run level zero (complete, baseline) backups of those two machines; the incremental dumps encompass just the files that have changed since the last recorded level 0 dump. I leave the level 0 dumps of the fileserver disks to syspro - bvg-bu1 reports just how long ago these were, so you can consider making your own.
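
For orientation, the AdvFS dump command in question is (I believe) vdump; a level 1 incremental of one fileserver disk would look roughly like the line below. The tape device name is a guess, and this is only a sketch of the idea, not what bvg-bu1 actually runs:

    vdump -1 -u -f /dev/nrmt0h /ti
    # -1  dump only files changed since the last level 0 dump of this filesystem
    # -u  record the dump date, so later incrementals know their baseline
    # -f  write to the named tape device instead of standard output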

Addendum, 1995oct03: I believe syspro has officially ceased making level 0 backups of the fileserver, and it is now the responsibility of each group to ensure that this happens. Level 0 backups are more of a problem than the nightly level 1 backups because every file on the system is written out, which quickly fills the average DAT tape.

Luckily, the lab has invested in these neato auto-changing DAT drives. Each fileserver has one, which takes a cartridge 'brick' containing six DAT tapes. The drives take 120m tapes (nominal 4 G capacity) and do data compression, so you can probably get 6 G or more of filesystem onto each tape. It's wise to use the sixth cartridge slot for a cleaning tape, so a single brick - five data tapes at 4-6 G each - provides for an uninterrupted run dumping 20-30 G of storage. This is just as well, as CGA currently has about 20 G of disk online.

I ran a level 0 dump last night, using the script /ti/u/dpwe/cga-bu0 . This appears to work; all you have to do is go down to the fileserver room (E15-040, combo 2468*) and insert a populated brick into the DAT drive sitting atop the fileserver stack labelled CGA. You also have to press the 'load tape' button to make the second-highest green LED light up. Then you can start the script. If it works, it will report how many meg it wrote for each of the 13 disks it dumps. It takes 9-10 hours to run. I recommend cycling (at least) two level 0 dumps (so that if writing one messes up, you still have a valid set in the other) and making level 0 backups once every 2-4 weeks. This limits the size of the level 1 backups (which comprise every file changed since the last level 0 backup).
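
As a hedged outline of that routine (the script path is from above; running it as root and the location of the recorded dump dates are assumptions - in particular, /etc/vdumpdates is only where the dates would be recorded if the script uses vdump -u):

    # after loading the brick and pressing 'load tape' on the CGA drive:
    /ti/u/dpwe/cga-bu0          # run as root; dumps the 13 disks in turn
    # afterwards, check when each filesystem last got its level 0:
    cat /etc/vdumpdates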

Information on NFS mounting and automount access control

See the Notes on NFS mounting page.




DAn Ellis <dpwe@media.mit.edu>
MIT Media Lab Perceptual Computing