The filesystem we use is somewhat distributed - files live on many different disks attached to many different machines, and the setup is a little complicated. This page is meant to explain some of the weirder aspects of it before the community memory is lost.
Machines can access disks on other machines using NFS - the Network File System. When a client machine accesses a disk which is physically connected to a server machine, we say that the client has 'nfs-mounted' the disk from the server. There are two flavors of NFS mounting, as we use the terms here: hard and soft. A 'hard' mount is the way it was originally intended to work, and is achieved either by a line in /etc/fstab (which lists the disks mounted at boot time, both local and over NFS) or by direct use of /etc/mount. A 'soft' mount is a disk that was mounted via the automounter.
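For concreteness, a hard mount shows up in /etc/fstab as a line roughly like the sketch below; the server name, directory and options are purely illustrative, not copied from any of our machines:

    # server:remote-dir      local-mount-point  type  options     freq pass
    someserver:/usr/people   /usr/people        nfs   rw,bg,intr  0    0

The same mount can be made by hand, as root, with something along the lines of 'mount someserver:/usr/people /usr/people'.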
The automounter (/usr/etc/automount) is a very clever and useful program that (somehow) monitors access to certain directories, then automatically makes a temporary NFS mount of a particular remote disk if any attempt is made to access that directory. The map between directories in the local file system and the remote disks that should be mounted there is stored in a configuration file passed as an argument; here at the lab we use /mas/etc/auto.master.sgi (for SGI machines at least). Note that this configuration file is itself on a remote file system; since it is needed before the automounter can run, that single filesystem (mas-disk:/mas) is hard-mounted via /etc/fstab. After that, disks on the fileservers and other machines are soft-mounted by the automounter, usually through the /mas tree (for standard lab filesystems) or through the /net/MACHINENAME tree (for the disks local to a particular machine).
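I haven't reproduced the real auto.master.sgi here, but an automounter master map is just a list of (mount-point, map, default-options) lines, roughly as sketched below; the map name /mas/etc/auto.mas is an invented placeholder, while the built-in -hosts map is what typically provides the /net/MACHINENAME behavior:

    # mount-point   map                  default options
    /mas            /mas/etc/auto.mas    -rw,intr
    /net            -hosts

The daemon itself is then started at boot with something like '/usr/etc/automount -f /mas/etc/auto.master.sgi'.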
This is where it starts to get interesting. There are obvious privacy and security issues in allowing a remote machine to mount the local disk on your workstation. In general, requests for remote mounts will be refused by a machine acting as a server (which can be any workstation) unless that disk is specifically mentioned in its /etc/exports. Moreover, /etc/exports provides for limited access lists, so that requests for a particular disk will only be honored when they come from particular machines.
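As an illustration (the entry itself is made up, not copied from our exports files), a disk restricted to two named client machines would be exported with a line of roughly this form:

    # directory     export options
    /usr/people     -access=finchley:kew

Only mount requests from the machines named in the -access list will be honored for that directory.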
We used to have unrestricted exports on all our machines, so that the local disks on our workstations could be mounted, read from and written to by literally anyone on the Internet. This is an obvious weakness, and unfortunately one that the recent 'cracking' tools (SATAN, ISS etc.) can detect. After FINCHLEY got severely hacked, it was time to close this loophole.
The access control lists in /etc/exports can name individual machines (I think) or netgroups. A netgroup is a named list of machines (with provision for specifying particular users on the machines), used, I think, solely by /etc/exports. In any case, I defined a netgroup called 'bvg' which includes all the machines in our group, and I modified the /etc/exports files on our machines so that automounting, at least of the very sensitive root partition, is limited to our netgroup. Our netgroups are actually distributed over NIS/YP, so the master definition of the netgroups lives on KEW, the master YP server, in the file /etc/netgroup. When a new machine is added to our group, it should be added to kew:/etc/netgroup, and then, on KEW, the netgroup YP map should be refreshed by executing './ypmake netgroup' in /var/yp as root.
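To make the recipe concrete (hostnames other than finchley and kew below are placeholders; the authoritative list is kew:/etc/netgroup), the pieces look roughly like this: a netgroup is a name followed by (host,user,domain) triples, the exports entry names the netgroup in its access list, and the YP map is then rebuilt on KEW:

    # in kew:/etc/netgroup -- members are (host,user,domain) triples
    bvg     (finchley,,) (kew,,) (anothermachine,,)

    # in /etc/exports on each workstation -- restrict the root partition
    /       -access=bvg

    # then, on KEW, as root:
    cd /var/yp
    ./ypmake netgroup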
There are compatibility problems between the DECstations and the SGIs. Try as I might, I couldn't get a DECstation to successfully mount an SGI disk when the disk was exported with a limited netgroup on the SGI, despite the fact that the netgroup included the putative DECstation client. For this reason, some of the SGI disk partitions are still exported to the world (i.e. unrestricted) so that they can be accessed from our DECstations. I'm not sure what the compatibility with the Alphas is like.
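For reference, 'exported to the world' just means an /etc/exports line that names the directory with no -access list at all, e.g. (illustrative path):

    # no access list: any machine that asks can mount this
    /usr/people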
dpwe@media.mit.edu 1996jan16
Back to Machine Listening Group home page