It’s easy to create an automated staging server for content that doesn’t need to be compiled (like most web content). The trick is that CVS has a very flexible logging system: all you need to do is have your CVS server send an email on each check-in, and have the staging server take that email and check out the files that changed.
The Bonsai project helpfully provides a nice Perl script which emails check-in information in a machine-readable format. To use it:
- Check out the CVSROOT module on your CVS server
- Copy the perl script into that directory
- Add the line `ALL $CVSROOT/CVSROOT/dolog.pl -r /cvs [email protected]` to `CVSROOT/loginfo`, where `/cvs` is whatever your CVS path is and `stage1.example.com` is your staging server
- Add the `dolog.pl` script, and check in both the script and the change to `loginfo`.
It’s simple to make the staging server respond to these emails.
- Edit your `/etc/aliases` file, add `cvs-watch: "|cvs-robot"`, and run `newaliases` so sendmail picks up the change
- Check out your CVS module in the appropriate place
- Create `/etc/smrsh/cvs-robot`
So what goes in the cvs-robot script?
```shell
#!/bin/sh
export CVSROOT=':pserver:[email protected]:/cvs'
cd /var
touch /tmp/checkin-errors.log
# Pull the changed paths out of the check-in mail on stdin and update them.
# (Note: the original used "&>", which is a bashism and doesn't work under
# /bin/sh; the POSIX form is "> file 2>&1".)
grep 'www' | grep '. HEAD .' | cut -d" " -f3- \
  | xargs -r cvs update -P -d > /tmp/checkin-errors.log 2>&1
```
This file is only an example. You’ll obviously have a different CVSROOT, and you’ll likely have checked out into a different place than /var. Your module might not be ‘www’ and your branch might not be HEAD. Edit as appropriate.
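To see what the filter pipeline does in isolation, you can feed it a couple of fabricated check-in lines. The field layout used here (module, branch, then the changed files) is an assumption about what dolog.pl emits, so check it against a real notification mail:

```shell
# Hypothetical dolog.pl-style lines: "<module> <branch> <files...>".
# Only the first line survives the module ('www') and branch ('HEAD') filters;
# cut then drops the first two fields, leaving just the file paths.
printf '%s\n' \
  'www HEAD www/index.html www/news.html' \
  'intranet HEAD intranet/home.html' |
  grep 'www' | grep '. HEAD .' | cut -d" " -f3-
```

This prints `www/index.html www/news.html`, which is exactly the argument list that `xargs -r` hands to `cvs update -P -d`.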
Bugs
So what doesn’t work? Well, if you add a new directory, this script fails. I’m not sure why. You’ll need to log onto stage1 and do an update manually to get new directories. Luckily, adding a directory is usually rare.
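One low-tech safety net (not part of the original setup, just a common workaround) is a periodic full update on the staging server, which will eventually pick up any new directories the mail-driven script missed:

```shell
# /etc/cron.d/cvs-stage-update -- hypothetical file name and schedule; adjust
# the checkout directory to wherever your module actually lives (/var/www here
# assumes a checkout in /var of a module named www).
0 * * * * root cd /var/www && cvs -q update -P -d > /dev/null 2>&1
```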
The update runs as the root account. I’m not sure what the security implications of that are, and it’s possible you may hit permissions issues. One fix is to rename the real script to `cvs-robot-real` and put a simple wrapper in its place that runs it as a less-privileged user:
```shell
#!/bin/sh
sudo -u apache /etc/smrsh/cvs-robot-real
```
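The same wrapper pattern, sketched with stand-in paths under /tmp and without sudo so it can be tried anywhere (the file names and the echoed message are purely illustrative):

```shell
# Stand-in for /etc/smrsh/cvs-robot-real; in the real setup this would run
# the cvs update pipeline against the check-in mail on stdin.
cat > /tmp/cvs-robot-real <<'EOF'
#!/bin/sh
echo "real script ran, stdin said: $(cat)"
EOF
chmod +x /tmp/cvs-robot-real

# Stand-in wrapper; hands arguments and stdin straight to the real script.
cat > /tmp/cvs-robot <<'EOF'
#!/bin/sh
exec /tmp/cvs-robot-real "$@"
EOF
chmod +x /tmp/cvs-robot

echo 'www HEAD www/index.html' | /tmp/cvs-robot
```

Because the wrapper passes stdin through untouched, the check-in mail still reaches the real script, which is what makes this drop-in replacement safe for the aliases pipe.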