> We are running a JobDumpCommand on each job that when dumped runs a few commands.
> It seems like the first command is the only one that works right now. We are removing,
> at dump, the log files, and on a mental ray job, we are removing all the .mi files
> that were used in the render. At some time in the past it did work how we wanted it to,
> [running both commands], but for some reason it will only do one now.
The 'jobdumpcommand' only accepts a single command.
You can probably jam all the commands in if you specify a shell to run them
with, ie:
sh -c 'rm -rf /some/path; rm -rf /some/other/path'
..though that's somewhat specific to platforms that have a Bourne shell ('sh').
You could use perl just as easily, I suppose.
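Here's a runnable sketch of that chaining trick, using a temp dir in place
of real log/mi dirs (the paths in an actual job would differ):

```shell
# Sketch: two cleanup commands chained through a single 'sh -c' invocation,
# the way a jobdumpcommand would receive them. The temp dir stands in for
# the real logdir/midir -- placeholder paths, not a real render setup.
tmpdir=`mktemp -d`
mkdir -p "$tmpdir/logdir" "$tmpdir/midir"
touch "$tmpdir/logdir/framelist" "$tmpdir/midir/frame1.mi"
# Both commands run under the one 'sh -c', left to right:
sh -c "rm -rf '$tmpdir/logdir'; rm -f '$tmpdir/midir'/*.mi"
```

The semicolon is what lets a single-command interface execute several
commands; everything inside the quotes is one argument to sh.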
Normally, though, if you want to run several commands, I'd recommend
putting those commands into a script, and setting the jobdumpcommand to that script.
A common thing to do is to put the commands into the submit script,
and make a command line option for the submit script (ie. "-cleanup")
that takes arguments for the dirs to remove, or whatever.
Caution should of course be taken when running 'rm -rf' commands;
for instance, before removing the logdir we test to make sure
there's a 'framelist' file within it, to ensure we don't accidentally
blow away a production dir..!
----------------------------------------------------------
#!/bin/csh -f
# SUBMIT SCRIPT
# HANDLE CLEANUP ARGUMENT
# This is invoked by the jobdumpcommand when the job is dumped.
#
if ( "$1" == "-cleanup" ) then
# CLEAN UP THE LOG DIR
if ( "$2" != "" && -d $2 && -e $2/framelist ) rm -rf $2
# CLEAN UP THE MI DIR
if ( "$3" != "" && -d $3 ) rm $3/*.mi
exit 0
endif
# HANDLE SUBMITTING THE JOB
set logdir = ..
set midir = ..
touch $midir/mi_files
rush -submit << EOF
...
jobdumpcommand -nolog //path/to/your/script.csh -cleanup $logdir $midir
...
EOF
----------------------------------------------------------
Be careful when writing recursive scripts like this -- if you forget
the 'exit 0' at the end of the cleanup section, the script will fall through
to submitting another job.. creating a worm! %^O
BTW, if you're modifying one of the rush submit scripts, the variable $G::self
always has the path to the script, so you can use it this way:
...
jobdumpcommand -nolog $G::self -cleanup $logdir $midir
...