Dylan Penhale wrote:
> We currently split our render groups and criteria according to machine specs,
> software and department. We would like to be able to reserve a pool selection
> from 9:00 till 22:00 and then open that pool to the rest of the farm out of
> those hours.
>
> e.g. Comp renders are fast, we don't want 3D renders on those machines during
> the day. However at night comp isn't (always) there, so those machines could
> be used by 3D. Currently the only way I can think to do this is to:
I see, so you have 'day' machines that should only
run comps during the day, but open up to all jobs at night?
And since the 3d jobs are long, you probably want them to
be killed at 9am so that they don't continue to run for
what could be hours.
So doing something like online/offline'ing the processors
at 10pm/9am via cron wouldn't be good, because it would affect
comps AND 3d jobs, e.g.:
# min hour dom mon dow command
0 9 * * 1,2,3,4,5 /usr/local/rush/bin/rush -getoff +resv_day
0 22 * * 1,2,3,4,5 /usr/local/rush/bin/rush -online +resv_day
So instead of that:
I'd say the easy thing to do would be to leave the rush/etc/hosts
file alone, and just write a script that runs on a single machine
at 9am and 10pm that alternately:
1) Adds the "+day" group to all the 3d jobs at 10pm
2) Removes the "+day" group from the 3d jobs at 9am
Removing the group will immediately kill any of the job's frames
rendering on those machines.
The jobs likely have enough info in them for the script
to tell 3d jobs from comps, e.g. the job title, the
job notes, or if nothing else, each job's 'command' (by
checking for either 'submit-maya' or 'submit-nuke', etc.)
For instance, if it's maya jobs you're worried about,
you can walk all the jobs looking at the 'Command:' field
(from the 'rush -ljf +any -t 5' report) checking for
the word 'submit-maya', and for all that are found:
1) At 9am use 'rush -rc +day <JOBIDS>' to remove them,
(first saving the +day cpu values into the job's
notes field, so they can later be added back at 10pm)
2) At 10pm use 'rush -ac <SPEC> <JOBIDS>' to add them back.
(<SPEC> would be info saved in the job notes at 9am)
I think something like that would work well, and would take
immediate effect.
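A rough sketch of such a script follows. It is untested and makes
assumptions: that the 'rush -ljf' report prints 'Jobid:' and 'Command:'
lines for each job (check your actual report's field names), and it uses
a placeholder '+day=10' cpu spec when re-adding; in practice you'd re-add
whatever spec you saved into the job's notes field at 9am, as described
above.

```python
#!/usr/bin/env python
# Sketch: toggle the "+day" cpu group on 3d (maya) jobs at 9am/10pm.
# ASSUMPTIONS: the 'rush -ljf' report emits "Jobid:" and "Command:" lines
# per job, and "+day=10" stands in for the real saved cpu spec.
import subprocess
import sys

def find_3d_jobids(report, pattern="submit-maya"):
    """Scan a job report, return jobids whose Command line matches pattern."""
    jobids, current = [], None
    for line in report.splitlines():
        if line.startswith("Jobid:"):
            current = line.split(":", 1)[1].strip()
        elif line.startswith("Command:") and current:
            if pattern in line:
                jobids.append(current)
    return jobids

def main(mode):
    report = subprocess.run(["rush", "-ljf", "+any", "-t", "5"],
                            capture_output=True, text=True).stdout
    for jobid in find_3d_jobids(report):
        if mode == "remove":
            # 9am: pull +day cpus; kills frames rendering there immediately.
            # (A real script would first save the +day spec to the job notes.)
            subprocess.run(["rush", "-rc", "+day", jobid])
        elif mode == "add":
            # 10pm: give +day back (placeholder spec -- use the saved one).
            subprocess.run(["rush", "-ac", "+day=10", jobid])

if __name__ == "__main__" and len(sys.argv) > 1:
    main(sys.argv[1])   # "remove" at 9am, "add" at 10pm
```

Driven from a crontab on one machine, e.g. 'remove' in a 9am entry
and 'add' in a 10pm entry, similar to the cron lines shown earlier.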
> 2) Push out the hosts files as part of a cron - I've not tried this but
> imagine it wouldn't be robust.
Probably not a good solution, since hostgroup memberships are
expanded at the time the job is submitted, so changing them
back and forth is unlikely to be useful, unless the jobs
render in less time than a full shift (i.e. it wouldn't help
a job that runs 24 hours).
--
Greg Ercolano, erco@(email suppressed)
Seriss Corporation
Rush Render Queue, http://seriss.com/rush/
Tel: (Tel# suppressed) ext.23
Fax: (Tel# suppressed)
Cel: (Tel# suppressed)