> We have some scripts that present reports on the queue that we'd like to
> be able to run on machines that don't have rush installed. Is there a way
> to do this?
There are a few ways.
1. RSH/SSH
----------
You can do a simple rsh(1) (or ssh(1)) call to the machine that has rush installed, e.g.:
rsh somehost rush -lac
..and just have your script read stdout to get the report, and check
stderr for error messages.
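From a script, the same remote call can be wrapped so that stdout (the report) and stderr (the error messages) are handled separately. A minimal sketch in Python; 'somehost' is a placeholder hostname, and ssh is assumed where rsh isn't available:

```python
import subprocess

def remote_rush_report(host="somehost", args=("rush", "-lac")):
    """Run a rush report on a remote host and return its stdout.

    'somehost' is a placeholder; swap in rsh or ssh as your site allows.
    """
    result = subprocess.run(
        ["ssh", host, *args],
        capture_output=True,
        text=True,
    )
    if result.returncode != 0:
        # stderr carries the error messages mentioned above
        raise RuntimeError("rush report failed: " + result.stderr.strip())
    return result.stdout
```

A report script or GUI poller could call remote_rush_report() and parse the returned text just as it would the local command's output.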
2. CGI-BIN SCRIPT (HTTP)
------------------------
Another approach is to create a simple cgi-bin script on a webserver
that has rush installed. Your other machines can then make HTTP requests
to this script to get the reports they need.
Consider this simple cgi-bin script:
#!/usr/bin/perl -w
use strict;
$| = 1;                                 # unbuffer stdout for CGI
print "Content-Type: text/plain\n\n";   # HTTP header: plain text report
system("rush -lac 2>&1");               # report and errors both go to the client
exit(0);
..which will return a 'rush -lac' report to anyone who connects to it.
Let's say the URL to this script is: http://yourserver/cgi-bin/rush-lac.cgi
Then from any client, you could get the report by running e.g.:
curl http://yourserver/cgi-bin/rush-lac.cgi
..or:
GET http://yourserver/cgi-bin/rush-lac.cgi
..or by using an appropriate perl or python module that can make HTTP requests.
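For the perl/python route, the standard library is enough; a minimal Python sketch using the example URL above:

```python
from urllib.request import urlopen

def fetch_rush_report(url="http://yourserver/cgi-bin/rush-lac.cgi",
                      timeout=10):
    """Fetch the plain-text rush report over HTTP and return it."""
    # The URL is the example from above; substitute your own server's.
    with urlopen(url, timeout=timeout) as response:
        return response.read().decode("utf-8", errors="replace")
```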
OPTIMIZATION WITH CACHING
-------------------------
One company used this for their internal GUI applications which regularly
polled a similar script to get updated reports.
They used it so heavily, though, that all the workstations running GUI
apps were creating a lot of load on the server requesting the same reports.
Since they were usually all asking for the same report (the jobs list),
I helped them set up a cache so the server wasn't hit as often.
The first client machine to ask for the report would run the real
rush command, getting the live report, which was also saved to the cache file.
Then any other machines asking for the same report within 8 seconds would
simply be sent the cache file instead of triggering the same rush command again.
The script just looked at the time stamp on the cache file: if it was older
than 8 seconds, a new rush command would refresh it, and the cache would
then be good for another 8 seconds.
This really improved responsiveness for the end users' GUI application, and
kept the rush server from being asked the same thing over and over.
Simple local file locking was used to prevent races on the cache.
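Putting those pieces together, the caching logic can be sketched roughly like this. The 8-second window, the time stamp check, and the file locking are as described above; the cache path, lock file, and function name are hypothetical:

```python
import fcntl
import os
import subprocess
import time

CACHE_FILE = "/tmp/rush-lac.cache"  # hypothetical cache path
CACHE_TTL = 8                       # seconds, per the description above

def cached_report(command=("rush", "-lac")):
    """Return the report, refreshing the cache only when it is stale.

    An exclusive lock on a side file serializes refreshes, so only the
    first client to find a stale cache runs the real rush command.
    """
    with open(CACHE_FILE + ".lock", "w") as lock:
        fcntl.flock(lock, fcntl.LOCK_EX)   # simple local file locking
        try:
            age = time.time() - os.path.getmtime(CACHE_FILE)
        except OSError:
            age = CACHE_TTL + 1            # no cache yet: force a refresh
        if age > CACHE_TTL:
            # Stale (or missing): run the real command, save the output
            result = subprocess.run(command, capture_output=True, text=True)
            with open(CACHE_FILE, "w") as f:
                f.write(result.stdout)
        with open(CACHE_FILE) as f:
            return f.read()
```

A cgi-bin script like the one above would then print the HTTP header followed by the result of cached_report(), instead of running rush directly on every request.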
We implemented this some years ago; the client has around 600 machines
and a few hundred users, any of whom might be running the GUI tool,
and they have been running happily with this config since.
I recommended this same design to another customer with a similarly large
configuration, to decrease the load on their servers from a custom UI
that was polling for info.