
Tailing 'Jobs' with Perl under mod_perl

Source: https://www.devze.com, 2022-12-31 20:50

I've got this project running under mod_perl that shows some information about a host. The page has a text box with a dropdown that allows users to ping/nslookup/traceroute the host. The output is shown in the text box like a tail -f.

It works great under CGI. When the user requests a ping, the page makes an AJAX call to the server, which essentially starts the ping with the output going to a temp file. Subsequent AJAX calls then 'tail' the file so that the output keeps updating until the ping finishes. Once the job finishes, the temp file is removed.
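The CGI-era version can be sketched roughly like this; the host, count, and temp-file path are illustrative, not from the original:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Start the ping in the background so the request returns immediately.
# The shell redirection sends all output to a temp file that subsequent
# AJAX calls will tail; $$ is the current process ID, used to make the
# filename unique per request.
my $tmp = "/tmp/ping.$$.out";
system("ping -c 5 example.com > $tmp 2>&1 &");

print "$tmp\n";   # handed back to the browser for the tail requests
```

Under plain CGI the process tree is short-lived, so the backgrounded child is reparented and reaped cleanly; under mod_perl the long-lived Apache worker becomes the parent, which is where the zombie problem below comes from.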

However, under mod_perl, no matter what I do I can't stop it from creating zombie processes. I've tried everything: double forking, using IPC::Run, etc. And in any case, system calls are not encouraged under mod_perl.

So my question is: maybe there's a better way to do this? Is there a CPAN module for creating command-line jobs and tailing their output that will work under mod_perl? I'm just looking for suggestions.

I know I could probably create some sort of 'job' daemon that I signal with details and get updates from. It would run the commands and keep track of their status etc. But is there a simpler way?

Thanks in advance.


I had a short timeframe on this one and had no luck with CPAN, so I'll provide my solution here (I probably re-invented the wheel). I had to get something done right away.

I'll use ping in this example.

When a ping is requested by the user, the AJAX script creates a record in a database with the details of the ping (host, interval, count, etc.). The record has an auto-incrementing ID field. The script then sends a SIGHUP to a job daemon, which is just a daemonized Perl script.
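A minimal sketch of that submission side, assuming a `jobs` table and a daemon pidfile at `/var/run/jobd.pid` (the schema, column names, and paths are illustrative, not from the original):

```perl
use strict;
use warnings;
use DBI;

# Hypothetical jobs table; the auto-increment id becomes the job handle.
my $dbh = DBI->connect('dbi:SQLite:dbname=jobs.db', '', '',
                       { RaiseError => 1, AutoCommit => 1 });
$dbh->do(q{CREATE TABLE IF NOT EXISTS jobs (
    id      INTEGER PRIMARY KEY AUTOINCREMENT,
    command TEXT, args TEXT, pid INTEGER, status TEXT)});

# Record the requested job with status 'new'.
$dbh->do(q{INSERT INTO jobs (command, args, status) VALUES (?, ?, 'new')},
         undef, 'ping', '-c 5 example.com');
my $job_id = $dbh->last_insert_id(undef, undef, 'jobs', 'id');

# Wake the daemon; its PID is assumed to live in a pidfile.
if (open my $fh, '<', '/var/run/jobd.pid') {
    chomp(my $daemon_pid = <$fh>);
    kill 'HUP', $daemon_pid;
}

print "$job_id\n";   # returned to the browser for later polling
```

Because the mod_perl handler only does an INSERT and a `kill`, it never forks, so no zombies can accumulate in the Apache workers.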

The job daemon receives the SIGHUP, looks for new jobs in the database, and processes each one. For each new job it forks, writes the PID and a 'running' status to the DB record, opens stdout/stderr files named after the unique job ID, and uses IPC::Run to direct STDOUT/STDERR to those files.
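The daemon's job-launching step might look like this sketch; the file paths and column names are assumptions. `IPC::Run::run` blocks in the child until the command exits, with output redirected to the per-job files:

```perl
use strict;
use warnings;
use IPC::Run qw(run);
use POSIX qw(:sys_wait_h);

# Called from the daemon's SIGHUP handler for each row with status 'new'.
# @cmd is the command and its arguments, e.g. ('ping', '-c', '5', $host).
sub launch_job {
    my ($dbh, $job_id, @cmd) = @_;
    my $pid = fork;
    die "fork failed: $!" unless defined $pid;
    if ($pid) {
        # Parent (the daemon): record the child PID, mark the job running.
        $dbh->do(q{UPDATE jobs SET pid = ?, status = 'running' WHERE id = ?},
                 undef, $pid, $job_id);
        return $pid;
    }
    # Child: run the command, capturing output in per-job files that the
    # AJAX side will tail.
    run \@cmd, '>', "/tmp/job.$job_id.out", '2>', "/tmp/job.$job_id.err";
    exit($? >> 8);
}

# The daemon's main loop reaps finished children (no zombies) and marks
# their rows 'done', e.g.:
#   while ((my $kid = waitpid(-1, WNOHANG)) > 0) { ...update status... }
```

Since the daemon, not Apache, is the parent of every forked job, it is the one responsible for `waitpid`, which is what makes the zombie problem go away.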

The job daemon keeps track of the forked jobs, killing them if they run too long etc.

To tail the output, the AJAX script sends the job ID back to the browser. Then, on a JavaScript timer, the AJAX script is called again; it checks the status of the job via the database record and tails the files.
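The tailing itself can be stateless if the browser sends back the byte offset it has already received; a sketch (the offset bookkeeping is an assumption, not spelled out in the original):

```perl
use strict;
use warnings;

# Return everything appended to $file since byte $offset, plus the new
# offset to hand back to the browser for the next poll. Offsets are in
# bytes, which matches length() for the ASCII output of ping et al.
sub tail_from {
    my ($file, $offset) = @_;
    open my $fh, '<', $file or return ('', $offset);
    seek $fh, $offset, 0;    # resume where the last poll stopped
    local $/;                # slurp everything after the offset
    my $new = <$fh>;
    $new = '' unless defined $new;
    return ($new, $offset + length $new);
}
```

Each poll returns the new chunk and offset; the browser appends the chunk to the text box and passes the offset back on the next request, giving the tail -f effect.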

When the ping finishes, the job daemon sets the record status to 'done'. The AJAX script picks this up on its regular status checks.

One of the reasons I did it this way is that the AJAX script and the job daemon talk through an authenticated channel (the DB).

