I'm having a little problem with nginx and the Perl FCGI module. I have a long operation in my FCGI program that may outlive the server (or the user on the server) on the other end of the Unix socket I'm using for FCGI communication. I need the FCGI accept() loop in my program to break if the FCGI request is closed. I tried installing INT, TERM, etc. signal handlers, but they do nothing, since the only communication between nginx and my program happens over the FCGI socket, AFAIK.
I also tried this, but as far as I can see there is no way to use the Perl FCGI module to send raw data to or from nginx over the FCGI socket. Is there a way I can do it without modifying the FCGI module to add a "ping" function?
The basic problem is that my program does not know if nginx has terminated the FCGI request.
Example:
#!/usr/bin/perl -w
use strict;
use FCGI;
my $fcgi_socket = FCGI::OpenSocket( '/tmp/test.socket', 100000 );
my $request = FCGI::Request(\*STDIN, \*STDOUT, \*STDERR, \%ENV, $fcgi_socket);
REQUEST: while ($request->Accept() >= 0) {
    # begin handling request
    my $result = '';
    while (1) { # or select(), etc.
        unless (somehow check whether the fcgi $request is still live) {
            next REQUEST;    # request was closed/aborted; abandon this job
        }
        # check for results; set $result and last out of this loop when done
    }
    print $result;
}
You have to use an FCGI implementation which handles FCGI_ABORT_REQUEST.
You cannot use the following, because they ignore FCGI_ABORT_REQUEST:
- FCGI <=v0.69 (the one which you are currently using? see the version check after this list)
- FCGI-Async <=v0.19
- Net-FastCGI <=v0.08
- FCGI-EV <=1.0.7
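If you are not sure which FCGI version you have installed, a quick way to check (this just prints $FCGI::VERSION and assumes the module is reachable via your normal @INC) is:
perl -MFCGI -e 'print "$FCGI::VERSION\n"'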
You could use the following, which handle FCGI_ABORT_REQUEST:
- Vitaly Kramskikh's AnyEvent-FCGI
When using AnyEvent-FCGI, checking for an aborted request is as easy as calling $request->is_active(). Keep in mind, however, that is_active() will not reflect the true state of the request until the on_request handler returns, which means you have to return from on_request as soon as possible and somehow do the actual work "in parallel" (you probably don't want to use Perl threads, but something more akin to continuations) in order to give the AnyEvent loop the opportunity to process any further requests (including FCGI_ABORT_REQUESTs) while you are completing the long-running operation.
I am not familiar enough with AnyEvent to know for sure whether there is a better way of doing this, but here's my take below, for a start:
use AnyEvent;
use AnyEvent::FCGI;

my @jobs;
my $process_jobs_watcher;

sub process_jobs {
    # drop aborted jobs
    @jobs = grep {
        if ($_->[0]->is_active) {
            1;
        } else {
            # perform any job cleanup
            0;
        }
    } @jobs;

    # any jobs left?
    if (scalar(@jobs)) {
        my $job = $jobs[0];
        my ( $job_request, $job_state ) = @$job;
        # process another chunk of $job
        # if the job is done, remove (shift) it from @jobs
    } else {
        # all jobs done; go to sleep until the next job request
        undef $process_jobs_watcher;
    }
}

my $fcgi = AnyEvent::FCGI->new(
    port => 9000,
    on_request => sub {
        my $request = shift;
        if (scalar(@jobs) < 5) { # set your own limit
            # accept the request and send back headers, HTTP status etc.
            $request->print_stdout("Content-Type: text/plain\nStatus: 200 OK\n\n");
            # This will hold your job state; you could also use Continuity
            # http://continuity.tlt42.org/
            my $job_state = ...;
            # Enqueue the job for "parallel" processing:
            push @jobs, [ $request, $job_state ];
            if (!$process_jobs_watcher) {
                # If and only if AnyEvent->idle() does not work,
                # use AnyEvent->timer() and renew it from process_jobs
                $process_jobs_watcher = AnyEvent->idle(cb => \&process_jobs);
            }
        } else {
            # refuse the request
            $request->print_stdout("Content-Type: text/plain\nStatus: 503 Service Unavailable\n\nBusy!");
        }
    }
);

AnyEvent->loop;
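For what it's worth, here is one hypothetical way to flesh out the "process another chunk of $job" part above: represent $job_state as a hash ref holding a counter of remaining work units, do one unit per idle callback, and finish the request when the counter reaches zero. The remaining field is made up for illustration, and the finish() call assumes your AnyEvent::FCGI::Request provides it; adapt to whatever your version offers.
# In on_request, a concrete (made-up) job state instead of "my $job_state = ...":
my $job_state = { remaining => 100 };   # 100 units of work left to do

# In process_jobs, in place of "# process another chunk of $job":
my ( $job_request, $job_state ) = @{ $jobs[0] };
# ... do one unit of the actual work here ...
if ( --$job_state->{remaining} <= 0 ) {
    $job_request->print_stdout("All done.\n");
    $job_request->finish;   # complete the response (if your version provides finish())
    shift @jobs;            # job finished; remove it from the queue
}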