
I'm having trouble with Java sockets in a client/server type application when having to accept many connections


First of all, thanks for reading. This is my first time on Stack Overflow as a user, although I've always read it and found useful solutions :D. By the way, sorry if I'm not clear enough explaining myself; I know my English isn't very good.

My socket-based program is behaving strangely and has some performance issues. The client and server communicate with each other by reading/writing serialized objects to object input and output streams, in a multi-threaded way. Let me show you the code basics. I have simplified it to make it more readable, and complete exception handling, for example, is intentionally omitted. The server works like this:

Server:

// (...)

public void serve() {
    if (serverSocket == null) {
        try {
            serverSocket = (SSLServerSocket) SSLServerSocketFactory
                                    .getDefault().createServerSocket(port);
            serving = true;
            System.out.println("Waiting for clients...");
            while (serving) {
                SSLSocket clientSocket = (SSLSocket) serverSocket.accept();
                System.out.println("Client accepted.");
                //LjServerThread class is below
                new LjServerThread(clientSocket).start();
            }
        } catch (Exception e) {
            // Exception handling code (...)
        }
    }
}

public void stop() {
    serving = false;
    serverSocket = null;
}

public boolean isServing() {
    return serving;
}

LjServerThread class, one instance created per client:

private SSLSocket clientSocket;
private String IP;
private long startTime;

public LjServerThread(SSLSocket clientSocket) {
        this.clientSocket = clientSocket;
        startTime = System.currentTimeMillis();
        this.IP = clientSocket.getInetAddress().getHostAddress();
}

public synchronized String getClientAddress() {
    return IP;
}

@Override
public void run() {
    ObjectInputStream in = null;
    ObjectOutputStream out = null;
    //This is my protocol handling object; as you will see below,
    //it processes the received object and returns another one as the response.
    LjProtocol protocol = new LjProtocol();
    try {
        try {
            in = new ObjectInputStream(new BufferedInputStream(
                                     clientSocket.getInputStream()));
            out = new ObjectOutputStream(new BufferedOutputStream(
                                    clientSocket.getOutputStream()));
            out.flush();
        } catch (Exception ex) {
            // Exception handling code (...)
        }
        LjPacket output;
        while (true) {
            output = protocol.processMessage((LjPacket) in.readObject());
            // When the object received is the finish mark,
            // protocol.processMessage() returns null.
            if (output == null) {
                break;
            }
            out.writeObject(output);
            out.flush();
            out.reset();
        }
        System.out.println("Client " + IP + " finished successfully.");
    } catch (Exception ex) {
        // Exception handling code (...)
    } finally {
        try {
            out.close();
            in.close();
            clientSocket.close();
        } catch (Exception ex) {
            // Exception handling code (...)
        } finally {
            long stopTime = System.currentTimeMillis();
            long runTime = stopTime - startTime;
            System.out.println("Run time: " + runTime);
        }
    }
}

And the client is like this:

    private SSLSocket socket;

    @Override
    public void run() {
        LjProtocol protocol = new LjProtocol();
        try {
            socket = (SSLSocket) SSLSocketFactory.getDefault()
                     .createSocket(InetAddress.getByName("here-goes-hostIP"),
                                                                       4444);
        } catch (Exception ex) {

        }
        ObjectOutputStream out = null;
        ObjectInputStream in = null;
        try {
        out = new ObjectOutputStream(new BufferedOutputStream(
                                         socket.getOutputStream()));
        out.flush();
        in = new ObjectInputStream(new BufferedInputStream(
                                          socket.getInputStream()));
        LjPacket output;
        // As the client is the one that starts the connection, it sends the
        // first object.
        out.writeObject(/* First object */);
        out.flush();
        while (true) {
            output = protocol.processMessage((LjPacket) in.readObject());
            out.writeObject(output);
            out.flush();
            out.reset();
        }
        } catch (EOFException ex) {
            // If all goes OK, an EOF should occur when the server disconnects.
            System.out.println("succeeded!");
        } catch (Exception ex) {
            // (...)
        } finally {
            try {
                // FIRST STRANGE BEHAVIOUR:
                // I have to comment out the "out.close()" line; otherwise an
                // exception is ALWAYS thrown.
                out.close();
                in.close();
                socket.close();
            } catch (Exception ex) {
                System.out.println("This shouldn't happen!");
            }
        }
    }
}

Well, as you see, the LjServerThread class, which handles accepted clients on the server side, measures the time it takes... Normally it takes between 75 and 120 ms (where x is the IP):

  • Client x finished successfully.
  • Run time: 82
  • Client x finished successfully.
  • Run time: 80
  • Client x finished successfully.
  • Run time: 112
  • Client x finished successfully.
  • Run time: 88
  • Client x finished successfully.
  • Run time: 90
  • Client x finished successfully.
  • Run time: 84

But suddenly, and with no predictable pattern (at least for me):

  • Client x finished successfully.
  • Run time: 15426

Sometimes it reaches 25 seconds! Occasionally a small group of threads runs a little slower, but that doesn't worry me much:

  • Client x finished successfully.
  • Run time: 239
  • Client x finished successfully.
  • Run time: 243

Why is this happening? Is it perhaps because my server and my client are on the same machine, with the same IP? (To run these tests I execute the server and the client on the same machine, but they connect over the Internet, using my public IP.)

This is how I test it: I make requests to the server like this in main():

    for (int i = 0; i < 400; i++) {
        try {
            new LjClientThread().start();
            Thread.sleep(100);
        } catch (Exception ex) {
            // (...)
        }
    }

If I do it in a loop without "Thread.sleep(100)", I get some connection reset exceptions (7 or 8 connections reset out of 400, more or less), but I think I understand why it happens: when serverSocket.accept() accepts a connection, a very small amount of time has to pass before serverSocket.accept() is reached again. During that time, the server cannot accept connections. Could it be because of that? If not, why? It would be rare for 400 connections to arrive at my server at exactly the same time, but it could happen. Without "Thread.sleep(100)", the timing issues are also worse.

Thanks in advance!


UPDATED:

How stupid of me, I tested it on localhost... and it doesn't give any problems! With and without "Thread.sleep(100)", it doesn't matter, it works fine! Why?! So, as far as I can see, my theory about why the connection reset is being thrown is not correct. This makes things even stranger! I hope somebody can help me... Thanks again! :)


UPDATED (2):

I have found slightly different behaviour on different operating systems. I usually develop on Linux, and the behaviour I described was what was happening on my Ubuntu 10.10. On Windows 7, when I pause 100 ms between connections, all is fine, and all threads are lightning fast; none takes more than 150 ms or so (no slow connection issues!). This is not what happens on Linux. However, when I remove the "Thread.sleep(100)", instead of only some of the connections getting the connection reset exception, all of them fail and throw the exception (on Linux only some of them, 6 or so out of 400, were failing).

Phew! I've just found out that not only the OS but also the JVM environment has a small impact! Not a big deal, but noteworthy. I was using OpenJDK on Linux, and now, with the Oracle JDK, I see that as I reduce the sleep time between connections it starts failing earlier (with a 50 ms sleep OpenJDK works fine and no exceptions are thrown, but with Oracle's JDK quite a few fail at 50 ms, while with 100 ms it works fine).


The server socket has a queue that holds incoming connection attempts. A client will encounter a connection reset error if that queue is full. Without the Thread.sleep(100) statement, all of your clients are trying to connect relatively simultaneously, which results in some of them encountering the connection reset error.
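
If that is the case, one mitigation is to ask for a larger backlog when creating the server socket. Below is a minimal sketch, assuming the same port (4444) the client in the question connects to; the backlog value of 200 is purely illustrative, and the OS may cap whatever you request:

    import javax.net.ssl.SSLServerSocket;
    import javax.net.ssl.SSLServerSocketFactory;

    public class BacklogSketch {
        // 4444 matches the port used by the client in the question;
        // 200 is an illustrative backlog, not a recommendation.
        private static final int PORT = 4444;
        private static final int BACKLOG = 200;

        public static SSLServerSocket open() throws Exception {
            // The two-argument overload lets you request a longer
            // pending-connection queue than the default (50 in many JDKs).
            return (SSLServerSocket) SSLServerSocketFactory.getDefault()
                    .createServerSocket(PORT, BACKLOG);
        }
    }

A bigger queue only buys room for bursts to wait; if the accept loop cannot keep up on average, some clients will still be refused or reset.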


Two points I think you may want to research further. Sorry for being a bit vague here, but this is what I think.

1) Under the hood, at the TCP level, there are a few platform-dependent settings that control how long it takes to send/receive data across a socket. The inconsistent delay could be caused by settings such as tcp_syn_retries. You may be interested in looking here: http://www.frozentux.net/ipsysctl-tutorial/chunkyhtml/tcpvariables.html#AEN370
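
For example, if a SYN segment is dropped and the kernel has to retransmit it, the client can sit in the connect for several seconds. Here is a hedged sketch (the host/port parameters and the 3 s / 10 s timeouts are placeholder values, not part of the question's code) that makes such stalls fail fast instead of silently inflating the measured run time:

    import java.net.InetSocketAddress;
    import javax.net.ssl.SSLSocket;
    import javax.net.ssl.SSLSocketFactory;

    public class ConnectTimeoutSketch {
        public static SSLSocket connect(String host, int port) throws Exception {
            SSLSocket socket = (SSLSocket) SSLSocketFactory.getDefault().createSocket();
            // Fail fast if the TCP handshake stalls, rather than waiting
            // for the kernel's SYN retransmissions to give up.
            socket.connect(new InetSocketAddress(host, port), 3000);
            // Bound blocking reads as well, so a slow peer cannot hold
            // the thread indefinitely.
            socket.setSoTimeout(10000);
            return socket;
        }
    }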

2) Your calculated execution time is not only the time it took to complete the work; it also includes time until finalization is done, which is not guaranteed to happen immediately when an object becomes eligible for finalization.
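To separate the protocol exchange from connection setup and teardown, you could time only the read/write loop. This is just a sketch of the measurement, reusing the in, out and protocol variables already set up in run() in the question; it does not change the protocol itself:

    // Compare this with the existing startTime/stopTime figure (taken in
    // the constructor and in the finally block) to see whether the slow
    // runs happen inside the protocol exchange or around it.
    long loopStart = System.currentTimeMillis();
    LjPacket output;
    while ((output = protocol.processMessage((LjPacket) in.readObject())) != null) {
        out.writeObject(output);
        out.flush();
        out.reset();
    }
    System.out.println("Protocol loop time: "
            + (System.currentTimeMillis() - loopStart) + " ms");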
