Java: GatheringByteChannel advantages?

I'm wondering when the GatheringByteChannel's write methods (taking in an array of ByteBuffers) have advantages over the "regular" WritableByteChannel write methods.

I tried a test where I could use either the regular or the gathering write method on a FileChannel, at roughly 400 KB/sec total, in ByteBuffers of 23-27 bytes each in both cases. The gathering writes used an array of 64 buffers. The regular method used about 12% of my CPU, and the gathering method used about 16% of my CPU (worse than the regular method!).

This tells me it's NOT useful to use gathering writes on a FileChannel in this range of operating parameters. Why would that be the case, and when would you ever use GatheringByteChannel? (On network I/O?)

Here's the relevant difference in my code:

public void log(Queue<Packet> packets) throws IOException
{
    if (this.gather)
    {
        // Gathering: batch up to Nbuf buffers, then write them in one call
        int Nbuf = 64;
        ByteBuffer[] bbufs = new ByteBuffer[Nbuf];
        int i = 0;
        Packet p;
        while ((p = packets.poll()) != null)
        {
            bbufs[i++] = p.getBuffer();
            if (i == Nbuf)
            {
                this.fc.write(bbufs);
                i = 0;
            }
        }
        // Flush the final partial batch
        if (i > 0)
        {
            this.fc.write(bbufs, 0, i);
        }
    }
    else
    {
        // Regular: one write call per packet buffer
        Packet p;
        while ((p = packets.poll()) != null)
        {
            this.fc.write(p.getBuffer());
        }
    }
}
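
The kind of network use I can imagine is sending a header and a payload that live in separate buffers with a single call on a SocketChannel. A rough sketch of that idea (HeaderBodySender and send() are hypothetical names, not part of my logger):

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;

class HeaderBodySender
{
    // Sketch: prepend a 4-byte length header to a payload and push both
    // buffers to the socket with a single gathering write() call, instead
    // of two separate writes (or copying the payload into one buffer).
    static void send(SocketChannel ch, ByteBuffer body) throws IOException
    {
        ByteBuffer header = ByteBuffer.allocate(4);
        header.putInt(body.remaining());
        header.flip();

        ByteBuffer[] parts = { header, body };
        while (header.hasRemaining() || body.hasRemaining())
        {
            // A blocking SocketChannel normally finishes in one pass; a
            // non-blocking one should wait on a Selector rather than spin.
            ch.write(parts);
        }
    }
}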

Update:

I did some testing, and the gathering approach shows no benefit for file I/O across various ByteBuffer lengths. Far more relevant was the "fragmentation" of the I/O stream, i.e. the length of the individual byte buffers. I changed my program so that it copies a relatively large (27 MB) file by reading the input into byte buffers of a given length. The program starts to slow down significantly once the buffers are shorter than 256 bytes.
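
The read side of the test looks roughly like this (a simplified sketch; ChunkReader and readChunks are just illustrative names, and the real program loops over a range of chunk sizes and hands each queue to the writer under test):

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.util.ArrayDeque;
import java.util.Queue;

class ChunkReader
{
    // Simplified sketch of the read side: slurp the input file into a queue
    // of ByteBuffers of a fixed length. The chunk length is what drives the
    // slowdown once it drops below roughly 256 bytes.
    static Queue<ByteBuffer> readChunks(FileChannel in, int chunkSize) throws IOException
    {
        Queue<ByteBuffer> chunks = new ArrayDeque<ByteBuffer>();
        while (true)
        {
            ByteBuffer b = ByteBuffer.allocate(chunkSize);
            if (in.read(b) == -1)
            {
                break;
            }
            b.flip();
            chunks.add(b);
        }
        return chunks;
    }
}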

I decided to try a third option, namely writing my own simple "gathering" routine that takes the buffers and consolidates them into a larger buffer before writing to the FileChannel. This blows away the GatheringByteChannel write(ByteBuffer[] buffers) method for speed. (Note: the read size is the same for all three write modes, so the fact that I'm creating a bunch of small ByteBuffers and using them for the reads isn't what causes the slowdown.) I'm kind of disappointed that Java doesn't just do this for you. Oh well.

enum GatherType { NONE, AUTOMATIC, MANUAL }

static class BufferWriter
{
    final private FileChannel fc;
    private GatherType gather = GatherType.NONE;

    BufferWriter(FileChannel f) { this.fc = f; } 

    public void setGather(GatherType gather) { this.gather=gather; }
    public void write(Queue<ByteBuffer> buffers) throws IOException
    {
        switch (this.gather)
        {
            // AUTOMATIC: hand an array of buffers to the channel's gathering write()
            case AUTOMATIC:
            {
                int Nbuf = 64;
                ByteBuffer[] bbufs = new ByteBuffer[Nbuf];
                int i = 0;
                ByteBuffer b;
                while ((b = buffers.poll()) != null)
                {
                    bbufs[i++] = b;
                    if (i == Nbuf)
                    {
                        this.fc.write(bbufs);
                        i = 0;
                    }
                }
                if (i > 0)
                {
                    this.fc.write(bbufs, 0, i);
                }
            }
            break;
            // MANUAL: copy small buffers into one 4 KB buffer and write that instead
            case MANUAL:
            {
                ByteBuffer consolidatedBuffer = ByteBuffer.allocate(4096);
                ByteBuffer b;
                while ((b = buffers.poll()) != null)
                {
                    // Not enough room left for this buffer: flush what we have
                    if (b.remaining() > consolidatedBuffer.remaining())
                    {
                        consolidatedBuffer.flip();
                        this.fc.write(consolidatedBuffer);
                        consolidatedBuffer.clear();
                    }

                    // Still doesn't fit (buffer is bigger than 4 KB): write it directly
                    if (b.remaining() > consolidatedBuffer.remaining())
                    {
                        this.fc.write(b);
                    }
                    else
                    {
                        consolidatedBuffer.put(b);
                    }
                }

                // Flush whatever is left over at the end
                consolidatedBuffer.flip();
                if (consolidatedBuffer.hasRemaining())
                {
                    this.fc.write(consolidatedBuffer);
                }
            }
            break;
            // NONE: one write call per buffer
            case NONE:
            {
                ByteBuffer b;
                while ((b = buffers.poll()) != null)
                {
                    this.fc.write(b);
                }
            }
            break;
        }
    }
}
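
Used roughly like this (a hypothetical driver, assuming BufferWriter, GatherType and the ChunkReader sketch above are visible from the same class or package; the file names are made up):

import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.nio.channels.FileChannel;

public class BufferWriterDemo
{
    public static void main(String[] args) throws Exception
    {
        // Copy in.dat to out.dat in small chunks, using one of the three modes.
        FileChannel in = new FileInputStream("in.dat").getChannel();
        FileChannel out = new FileOutputStream("out.dat").getChannel();

        BufferWriter writer = new BufferWriter(out);
        writer.setGather(GatherType.MANUAL);            // or AUTOMATIC / NONE
        writer.write(ChunkReader.readChunks(in, 64));   // e.g. 64-byte chunks

        in.close();
        out.close();
    }
}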


Does packets.poll() return a new buffer every time? If not, you are writing wrong data in the gathering case.
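
If it doesn't return a fresh buffer each time, one workaround would be to copy each packet's bytes before batching, e.g. replacing bbufs[i++] = p.getBuffer(); in the gathering branch with something like this sketch:

// Copy the packet's bytes so the gathered array doesn't end up holding
// 64 references to the same reused buffer.
ByteBuffer src = p.getBuffer();
ByteBuffer copy = ByteBuffer.allocate(src.remaining());
copy.put(src);
copy.flip();
bbufs[i++] = copy;

(Though at that point you are copying anyway, which is more or less what the MANUAL mode above does.)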
