I have an embedded application with this requirement: one outgoing TCP network stream needs absolute highest priority over all other outgoing network traffic. If there are any packets waiting to be transferred on that stream, they should be the next packets sent. Period.
My measure of success is as follows: measure the high-priority latency when there is no background traffic, then add background traffic and measure again. The difference in latency should be the time to send one low-priority packet. With a 100 Mbps link and mtu=1500, that is roughly 150 us. My test system is two Linux boxes connected by a crossover cable.
I have tried many, many things, and although I have improved latency considerably, I have not achieved the goal (I currently see 5 ms of added latency with background traffic). I posted another, very specific question already, but thought I should start over with a general question.
First Question: Is this possible with Linux? Second Question: If so, what do I need to do?
- tc?
- What qdisc should I use? (See the example setup after this list.)
- Tweak kernel network parameters? Which ones?
- What other things am I missing?
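For context, here is the flavor of tc setup I have been experimenting with. This is only a minimal sketch: eth0 and destination port 5001 stand in for my real interface and stream.

# Install a 3-band strict-priority qdisc as the root qdisc
tc qdisc add dev eth0 root handle 1: prio

# Steer the high-priority TCP stream into band 0 (class 1:1)
tc filter add dev eth0 parent 1: protocol ip prio 1 u32 \
    match ip protocol 6 0xff \
    match ip dport 5001 0xffff \
    flowid 1:1

Unmatched traffic falls into the lower bands, which are only serviced when band 0 is empty.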
Thanks for your help!
Eric
Update 10/4/2010: I set up tcpdump on both the transmit side and the receive side. Here is what I see on the transmit side (where things seem to be congested):
0 us Send SCP (low priority) packet, length 25208
200 us Send High priority packet, length 512
On the receive side, I see:
~ 100 us Receive SCP packet, length 548
170 us Receive SCP packet, length 548
180 us Send SCP ack
240 us Receive SCP packet, length 548
... (Repeated a bunch of times)
2515 us Receive high priority packet, length 512
The problem appears to be the length of the SCP packet (25208 bytes). It is broken up into multiple packets based on the mtu (which I had set to 600 for this test). However, that segmentation happens in a lower network layer than the traffic control, so my latency is being determined by the maximum TCP transmit packet size, not the mtu! Arghhh..
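The arithmetic backs this up: with an mtu of 600 and roughly 52 bytes of IP/TCP header overhead, each segment carries about 548 bytes of payload, so the 25208-byte send becomes roughly 46 segments. At 100 Mbps each 600-byte frame takes on the order of 50 us on the wire, so draining the whole burst takes about 2.3 ms, which matches the ~2.5 ms delay I see before the high-priority packet arrives.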
Anyone know a good way to set the default maximum packet size for TCP on Linux?
You might want to check the settings on your NIC driver. Some drivers coalesce interrupts, which improves throughput at the cost of added latency.
http://www.29west.com/docs/THPM/latency-interrupt-coalescing.html
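If your driver supports the standard ethtool coalescing interface, you can turn coalescing down or off. A sketch (eth0 is a placeholder, and not every driver accepts every parameter):

# Show the current interrupt coalescing settings
ethtool -c eth0

# Interrupt after every packet instead of batching, trading throughput for latency
ethtool -C eth0 rx-usecs 0 rx-frames 1 tx-usecs 0 tx-frames 1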
Also, I don't know if the NIC is buffering multiple output packets, but if it is, that will make it harder to enforce the desired priorities: if there are multiple low-priority packets buffered up in the NIC, the kernel probably doesn't have a way to tell the NIC "forget about that stuff I already sent you, send this high-priority packet first".
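One mitigation is to limit how many packets the kernel hands to the NIC in advance by shrinking the hardware transmit ring and the interface transmit queue. A sketch with illustrative sizes (the minimum the driver accepts varies):

# Show, then shrink, the NIC's hardware transmit ring
ethtool -g eth0
ethtool -G eth0 tx 64

# Shorten the interface transmit queue
ifconfig eth0 txqueuelen 10

The smaller these buffers, the sooner a newly queued high-priority packet can reach the wire, at some cost in peak throughput.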
--- update ---
If the problem is long TCP segments, I believe you can control the maximum segment size the TCP layer advertises with the mtu option on ip route. For example:
ip route add default via 1.1.1.1 mtu 600
(Note that you would need to do this on the receive side, since the segment size a sender uses is capped by the MSS the receiver advertises.)
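To confirm the setting took effect, you could watch the MSS option the receiver advertises during the handshake. A sketch (the interface name is a placeholder):

# Capture SYN/SYN-ACK packets and check the advertised mss option
tcpdump -n -i eth0 'tcp[tcpflags] & (tcp-syn) != 0'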