When I send a DNS query to the DNS server, it returns a header with the format error code set, indicating there is a problem with the format, but I am failing to see what it is. It's possible I have misinterpreted or misread the RFC, but right now I can't work it out.
The DNS structure I am sending looks like this in hex.
Header
00 01 - ID = 1
01 00 - RD = 1
00 01 - QD = 1
00 00 - AN
00 00 - NS
00 00 - AR
Question for www.google.com
03 77 - 3 w
77 77 - w w
06 67 - 6 g
6f 6f - o o
67 6c - g l
65 03 - e 3
63 6f - c o
6d 00 - m 0
00 01 - QTYPE
00 01 - QCLASS
I then flip the bytes of any field that is two bytes wide to convert it to big-endian network order. So each row of the header, and then QTYPE and QCLASS ...
Here's what a byte-by-byte hexdump of that query packet should look like (tested and working!):
00000000 00 01 01 00 00 01 00 00 00 00 00 00 03 77 77 77 |.............www|
00000010 06 67 6f 6f 67 6c 65 03 63 6f 6d 00 00 01 00 01 |.google.com.....|
I think your problem is that the third and fourth bytes of the packet (the flags and rcode bytes) are effectively two single-byte fields, not one 2-byte field - it looks like you might be treating them as a 16-bit integer and swapping the bytes? The 01 00 in your layout is already in wire order, so swapping it moves the RD bit into the wrong byte.
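To make that concrete, here is a minimal C sketch (the function name and buffer handling are illustrative, not your code) of building those 12 header bytes: only the 16-bit counters go through htons(), while the two flag bytes are written directly and never swapped.

#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <arpa/inet.h>

/* Sketch: build the 12-byte DNS header for the query above.
 * Only the 16-bit counters need htons(); the two flag bytes
 * (bytes 3 and 4 of the packet) are written directly and are
 * never byte-swapped. */
static void build_dns_header(uint8_t buf[12])
{
    uint16_t id      = htons(1);   /* ID = 1 */
    uint16_t qdcount = htons(1);   /* one question */
    uint16_t zero    = 0;          /* ANCOUNT / NSCOUNT / ARCOUNT */

    memcpy(buf + 0, &id, 2);
    buf[2] = 0x01;                 /* QR=0, Opcode=0, AA=0, TC=0, RD=1 */
    buf[3] = 0x00;                 /* RA=0, Z=0, RCODE=0 */
    memcpy(buf + 4, &qdcount, 2);
    memcpy(buf + 6, &zero, 2);
    memcpy(buf + 8, &zero, 2);
    memcpy(buf + 10, &zero, 2);
}

int main(void)
{
    uint8_t hdr[12];
    build_dns_header(hdr);
    for (int i = 0; i < 12; i++)
        printf("%02x ", hdr[i]);   /* prints: 00 01 01 00 00 01 00 00 00 00 00 00 */
    printf("\n");
    return 0;
}

Run on either a little-endian or big-endian host, this prints the same first 12 bytes as the working hexdump above.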
To capture these dumps yourself, you can use netcat and dig:
# nc -u -l -p 53 > dnsreqdump
# dig www.example.com @localhost
# nc -u 8.8.8.8 53 < dnsreqdump > dnsrespdump
Now you can inspect them in hexedit or your favorite hex editor.
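For example, dumping them with xxd:
# xxd dnsreqdump
# xxd dnsrespdump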
I tend to think that your problem lies in how you are actually "flipping the bytes to convert to network format". Typical C library implementations provide the htonl()/htons() function family to do the conversion from host to network byte order and vice versa. Of course, without seeing the code, I cannot be sure that this is the problem.
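As a small illustration of what that conversion does (just a sketch, not your code):

#include <stdio.h>
#include <stdint.h>
#include <arpa/inet.h>

/* htons() converts a 16-bit value from host to network (big-endian)
 * byte order; on a big-endian machine it is a no-op. It is what the
 * ID, QDCOUNT, QTYPE and QCLASS fields need - the flag bytes do not. */
int main(void)
{
    uint16_t qtype = 1;               /* A record, host order */
    uint16_t wire  = htons(qtype);    /* network order */
    printf("host: 0x%04x  wire bytes: %02x %02x\n",
           qtype, ((uint8_t *)&wire)[0], ((uint8_t *)&wire)[1]);
    return 0;
}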