I am playing with the Unix hexdump utility. My input file is UTF-8 encoded and contains a single character, ñ, which is C3 B1 in hexadecimal UTF-8.
hexdump test.txt
0000000 b1c3
0000002
Huh? This shows B1 C3, the reverse of what I expected! Can someone explain?
To get the expected output, I do:
hexdump -C test.txt
00000000  c3 b1  |..|
00000002
I was thinking I understood encoding systems.
This is because hexdump defaults to displaying 16-bit words and you are running on a little-endian architecture. The byte sequence c3 b1 is thus interpreted as the 16-bit word b1c3 (low byte first). The -C option forces hexdump to work with single bytes instead of words.
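You can verify this interpretation yourself. A minimal sketch in Python, using the standard struct module (the format code "<H" means a little-endian unsigned 16-bit integer):

```python
import struct

# The UTF-8 encoding of 'ñ' is the two bytes C3 B1.
data = "ñ".encode("utf-8")           # b'\xc3\xb1'

# Interpret those two bytes as one little-endian 16-bit word,
# exactly as hexdump's default mode does on a little-endian machine.
(word,) = struct.unpack("<H", data)

print(f"{word:04x}")                 # prints b1c3, matching hexdump's output
```

Reading the same bytes with ">H" (big-endian) would give c3b1 instead, which is why the display depends on the machine's byte order.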
I found two ways to avoid that:
hexdump -C file
or
od -tx1 < file
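The whole experiment can be reproduced from the shell (a sketch; the exact column spacing of the hexdump output varies slightly between implementations, and the octal escapes avoid depending on the terminal's locale):

```shell
printf '\303\261' > test.txt   # write the two UTF-8 bytes of ñ (C3 B1)
hexdump test.txt               # default 16-bit words: shows b1c3
hexdump -C test.txt            # canonical byte-by-byte: shows c3 b1
od -tx1 test.txt               # od, one hex byte per column: shows c3 b1
```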
I think it is a poor default that hexdump treats files as 16-bit little-endian words. Very confusing IMO.