IFS variable issue in script with Unicode [closed]

Closed. This question is not reproducible or was caused by typos. It is not currently accepting answers.

This question was caused by a typo or a problem that can no longer be reproduced. While similar questions may be on-topic here, this one was resolved in a way less likely to help future readers.

Closed 6 years ago.

I'm using the IFS variable in a shell script to parse some data (the data is already provided to me in a given format). As the default IFS is space/tab/newline, I'm using the character '¬' to delimit fields in the input file lines. The data is something like

14352345¬AFSFDG1234¬text¬(http://www.google.com,3)(http://www.test.com,2)¬(www.test2.com,4)¬123-23432

I have created a script that reads the file line by line in a while loop with this IFS setting:

#!/bin/bash
while IFS=¬ read -r sessionId qId testResults realResults queryId
do
    echo "$sessionId"
done < inputFile

(inside this loop I actually do some awk processing with another file).

What happens is that if I run this file manually (just ./file), it works perfectly. If I run it as a cron job or from within another script, I get parsing errors which suggest that my IFS value is not being used. I've tried saving the old IFS value and restoring it after parsing, as well as different ways of passing the separator to IFS (¬, '¬', $'¬', etc.), but none of that seems to help.

Any pointers/tips would be greatly appreciated.


Update: after some additional debugging, it turns out the problem is with the awk statement rather than with the separator.
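
For anyone hitting the same symptom, below is a minimal, hypothetical sketch of the kind of awk call involved, passing ¬ as the field separator explicitly. The field position matches the sample data above, but the actual awk statement from the script isn't shown in the question, and how awk handles a multi-byte separator still depends on the awk implementation and the locale it runs under.

awk -F'¬' '{ print $1 }' inputFile    # print the session id from each ¬-delimited line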


You're having a problem either with Unicode or with the shell you're trying to use, the former being more likely.

The character you chose as separator (¬) is outside the ASCII set and can generally be represented in two different ways by a computer: either it is encoded as latin1 or similar, where the character occupies one octet, or it is encoded as UTF-8, where it uses two octets. There are other possibilities, but these two are the most likely, so bear with me.
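
A quick way to see which of the two you actually have is to dump the bytes behind the separator; this is a generic check, not something from the original script. In latin1 the ¬ character is the single byte ac, while in UTF-8 it is the two bytes c2 ac:

printf '¬' | od -An -tx1    # shows the byte(s) your current encoding uses for ¬

The same check can be run against the script file itself to see how the separator was saved there.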

If you saved your script encoded as UTF-8 and you're trying to run it in a non-Unicode locale, the shell will see two (wrong) characters as the separator instead of one. To test for this, try using an ASCII character as the separator, such as ~.
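
As a rough illustration of that test, the loop from the question with an ASCII separator might look like this; test.dat stands for a hypothetical copy of the input file in which ¬ has been replaced by ~:

while IFS=~ read -r sessionId qId testResults realResults queryId
do
    echo "$sessionId"
done < test.dat

If this version parses correctly both interactively and from cron, the encoding of ¬ is almost certainly the culprit.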

If you find that ~ works, you'll have to take a look at the global configuration of your system and make sure the locale is the same in the environment you used to create your script as in the environment where the script runs. You can do this by executing the locale command. You may create a script that runs this command and stores its output in a file:

#!/bin/sh
locale > /tmp/locale-env
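
Once that file has been produced by cron (as described next), comparing it with your interactive shell's settings is a one-liner; /tmp/locale-interactive is just an assumed scratch file name:

locale > /tmp/locale-interactive                # run this from your interactive shell
diff /tmp/locale-env /tmp/locale-interactive    # any output means the two environments differ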

Then make it run from cron, for example, and take a look at the /tmp/locale-env file. Compare its contents with the output of locale when run from your interactive shell. Depending on your distribution, you may be able to set the global locale in /etc/environment, /etc/profile, or another location. You may wish to go UTF-8 system-wide:

LANG=en_US.UTF-8
export LANG
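
If you would rather fix only the cron job than the whole system, most cron implementations (Vixie cron, cronie) also accept environment assignments at the top of the crontab; the script path here is purely illustrative:

# crontab entry: run hourly with a UTF-8 locale
LANG=en_US.UTF-8
0 * * * * /path/to/parse-script.sh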

This is a trap that we international users tend to know better than English-speaking ones, since ASCII and UTF-8 are exactly the same for English characters, so these issues go unnoticed more often than not.
