I was asked this question last night by a friend and I am stumped. So I thought I would ask here.
The question is:
How would you troubleshoot a web page that is not loading on a Linux web server?
First, check your network interface configuration:
Open a terminal, then type:
ifconfig
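On newer distributions ifconfig may not be installed by default; in that case, the ip command shows the same information:
ip addr show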
Verify that the interface you are using has an IP address assigned. An active interface looks like this:
wlan0 Link encap:Ethernet HWaddr 00:06:25:09:6A:D7
inet addr:216.10.119.243 Bcast:216.10.119.255 <-- note the IP address here
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:2924 errors:0 dropped:0 overruns:0 frame:0
TX packets:2295 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:100
RX bytes:180948 (176.7 Kb) TX bytes:166521 (162.6 Kb)
Interrupt:10 Memory:c88b5000-c88b6000
And an interface that is down looks like this (no inet addr line and no UP flag):
wlan0 Link encap:Ethernet HWaddr 00:06:25:09:6A:D7
BROADCAST MULTICAST MTU:1500 Metric:1
RX packets:2924 errors:0 dropped:0 overruns:0 frame:0
TX packets:2287 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:100
RX bytes:180948 (176.7 Kb) TX bytes:166377 (162.4 Kb)
Interrupt:10 Memory:c88b5000-c88b6000
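If the interface is down, you can bring it up and check again (assuming the interface is named wlan0, as in the example above; you may also need to request an address via DHCP):
sudo ifconfig wlan0 up
sudo dhclient wlan0
Then run ifconfig again to confirm the UP flag appears and an IP address is assigned.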
If you believe the server is able to connect to the network, send some packets to it to test whether it can transfer data properly:
From another computer, type the command ping followed by your server's IP address (the one you got from ifconfig), e.g.:
ping -c 10 216.10.119.243
The -c 10 part sends only 10 packets, so you don't have to stop it with Ctrl+C.
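If the server is reachable, each reply looks roughly like this (the TTL and times will vary):
64 bytes from 216.10.119.243: icmp_seq=1 ttl=54 time=23.1 ms
If the replies time out instead, the problem is at the network level rather than with the web server itself.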
After that, you can use the curl utility. It acts like a text-based web browser: it can display either the headers or the complete HTML body of a web page on your screen.
A good start is to use curl with the -I flag to view just the web page's headers and HTTP status code. If you don't use -I, the whole page's HTML is printed to the screen.
So, type curl -I followed by the IP address of your server, for example:
curl -I 216.10.119.243
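A healthy server answers with a 2xx status line. The exact headers vary by server, but the output looks something like this:
HTTP/1.1 200 OK
Server: Apache
Content-Type: text/html; charset=UTF-8
A 4xx or 5xx status code here tells you the server is reachable but the web application or its configuration is the problem.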
You can use the wget command to download a web site's pages, including the entire directory structure of the site, to a local directory of yours.
The -N flag turns on timestamping, so wget only re-downloads the index page when the remote copy is newer than your local one. The output also shows the download speed, file size, and the start and stop times of the download. You could use it like this:
wget -N 216.10.119.243
If you want to see the active connections in and out of your server, and which ports are listening, use:
netstat -an
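In particular, check that something is listening on the web port (80 for HTTP; 443 for HTTPS):
netstat -an | grep ':80'
If no LISTEN line shows up for port 80, the web server process itself is not running.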
Lastly, use
traceroute -I 216.10.119.243
to verify the route path and transit times of packets between your machine and the server you are troubleshooting.
That should do it. Hope it helps.