I wrote a script in Python 2.7 with urllib2 that fetches a web page; it looks roughly like this:
...
try:
    temp_url = urllib2.urlopen(url, timeout=2)
except:
    return
res = temp_url.read()
...
I ran it continuously for the better part of a day in a simulated network environment with 800ms~1200ms latency and 30% packet loss, and then read() raised a timeout exception:
...
  File "../scripts/_readurl.py", line 50, in text2mp3
    voice_data=temp_url.read()
  File "/usr/lib/python2.7/socket.py", line 355, in read
    data = self._sock.recv(rbufsize)
  File "/usr/lib/python2.7/httplib.py", line 612, in read
    s = self.fp.read(amt)
  File "/usr/lib/python2.7/socket.py", line 384, in read
    data = self._sock.recv(left)
timeout: timed out
I know urlopen() takes a timeout, but how can read() time out too?
I always assumed urlopen() downloads the whole page and read() just retrieves it from memory. Is that not the case?
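For reference, the same effect can be reproduced with a plain local socket (a minimal sketch, nothing to do with my real script): the connection succeeds and the first chunk arrives, but a later recv() on the same socket still hits the 2-second timeout once the peer goes silent, which is what the traceback shows read() doing under the hood.

```python
import socket
import threading

def slow_server(ports, ready):
    # toy server: accept, send a first chunk, then go silent
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)
    ports.append(srv.getsockname()[1])
    ready.set()
    conn, _ = srv.accept()
    conn.sendall(b"partial")
    threading.Event().wait(5)   # stall: never send the rest
    conn.close()
    srv.close()

ports = []
ready = threading.Event()
t = threading.Thread(target=slow_server, args=(ports, ready))
t.daemon = True
t.start()
ready.wait()

cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.settimeout(2)                    # same idea as urlopen(url, timeout=2)
cli.connect(("127.0.0.1", ports[0]))
first = b""
while len(first) < 7:                # the first chunk arrives fine
    first += cli.recv(7 - len(first))
try:
    cli.recv(1024)                   # server is silent now: recv() blocks...
    timed_out = False
except socket.timeout:               # ...and raises after 2 seconds
    timed_out = True
cli.close()
print(timed_out)
```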
Can I just change it to something like this?
...
try:
    ...
except:
    ...
...
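Roughly what I have in mind (a sketch only; the urllib.request fallback import is just so the snippet also runs outside Python 2, and socket.timeout is the exception the traceback names):

```python
import socket

try:
    import urllib2                    # Python 2, as in the original script
except ImportError:
    import urllib.request as urllib2  # fallback so the sketch runs on Python 3 too

def fetch(url):
    # The timeout passed to urlopen() is set on the underlying socket,
    # so read() can raise socket.timeout as well -- wrap both calls.
    try:
        temp_url = urllib2.urlopen(url, timeout=2)
    except Exception:
        return None
    try:
        return temp_url.read()
    except socket.timeout:
        return None
```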