I used a Python scraper to crawl movie data from Douban, but when writing to CSV nothing ever gets written (the CSV file stays blank)...
Here is the callback class that writes the data:
import csv
import lxml.html

class ScrapeCallback:
    def __init__(self):
        self.writer = csv.writer(open('countries.csv', 'w', newline=''))
        self.fields = ('name', 'year', 'score')
        self.writer.writerow(self.fields)

    def __call__(self, url, html):
        csslist = ['span[property="v:itemreviewed"]', 'span.year',
                   'strong[property="v:average"]']
        try:
            tree = lxml.html.fromstring(html)
            row = [tree.cssselect(selector)[0].text for selector in csslist]
            # write the row to the CSV file
            self.writer.writerow(row)
            print(url, row)
        except Exception as e:
            print("ScrapeCallback error:", e)
The print statement output shows up fine, so the function itself isn't raising any errors. Could someone spot what's going wrong? Many thanks.
Full source code attached.
Problem solved.
Cause: I was force-killing the process, so the file was never closed properly and the rows buffered by csv.writer never made it to disk.
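For anyone who hits the same thing: open() buffers writes inside the Python process, so rows only reach disk when the buffer fills or the file is closed, and a force-killed process loses whatever is still buffered. A minimal sketch of one way to guard against that, keeping a reference to the file object so it can be flushed after every row and closed explicitly (self.file and close() are additions, not part of the original class):

import csv
import lxml.html

class ScrapeCallback:
    def __init__(self):
        # keep a reference to the file object so it can be flushed/closed
        self.file = open('countries.csv', 'w', newline='')
        self.writer = csv.writer(self.file)
        self.fields = ('name', 'year', 'score')
        self.writer.writerow(self.fields)

    def __call__(self, url, html):
        csslist = ['span[property="v:itemreviewed"]', 'span.year',
                   'strong[property="v:average"]']
        try:
            tree = lxml.html.fromstring(html)
            row = [tree.cssselect(selector)[0].text for selector in csslist]
            self.writer.writerow(row)
            # flush() hands the buffered rows to the OS page cache,
            # so they survive the Python process being killed
            self.file.flush()
            print(url, row)
        except Exception as e:
            print("ScrapeCallback error:", e)

    def close(self):
        # call this when the crawl finishes (or in a finally block)
        self.file.close()

Alternatively, just letting the crawl loop finish and closing the file normally (or wrapping the whole run in a try/finally that calls close()) avoids the problem without flushing on every row.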