In one Scrapy project I have two spider files under spiders/, with name set to name1 and name2 respectively.
When I run scrapy crawl name1, the items go through the pipeline,
but when I run scrapy crawl name2, they never reach the pipeline.
I have been debugging this for a long time and cannot figure out why. Is there a way to fix it?
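For context, the pipeline being skipped is an ordinary item pipeline. A simplified sketch of the shape of generic/pipelines/MongoStorePipeline.py (the Mongo write replaced by a log line, just to show the entry point that never fires for name2):

```python
# generic/pipelines/MongoStorePipeline.py -- simplified sketch, not the full code.
# The body is reduced to a log line so it is easy to see whether
# process_item is reached at all for a given spider.
import logging

logger = logging.getLogger(__name__)


class Pipeline(object):
    def process_item(self, item, spider):
        # Log which spider produced the item, then pass it through unchanged.
        logger.info("process_item called for spider %r", spider.name)
        return item
```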
My project is named generic, and its settings file is as follows:
```python
# -*- coding: utf-8 -*-

# Scrapy settings for generic project
#
# For simplicity, this file contains only the most important settings by
# default. All the other settings are documented here:
#
#     http://doc.scrapy.org/en/latest/topics/settings.html
#

BOT_NAME = 'generic'

SPIDER_MODULES = ['generic.spiders']
NEWSPIDER_MODULE = 'generic.spiders'

# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'generic (+http://www.yourdomain.com)'

# dns cache
#EXTENSIONS = {"scrapy.contrib.resolver.CachingResolver": 0,}

CONCURRENT_REQUESTS = 100
COOKIES_ENABLED = False
LOG_LEVEL = 'INFO'

ITEM_PIPELINES = ['generic.pipelines.MongoStorePipeline.Pipeline': 1]
'''
[
    'generic.pipelines.ImagePipeline.LogoImage',
    'generic.pipelines.MongoStorePipeline.Pipeline'
]
'''

ITEM_PIPELINES = {'scrapy.contrib.pipeline.images.ImagesPipeline': 2}

IMAGES_STORE = './companylog'

LOG_FILE = "logs/scrapy.20150428.log"
```
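For comparison, in the Scrapy releases of that time (0.24+) ITEM_PIPELINES is documented as a single dict mapping each pipeline's dotted path to an integer order, unlike my settings above where ITEM_PIPELINES is assigned twice. Declaring both pipelines together would look roughly like this (paths copied from the settings above; the order values are only examples):

```python
# One ITEM_PIPELINES dict: dotted path -> order (lower numbers run first).
ITEM_PIPELINES = {
    'scrapy.contrib.pipeline.images.ImagesPipeline': 1,
    'generic.pipelines.MongoStorePipeline.Pipeline': 300,
}
```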