Scrapy Linkextractor duplicating(?)

Question:

I have the crawler implemented as below.

It works and crawls the pages matched by the link extractor.

Basically, what I am trying to do is extract information from different places on the page:

– the href and text() under the class ‘news’ (if it exists)

– the image URL under the class ‘think block’ (if it exists)

I have three problems with my Scrapy spider:

1) Duplicating linkextractor

It seems to process the same page more than once. (I checked the export file and found that the same ~.img appeared many times, which is hardly possible.)

The site is structured so that every page has hyperlinks at the bottom directing users to the topics they are interested in. My objective is to extract information from each topic’s page (which lists the titles of several passages under that topic) and the images found within each passage’s page (you reach a passage’s page by clicking on its title on the topic page).

I suspect the link extractor loops over the same pages again in this case.

(Maybe solve with DEPTH_LIMIT?)
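For reference, Scrapy's default duplicate filter already drops requests whose URL has been seen, so repeated items usually mean the same image is reachable from many topic pages rather than the same page being re-parsed. A minimal plain-Python sketch of that seen-set idea (the `crawl` function and the toy `site` mapping below are hypothetical, standing in for Scrapy's scheduler and LinkExtractor):

```python
def crawl(start_url, get_links):
    """Visit each URL exactly once, even if many pages link to it.

    get_links is a hypothetical callable mapping a URL to the URLs it
    links to (the role LinkExtractor plays inside Scrapy).
    """
    seen = set()          # Scrapy's dupefilter plays this role
    queue = [start_url]
    order = []
    while queue:
        url = queue.pop(0)
        if url in seen:
            continue      # duplicate request: dropped, not re-parsed
        seen.add(url)
        order.append(url)
        queue.extend(get_links(url))
    return order

# Toy site: every page links back to the topic index,
# yet each page is still parsed exactly once.
site = {
    '/topic': ['/passage1', '/passage2'],
    '/passage1': ['/topic'],
    '/passage2': ['/topic', '/passage1'],
}
print(crawl('/topic', site.__getitem__))  # ['/topic', '/passage1', '/passage2']
```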

2) Improving parse_item

I think parse_item is quite inefficient. How could I improve it? I need to extract information from different places on the page (and only extract it if it exists). Besides, it looks like parse_item only produces HkejImage items but never HkejItem ones (again, I checked the output file). How should I tackle this?

3) I need the spider to be able to read Chinese.

I am crawling a site in HK, so being able to handle Chinese text is essential.
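As background: Scrapy's selectors work on Unicode internally, so Chinese text usually survives extraction; the typical "can't read Chinese" symptom is bytes decoded or exported with the wrong codec. A quick plain-Python illustration (the byte string below is just an example):

```python
# UTF-8 bytes for the Chinese word 日報 ("daily news")
raw = b'\xe6\x97\xa5\xe5\xa0\xb1'

text = raw.decode('utf-8')   # correct codec gives readable Chinese
print(text)                  # 日報

# The wrong codec (e.g. latin-1) does not raise an error; it silently
# produces mojibake, which looks like "the spider can't read Chinese".
mojibake = raw.decode('latin-1')
print(text == mojibake)      # False
```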

The site:

As long as a page belongs to ‘dailynews’, that’s what I want.

from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.selector import Selector
from scrapy.http import Request, FormRequest
from scrapy.contrib.linkextractors import LinkExtractor
import items

class EconjournalSpider(CrawlSpider):
    name = "econJournal"
    allowed_domains = [""]
    login_page = ''
    start_urls = ['']

    rules = (
        Rule(LinkExtractor(allow=('dailynews', ), unique=True),
             callback='parse_item', follow=True),
    )

    def start_requests(self):
        # Fetch the login page first, then submit the login form
        yield Request(url=self.login_page, callback=self.login)

    def login(self, response):
        # 'name' column of the login form
        return FormRequest.from_response(
            response,
            formdata={'name': 'users', 'password': 'my password'},
            callback=self.check_login_response)

    def check_login_response(self, response):
        """Check the response returned by a login request to see if we
        are successfully logged in."""
        if "username" in response.body:
            self.log("\n\n\nSuccessfully logged in. Let's start crawling!\n\n\n")
            return Request(url=self.start_urls[0])
        else:
            self.log("\n\n\nYou are not logged in.\n\n\n")
            # Something went wrong, we couldn't log in, so nothing happens

    def parse_item(self, response):
        hxs = Selector(response)

        # selector reconstructed from the description above; adjust to the real markup
        images = hxs.xpath('//*[@class="think block"]')
        for image in images:
            allimages = items.HkejImage()
            allimages['image'] = image.xpath('a/img[not(@data-original)]/@src').extract()
            yield allimages

        # selector reconstructed from the description above; adjust to the real markup
        news = hxs.xpath('//*[@class="news"]')
        for new in news:
            allnews = items.HKejItem()
            allnews['news_url'] = new.xpath('h2/@href').extract()
            yield allnews
Thank you very much and I would appreciate any help!

Asked By: yukclam9


Answer #1:

First, to set settings, you can put them in the settings file, or specify them per spider with the custom_settings attribute, like:

custom_settings = {
    'DEPTH_LIMIT': 3,
}

Then, you have to make sure the spider is reaching the parse_item method (which I think it isn’t; I haven’t tested it). Also, you can’t specify both the callback and follow parameters on a rule, because they don’t work together.

First, remove the follow on your rule, or add another rule to decide which links to follow and which links to return as items.
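It may also help to remember that the strings passed to allow are regular expressions matched against each extracted URL, so 'dailynews' accepts any URL containing that substring. A plain-Python sketch of the filtering LinkExtractor performs (the URLs below are made up):

```python
import re

def allowed(url, patterns=('dailynews',)):
    # LinkExtractor keeps a URL if any allow pattern matches somewhere in it
    return any(re.search(p, url) for p in patterns)

urls = [
    'http://example.com/dailynews/article1',
    'http://example.com/sports/article2',
    'http://example.com/dailynews/topic/3',
]
print([u for u in urls if allowed(u)])
# ['http://example.com/dailynews/article1', 'http://example.com/dailynews/topic/3']
```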

Second, in your parse_item method you are using incorrect xpaths. To get all the images, you could use something like:

images = hxs.xpath('//img[not(@data-original)]')
and then to get the image url:

allimages['image'] = image.xpath('./@src').extract()

For the news, it looks like this could work:

allnews['news_url'] = new.xpath('.//a/@href').extract()
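The reason h2/@href returns nothing is that the href attribute lives on the nested &lt;a&gt; element, not on the &lt;h2&gt; itself, which is why descending with .//a first is needed. This can be seen with the standard library's ElementTree (the HTML fragment below is invented for illustration):

```python
import xml.etree.ElementTree as ET

# A simplified, well-formed stand-in for one news entry
fragment = ET.fromstring(
    '<div class="news"><h2><a href="/dailynews/article1">Title</a></h2></div>'
)

h2 = fragment.find('h2')
print(h2.get('href'))                     # None: <h2> carries no href
print(fragment.find('.//a').get('href'))  # /dailynews/article1
```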

Now, as I understand your problem, this isn’t a LinkExtractor duplicating error, but only poor rule specifications. Also make sure you have valid xpaths, even though your question didn’t explicitly ask for xpath corrections.

Answered By: yukclam9
