Scrapy with XPath Selectors

28/12/2020
HTML is the language of web pages, and a lot of information sits between every web page's opening and closing html tags. There are many ways to access this information; in this article, we will do so using XPath selectors through Python's Scrapy library.

The Scrapy library is a very powerful and easy-to-use web scraping library. If you are new to it, you can follow the available tutorial on using the Scrapy library.

This tutorial covers the use of XPath selectors. XPath uses a path-like syntax to navigate the nodes of XML documents, and it is just as useful for navigating HTML tags.

Unlike in the Scrapy tutorial, we are going to do all of our operations here on the terminal for simplicity's sake. This does not mean that XPath cannot be used in a proper Scrapy program; there, the same selectors are used in the parse method, on the response parameter.
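
As a rough sketch of what that looks like inside a project (the spider name, start URL, and yielded field below are purely illustrative, not part of this tutorial):

import scrapy

class LinksSpider(scrapy.Spider):
    # Hypothetical spider used only to show where response.xpath() fits.
    name = "links"
    start_urls = ["http://example.webscraping.com"]

    def parse(self, response):
        # The same XPath expressions shown in the shell work here unchanged.
        for href in response.xpath("/html//a/@href").extract():
            yield {"link": href}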

We are going to be working with the example.webscraping.com site, as it is very simple and will help us understand the concepts.
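
If Scrapy is not installed yet, it can usually be installed with pip first:

$ pip install scrapy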

To use the Scrapy shell in our terminal, type in the command below:

$ scrapy shell http://example.webscraping.com

It will visit the site, fetch the page, and then leave us with an interactive shell to work with. You should see a prompt like:

In [1]:

From the interactive session, we are going to be working with the response object.

Here's what our syntax would look like for the majority of this article:

In [1]: response.xpath('xpathsyntax').extract()

The command above extracts all of the tags matched by the XPath expression and returns them in a list.

In [2]: response.xpath('xpathsyntax').extract_first()

The command above extracts only the first matched tag and returns it as a single string (or None if nothing matches).
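
To see the difference, it can help to compare the two side by side; a quick sketch (what actually gets matched depends on the page itself):

links = response.xpath('/html//a')
links.extract()        # a list of strings, one per matched tag
links.extract_first()  # a single string, or None when there is no match
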
We can now start working on the XPath syntax.

NAVIGATING TAGS

Navigating tags in XPath is very easy: all that is needed is a forward slash "/" followed by the name of the tag.

In [3]: response.xpath('/html').extract()

The command above would return the html tag and everything it contains as a single item in a list.

If we want to get the body of the web page, we would use the following:

In [4]: response.xpath('/html/body').extract()

XPath also allows the wildcard character "*", which matches every node at the level where it is used.

In [5]: response.xpath('/*').extract()

The code above would match the root element of the document, which here is the html tag and everything it contains; the result is the same as using '/html'.

In [6]: response.xpath('/html/*').extract()

The code above would match only the direct children of the html tag, which for a normal web page are the head and body tags.

Aside from navigating to direct children, we can get all the descendant tags of a particular tag by using "//".

In [7]: response.xpath('/html//a').extract()

The above code would return all the anchor tags under the html tag, i.e. a list of every descendant anchor tag.
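
A quick way to see the difference between "/" and "//" is to count the matches; the actual numbers depend on the page, so they are only illustrative here:

len(response.xpath('/html/a').extract())   # direct <a> children of <html>: normally 0
len(response.xpath('/html//a').extract())  # every <a> anywhere below <html>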

TAGS BY ATTRIBUTES AND THEIR VALUES

Sometimes, navigating through html tags to get to the required tag can be troublesome. This trouble can be avoided by simply finding the needed tag through its attribute.

In [8]: response.xpath('/html//div[@id = "pagination"]').extract()

The code above returns all the div tags under the html tag that have the id attribute with a value of pagination.

In [9]: response.xpath('/html//div[@class = "span12"]').extract()

The code above would return a list of all the div tags under the html tag, but only those whose class attribute has the value "span12".

What if you do not know the value of the attribute, and all you want is to get tags that have a particular attribute, regardless of its value? Doing this is simple as well: all you need is the @ symbol and the attribute name.

In [10]: response.xpath('/html//div[@class]').extract()

This code would return a list of all the div tags that contain the class attribute regardless of what value that class attribute holds.

What if you know only a couple of the characters contained in the value of an attribute? It is also possible to get those kinds of tags.

In [11]: response.xpath('/html//div[contains(@id, "ion")]').extract()

The code above would return all the div tags under the html tag whose id attribute contains the substring "ion", even when we do not know the rest of the value.

The page we are parsing has only one tag in this category, and the value is “pagination” so it would be returned.
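
Since only one tag matches here, extract_first() fits naturally; a small sketch (the exact markup returned is whatever the page contains):

response.xpath('/html//div[contains(@id, "ion")]').extract_first()
# the markup of the <div id="pagination"> element, as a single string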

Cool, right?

TAGS BY THEIR TEXT

Remember we matched tags by their attributes earlier. We can also match tags by their text.

In [12]: response.xpath('/html//a[.=" Algeria"]').extract()

The code above would get all the anchor tags whose text content is " Algeria". NB: it only matches tags with exactly that text content, including the leading space.
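
For instance, assuming the anchor text on this page really does start with a space, dropping that space changes the result:

response.xpath('/html//a[.=" Algeria"]').extract_first()  # matches the tag
response.xpath('/html//a[.="Algeria"]').extract_first()   # None: the leading space matters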

Wonderful.

What if we do not know the exact text content and only know part of it? We can do that as well.

In [13]: response.xpath('/html//a[contains(text(), "A")]').extract()

The code above would get all the anchor tags that have the letter "A" in their text content.

EXTRACTING TAG CONTENT

All along, we have been talking about finding the right tags. It's time to extract the content of a tag once we find it.

It's pretty simple. All we need to do is add "/text()" to the expression, and the text content of the tag will be extracted.

In [14]: response.xpath('/html//a/text()').extract()

The code above would get all the anchor tags in the html document, and then extract the text content.
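
The extracted strings often carry the surrounding whitespace from the markup, so a little plain-Python cleanup is common; a sketch:

names = response.xpath('/html//a/text()').extract()
clean = [name.strip() for name in names if name.strip()]  # trim whitespace, drop empty strings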

EXTRACTING THE LINKS

Now that we know how to extract the text in tags, we should also know how to extract the values of attributes. Most of the time, the attribute values of greatest importance to us are links.

Doing this is almost the same as extracting the text values; however, instead of "/text()" we use "/@" followed by the name of the attribute.

In [15]: response.xpath('/html//a/@href').extract()

The code above would extract all of the links in the anchor tags, i.e. the values of their href attributes.
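
On this page the href values are relative paths, so in a real spider they usually need to be turned into absolute URLs; Scrapy's response.urljoin() handles that:

links = response.xpath('/html//a/@href').extract()
absolute = [response.urljoin(link) for link in links]  # e.g. '/places/...' becomes a full URL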

NAVIGATING SIBLING TAGS

If you noticed, we have been navigating tags all this while. However, there’s one situation we haven’t tackled.

How do we select a particular tag when tags with the same name are on the same level?

<tr>
    <td><div>
        <a href="/places/default/view/Afghanistan-1">
            <img src="/places/static/images/flags/af.png"> Afghanistan</a>
    </div></td>
    <td><div>
        <a href="/places/default/view/Aland-Islands-2">
            <img src="/places/static/images/flags/ax.png"> Aland Islands</a>
    </div></td>
</tr>

In a case like the one above, we might think to simply use extract_first() to get the first match.

However, what if we want to match the second one? What if there are more than ten options and we want the fifth one? We are going to answer that right now.

Here is the solution: when writing the XPath expression, we put the position of the tag we want in square brackets, just like indexing, except that the index starts at 1.

Looking at the html of the web page we are dealing with, you would notice that there are a lot of <tr> tags on the same level. To get the third <tr> tag, we would use the following code:

In [16]: response.xpath('/html//tr[3]').extract()

You would also notice that the <td> tags come in pairs; if we want only the second <td> from each <tr> row, we would do the following:

In [17]: response.xpath('/html//td[2]').extract()
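
Indexing can be combined with everything covered above; for example, to pull the link out of the second <td> of the third <tr> (a sketch, since the exact row contents depend on the page):

response.xpath('/html//tr[3]/td[2]//a/@href').extract_first()
# the href of the anchor inside the second cell of the third row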

CONCLUSION

XPath is a very powerful way to parse HTML files, and it can help minimize the use of regular expressions when parsing them, especially since it has the contains() function built into its syntax.

There are other libraries that allow parsing with XPath, such as Selenium for web automation. XPath gives us a lot of options while parsing HTML, but what has been covered in this article should carry you through common HTML parsing operations.
