
Scrape Like a Pro: BeautifulSoup + Python Full Tutorial


Comprehensive HTML Processing Tutorial: From Parsing to Data Extraction

I. Introduction

As the foundational language for web pages, HTML (Hypertext Markup Language) is widely used in fields such as web data processing and web development. Whether developers optimize web structures or data analysts extract information from web pages, HTML processing is indispensable. This tutorial focuses on core operations like HTML parsing, modification, and data extraction, helping readers master comprehensive methods and techniques for handling HTML.

II. Review of HTML Basics

2.1 Basic HTML Structure

A standard HTML document starts with the <!DOCTYPE html> declaration and includes the <html> root element, which nests two main sections: <head> and <body>. The <head> section typically contains meta-information about the page, such as titles, character encodings, and links to CSS stylesheets. The <body> section holds the visible content of the page, including text, images, links, forms, and other elements.

<!DOCTYPE html>  
<html lang="en">  
<head>  
    <meta charset="UTF-8">  
    <title>My Page</title>  
</head>  
<body>  
    <h1>Hello, World!</h1>  
    <p>This is a simple HTML page.</p>  
</body>  
</html>  

2.2 HTML Elements and Attributes

HTML consists of various elements represented by tags, such as <p> for paragraphs and <a> for links. Elements can include attributes that define additional information. For example, the href attribute in <a href="https://example.com"> specifies the target address of the link. Attributes are written as name-value pairs, and attribute values should be enclosed in quotes.

III. HTML Parsing

3.1 Parsing Tools and Libraries

In different development environments, multiple tools and libraries can parse HTML:

  • Browsers: Browsers have built-in powerful HTML parsing engines that render HTML code into visual pages. Through browser developer tools (e.g., Chrome DevTools), you can view the parsed DOM (Document Object Model) structure and analyze element styles and attributes.
  • Python Libraries:
    • BeautifulSoup: One of the most commonly used HTML parsing libraries in Python, it easily parses HTML and XML documents and provides a simple API for navigating, searching, and modifying the parse tree.
    • lxml: A Python library built on the libxml2 and libxslt libraries, it parses quickly, supports both HTML and XML parsing, and can be used with XPath expressions for efficient data extraction.
    • html5lib: This library parses HTML in a way very similar to modern browsers, making it suitable for handling irregular HTML code.
  • JavaScript: In a browser environment, JavaScript can directly manipulate the DOM using methods provided by the document object, such as getElementById and getElementsByTagName, to parse and operate on HTML. In a Node.js environment, libraries like jsdom can simulate a browser environment to parse HTML.

3.2 Parsing HTML with Python

3.2.1 BeautifulSoup Parsing Example

First, install the BeautifulSoup library:

pip install beautifulsoup4  

Here is the basic code for parsing HTML with BeautifulSoup:

from bs4 import BeautifulSoup  

html_doc = """  
<html>  
  <head><title>Sample Page</title></head>  
  <body>  
    <p class="intro">This is an introductory paragraph.</p>  
    <p class="content">Here is some content.</p>  
  </body>  
</html>  
"""  
soup = BeautifulSoup(html_doc, 'html.parser')  # Use Python's built-in parser  
# Can also use other parsers, e.g., lxml: soup = BeautifulSoup(html_doc, 'lxml')  
print(soup.title.string)  # Output: Sample Page  

3.2.2 lxml Parsing Example

Install the lxml library:

pip install lxml  

Use lxml to parse HTML and extract data via XPath:

from lxml import etree  

html = """  
<html>  
  <body>  
    <div class="box">  
      <p>First paragraph</p>  
      <p>Second paragraph</p>  
    </div>  
  </body>  
</html>  
"""  
tree = etree.HTML(html)  
paragraphs = tree.xpath('//div[@class="box"]/p/text()')  
print(paragraphs)  # Output: ['First paragraph', 'Second paragraph']  

IV. Navigation and Search in the HTML Document Tree

4.1 Navigating the HTML Document Tree

Take BeautifulSoup as an example: after parsing, the HTML document forms a document tree that can be navigated in multiple ways (a combined sketch follows the list):

  • Accessing Child Elements: You can directly access child elements by tag name, e.g., soup.body.p accesses the first <p> element under the <body> element. You can also use the contents attribute to get a list of child elements or the children attribute to iterate over child elements as a generator.
  • Accessing Parent Elements: Use the parent attribute to get the direct parent of the current element, and the parents attribute to recursively traverse all ancestor elements.
  • Accessing Sibling Elements: The next_sibling and previous_sibling attributes get the next and previous sibling elements, respectively. The next_siblings and previous_siblings attributes iterate over all subsequent and preceding siblings.
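
A minimal sketch that combines these navigation attributes, using invented sample markup:

from bs4 import BeautifulSoup

html_doc = """
<html>
  <body>
    <div id="box">
      <p class="intro">First</p>
      <p class="content">Second</p>
    </div>
  </body>
</html>
"""
soup = BeautifulSoup(html_doc, 'html.parser')

first_p = soup.body.div.p                     # first <p> under the <div>
print(first_p.parent.name)                    # Output: div
for child in soup.body.div.children:          # generator over direct children
    if child.name:                            # skip whitespace text nodes
        print(child.name)                     # Output: p, p
# next_sibling may return whitespace text nodes between tags,
# so find_next_sibling('p') is often more convenient:
print(first_p.find_next_sibling('p').string)  # Output: Second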

4.2 Searching the HTML Document Tree

  • find_all() Method: BeautifulSoup’s find_all() method searches for all elements that match specified criteria, which can be filtered by tag name, attributes, etc. For example, to find all <p> tags: soup.find_all('p'); to find all elements with the class content: soup.find_all(class_='content').
  • find() Method: The find() method returns the first element that matches the criteria, e.g., soup.find('a') returns the first <a> element in the document.
  • CSS Selectors: Use the select() method with CSS selector syntax for more flexible element searching. For example, to select all <div> elements with the class box: soup.select('div.box'); to select all <li> elements under the element with the id main: soup.select('#main li'). A combined example of these search methods follows this list.
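
A combined example of the search methods above, again with invented markup:

from bs4 import BeautifulSoup

html_doc = """
<html>
  <body>
    <div class="box">
      <p class="content">Alpha</p>
      <a href="https://example.com">A link</a>
    </div>
    <ul id="main"><li>One</li><li>Two</li></ul>
  </body>
</html>
"""
soup = BeautifulSoup(html_doc, 'html.parser')

print(len(soup.find_all('p')))                        # Output: 1
print(soup.find_all(class_='content')[0].string)      # Output: Alpha
print(soup.find('a')['href'])                         # Output: https://example.com
print(soup.select('div.box')[0].name)                 # Output: div
print([li.string for li in soup.select('#main li')])  # Output: ['One', 'Two']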

V. Modifying HTML

5.1 Modifying Element Attributes

Both Python libraries and JavaScript can easily modify HTML element attributes.

  • Python (BeautifulSoup):
  from bs4 import BeautifulSoup  

  html = """  
  <html>  
    <body>  
      <a href="https://old-url.com">Old Link</a>  
    </body>  
  </html>  
  """  
  soup = BeautifulSoup(html, 'html.parser')  
  link = soup.find('a')  
  link['href'] = 'https://new-url.com'  # Modify the href attribute  
  print(soup.prettify())  
  • JavaScript:
  <!DOCTYPE html>  
  <html lang="en">  
  <head>  
      <meta charset="UTF-8">  
  </head>  
  <body>  
      <img id="myImage" src="old-image.jpg" alt="Old Image">  
      <script>  
          const image = document.getElementById('myImage');  
          image.src = 'new-image.jpg';  // Modify the src attribute  
      </script>  
  </body>  
  </html>  

5.2 Adding and Removing Elements

  • Python (BeautifulSoup):

    • Adding Elements:
    from bs4 import BeautifulSoup  
    
    html = """  
    <html>  
      <body>  
        <ul id="myList"></ul>  
      </body>  
    </html>  
    """  
    soup = BeautifulSoup(html, 'html.parser')  
    ul = soup.find('ul')  
    new_li = soup.new_tag('li')  
    new_li.string = 'New Item'  
    ul.append(new_li)  # Add a new element
    print(soup.prettify())  # The new <li> now appears under <ul>
    
    • Removing Elements:
    from bs4 import BeautifulSoup  
    
    html = """  
    <html>  
      <body>  
        <p id="removeMe">This paragraph will be removed.</p>  
      </body>  
    </html>  
    """  
    soup = BeautifulSoup(html, 'html.parser')  
    p = soup.find('p', id='removeMe')  
    p.decompose()  # Remove the element
    print(soup.prettify())  # The <p> is gone from the tree
    
  • JavaScript:

    • Adding Elements:
    <!DOCTYPE html>  
    <html lang="en">  
    <head>  
        <meta charset="UTF-8">  
    </head>  
    <body>  
        <div id="parentDiv"></div>  
        <script>  
            const parentDiv = document.getElementById('parentDiv');  
            const newParagraph = document.createElement('p');  
            newParagraph.textContent = 'This is a new paragraph.';  
            parentDiv.appendChild(newParagraph);  // Add a new element  
        </script>  
    </body>  
    </html>  
    
    • Removing Elements:
    <!DOCTYPE html>  
    <html lang="en">  
    <head>  
        <meta charset="UTF-8">  
    </head>  
    <body>  
        <p id="removeParagraph">This paragraph will be removed.</p>  
        <script>  
            const paragraph = document.getElementById('removeParagraph');  
            paragraph.remove();  // Remove the element  
        </script>  
    </body>  
    </html>  
    

VI. HTML Data Extraction

6.1 Extracting Text Content

  • Python (BeautifulSoup): Use the string attribute or get_text() method to get the text content within an element; the difference between the two is illustrated after this list. For example:
  from bs4 import BeautifulSoup  

  html = """  
  <html>  
    <body>  
      <p class="text">Extract this text.</p>  
    </body>  
  </html>  
  """  
  soup = BeautifulSoup(html, 'html.parser')  
  text = soup.find('p', class_='text').string  
  print(text)  # Output: Extract this text.  
  • JavaScript: Use the textContent or innerText attributes to get text content, e.g., const element = document.getElementById('myElement'); const text = element.textContent;.
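
As promised above, the difference between string and get_text(): string returns None when an element contains more than one child, while get_text() concatenates all nested text. A minimal illustration:

from bs4 import BeautifulSoup

soup = BeautifulSoup('<p>Some <b>bold</b> text</p>', 'html.parser')
p = soup.find('p')

print(p.string)                # Output: None (the <p> has more than one child)
print(p.get_text())            # Output: Some bold text
print(p.get_text(strip=True))  # Same, with surrounding whitespace stripped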

6.2 Extracting Attribute Values

Both Python and JavaScript can easily extract HTML element attribute values. For example, to extract the href attribute value of an <a> tag (a note on missing attributes follows the list):

  • Python (BeautifulSoup): href = soup.find('a')['href']
  • JavaScript: const link = document.querySelector('a'); const href = link.getAttribute('href');
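
One note on missing attributes: in BeautifulSoup, tag['href'] raises a KeyError if the attribute does not exist, while tag.get('href') returns None (or a fallback), which is often safer on real pages:

from bs4 import BeautifulSoup

soup = BeautifulSoup('<a>No href here</a>', 'html.parser')
link = soup.find('a')

print(link.get('href'))       # Output: None (attribute missing, no exception)
print(link.get('href', '#'))  # Output: # (explicit fallback value)
# link['href'] would raise KeyError here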

6.3 Complex Data Extraction

In real-world applications, data often needs to be extracted from complex HTML structures—for example, extracting product names, prices, and links from a web page with a product list. In such cases, combine loops and conditionals with the navigation and search methods above to traverse and extract the required data:

from bs4 import BeautifulSoup  
import requests  

url = "https://example.com/products"  # illustrative URL; adapt to the target site  
response = requests.get(url)  
response.raise_for_status()  # fail fast on HTTP errors  
soup = BeautifulSoup(response.text, 'html.parser')  

products = []  
# The class names below assume the target page's structure  
for product_div in soup.find_all('div', class_='product'):  
    # get_text() is safer than .string when an element contains nested tags  
    name = product_div.find('h2', class_='product-name').get_text(strip=True)  
    price = product_div.find('span', class_='product-price').get_text(strip=True)  
    link = product_div.find('a')['href']  
    products.append({'name': name, 'price': price, 'link': link})  

print(products)  

VII. Handling Irregular HTML

In practice, HTML code often has irregular formats, such as unclosed tags or missing attribute quotes. Different parsers handle irregular HTML differently:

  • html5lib: This parser behaves similarly to browsers and can better handle irregular HTML by attempting to correct erroneous structures.
  • lxml: The lxml parser is relatively strict but has some fault tolerance. When processing severely irregular HTML, you may need to preprocess it first or use lxml.etree.HTMLParser with the recover=True parameter to enable recovery mode.
  • BeautifulSoup: It handles irregular HTML based on the selected parser’s characteristics. For complex irregular documents, it is recommended to prioritize the html5lib parser; a short sketch of both approaches follows this list.
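
A minimal sketch of both approaches on deliberately malformed markup (html5lib must be installed separately with pip install html5lib):

from bs4 import BeautifulSoup
from lxml import etree

# Malformed HTML: unquoted attribute value, unclosed <p> and <div>
broken = '<html><body><p class=intro>Unclosed paragraph<div>Stray div</body>'

# html5lib repairs the structure much as a browser would
soup = BeautifulSoup(broken, 'html5lib')
print(soup.prettify())

# lxml in recovery mode also produces a usable tree
parser = etree.HTMLParser(recover=True)
tree = etree.fromstring(broken, parser)
print(etree.tostring(tree, pretty_print=True).decode())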

VIII. Performance Optimization and Best Practices

8.1 Choosing the Right Parser

Select a parser based on specific needs:

  • lxml: The fastest option; ideal when the HTML is relatively well-formed.
  • html5lib: The most lenient and closest to browser behavior; best for irregular HTML, at the cost of speed.
  • html.parser (Python built-in): No extra dependencies; moderate speed and fault tolerance, sufficient for basic needs.

8.2 Reducing Redundant Parsing

When processing multiple HTML documents or operating on the same document multiple times, avoid redundant parsing. Cache parsed results or complete all related operations in a single parsing pass.
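
For instance, parse a document once and run every query against the same tree; with BeautifulSoup you can also restrict parsing to just the elements you need via SoupStrainer (note that SoupStrainer does not work with the html5lib parser). A sketch:

from bs4 import BeautifulSoup, SoupStrainer

html = """
<html>
  <body>
    <div class="product"><h2>Widget</h2></div>
    <div class="footer">Lots of irrelevant markup...</div>
  </body>
</html>
"""

# Parse once, then reuse the same tree for every query
soup = BeautifulSoup(html, 'html.parser')
names = [h2.string for h2 in soup.find_all('h2')]
product_divs = soup.find_all('div', class_='product')

# Or skip irrelevant markup at parse time with SoupStrainer
only_products = SoupStrainer('div', class_='product')
partial = BeautifulSoup(html, 'html.parser', parse_only=only_products)
print(partial.prettify())  # contains only the matching <div class="product">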

8.3 Using Search Methods Appropriately

When searching for elements, use more precise filtering conditions to reduce unnecessary traversal. For example, CSS selectors or XPath expressions can more efficiently locate target elements.

IX. Conclusion

This tutorial has covered the main aspects of HTML processing: basic structure, parsing methods, document tree navigation, modification, data extraction, and techniques for handling irregular HTML. In practice, choosing tools and methods appropriate to the scenario, while keeping performance and best practices in mind, will help you complete HTML processing tasks more efficiently. Whether in web development or data collection, solid HTML processing skills will serve you well.

If you have specific use cases or want to dive deeper into a particular section, feel free to reach out to us at any time.

Leapcell: The Best of Serverless Web Hosting

Finally, we recommend the best platform for deploying Python services: Leapcell


🚀 Build with Your Favorite Language

Develop effortlessly in JavaScript, Python, Go, or Rust.

🌍 Deploy Unlimited Projects for Free

Only pay for what you use—no requests, no charges.

⚡ Pay-as-You-Go, No Hidden Costs

No idle fees, just seamless scalability.


📖 Explore Our Documentation

🔹 Follow us on Twitter: @LeapcellHQ
