In this article, we’ll cover an overview of web scraping with Selenium using a real-life example.
For a detailed tutorial on Selenium, see our blog.
- Create a virtual environment:
python3 -m venv .env
- Install Selenium using pip:
pip install selenium
- Install Selenium Web Driver. See this page for details.
With the virtual environment activated, start a Python interactive shell by running python3. Enter the following command:
>>> from selenium.webdriver import Chrome
If there are no errors, move on to the next step. If there is an error, ensure that chromedriver
is added to the PATH.
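If the import succeeds but Chrome still fails to start, you can check from Python whether the chromedriver executable is discoverable. A small sketch using only the standard library (the helper name is our own):

```python
import shutil

def chromedriver_on_path() -> bool:
    """Return True if the chromedriver executable is discoverable on PATH."""
    return shutil.which("chromedriver") is not None

print(chromedriver_on_path())
```

If this prints False, add the directory containing chromedriver to your PATH environment variable before continuing.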
Import required modules as follows:
from selenium.webdriver import Chrome, ChromeOptions
from selenium.webdriver.common.by import By
Add the skeleton of the script as follows:
def get_data(url) -> list:
    ...

def main():
    ...

if __name__ == '__main__':
    main()
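Before wiring in Selenium, it can help to see how this skeleton hangs together. Here is a minimal runnable sketch with get_data stubbed out (the sample book data is invented for illustration):

```python
def get_data(url) -> list:
    # Stub: the real implementation will drive a browser and scrape this URL.
    return [{'title': 'Sample Book', 'price': '£10.00', 'stock': 'In stock'}]

def main():
    data = get_data("https://books.toscrape.com/")
    for book in data:
        print(book['title'], book['price'])

if __name__ == '__main__':
    main()
```

The __name__ guard ensures main() runs only when the script is executed directly, not when it is imported as a module.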
Create a ChromeOptions object and enable headless mode, then use it to create an instance of Chrome. (In recent Selenium releases the headless attribute setter has been removed; pass the --headless=new argument instead.)
browser_options = ChromeOptions()
browser_options.add_argument("--headless=new")
driver = Chrome(options=browser_options)
Call the driver.get method to load a URL. After that, locate the link for the Humor section by its link text and click it:
driver.get(url)
element = driver.find_element(By.LINK_TEXT, "Humor")
element.click()
Create a CSS selector to find all books on this page. Then loop over the books and extract each book's title, price, and stock availability. Store each book's information in a dictionary and append all of these dictionaries to a list. See the code below:
books = driver.find_elements(By.CSS_SELECTOR, ".product_pod")
data = []
for book in books:
    title = book.find_element(By.CSS_SELECTOR, "h3 > a")
    price = book.find_element(By.CSS_SELECTOR, ".price_color")
    stock = book.find_element(By.CSS_SELECTOR, ".instock.availability")
    book_item = {
        'title': title.get_attribute("title"),
        'price': price.text,
        'stock': stock.text
    }
    data.append(book_item)
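To see what those selectors target without launching a browser, here is an illustrative sketch that pulls the same three fields from a hard-coded fragment of books.toscrape.com-style markup, using only the standard library. The sample markup is abridged and hand-written, not real site output, and the XPath-style lookups stand in for the CSS selectors above:

```python
import xml.etree.ElementTree as ET

# Abridged, well-formed markup mirroring one .product_pod element.
SAMPLE = """
<article class="product_pod">
  <h3><a title="A Light in the Attic" href="#">A Light in ...</a></h3>
  <p class="price_color">£51.77</p>
  <p class="instock availability">In stock</p>
</article>
"""

def parse_book(fragment: str) -> dict:
    """Extract title, price, and stock from one product fragment."""
    root = ET.fromstring(fragment)
    return {
        'title': root.find("h3/a").get("title"),
        'price': root.find("p[@class='price_color']").text,
        'stock': root.find("p[@class='instock availability']").text,
    }

print(parse_book(SAMPLE))
```

Note how the full title lives in the anchor's title attribute, while price and stock come from element text, exactly as in the Selenium version.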
Lastly, return the data list from this function.
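The list of dictionaries returned by get_data is easy to persist. A sketch of writing it out as CSV with csv.DictWriter (the helper name and sample row are our own, and an in-memory buffer stands in for a real file):

```python
import csv
import io

def write_books(data: list, fh) -> None:
    """Write a list of book dicts to a file-like object as CSV."""
    writer = csv.DictWriter(fh, fieldnames=['title', 'price', 'stock'])
    writer.writeheader()
    writer.writerows(data)

# Usage: write to an in-memory buffer instead of a file on disk.
buf = io.StringIO()
write_books([{'title': 'Sample', 'price': '£10.00', 'stock': 'In stock'}], buf)
print(buf.getvalue())
```

In main() you would pass an open file handle instead, e.g. open('books.csv', 'w', newline='').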
For the complete code, see main.py.