XPath Parser is a JavaScript utility for extracting data from HTML and XML documents, built for web scraping. It is open source, modern, lightweight, and fast. You can easily integrate it into new or existing web crawlers, browser extensions, and more.
# using NPM
npm i @remotemerge/xpath-parser
# using Yarn
yarn add @remotemerge/xpath-parser
Import the XPathParser class in your project.
import XPathParser from '@remotemerge/xpath-parser';
The XPathParser constructor XPathParser(html|DOM) accepts either an HTML string or a DOM document; initialize it as required.
const parser = new XPathParser('<html>...</html>');
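The parser can also be initialized from a DOM document instead of an HTML string. A minimal sketch, assuming the environment provides the standard DOMParser API:
// parse an HTML string into a DOM document, then hand the document to XPathParser
const dom = new DOMParser().parseFromString('<html>...</html>', 'text/html');
const domParser = new XPathParser(dom);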
The queryFirst method evaluates the given expression and captures the first result. It is useful for scraping a single element value, such as a title or price, from HTML pages.
const result = parser.queryFirst('//span[@id="productTitle"]');
console.log(result);
Sample output:
LETSCOM Fitness Tracker HR, Activity Tracker Watch with Heart Rate...
The queryList method evaluates the given expression and captures all results. It is useful for scraping all URLs, all images, all CSS classes, etc. from HTML pages.
// scrape titles
const results = parser.queryList('//span[contains(@class, "zg-item")]/a/div');
console.log(results);
Sample output:
['Cell Phone Stand,Angle Height Adjusta…', 'Selfie Ring Light with Tripod…', 'HOVAMP MFi Certified Nylon…', '...']
The multiQuery method loops through the given expressions and captures the first match of each expression. It is useful for scraping full product information (title, seller, price, rating, etc.) from HTML pages. The keys are preserved and the values are returned under the same keys.
const result = parser.multiQuery({
title: '//div[@id="ppd"]//span[@id="productTitle"]',
seller: '//div[@id="ppd"]//a[@id="bylineInfo"]',
price: '//div[@id="ppd"]//span[@id="priceblock_dealprice"]',
rating: '//div[@id="ppd"]//span[@id="acrCustomerReviewText"]',
});
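console.log(result);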
Sample output:
{
title: 'LETSCOM Fitness Tracker HR, Activity Tracker Watch with Heart Rate Monitor...',
seller: 'LETSCOM',
price: '$20.39',
rating: '1,489 ratings',
}
The subQuery method captures the root element and runs the queries within its scope. It is useful for scraping multiple products with full information about each product. For example, a page can list 10 products, each with a title, url, image, price, etc. The method also accepts an optional pagination parameter. The keys are preserved and the values are returned under the same keys.
const result = parser.subQuery({
root: '//span[contains(@class, "zg-item")]',
pagination: '//ul/li/a[contains(text(), "Next")]/@href',
queries: {
title: 'a/div/@title',
url: 'a/@href',
image: 'a/span/div/img/@src',
price: './/span[contains(@class, "a-color-price")]',
}
});
console.log(result);
Sample output:
{
paginationUrl: 'https://www.example.com/gp/new-releases/wireless/reTF8&pg=2',
results: [
{
title: 'Cell Phone Stand,Angle Height Adjustable Stab/Kindle/Tablet,4-10inch',
url: '/Adjustable-LISEN-Aluminum-Compatible-4-10&refRID=H1HWDWERK8YCRN76ER1T',
image: 'https://images-na.ssl-images-example.com/images/I/61UL200_SR200,200_.jpg',
price: '$16.99'
},
{
title: 'Selfie Ring Light with Tripod Stand and Pheaming Photo Photography Vlogging Video',
url: '/Selfie-Lighting-Steaming-Photography-Vlogging/dp/B081SV&K8YCRN76ER1T',
image: 'https://images-na.ssl-images-example.com/images/I/717L200_SR200,200_.jpg',
price: '$46.99'
},
{
// ...
}
]
}
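When the pagination expression matches, the returned paginationUrl can be used to walk through subsequent pages. A minimal sketch, assuming the standard fetch API is available and that paginationUrl resolves to an absolute URL; the crawl loop below is illustrative, not part of the library:
// hypothetical crawl loop: scrape every page by following paginationUrl
const scrapeAllPages = async (firstPageHtml) => {
  let html = firstPageHtml;
  const items = [];
  while (html) {
    const pageParser = new XPathParser(html);
    const page = pageParser.subQuery({
      root: '//span[contains(@class, "zg-item")]',
      pagination: '//ul/li/a[contains(text(), "Next")]/@href',
      queries: {
        title: 'a/div/@title',
        url: 'a/@href',
      },
    });
    items.push(...page.results);
    // paginationUrl is assumed to be empty when there is no "Next" link;
    // otherwise fetch the next page and continue
    html = page.paginationUrl ? await (await fetch(page.paginationUrl)).text() : null;
  }
  return items;
};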
The waitXPath method waits until an element matched by the expression exists on the page. The first parameter, expression, is the XPath expression to match, and the second parameter, maxSeconds, is the maximum time to wait in seconds (defaults to 10 seconds).
parser.waitXPath('//span[contains(@class, "a-color-price")]/span')
.then((response) => {
// expression matched and the element exists
}).catch((error) => {
// no match found before the timeout
});
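The timeout can be raised through the second parameter. A short sketch that waits up to 30 seconds and then reads the matched value with queryFirst; the follow-up query is illustrative:
// wait up to 30 seconds for the price element, then read its value
parser.waitXPath('//span[contains(@class, "a-color-price")]/span', 30)
  .then(() => {
    const price = parser.queryFirst('//span[contains(@class, "a-color-price")]/span');
    console.log(price);
  })
  .catch(() => {
    console.log('element not found within 30 seconds');
  });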
Community contributions are welcome. Please open a pull request for bug fixes, enhancements, new features, etc.
All the XPath expressions above were tested on Amazon product listings and related pages for educational purposes only. The icons are from the Flaticon website.