Crawlee is a web scraping and browser automation library for Node.js that helps you build reliable crawlers in JavaScript and TypeScript. Extract data for AI, LLMs, RAG, or GPTs. Download HTML, PDF, JPG, PNG, and other files from websites. It works with Puppeteer, Playwright, Cheerio, JSDOM, and raw HTTP, supports both headful and headless modes, and ships with proxy rotation.
Crawlee covers your crawling and scraping end-to-end and helps you build reliable scrapers. Fast.
Your crawlers will appear human-like and fly under the radar of modern bot protections even with the default configuration. Crawlee gives you the tools to crawl the web for links, scrape data, and store it to disk or cloud while staying configurable to suit your project's needs.
We recommend visiting the Introduction tutorial in the Crawlee documentation for more information.
Crawlee requires Node.js 16 or higher.
With Crawlee CLI
The fastest way to try Crawlee out is to use the Crawlee CLI and choose the Getting started example. The CLI will install all the necessary dependencies and add boilerplate code for you to play with.
```bash
npx crawlee create my-crawler
```
```bash
cd my-crawler
npm start
```
Manual installation
If you prefer adding Crawlee to your own project, try the example below. Because it uses PlaywrightCrawler, we also need to install Playwright. It's not bundled with Crawlee to reduce install size.
```bash
npm install crawlee playwright
```
```js
import { PlaywrightCrawler, Dataset } from 'crawlee';

// PlaywrightCrawler crawls the web using a headless
// browser controlled by the Playwright library.
const crawler = new PlaywrightCrawler({
    // Use the requestHandler to process each of the crawled pages.
    async requestHandler({ request, page, enqueueLinks, log }) {
        const title = await page.title();
        log.info(`Title of ${request.loadedUrl} is '${title}'`);

        // Save results as JSON to ./storage/datasets/default
        await Dataset.pushData({ title, url: request.loadedUrl });

        // Extract links from the current page
        // and add them to the crawling queue.
        await enqueueLinks();
    },
    // Uncomment this option to see the browser window.
    // headless: false,
});

// Add first URL to the queue and start the crawl.
await crawler.run(['https://crawlee.dev']);
```
By default, Crawlee stores data to ./storage in the current working directory. You can override this directory via Crawlee configuration. For details, see the Configuration guide, Request storage and Result storage.
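For example, a quick way to point local storage elsewhere is the CRAWLEE_STORAGE_DIR environment variable, which Crawlee's configuration reads on startup (the directory name below is just an illustrative placeholder):

```bash
# Store datasets, key-value stores and request queues under ./my-storage
CRAWLEE_STORAGE_DIR=./my-storage npm start
```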
🛠 Features
Single interface for HTTP and headless browser crawling (see the sketch after this list)
Persistent queue for URLs to crawl (breadth & depth first)
Pluggable storage of both tabular data and files
Automatic scaling with available system resources
Integrated proxy rotation and session management
Lifecycles customizable with hooks
CLI to bootstrap your projects
Configurable routing, error handling and retries
Dockerfiles ready to deploy
Written in TypeScript with generics
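To illustrate the single-interface point from the list above, here is a minimal sketch: the HTTP-based CheerioCrawler accepts the same requestHandler shape as the PlaywrightCrawler example earlier, only with a Cheerio $ in place of a browser page.

```js
import { CheerioCrawler, Dataset } from 'crawlee';

// Same lifecycle, queue and storage as PlaywrightCrawler, but pages
// are fetched over plain HTTP and parsed with Cheerio, so the
// handler receives $ instead of a browser-controlled page.
const crawler = new CheerioCrawler({
    async requestHandler({ request, $, enqueueLinks, log }) {
        const title = $('title').text();
        log.info(`Title of ${request.loadedUrl} is '${title}'`);
        await Dataset.pushData({ title, url: request.loadedUrl });
        await enqueueLinks();
    },
});

await crawler.run(['https://crawlee.dev']);
```

Because the handler signature stays the same, switching between HTTP and browser crawling is largely a matter of swapping the crawler class.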
👾 HTTP crawling
Zero-config HTTP/2 support, even for proxies
Automatic generation of browser-like headers
Replication of browser TLS fingerprints
Integrated fast HTML parsers: Cheerio and JSDOM
Yes, you can scrape JSON APIs as well (see the sketch below)
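As a sketch of the JSON point above (the endpoint URL is a made-up placeholder), HttpCrawler hands the raw response body to your handler, so a JSON API can be parsed directly:

```js
import { HttpCrawler, Dataset } from 'crawlee';

const crawler = new HttpCrawler({
    // JSON is not in the default list of handled content types,
    // so we declare it explicitly via additionalMimeTypes.
    additionalMimeTypes: ['application/json'],
    async requestHandler({ request, body, log }) {
        // The response body arrives as a Buffer or string; parse it.
        const data = JSON.parse(body.toString());
        log.info(`Fetched ${request.url}`);
        await Dataset.pushData(data);
    },
});

// Placeholder endpoint for illustration only.
await crawler.run(['https://example.com/api/items.json']);
```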
💻 Real browser crawling
JavaScript rendering and screenshots (see the sketch after this list)
Headless and headful support
Zero-config generation of human-like fingerprints
Automatic browser management
Use Playwright and Puppeteer with the same interface
Chrome, Firefox, WebKit, and many others
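For the rendering-and-screenshots item above, a minimal sketch: Playwright renders the page, including its JavaScript, and the resulting PNG can be persisted with Crawlee's KeyValueStore (the key name is an arbitrary choice):

```js
import { PlaywrightCrawler, KeyValueStore } from 'crawlee';

const crawler = new PlaywrightCrawler({
    async requestHandler({ request, page, log }) {
        // The headless browser executes the page's JavaScript,
        // so the screenshot shows the fully rendered document.
        const screenshot = await page.screenshot({ fullPage: true });
        // Saved under ./storage/key_value_stores/default by default.
        await KeyValueStore.setValue('example-screenshot', screenshot, {
            contentType: 'image/png',
        });
        log.info(`Captured a screenshot of ${request.loadedUrl}`);
    },
});

await crawler.run(['https://crawlee.dev']);
```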
Usage on the Apify platform
Crawlee is open-source and runs anywhere, but since it's developed by Apify, it's easy to set up on the Apify platform and run in the cloud. Visit the Apify SDK website to learn more about deploying Crawlee to the Apify platform.
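As a rough sketch of that workflow, assuming you have an Apify account and the separately installed Apify CLI:

```bash
npm install -g apify-cli   # install the Apify CLI
apify login                # authenticate with your Apify account
apify init                 # wrap the project as an Apify Actor
apify push                 # upload and build it on the platform
```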
Contributing
Your code contributions are welcome, and you'll be praised to eternity! If you have any ideas for improvements, either submit an issue or create a pull request. For contribution guidelines and the code of conduct, see CONTRIBUTING.md.
License
This project is licensed under the Apache License 2.0 - see the LICENSE.md file for details.