This project is a web scraper built using Scrapy to collect recipe data from Epicurious. The scraper extracts the title, author, creation date, ingredients, instructions, and tags from a given recipe page and stores them in a MongoDB database.
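Each scraped recipe maps onto the fields listed above. Below is a minimal sketch of that data structure as a Scrapy item; the field names are illustrative and may not match the ones actually used in `recipes_spider.py`.

```python
# Sketch of the scraped data structure; names are illustrative, not taken from the project.
import scrapy


class RecipeItem(scrapy.Item):
    title = scrapy.Field()          # recipe title
    author = scrapy.Field()         # recipe author
    creation_date = scrapy.Field()  # publication date of the recipe
    ingredients = scrapy.Field()    # list of ingredient strings
    instructions = scrapy.Field()   # ordered list of preparation steps
    tags = scrapy.Field()           # tags/categories attached to the recipe
```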
To set up the environment and install MongoDB:

- Clone the repository:

  ```bash
  git clone https://github.com/yourusername/recipe-scraper.git
  cd recipe-scraper
  ```

- Create a virtual environment and activate it:

  ```bash
  python3 -m venv venv
  source venv/bin/activate  # On Windows use `venv\Scripts\activate`
  ```

- Install the required dependencies:

  ```bash
  pip install -r requirements.txt
  ```

- Install MongoDB:

  - On macOS:

    ```bash
    brew tap mongodb/brew
    brew install mongodb-community@7.0
    ```

  - On Ubuntu:

    ```bash
    sudo apt-get update
    sudo apt-get install -y mongodb
    ```

  - On Windows, download and install MongoDB from the MongoDB Community Server download page.

- Start MongoDB:

  - On macOS (installed via Homebrew):

    ```bash
    brew services start mongodb-community@7.0
    ```

  - On Linux:

    ```bash
    sudo systemctl start mongod
    sudo systemctl enable mongod
    ```

  - On Windows, start MongoDB from the Services menu or use:

    ```bash
    net start MongoDB
    ```
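Before running the scraper, you can confirm that MongoDB is reachable with a quick `pymongo` check. This is an optional helper, not part of the project, and it assumes MongoDB is listening on the default local port.

```python
# check_mongo.py -- optional connectivity check (illustrative, not part of the repository)
from pymongo import MongoClient

# Assumes a local MongoDB instance on the default port.
client = MongoClient("mongodb://localhost:27017", serverSelectionTimeoutMS=3000)
client.admin.command("ping")  # raises ServerSelectionTimeoutError if the server is unreachable
print("MongoDB is up, server version:", client.server_info()["version"])
```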
To run the scraper, execute the following command:

```bash
scrapy runspider recipes.py
```
This command starts the spider, scrapes the specified recipe pages, and stores the scraped data in the `recipes` collection of the `recipes` MongoDB database. It also updates the `categories` collection with the unique categories and subcategories encountered.
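The storage step is typically implemented as a Scrapy item pipeline. The sketch below shows one way such a pipeline could write to the collections named above using `pymongo`; the class name, connection URI, and the assumption that categories come from the item's `tags` field are illustrative rather than taken from the project's code.

```python
# pipelines.py -- illustrative sketch of a MongoDB storage pipeline (not the project's actual code)
from pymongo import MongoClient


class MongoRecipePipeline:
    def open_spider(self, spider):
        # Connection URI is an assumption; adjust for your MongoDB setup.
        self.client = MongoClient("mongodb://localhost:27017")
        self.db = self.client["recipes"]

    def close_spider(self, spider):
        self.client.close()

    def process_item(self, item, spider):
        # Store the full recipe document in the `recipes` collection.
        self.db["recipes"].insert_one(dict(item))
        # Upsert each tag so the `categories` collection only holds unique entries.
        for tag in item.get("tags") or []:
            self.db["categories"].update_one(
                {"category": tag}, {"$set": {"category": tag}}, upsert=True
            )
        return item
```

For a pipeline like this to run, it would need to be enabled under `ITEM_PIPELINES` in the Scrapy settings.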
The project is organized as follows:

```
recipe-scraper/
├── __init__.py
├── spiders/
│   ├── __init__.py
│   └── recipes_spider.py
├── requirements.txt
└── README.md
```
- `recipe_scraper/`: Contains the Scrapy project files.
- `spiders/`: Directory for spider files.
- `recipes_spider.py`: Spider for scraping recipe data (a minimal sketch follows below).
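The skeleton below illustrates what `recipes_spider.py` might look like. The start URL and CSS selectors are placeholders rather than the project's actual selectors, since Epicurious's markup changes over time.

```python
# spiders/recipes_spider.py -- illustrative skeleton; the real spider's URLs and selectors differ.
import scrapy


class RecipesSpider(scrapy.Spider):
    name = "recipes"
    # Placeholder start URL; point this at the recipe pages you want to scrape.
    start_urls = ["https://www.epicurious.com/recipes/food/views/example-recipe"]

    def parse(self, response):
        # CSS selectors below are placeholders for Epicurious's real markup.
        yield {
            "title": response.css("h1::text").get(),
            "author": response.css("a[rel='author']::text").get(),
            "creation_date": response.css("time::attr(datetime)").get(),
            "ingredients": response.css("li.ingredient::text").getall(),
            "instructions": response.css("li.preparation-step::text").getall(),
            "tags": response.css("a.tag::text").getall(),
        }
```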
Contributions are welcome! Please open an issue or submit a pull request for any improvements or bug fixes.
- Fork the repository.
- Create your feature branch: `git checkout -b feature/my-new-feature`
- Commit your changes: `git commit -am 'Add some feature'`
- Push to the branch: `git push origin feature/my-new-feature`
- Submit a pull request.
This project is licensed under the MIT License - see the LICENSE file for details.
- Installation: Detailed instructions on how to set up the environment and install MongoDB.
- Usage: How to run the scraper and what it does.
- Project Structure: Overview of the project's file structure.
- Example Output: Sample output of the scraper.
- Contributing: Instructions for contributing to the project.
- License: Information about the project's license.