A distributed web crawler for Xiaohongshu and visualization of the crawled content.
As this crawler supports distribution, using a pre-built Docker image is the recommended and most convenient way to run this project. If you only wish to build a stand-alone crawler, follow the instructions below:
- add `chromedriver` to PATH
- install all required Python packages in `requirements.txt`
- run `xiaohongshu_consumer.py`
This project uses `celery` to distribute tasks, so you have to run the worker(s) first and then execute the consumer code to create tasks.
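Below is a minimal sketch of this producer/worker pattern; the module name `tasks`, the task name `crawl_note`, and the broker URL are illustrative assumptions, not this project's actual code:

```python
# tasks.py - hypothetical minimal Celery setup (illustrative names only)
import os
from celery import Celery

# The broker defaults to a local Redis; override it via the REDIS_URL env var.
app = Celery("tasks", broker=os.environ.get("REDIS_URL", "redis://localhost:6379/0"))

@app.task
def crawl_note(note_url):
    # A running worker picks this up and would execute the real crawling logic.
    print(f"crawling {note_url}")

if __name__ == "__main__":
    # Consumer side: enqueue a task for a worker to execute.
    crawl_note.delay("https://www.xiaohongshu.com/explore/<note_id>")
```

Start a worker first with `celery -A tasks worker --loglevel=info`, then run the module to enqueue a task.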
`registry.cn-hangzhou.aliyuncs.com/kaitohh/celery:5` is the pre-built Docker image; you can also use `docker build -t <my_image_name> .` to build the image locally.
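For example, to fetch the pre-built image instead of building it:

```sh
docker pull registry.cn-hangzhou.aliyuncs.com/kaitohh/celery:5
```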
- Install Docker
- If you wish to run the distributed version of the crawler, make sure you have deployed a cluster using Docker `swarm` or `kubernetes`; a Swarm example is sketched after this list.
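For reference, a minimal Swarm cluster can be created as follows (the join token and manager address are placeholders printed by `docker swarm init`):

```sh
# On the manager node:
docker swarm init

# On each worker node, using the token and address printed by the command above:
docker swarm join --token <token> <manager_ip>:2377
```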
Run `docker-compose up`: all services will first be built locally and then started automatically. Note that this command creates only one replica of each service.
After all services are up, visit `localhost:5555` for the Celery Flower dashboard and `localhost:8080` for the Docker visualizer page.
Run `docker stack deploy -c docker-compose.yml <your_stack_name>`.
You can also modify `replicas` on Line 8 of `docker-compose.yml` to match the number of nodes in your cluster, as in the snippet below.
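For illustration only (the service name and image are placeholders, and line numbers may differ from the actual file):

```yaml
services:
  worker:                 # hypothetical service name
    image: <my_image_name>
    deploy:
      replicas: 4         # set this to the number of nodes in your cluster
```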
Below is a screenshot of the Docker visualizer page after a successful deployment.
Run `set USE_CELERY=1 & python xiaohongshu_consumer.py` (Windows `cmd` syntax; on Linux/macOS, use `USE_CELERY=1 python xiaohongshu_consumer.py`). Now visit the Celery dashboard and you will see your tasks.
If you wish to build manually, first follow the instructions above for building a stand-alone crawler, then start a Redis server and set the environment variable `REDIS_URL` to your Redis host. Finally, run the Celery worker command to start the workers. See the `Dockerfile` and `docker-compose.yml` for reference; a sketch of this route is shown below.
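A hedged sketch of the manual route, assuming a Unix shell; the worker module name `tasks` is an assumption, so check the Dockerfile for the module this project actually uses:

```sh
# Point the workers at your Redis instance (host and port are placeholders).
export REDIS_URL=redis://<your_redis_host>:6379/0

# Start one or more Celery 5 workers (replace `tasks` with the real module).
celery -A tasks worker --loglevel=info
```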
See `xiaohongshu_wordcloud.py` for the detailed implementation.
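For a rough idea of what the word-cloud step involves, here is a minimal sketch using the `jieba` and `wordcloud` packages; the font path and input text are placeholders, and this is not the project's actual code:

```python
# Minimal word-cloud sketch for Chinese text; illustrative, not the project's code.
import jieba
from wordcloud import WordCloud

text = "placeholder for crawled Xiaohongshu note text"

# Segment the text into words (jieba handles Chinese), space-joined for WordCloud.
words = " ".join(jieba.cut(text))

# A CJK-capable font file is required to render Chinese characters.
wc = WordCloud(font_path="msyh.ttc", width=800, height=600,
               background_color="white").generate(words)
wc.to_file("wordcloud.png")
```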