Describe the bug
There is an SQLite error in the API container, and the web page cannot load. Chrome shows:
This webpage cannot function properly.
localhost did not send any data.
ERR_EMPTY_RESPONSE
To Reproduce
Steps to reproduce the behavior:
1. Install with docker-compose up -d, using the docker-compose.yml below (its list values were lost in pasting; see the sketch after these steps):
```yaml
services:
  scraperr:
    depends_on:
    image: jpyles0524/scraperr:latest
    build:
      context: .
      dockerfile: docker/frontend/Dockerfile
    container_name: scraperr
    command: ["npm", "run", "start"]
    environment:
    ports:
    networks:
  scraperr_api:
    init: True
    image: jpyles0524/scraperr_api:latest
    build:
      context: .
      dockerfile: docker/api/Dockerfile
    environment:
    container_name: scraperr_api
    ports:
    volumes:
    networks:
networks:
  web:
```
2. Visit http://localhost:8610/ in Chrome.
3. See the error:
This webpage cannot function properly.
localhost did not send any data.
ERR_EMPTY_RESPONSE
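One thing worth checking first: in the compose file pasted in step 1, the list values under depends_on, environment, ports, volumes, and networks did not survive the paste, so it is hard to tell which host port is mapped to the frontend. For reference, here is a minimal sketch of what a complete file could look like; the port mappings and volume path below are guesses inferred from the log (Next.js on 3000, Uvicorn on 8000, the page visited on host port 8610), not the project's actual defaults:

```yaml
# Sketch only: the mappings below are assumptions inferred from the log,
# not the project's documented defaults.
services:
  scraperr:
    image: jpyles0524/scraperr:latest
    container_name: scraperr
    command: ["npm", "run", "start"]
    depends_on:
      - scraperr_api               # assumption: bring the API up first
    ports:
      - "8610:3000"                # assumption: host 8610 -> Next.js port 3000 from the log
    networks:
      - web
  scraperr_api:
    init: True
    image: jpyles0524/scraperr_api:latest
    container_name: scraperr_api
    ports:
      - "8000:8000"                # Uvicorn port from the log
    volumes:
      - scraperr_data:/project/data   # hypothetical path, to persist the SQLite file
    networks:
      - web
networks:
  web:
volumes:
  scraperr_data:
```

A few quick checks may narrow down where the empty response comes from (service names taken from the compose file above):

```sh
docker-compose ps               # which host ports are actually published?
docker-compose logs scraperr    # did Next.js bind to port 3000 and report "Ready"?
curl -v http://localhost:8610/  # does the mapped port answer, and with what?
```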
Expected behavior
The page loads normally.
Screenshots
If applicable, add screenshots to help explain your problem.
Desktop (please complete the following information):
OS: Windows 10
Browser: Chrome
Environment: Docker Desktop
Additional context
Log:
scraperr | (node:1) ExperimentalWarning: CommonJS module /usr/local/lib/node_modules/npm/node_modules/debug/src/node.js is loading ES Module /usr/local/lib/node_modules/npm/node_modules/supports-color/index.js using require().
2024-12-07 11:10:53 scraperr_api | 2024-12-07 03:10:53,844 CRIT Supervisor is running as root. Privileges were not dropped because no user is specified in the config file. If you intend to run as root, you can set user=root in the config file to avoid this message.
scraperr | Support for loading ES Module in require() is an experimental feature and might change at any time
scraperr_api | 2024-12-07 03:10:53,847 INFO supervisord started with pid 8
scraperr | (Use node --trace-warnings ... to show where the warning was created)
scraperr_api | 2024-12-07 03:10:54,850 INFO spawned: 'api' with pid 9
scraperr |
scraperr | > [email protected] start
scraperr | > next start
scraperr |
scraperr_api | 2024-12-07 03:10:54,851 INFO spawned: 'worker' with pid 10
scraperr_api | 2024-12-07 03:10:55,853 INFO success: api entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
scraperr_api | 2024-12-07 03:10:55,853 INFO success: worker entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2024-12-07 11:10:57 scraperr | ▲ Next.js 14.2.4
scraperr | - Local: http://localhost:3000
scraperr_api | INFO: Will watch for changes in these directories: ['/project']
scraperr_api | INFO: Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
scraperr |
scraperr | ✓ Starting...
scraperr | ✓ Ready in 3.1s
scraperr |
scraperr | > [email protected] start
scraperr_api | INFO: Started reloader process [17] using StatReload
scraperr | > next start
scraperr_api | INFO:__main__:Starting job worker...
scraperr_api | Traceback (most recent call last):
scraperr_api | File "/usr/local/lib/python3.10/runpy.py", line 196, in _run_module_as_main
scraperr |
scraperr | ▲ Next.js 14.2.4
scraperr | - Local: http://localhost:3000
scraperr_api | return _run_code(code, main_globals, None,
scraperr_api | File "/usr/local/lib/python3.10/runpy.py", line 86, in _run_code
scraperr_api | exec(code, run_globals)
scraperr_api | File "/project/api/backend/worker/job_worker.py", line 50, in
scraperr_api | asyncio.run(main())
scraperr |
scraperr_api | File "/usr/local/lib/python3.10/asyncio/runners.py", line 44, in run
scraperr_api | return loop.run_until_complete(main)
scraperr | ✓ Starting...
scraperr | ✓ Ready in 8.5s
scraperr_api | File "/usr/local/lib/python3.10/asyncio/base_events.py", line 649, in run_until_complete
scraperr_api | return future.result()
scraperr_api | File "/project/api/backend/worker/job_worker.py", line 45, in main
scraperr_api | await process_job()
scraperr_api | File "/project/api/backend/worker/job_worker.py", line 16, in process_job
scraperr_api | job = await get_queued_job()
scraperr_api | File "/project/api/backend/job/job.py", line 39, in get_queued_job
scraperr_api | res = common_query(query)
scraperr_api | File "/project/api/backend/database/common.py", line 44, in query
scraperr_api | _ = cursor.execute(query)
scraperr_api | sqlite3.OperationalError: no such table: jobs
scraperr_api | 2024-12-07 03:11:03,285 WARN exited: worker (exit status 1; not expected)
scraperr_api | 2024-12-07 03:11:04,466 INFO spawned: 'worker' with pid 22
scraperr_api | 2024-12-07 03:11:05,468 INFO success: worker entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
scraperr_api | INFO: Started server process [20]
scraperr_api | INFO: Waiting for application startup.
scraperr_api | INFO: 2024-12-07 03:11:06,857 - api.backend.database.startup - Executing query: CREATE TABLE IF NOT EXISTS jobs (
scraperr_api | id STRING PRIMARY KEY NOT NULL,
scraperr_api | url STRING NOT NULL,
scraperr_api | elements JSON NOT NULL,
scraperr_api | user STRING,
scraperr_api | time_created DATETIME NOT NULL,
scraperr_api | result JSON NOT NULL,
scraperr_api | status STRING NOT NULL,
scraperr_api | chat JSON,
scraperr_api | job_options JSON
scraperr_api | )
scraperr_api | INFO: 2024-12-07 03:11:06,858 - api.backend.database.startup - Executing query:
scraperr_api |
scraperr_api | CREATE TABLE IF NOT EXISTS users (
scraperr_api | email STRING PRIMARY KEY NOT NULL,
scraperr_api | hashed_password STRING NOT NULL,
scraperr_api | full_name STRING,
scraperr_api | disabled BOOLEAN
scraperr_api | )
scraperr_api | INFO: 2024-12-07 03:11:06,858 - api.backend.app - Starting up...
scraperr_api | INFO: Application startup complete.
scraperr_api | INFO:__main__:Starting job worker...
scraperr_api | INFO:api.backend.job.job:Got queued job: []
scraperr_api | INFO:api.backend.job.job:Got queued job: []
scraperr_api | INFO:api.backend.job.job:Got queued job: []
scraperr_api | INFO:api.backend.job.job:Got queued job: []
scraperr_api | INFO:api.backend.job.job:Got queued job: []
scraperr_api | 2024-12-07 04:34:29,887 CRIT Supervisor is running as root. Privileges were not dropped because no user is specified in the config file. If you intend to run as root, you can set user=root in the config file to avoid this message.
scraperr_api | 2024-12-07 04:34:29,995 INFO supervisord started with pid 7
scraperr_api | 2024-12-07 04:34:30,998 INFO spawned: 'api' with pid 8
scraperr_api | 2024-12-07 04:34:30,999 INFO spawned: 'worker' with pid 9
scraperr_api | 2024-12-07 04:34:32,001 INFO success: api entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
scraperr_api | 2024-12-07 04:34:32,001 INFO success: worker entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
scraperr_api | INFO: Will watch for changes in these directories: ['/project']
scraperr_api | INFO: Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
scraperr_api | INFO: Started reloader process [16] using StatReload
scraperr_api | INFO:__main__:Starting job worker...
scraperr_api | Traceback (most recent call last):
scraperr_api | File "/usr/local/lib/python3.10/runpy.py", line 196, in _run_module_as_main
scraperr_api | return _run_code(code, main_globals, None,
scraperr_api | File "/usr/local/lib/python3.10/runpy.py", line 86, in _run_code
scraperr_api | exec(code, run_globals)
scraperr_api | File "/project/api/backend/worker/job_worker.py", line 50, in
scraperr_api | asyncio.run(main())
scraperr_api | File "/usr/local/lib/python3.10/asyncio/runners.py", line 44, in run
scraperr_api | return loop.run_until_complete(main)
scraperr_api | File "/usr/local/lib/python3.10/asyncio/base_events.py", line 649, in run_until_complete
scraperr_api | return future.result()
scraperr_api | File "/project/api/backend/worker/job_worker.py", line 45, in main
scraperr_api | await process_job()
scraperr_api | File "/project/api/backend/worker/job_worker.py", line 16, in process_job
scraperr_api | job = await get_queued_job()
scraperr_api | File "/project/api/backend/job/job.py", line 39, in get_queued_job
scraperr_api | res = common_query(query)
scraperr_api | File "/project/api/backend/database/common.py", line 44, in query
scraperr_api | _ = cursor.execute(query)
scraperr_api | sqlite3.OperationalError: no such table: jobs
scraperr_api | 2024-12-07 04:34:43,165 WARN exited: worker (exit status 1; not expected)
scraperr_api | 2024-12-07 04:34:44,168 INFO spawned: 'worker' with pid 21
scraperr_api | 2024-12-07 04:34:45,169 INFO success: worker entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
scraperr_api | INFO:__main__:Starting job worker...
scraperr_api | Traceback (most recent call last):
scraperr_api | File "/usr/local/lib/python3.10/runpy.py", line 196, in _run_module_as_main
scraperr_api | return _run_code(code, main_globals, None,
scraperr_api | File "/usr/local/lib/python3.10/runpy.py", line 86, in _run_code
scraperr_api | exec(code, run_globals)
scraperr_api | File "/project/api/backend/worker/job_worker.py", line 50, in
scraperr_api | asyncio.run(main())
scraperr_api | File "/usr/local/lib/python3.10/asyncio/runners.py", line 44, in run
scraperr_api | return loop.run_until_complete(main)
scraperr_api | File "/usr/local/lib/python3.10/asyncio/base_events.py", line 649, in run_until_complete
scraperr_api | return future.result()
scraperr_api | File "/project/api/backend/worker/job_worker.py", line 45, in main
scraperr_api | await process_job()
scraperr_api | File "/project/api/backend/worker/job_worker.py", line 16, in process_job
scraperr_api | job = await get_queued_job()
scraperr_api | File "/project/api/backend/job/job.py", line 39, in get_queued_job
scraperr_api | res = common_query(query)
scraperr_api | File "/project/api/backend/database/common.py", line 44, in query
scraperr_api | _ = cursor.execute(query)
scraperr_api | sqlite3.OperationalError: no such table: jobs
scraperr_api | 2024-12-07 04:34:46,184 WARN exited: worker (exit status 1; not expected)
scraperr_api | 2024-12-07 04:34:47,187 INFO spawned: 'worker' with pid 26
scraperr_api | 2024-12-07 04:34:48,188 INFO success: worker entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
scraperr_api | INFO: Started server process [19]
scraperr_api | INFO: Waiting for application startup.
scraperr_api | INFO: 2024-12-07 04:34:48,873 - api.backend.database.startup - Executing query: CREATE TABLE IF NOT EXISTS jobs (
scraperr_api | id STRING PRIMARY KEY NOT NULL,
scraperr_api | url STRING NOT NULL,
scraperr_api | elements JSON NOT NULL,
scraperr_api | user STRING,
scraperr_api | time_created DATETIME NOT NULL,
scraperr_api | result JSON NOT NULL,
scraperr_api | status STRING NOT NULL,
scraperr_api | chat JSON,
scraperr_api | job_options JSON
scraperr_api | )
scraperr_api | INFO: 2024-12-07 04:34:48,873 - api.backend.database.startup - Executing query:
scraperr_api |
scraperr_api | CREATE TABLE IF NOT EXISTS users (
scraperr_api | email STRING PRIMARY KEY NOT NULL,
scraperr_api | hashed_password STRING NOT NULL,
scraperr_api | full_name STRING,
scraperr_api | disabled BOOLEAN
scraperr_api | )
scraperr_api | INFO: 2024-12-07 04:34:48,873 - api.backend.app - Starting up...
scraperr_api | INFO: Application startup complete.
scraperr_api | INFO:__main__:Starting job worker...
scraperr_api | INFO:api.backend.job.job:Got queued job: []
scraperr_api | INFO:api.backend.job.job:Got queued job: []
scraperr_api | INFO:api.backend.job.job:Got queued job: []
scraperr_api | INFO:api.backend.job.job:Got queued job: []
scraperr_api | INFO:api.backend.job.job:Got queued job: []
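One pattern stands out in both boot sequences above: the worker queries the jobs table before the API's startup hook has run its CREATE TABLE IF NOT EXISTS statements, crashes with sqlite3.OperationalError: no such table: jobs, and only settles down once supervisord respawns it after the tables exist (the "Got queued job: []" lines). A minimal sketch of a worker-side guard for that startup race, using only the standard library; poll_queue here is a hypothetical stand-in for the project's get_queued_job:

```python
import asyncio
import sqlite3

async def first_poll_with_retry(poll_queue, retries: int = 30, delay: float = 1.0):
    """Retry the first queue poll until the schema exists.

    Sketch under stated assumptions: poll_queue is a hypothetical async
    stand-in for the project's get_queued_job(); the rest is stdlib.
    """
    for _ in range(retries):
        try:
            return await poll_queue()
        except sqlite3.OperationalError as exc:
            if "no such table" not in str(exc):
                raise                   # unrelated database error: fail loudly
            await asyncio.sleep(delay)  # schema not created yet: wait and retry
    raise RuntimeError("jobs table never appeared; did the API startup hook run?")
```

Something like this at the top of the worker's main() would remove the "exited: worker (exit status 1)" / respawn cycle, though it would not by itself explain the ERR_EMPTY_RESPONSE on port 8610.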