From dfd19fbfa062feac9fca47998673a2686e8b74e9 Mon Sep 17 00:00:00 2001
From: Nico Zanferrari
Date: Fri, 14 Jun 2024 22:07:41 +0200
Subject: [PATCH] DOCS: typos

---
 docs/chapter-06.rst           |  2 +-
 docs/chapter-12.rst           |  4 ++--
 docs/chapter-16.rst           | 12 ++++++------
 docs/spelling_wordlist_en.txt |  4 ++++
 4 files changed, 13 insertions(+), 9 deletions(-)

diff --git a/docs/chapter-06.rst b/docs/chapter-06.rst
index c1cd71b99..8fc7c87d9 100644
--- a/docs/chapter-06.rst
+++ b/docs/chapter-06.rst
@@ -700,7 +700,7 @@ last_name, sso_id, and action_token (the last two are mostly
 for internal use).
 
 If a ``auth_user`` table is defined before calling ``auth.enable()``
-the provided table withh be used.
+the provided table will be used.
 
 It is also possible to add ``extra_fields`` to the ``auth_user``
 table, for example:
diff --git a/docs/chapter-12.rst b/docs/chapter-12.rst
index 30f3cafd3..72ce4ae99 100644
--- a/docs/chapter-12.rst
+++ b/docs/chapter-12.rst
@@ -6,7 +6,7 @@ The Form class provides a high-level API for quickly building CRUD (create, upda
 especially for working on an existing database table. It can generate and process a form from a list
 of desired fields and/or from an existing database table.
 
-There are 3 typs of forms:
+There are 3 types of forms:
 
 CRUD Create forms:
 
@@ -246,7 +246,7 @@ it second argument:
 File upload field
 ~~~~~~~~~~~~~~~~~
 
-We can make a minor modificaiton to our reference model and an upload type file:
+We can make a minor modification to our reference model and an upload type file:
 
 .. code:: python
 
diff --git a/docs/chapter-16.rst b/docs/chapter-16.rst
index 21dd1bd6e..021f99de7 100644
--- a/docs/chapter-16.rst
+++ b/docs/chapter-16.rst
@@ -11,7 +11,7 @@ Given a task (just a python function), you can schedule async runs of that funct
 The runs can be a one-off or periodic. They can have timeout. They can be scheduled to run at a given scheduled time.
 
 The scheduler works by creating a table ``task_run`` and enqueueing runs of the predefined task as table records.
-Each ``task_run`` references a task and contains the input to be passed to that task. The scheduler will caputure the
+Each ``task_run`` references a task and contains the input to be passed to that task. The scheduler will capture the
 task stdout+stderr in a ``db.task_run.log`` and the task output in ``db.task_run.output``.
 
 A py4web thread loops and finds the next task that needs to be executed. For each task it creates a worker process
@@ -20,8 +20,8 @@ The worker processes are daemons and they only live for the life of one task run
 responsible for executing that one task in isolation. The main loop is responsible for assigning tasks and timeouts.
 
 The system is very robust because the only source of truth is the database and its integrity is guaranteed by
-transational safety. Even if py4web is killed, running tasks continue to run unless they complete, fail, or are
-explicitely killed.
+transactional safety. Even if py4web is killed, running tasks continue to run unless they complete, fail, or are
+explicitly killed.
 
 Aside for allowing multiple concurrent task runs in execution on one node,
 it is also possible to run multiple instances of the scheduler on different computing nodes,
@@ -58,7 +58,7 @@ Notice that in scaffolding app, the scheduler is created and started in common i
 
 You can manage your task runs busing the dashboard or using a ``Grid(path, db.task_run)``.
 
-To prevent database locks (in particular with sqlite) we recommand:
+To prevent database locks (in particular with sqlite) we recommend:
 
 - Use a different database for the scheduler and everything else
 - Always ``db.commit()`` as soon as possible after any insert/update/delete
@@ -77,8 +77,8 @@ Celery
 ------
 
 Yes. You can use Celery instead of the build-in scheduler but it adds complexity and it is less robust.
-Yet the build-in schduler is designed for long running tasks and the database can become a bottle neck
-if you have hundrands running concurrently. Celery may work better if you have more than 100 concurrent
+Yet the build-in scheduler is designed for long running tasks and the database can become a bottleneck
+if you have hundreds of tasks running concurrently. Celery may work better if you have more than 100 concurrent
 tasks and/or they are short running tasks.
 
 
diff --git a/docs/spelling_wordlist_en.txt b/docs/spelling_wordlist_en.txt
index f21899cff..2f41f48ad 100644
--- a/docs/spelling_wordlist_en.txt
+++ b/docs/spelling_wordlist_en.txt
@@ -56,6 +56,7 @@ Dockerfile
 doesn
 dropdown
 dropdowns
+enqueueing
 epub
 exe
 executesql
@@ -96,6 +97,7 @@ https
 hx
 ibm
 ie
+iframe
 incrementing
 informix
 Informix
@@ -226,6 +228,8 @@ sql
 sqlite
 sso
 stateful
+stderr
+stdout
 stylesheet
 subclassing
 subfolder