
Commit

DOCS: typos
nicozanf committed Jun 14, 2024
1 parent 0aa2d4a commit dfd19fb
Showing 4 changed files with 13 additions and 9 deletions.
2 changes: 1 addition & 1 deletion docs/chapter-06.rst
@@ -700,7 +700,7 @@ last_name, sso_id, and action_token (the last two are mostly for
 internal use).
 
 If a ``auth_user`` table is defined before calling ``auth.enable()``
-the provided table withh be used.
+the provided table will be used.
 
 It is also possible to add ``extra_fields`` to the ``auth_user`` table,
 for example:
4 changes: 2 additions & 2 deletions docs/chapter-12.rst
@@ -6,7 +6,7 @@ The Form class provides a high-level API for quickly building CRUD (create, upda
 especially for working on an existing database table. It can generate and process a form from a
 list of desired fields and/or from an existing database table.
 
-There are 3 typs of forms:
+There are 3 types of forms:
 
 CRUD Create forms:
 
@@ -246,7 +246,7 @@ it second argument:
 File upload field
 ~~~~~~~~~~~~~~~~~
 
-We can make a minor modificaiton to our reference model and an upload type file:
+We can make a minor modification to our reference model and an upload type file:
 
 .. code:: python
12 changes: 6 additions & 6 deletions docs/chapter-16.rst
@@ -11,7 +11,7 @@ Given a task (just a python function), you can schedule async runs of that funct
 The runs can be a one-off or periodic. They can have timeout. They can be scheduled to run at a given scheduled time.
 
 The scheduler works by creating a table ``task_run`` and enqueueing runs of the predefined task as table records.
-Each ``task_run`` references a task and contains the input to be passed to that task. The scheduler will caputure the
+Each ``task_run`` references a task and contains the input to be passed to that task. The scheduler will capture the
 task stdout+stderr in a ``db.task_run.log`` and the task output in ``db.task_run.output``.
 
 A py4web thread loops and finds the next task that needs to be executed. For each task it creates a worker process
@@ -20,8 +20,8 @@ The worker processes are daemons and they only live for the life of one task run
 responsible for executing that one task in isolation. The main loop is responsible for assigning tasks and timeouts.
 
 The system is very robust because the only source of truth is the database and its integrity is guaranteed by
-transational safety. Even if py4web is killed, running tasks continue to run unless they complete, fail, or are
-explicitely killed.
+transactional safety. Even if py4web is killed, running tasks continue to run unless they complete, fail, or are
+explicitly killed.
 
 Aside for allowing multiple concurrent task runs in execution on one node,
 it is also possible to run multiple instances of the scheduler on different computing nodes,
@@ -58,7 +58,7 @@ Notice that in scaffolding app, the scheduler is created and started in common i
 You can manage your task runs busing the dashboard or using a ``Grid(path, db.task_run)``.
-To prevent database locks (in particular with sqlite) we recommand:
+To prevent database locks (in particular with sqlite) we recommend:
 - Use a different database for the scheduler and everything else
 - Always ``db.commit()`` as soon as possible after any insert/update/delete
@@ -77,8 +77,8 @@ Celery
 ------
 Yes. You can use Celery instead of the build-in scheduler but it adds complexity and it is less robust.
-Yet the build-in schduler is designed for long running tasks and the database can become a bottle neck
-if you have hundrands running concurrently. Celery may work better if you have more than 100 concurrent
+Yet the build-in scheduler is designed for long running tasks and the database can become a bottleneck
+if you have hundreds of tasks running concurrently. Celery may work better if you have more than 100 concurrent
 tasks and/or they are short running tasks.
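The chapter-16 hunks above describe the scheduler's design: the ``task_run`` table is the queue and the single source of truth, runs are claimed and completed inside transactions, and ``db.commit()`` happens as soon as possible. A self-contained sqlite3 sketch of that database-as-queue pattern (the schema, status values, and task registry here are illustrative, not py4web's actual implementation, which also runs each task in a separate worker process with timeouts):

```python
import json
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE task_run (
    id INTEGER PRIMARY KEY,
    name TEXT,
    inputs TEXT,
    status TEXT DEFAULT 'queued',
    output TEXT)""")

# Registry of predefined tasks (plain Python functions).
TASKS = {"add": lambda x, y: x + y}

def enqueue(name, **inputs):
    """Schedule a run by inserting a record; commit early to avoid locks."""
    db.execute("INSERT INTO task_run (name, inputs) VALUES (?, ?)",
               (name, json.dumps(inputs)))
    db.commit()

def run_next():
    """Claim the oldest queued run, execute it, and record the outcome."""
    row = db.execute("SELECT id, name, inputs FROM task_run "
                     "WHERE status = 'queued' ORDER BY id LIMIT 1").fetchone()
    if row is None:
        return None
    run_id, name, inputs = row
    db.execute("UPDATE task_run SET status = 'running' WHERE id = ?", (run_id,))
    db.commit()  # the claim is durable even if the worker dies here
    try:
        output = TASKS[name](**json.loads(inputs))
        db.execute("UPDATE task_run SET status = 'completed', output = ? "
                   "WHERE id = ?", (json.dumps(output), run_id))
    except Exception:
        db.execute("UPDATE task_run SET status = 'failed' WHERE id = ?",
                   (run_id,))
    db.commit()
    return run_id

enqueue("add", x=2, y=3)
run_next()
```

Because every state change is a committed row update, a restarted loop sees exactly which runs are queued, running, completed, or failed, which is the robustness property the docs describe.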
4 changes: 4 additions & 0 deletions docs/spelling_wordlist_en.txt
@@ -56,6 +56,7 @@ Dockerfile
 doesn
 dropdown
 dropdowns
+enqueueing
 epub
 exe
 executesql
@@ -96,6 +97,7 @@ https
 hx
 ibm
 ie
+iframe
 incrementing
 informix
 Informix
@@ -226,6 +228,8 @@ sql
 sqlite
 sso
 stateful
+stderr
+stdout
 stylesheet
 subclassing
 subfolder
