diff --git a/docs/chapter-06.rst b/docs/chapter-06.rst
index c1cd71b9..8fc7c87d 100644
--- a/docs/chapter-06.rst
+++ b/docs/chapter-06.rst
@@ -700,7 +700,7 @@ last_name, sso_id, and action_token (the last two are mostly for
internal use).
-If a ``auth_user`` table is defined before calling ``auth.enable()``
+If an ``auth_user`` table is defined before calling ``auth.enable()``
-the provided table withh be used.
+the provided table will be used.
It is also possible to add ``extra_fields`` to the ``auth_user`` table,
for example:
diff --git a/docs/chapter-12.rst b/docs/chapter-12.rst
index 405a9abe..72ce4ae9 100644
--- a/docs/chapter-12.rst
+++ b/docs/chapter-12.rst
@@ -6,7 +6,7 @@ The Form class provides a high-level API for quickly building CRUD (create, upda
especially for working on an existing database table. It can generate and process a form from a
list of desired fields and/or from an existing database table.
-There are 3 typs of forms:
+There are 3 types of forms:
CRUD Create forms:
@@ -246,7 +246,7 @@ it second argument:
File upload field
~~~~~~~~~~~~~~~~~
-We can make a minor modificaiton to our reference model and an upload type file:
+We can make a minor modification to our reference model and add an upload type field:
.. code:: python
@@ -458,7 +458,7 @@ Note: 'custom' is just a convention, it could be any name that does not clash wi
You can also be more creative and use your HTML in the template instead of using widgets:
-.. code:: html
+.. code:: html
[[extend 'layout.html']]
@@ -475,7 +475,7 @@ You can also be more creative and use your HTML in the template instead of using
[[for color in ['red', 'blue', 'green']:]]
[[pass]]
diff --git a/docs/chapter-16.rst b/docs/chapter-16.rst
index 21dd1bd6..021f99de 100644
--- a/docs/chapter-16.rst
+++ b/docs/chapter-16.rst
@@ -11,7 +11,17 @@ Given a task (just a python function), you can schedule async runs of that funct
-The runs can be a one-off or periodic. They can have timeout. They can be scheduled to run at a given scheduled time.
+The runs can be one-off or periodic. They can have a timeout. They can be scheduled to run at a given time.
The scheduler works by creating a table ``task_run`` and enqueueing runs of the predefined task as table records.
-Each ``task_run`` references a task and contains the input to be passed to that task. The scheduler will caputure the
+Each ``task_run`` references a task and contains the input to be passed to that task. The scheduler will capture the
task stdout+stderr in a ``db.task_run.log`` and the task output in ``db.task_run.output``.
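+
+For instance, a minimal sketch (assuming a ``run_id`` known from elsewhere, e.g. the dashboard)
+of reading back what was captured for one run:
+
+.. code:: python
+
+   # fetch one task_run record by id (plain pydal access)
+   run = db.task_run(run_id)
+   print(run.log)     # the captured stdout+stderr
+   print(run.output)  # the task output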
A py4web thread loops and finds the next task that needs to be executed. For each task it creates a worker process
@@ -20,8 +20,8 @@ The worker processes are daemons and they only live for the life of one task run
responsible for executing that one task in isolation. The main loop is responsible for assigning tasks and timeouts.
The system is very robust because the only source of truth is the database and its integrity is guaranteed by
-transational safety. Even if py4web is killed, running tasks continue to run unless they complete, fail, or are
-explicitely killed.
+transactional safety. Even if py4web is killed, running tasks continue to run unless they complete, fail, or are
+explicitly killed.
-Aside for allowing multiple concurrent task runs in execution on one node,
+Aside from allowing multiple concurrent task runs on one node,
it is also possible to run multiple instances of the scheduler on different computing nodes,
@@ -58,7 +58,22 @@ Notice that in scaffolding app, the scheduler is created and started in common i
-You can manage your task runs busing the dashboard or using a ``Grid(path, db.task_run)``.
+You can manage your task runs using the dashboard or using a ``Grid(path, db.task_run)``.
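+
+For instance, a minimal sketch of exposing that grid from a controller (the route and
+template name are illustrative assumptions; the template is expected to call ``[[=grid.render()]]``):
+
+.. code:: python
+
+   from py4web import action
+   from py4web.utils.grid import Grid
+   from .common import db, session  # provided by the scaffolding app
+
+   @action("task_runs/<path:path>", method=["GET", "POST"])
+   @action.uses("grid.html", db, session)
+   def task_runs(path=None):
+       # a grid over the scheduler bookkeeping table, as suggested above
+       return dict(grid=Grid(path, db.task_run))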
-To prevent database locks (in particular with sqlite) we recommand:
+To prevent database locks (in particular with sqlite) we recommend:
- Use a different database for the scheduler and everything else
- Always ``db.commit()`` as soon as possible after any insert/update/delete
@@ -77,8 +77,8 @@ Celery
------
-Yes. You can use Celery instead of the build-in scheduler but it adds complexity and it is less robust.
+Yes. You can use Celery instead of the built-in scheduler but it adds complexity and it is less robust.
-Yet the build-in schduler is designed for long running tasks and the database can become a bottle neck
-if you have hundrands running concurrently. Celery may work better if you have more than 100 concurrent
+Yet the built-in scheduler is designed for long-running tasks and the database can become a bottleneck
+if you have hundreds of tasks running concurrently. Celery may work better if you have more than 100 concurrent
tasks and/or they are short running tasks.
diff --git a/docs/conf.py b/docs/conf.py
index d1cbad80..189dc747 100644
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -31,6 +31,8 @@
if '__version__ = ' in line:
values = line.split(sep = ' = ')
current_version = values[1].strip('\n').strip('"')
+ # keep only the inner part of the extracted value (drop two characters at each end)
+ current_version = current_version[2:-2]
break
release = current_version
version = current_version
diff --git a/docs/spelling_wordlist_en.txt b/docs/spelling_wordlist_en.txt
index f21899cf..2f41f48a 100644
--- a/docs/spelling_wordlist_en.txt
+++ b/docs/spelling_wordlist_en.txt
@@ -56,6 +56,7 @@ Dockerfile
doesn
dropdown
dropdowns
+enqueueing
epub
exe
executesql
@@ -96,6 +97,7 @@ https
hx
ibm
ie
+iframe
incrementing
informix
Informix
@@ -226,6 +228,8 @@ sql
sqlite
sso
stateful
+stderr
+stdout
stylesheet
subclassing
subfolder