The Issue
Mkdocs-Macros needs a testing framework. This is necessary because (according to GitHub) over 3,500 projects depend on it, some of which are large or themselves have many dependent projects.
History
Important
The project itself was originally based on a simple, big idea 💡 borrowed from the world of wikis: using a templating engine to vastly expand the possibilities of the Markdown language. Any documentation tool needs a templating system.
Jinja2 made this easy, and the initial version of the plugin, back in 2018, was simple. Most of the later complications derived from "real-world" considerations:
The various places from which the placeholders (variables, macros and filters) should come, and
How to integrate the plugin within the build workflow of Mkdocs, controlling which pieces of the page to render or not to render, as well as logging, etc.
The question of how to test the final results arose immediately. I solved it by using the main tool that Mkdocs provides for that purpose: mkdocs serve, watching the results in a browser. It is a quick and effective way to test anything one wishes.
Caution
However, this way of testing has a key limitation: it is not systematic. As long as a single developer has complete command over a simple plugin, it can work. As soon as the code becomes complex, or other developers submit PRs, the risk of breaking something becomes too great. And with so many dependent projects, a push that breaks the code introduces risks into the lives of other people.
What is needed
Hence Mkdocs-Macros needs a testing framework, with a view to Continuous Integration on GitHub.
Ideas
This is easier said than done.
In #241, I summarized the discussions prompted by @timvink, inspired by his experience on mkdocstrings. It all started from the discussion on how to make Mkdocs-Macros coexist with other plugins; we agreed we needed a hook (#237); this was done... and then the question arose of how to test the result ❓.
His contribution was essential, because it framed the problem. He also kindly submitted a PR (#239) based on pytest, which contained a good start.
I realized, however, that I would have to take a step back, and think this problem through.
Why is it difficult?
Important
The problem is that a plugin (Mkdocs-Macros) relies, by definition, on the underlying piece of software (Mkdocs) in order to run. So, you have to rely on the debug/testing tools provided by the software itself.
The tool that Mkdocs provides for systematic testing is mkdocs build. It has a log, and can be made to halt in case of warnings (--strict), which is suitable for most applications.
It is, however, a binary test: the build worked or it didn't. It does not have the granularity (page by page) necessary for automatically testing the things that I had been testing manually, by launching mkdocs serve and checking each page myself.
Examples are:
Does each resulting page contain the expected result?
Was info in the YAML config file correctly interpreted?
Does the Jinja2 context actually contain the expected variable (key, value)?
Was the page rendered/not rendered?
And, of course, the content of the log (especially with the --debug option)
I realized that I needed a framework for that.
Caution
Also, programmatically checking the resulting HTML page opens a rabbit hole: after Markdown extensions have been rendered, and headers, footers, JavaScript scripts, etc. have been added, the code has been altered beyond recognition. And first, one needs to locate the HTML file that corresponds to the original Markdown page to be tested.
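Just to illustrate that first hurdle, here is a minimal sketch of how one might map a source page to its output file, assuming MkDocs' default use_directory_urls: true behavior (this helper is hypothetical, not part of the plugin):

```python
from pathlib import Path

def target_html_path(site_dir: str, md_path: str) -> Path:
    """Guess the HTML file produced for a source Markdown page.

    `md_path` is relative to the docs directory. Assumes MkDocs'
    default `use_directory_urls: true`, under which `docs/foo.md`
    becomes `site/foo/index.html`, while `docs/index.md` becomes
    `site/index.html`. Slugification and special cases are ignored.
    """
    page = Path(md_path)
    if page.stem == "index":
        # index pages keep their directory
        return Path(site_dir) / page.parent / "index.html"
    # other pages get their own directory
    return Path(site_dir) / page.parent / page.stem / "index.html"

# e.g. target_html_path("site", "tutorial/install.md")
#      -> site/tutorial/install/index.html
```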
Why I didn't use Mkdocs
One way to solve this issue might have been to attempt to use the Mkdocs framework itself.
Aside from the fact that it would have required an intimate knowledge of the intricacies of this framework, which I don't have, I realized that using Mkdocs to test itself would risk creating assertions that are tautologies, or that beg the question (accidentally formulated in such a way that they can't give a False answer, because the two sides are basically the same thing expressed in two different ways).
Solution
Here is an initial description.
Principle
The best approach was to make a completely distinct test framework.
The Test Framework executes mkdocs build --debug (and, if required, --strict) and then compares the following five inputs (a sketch of the build step follows the list):
- Source
  1. Source: the original Markdown files
  2. Config: the YAML configuration file (`config.yaml`)
- Target
  1. Build: the success/failure of the build (return code)
  2. Log: the logs generated by MkDocs during the rendering process
  3. Target: the rendered Markdown files (generated by MkDocs-Macros, using Jinja2)
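For orientation, here is a minimal sketch of the kind of wrapper that could drive such a build. It assumes a plain subprocess call; the function name and its details are illustrative, not the actual code of the framework:

```python
import subprocess

def run_build(project_dir: str, *extra_args: str) -> subprocess.CompletedProcess:
    """Run `mkdocs build` on a test project and capture the outcome.

    The returned object's `returncode` gives the success/failure of
    the build, and `stderr` contains the log (the mkdocs command
    normally writes its log messages to stderr).
    """
    return subprocess.run(
        ["mkdocs", "build", *extra_args],
        cwd=project_dir,      # the directory containing mkdocs.yml
        capture_output=True,  # keep stdout/stderr for later parsing
        text=True,
    )

# e.g.: result = run_build("test/simple", "--strict")
#       assert result.returncode == 0
```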
Notes on the Log
The log is parsed into a list of log entry objects. There are three types of log entries. Each properly formatted entry has a severity ('INFO'), an optional source ('macros'), a title ('Macros arguments') and an optional payload (any text).
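As a sketch only, the entry structure just described could be represented along the following lines. Both the class and the regular expression are illustrative; the line format assumed here (severity, dash, optional bracketed source, title, optional payload) is an assumption, not a documented MkDocs format:

```python
import re
from dataclasses import dataclass
from typing import Optional

# Assumed line shape: "DEBUG   -  [macros] - Macros arguments: {...}"
LOG_LINE = re.compile(
    r"^(?P<severity>DEBUG|INFO|WARNING|ERROR)\s+-\s+"
    r"(?:\[(?P<source>\w+)\]\s+-\s+)?"
    r"(?P<title>[^:\n]+?)(?::\s*(?P<payload>.*))?$"
)

@dataclass
class LogEntry:
    severity: str           # 'DEBUG', 'INFO', 'WARNING', ...
    source: Optional[str]   # e.g. 'macros' (None if absent)
    title: str              # e.g. 'Macros arguments'
    payload: Optional[str]  # free text, e.g. the repr of a dictionary

def parse_log(text: str) -> list[LogEntry]:
    entries = []
    for line in text.splitlines():
        m = LOG_LINE.match(line)
        if m:  # lines that are not properly formatted entries are skipped
            entries.append(LogEntry(**m.groupdict()))
    return entries
```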
Note
Mkdocs-Macros uses the payload of DEBUG entries to convey the three complete dictionaries of variables, filters and macros generated at the time of on_config.
Target documents
The target documents are raw Markdown documents (after Jinja2 has been rendered), to which the original YAML header has been added. They are adequate for testing the result of Mkdocs-Macros, as produced by on_page_markdown().
The framework collects and parses each file and provides (see the sketch after this list):
markdown (without the header)
metadata
content rendered into HTML
content rendered into plain text
an advanced search method, useful for checking the content
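A minimal sketch of that parsing step, assuming PyYAML and the Python-Markdown package (both common dependencies in the MkDocs ecosystem); the class name and the crude tag-stripping are illustrative only:

```python
import re
import yaml      # PyYAML
import markdown  # Python-Markdown

class TargetDoc:
    """A rendered Markdown file, with its YAML header re-attached."""

    def __init__(self, text: str):
        # split the YAML front matter ("---\n...\n---\n") from the body
        m = re.match(r"^---\n(.*?)\n---\n(.*)$", text, re.DOTALL)
        header, self.markdown = m.groups() if m else ("", text)
        self.metadata = yaml.safe_load(header) or {}
        # render to HTML, then crudely strip the tags for a plain-text view
        self.html = markdown.markdown(self.markdown)
        self.text = re.sub(r"<[^>]+>", "", self.html)

    def find(self, pattern: str) -> bool:
        """Search the plain text (case-insensitive regex)."""
        return re.search(pattern, self.text, re.IGNORECASE) is not None
```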
First Results
A first version of the test framework (test/fixture.py) has been produced.
Caution
This is experimental
The test framework provides a single DocProject object, which contains all the elements necessary to test (a usage sketch follows the list):
Each page (source and target), with markdown, content, etc.
The config file
The success/failure of the build (return code)
The log entries
The placeholders (variables, macros and filters) in their state at on_config (each page is then completed by its own metadata)
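Purely as an illustration of how such an object might be used in a pytest test; every name below (DocProject, build, pages, variables, etc.) is an assumption about the API of test/fixture.py, not its actual interface:

```python
# test_simple.py -- hypothetical usage of the fixture described above
from fixture import DocProject  # i.e. test/fixture.py

def test_simple_project():
    project = DocProject("simple")      # one of the two test projects
    project.build(strict=True)          # assumed to wrap `mkdocs build`
    assert project.success              # the return code was 0

    # page-by-page granularity: source and target of each page
    page = project.pages["index"]       # hypothetical accessor
    assert "some expected text" in page.markdown  # target Markdown
    assert page.metadata.get("title")   # the YAML header was preserved

    # placeholders captured at on_config time, and the parsed log
    assert "unit_price" in project.variables      # hypothetical variable
    assert not any(e.severity == "WARNING" for e in project.log)
```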
cd-ing into the test directory and running pytest launches the existing tests, on two test documentation projects:
simple
module