diff --git a/docs/age_of_empires.md b/docs/age_of_empires.md
index d0d3952f7d9..b8f26a491a7 100644
--- a/docs/age_of_empires.md
+++ b/docs/age_of_empires.md
@@ -12,7 +12,8 @@ author: Lyz
 
 - When gathering food from fastest to slowest you can: fish > hunt deer > hunt boar > hunt sheep > gather berries.
 
-# [Basic Opening](https://yewtu.be/watch?v=4cWd9ST-TZc&list=PLQU6KyTfRZCZc-Q9HpfzEaWLvpLz9MdnE&index=0)
+# Openings or Build Orders
+## [Basic Opening](https://yewtu.be/watch?v=4cWd9ST-TZc&list=PLQU6KyTfRZCZc-Q9HpfzEaWLvpLz9MdnE&index=0)
 
 When the match starts:
 
@@ -89,6 +90,46 @@ When the match starts:
 
 - Send 5 or 6 of the sheep gatherers to the wood
 
+## [Fast castle boom](https://www.youtube.com/watch?v=JsTNM7j6fs4&t=119)
+
+# Strategy guides
+
+## How to play maps
+
+- How to play Arena:
+  - [Hera's guide](https://piped.video/watch?v=8gXI4XGMPzQ&t=0)
+  - [Tatoh game in arena](https://www.youtube.com/watch?v=3qg4Xwm8CAo&t=1211): First match of the series
+- [How to play Hideout](https://www.youtube.com/watch?v=DdK8QveBegw&t=652)
+- [How to play Black Forest](https://www.youtube.com/watch?v=1V_jsU9PF8Y)
+
+## Inside the mind of a pro player
+
+- [Episode 1](https://www.youtube.com/watch?v=54hRmrdzO-I)
+- [Episode 2](https://www.youtube.com/watch?v=sZCs6dwH5qk&t=1727)
+
+# Strategies against civilisations
+
+I'm only using the Mongols, and so far these are the strategies I've seen or heard from the pros:
+
+- Aztecs:
+  - Steppe lancers are good against eagle warriors
+  - Heavy scorpions against eagle warriors and skirms
+- Cumans:
+  - [Scout, and if it drops two TCs in feudal, tower rush into archers](https://www.youtube.com/watch?v=H9QUNtFII1g&t=0)
+  - [Put initial pressure](https://www.youtube.com/watch?v=R9qaFZzZgBY&t=1925): nice example of early aggression
+- Incas:
+  - Steppe lancers are good against eagle warriors
+  - Heavy scorpions against eagle warriors and skirms
+- Khmer: boom, map control, monks and halberdiers
+- Mayans:
+  - Steppe lancers are good against eagle warriors
+  - Heavy scorpions against eagle warriors and skirms
+- Romans:
+  - [Hera's guide on how to beat them](https://www.youtube.com/watch?v=SA44-Y3XUy0&t=842)
+- Tatars: heavy scorpions
+- Turks:
+  - [How to defend against them in Arena](https://www.youtube.com/watch?v=AI_JRA_nCpw&t=3710)
+
 # Newbie pitfalls
 
 - Don't go for the berries first, it's slower.
@@ -98,7 +139,6 @@ When the match starts: control units - Not building the farms around the TC - Don't let the scout die - # Micromanagements ## Workers @@ -138,3 +178,20 @@ When the match starts: ### House building Build new houses when you're 2 of population down to the limit +# Nice games + +## Tournaments + +- 2023 Masters of Arena 7 Final Tatoh vs Vinchester: + - [Casted by T90](https://www.youtube.com/watch?v=3qg4Xwm8CAo&t=1211s) + - [Pov by Tatoh](https://www.youtube.com/watch?v=AI_JRA_nCpw&t=8854) + +## Showmatches + +- [Hera vs TheViper | Battlegrounds 3 | BO5](https://www.youtube.com/watch?v=AlKMRQNMVzo&t=4306) +- [The Viper VS Tatoh PA7](https://www.youtube.com/watch?v=5_p3TXasBHY&t=5319) + +## 1vs1 games + +- [Hindustanis vs Portuguese | Arabia | Hera vs Yo](https://www.youtube.com/watch?v=iZ7eWLLbh34) +- [Dravidians vs Turks | African Clearing | Hera vs Yo](https://www.youtube.com/watch?v=tZyVLDwBfd4) diff --git a/docs/ansible_snippets.md b/docs/ansible_snippets.md index 1bd30f85b4f..5d28c58b224 100644 --- a/docs/ansible_snippets.md +++ b/docs/ansible_snippets.md @@ -4,10 +4,127 @@ date: 20220119 author: Lyz --- +# Run command on a working directory + +```yaml +- name: Change the working directory to somedir/ and run the command as db_owner + ansible.builtin.command: /usr/bin/make_database.sh db_user db_name + become: yes + become_user: db_owner + args: + chdir: somedir/ + creates: /path/to/database +``` + +# [Run handlers in the middle of the tasks file](https://docs.ansible.com/ansible/latest/playbook_guide/playbooks_handlers.html#controlling-when-handlers-run) + +If you need handlers to run before the end of the play, add a task to flush them using the [meta module](https://docs.ansible.com/ansible/latest/collections/ansible/builtin/meta_module.html#meta-module), which executes Ansible actions: + +```yaml +tasks: + - name: Some tasks go here + ansible.builtin.shell: ... + + - name: Flush handlers + meta: flush_handlers + + - name: Some other tasks + ansible.builtin.shell: ... +``` + +The `meta: flush_handlers` task triggers any handlers that have been notified at that point in the play. + +Once handlers are executed, either automatically after each mentioned section or manually by the `flush_handlers meta` task, they can be notified and run again in later sections of the play. + +# [Run command idempotently](https://stackoverflow.com/questions/70956356/no-changed-when-lint-warning-araise-in-the-ansible-playbook) + +```yaml +- name: Register the runner in gitea + become: true + command: act_runner register --config config.yaml --no-interactive --instance {{ gitea_url }} --token {{ gitea_docker_runner_token }} + args: + creates: /var/lib/gitea_docker_runner/.runner +``` + +# Get the correct architecture string + +If you have an `amd64` host you'll get `x86_64`, but sometimes you need the `amd64` string. 
In those cases you can use the following snippet:
+
+```yaml
+---
+# vars/main.yaml
+deb_architecture:
+  aarch64: arm64
+  x86_64: amd64
+
+---
+# tasks/main.yaml
+- name: Download the act runner binary
+  become: True
+  ansible.builtin.get_url:
+    url: https://dl.gitea.com/act_runner/act_runner-linux-{{ deb_architecture[ansible_architecture] }}
+    dest: /usr/bin/act_runner
+    mode: '0755'
+```
+
+# [Check the instances that are going to be affected by playbook run](https://medium.com/geekculture/a-complete-overview-of-ansible-dynamic-inventory-a9ded104df4c)
+
+Useful to list the instances of a dynamic inventory:
+
+```bash
+ansible-inventory -i aws_ec2.yaml --list
+```
+
+# [Check if variable is defined or empty](https://www.shellhacks.com/ansible-when-variable-is-defined-exists-empty-true/)
+
+In Ansible playbooks, it is often a good practice to test if a variable exists and what its value is.
+
+In particular, this helps to avoid different “VARIABLE IS NOT DEFINED” errors in Ansible playbooks.
+
+In this context there are several useful tests that you can apply using [Jinja2 filters](https://docs.ansible.com/ansible/latest/user_guide/playbooks_filters.html) in Ansible.
+
+## Check if Ansible variable is defined (exists)
+
+```yaml
+tasks:
+
+- shell: echo "The variable 'foo' is defined: '{{ foo }}'"
+  when: foo is defined
+
+- fail: msg="The variable 'bar' is not defined"
+  when: bar is undefined
+```
+
+## Check if Ansible variable is empty
+
+```yaml
+tasks:
+
+- fail: msg="The variable 'bar' is empty"
+  when: bar|length == 0
+
+- shell: echo "The variable 'foo' is not empty: '{{ foo }}'"
+  when: foo|length > 0
+```
+
+## Check if Ansible variable is defined and not empty
+
+```yaml
+tasks:
+
+- shell: echo "The variable 'foo' is defined and not empty"
+  when: (foo is defined) and (foo|length > 0)
+
+- fail: msg="The variable 'bar' is not defined or empty"
+  when: (bar is not defined) or (bar|length == 0)
+```
+
 # Start and enable a systemd service
 
+Typically defined in `handlers/main.yaml`:
+
 ```yaml
-- name: Start the service
+- name: Restart the service
   become: true
   systemd:
     name: zfs_exporter
@@ -16,6 +133,26 @@ author: Lyz
     state: started
 ```
 
+And used in any task:
+
+```yaml
+- name: Create the systemd service
+  become: true
+  template:
+    src: service.j2
+    dest: /etc/systemd/system/zfs_exporter.service
+  notify: Restart the service
+```
+
+# [Download a file](https://docs.ansible.com/ansible/latest/collections/ansible/builtin/get_url_module.html)
+
+```yaml
+- name: Download foo.conf
+  ansible.builtin.get_url:
+    url: http://example.com/path/file.conf
+    dest: /etc/foo.conf
+    mode: '0440'
+```
+
 # [Download and decompress a tar.gz](https://docs.ansible.com/ansible/latest/collections/ansible/builtin/unarchive_module.html)
 
 ```yaml
@@ -195,7 +332,8 @@ To make the `command` idempotent you can use a `stat` task before.
 
 ```yaml
 - name: stat foo
-  stat: path=/path/to/foo
+  stat:
+    path: /path/to/foo
   register: foo_stat
 
 - name: Move foo to bar
diff --git a/docs/authentik.md b/docs/authentik.md
index f8b117c75ff..610e5b228b1 100644
--- a/docs/authentik.md
+++ b/docs/authentik.md
@@ -886,6 +886,11 @@ Instead of exporting everything from a single instance, there's also the option
 
 This export can be triggered via the API or the Web UI by clicking the download button in the flow list.
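+For example, a sketch of triggering the export through the API (the endpoint path is an assumption based on authentik's API browser, and `my-flow` is a hypothetical slug):
+
+```bash
+curl -H "Authorization: Bearer $AUTHENTIK_TOKEN" \
+  "https://authentik.example.com/api/v3/flows/instances/my-flow/export/" \
+  -o my-flow.yaml
+```
+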
+## [Monitoring](https://goauthentik.io/docs/installation/monitoring)
+
+I've skimmed through the Prometheus metrics exposed at `:9300/metrics` in the core and they aren't that useful :(
+
+
 # References
 
 * [Source](https://github.com/goauthentik/authentik)
diff --git a/docs/bash_snippets.md b/docs/bash_snippets.md
index 59362cc7eae..260e0b5fd2f 100644
--- a/docs/bash_snippets.md
+++ b/docs/bash_snippets.md
@@ -4,6 +4,35 @@ date: 20220827
 author: Lyz
 ---
 
+# [Get the root path of a git repository](https://stackoverflow.com/questions/957928/is-there-a-way-to-get-the-git-root-directory-in-one-command)
+
+```bash
+git rev-parse --show-toplevel
+```
+
+# [Get epoch GMT time](https://unix.stackexchange.com/questions/384672/getting-epoch-time-from-gmt-time-stamp)
+
+```bash
+date -u '+%s'
+```
+
+# [Check the length of an array with jq](https://phpfog.com/count-json-array-elements-with-jq/)
+
+```bash
+echo '[{"username":"user1"},{"username":"user2"}]' | jq '. | length'
+```
+
+# [Exit the script if there is an error](https://unix.stackexchange.com/questions/595249/what-does-the-eu-mean-in-bin-bash-eu-at-the-top-of-a-bash-script-or-a)
+
+```bash
+set -eu
+```
+
+# [Prompt the user for data](https://stackoverflow.com/questions/1885525/how-do-i-prompt-a-user-for-confirmation-in-bash-script)
+
+```bash
+read -p "Ask whatever" choice
+```
+
 # [Parse csv with bash](https://www.baeldung.com/linux/csv-parsing)
 
 # [Do the remainder or modulus of a number](https://stackoverflow.com/questions/39006278/how-does-bash-modulus-remainder-work)
diff --git a/docs/beets.md b/docs/beets.md
index f00920dce29..003d9296356 100644
--- a/docs/beets.md
+++ b/docs/beets.md
@@ -31,9 +31,62 @@ a [library](https://beets.readthedocs.io/en/stable/dev/api.html).
 # [Installation](https://beets.readthedocs.io/en/stable/guides/main.html#installing)
 
 ```bash
-pip install beets
+pipx install beets
 ```
 
+You’ll want to set a few basic options before you start using beets. The [configuration](https://beets.readthedocs.io/en/stable/reference/config.html) is stored in a text file. You can show its location by running `beet config -p`, though it may not exist yet. Run `beet config -e` to edit the configuration in your favorite text editor. The file will start out empty, but here’s a good place to start:
+
+```yaml
+# Path to a directory where you’d like to keep your music.
+directory: ~/music
+
+# Database file that keeps an index of your music.
+library: ~/data/musiclibrary.db
+```
+
+The default configuration assumes you want to start a new organized music folder (that directory above) and that you’ll copy cleaned-up music into that empty folder using beets’ `import` command. But you can configure beets to behave many other ways:
+
+- Start with a new empty directory, but move new music in instead of copying it (saving disk space). Put this in your config file:
+
+  ```yaml
+  import:
+    move: yes
+  ```
+
+- Keep your current directory structure; importing should never move or copy files but instead just correct the tags on music. Put the line `copy: no` under the `import:` heading in your config file to disable any copying or renaming. Make sure to point `directory` at the place where your music is currently stored.
+
+- Keep your current directory structure and do not correct files’ tags: leave files completely unmodified on your disk. (Corrected tags will still be stored in beets’ database, and you can use them to do renaming or tag changes later.)
Put this in your config file:
+
+  ```yaml
+  import:
+    copy: no
+    write: no
+  ```
+
+  to disable renaming and tag-writing.
+
+# Usage
+
+## [Importing your library](https://beets.readthedocs.io/en/stable/guides/main.html#importing-your-library)
+
+The next step is to import your music files into the beets library database. Because this can involve modifying files and moving them around, data loss is always a possibility, so now would be a good time to make sure you have a recent backup of all your music. We’ll wait.
+
+There are two good ways to bring your existing library into beets. You can either: (a) quickly bring all your files with all their current metadata into beets’ database, or (b) use beets’ highly-refined autotagger to find canonical metadata for every album you import. Option (a) is really fast, but option (b) makes sure all your songs’ tags are exactly right from the get-go. The point about speed bears repeating: using the autotagger on a large library can take a very long time, and it’s an interactive process. So set aside a good chunk of time if you’re going to go that route.
+
+If you’ve got time and want to tag all your music right once and for all, do this:
+
+```bash
+$ beet import /path/to/my/music
+```
+
+(Note that by default, this command will copy music into the directory you specified above. If you want to use your current directory structure, set the `import.copy` config option.) To take the fast, un-autotagged path, just say:
+
+```bash
+$ beet import -A /my/huge/mp3/library
+```
+
+Note that you just need to add `-A` for “don’t autotag”.
+
 # References
 
 * [Git](https://github.com/beetbox/beets)
diff --git a/docs/coding/python/pydantic.md b/docs/coding/python/pydantic.md
index 89947bf2590..731880e3be3 100644
--- a/docs/coding/python/pydantic.md
+++ b/docs/coding/python/pydantic.md
@@ -646,6 +646,10 @@ extension-pkg-whitelist = "pydantic"
 
 Or if it fails, add to the line `# pylint: extension-pkg-whitelist`.
 
+# To investigate
+
+- [Integration of pydantic with pandas](https://pandera.readthedocs.io/en/stable/pydantic_integration.html)
+
 # References
 
 - [Docs](https://pydantic-docs.helpmanual.io/)
diff --git a/docs/copier.md b/docs/copier.md
new file mode 100644
index 00000000000..f048681fc49
--- /dev/null
+++ b/docs/copier.md
@@ -0,0 +1,876 @@
+
+[Copier](https://github.com/copier-org/copier) is a library and CLI app for rendering project templates.
+
+- Works with local paths and Git URLs.
+- Your project can include any file and Copier can dynamically replace values in any kind of text file.
+- It generates a beautiful output and takes care of not overwriting existing files unless instructed to do so.
+
+# [Installation](https://github.com/copier-org/copier)
+
+```bash
+pipx install copier
+```
+
+Until [this issue is solved](https://github.com/copier-org/copier/issues/1225) you also need to downgrade `pydantic`:
+
+```bash
+pipx inject copier 'pydantic<2'
+```
+
+# [Basic concepts](https://github.com/copier-org/copier#basic-concepts)
+
+Copier is composed of these main concepts:
+
+- Templates: They lay out how to generate the subproject.
+- Questionnaires: They are configured in the template. Answers are used to generate projects.
+- Projects: This is where your real program lives. But it is usually generated and/or updated from a template.
+
+Copier targets these main human audiences:
+
+- Template creators: Programmers that repeat code too much and prefer a tool to do it for them.
+
+ This quote on their docs made my day: + + > Copier doesn't replace the DRY principle... but sometimes you simply can't be DRY and you need a DRYing machine... + +- Template consumers: Programmers that want to start a new project quickly, or that want to evolve it comfortably. + +Non-humans should be happy also by using Copier's CLI or API, as long as their expectations are the same as for those humans... and as long as they have feelings. + +Templates have these goals: + +- [Code scaffolding](https://en.wikipedia.org/wiki/Scaffold_%28programming%29): Help consumers have a working source code tree as quickly as possible. All templates allow scaffolding. +- Code lifecycle management. When the template evolves, let consumers update their projects. Not all templates allow updating. + +Copier tries to have a smooth learning curve that lets you create simple templates that can evolve into complex ones as needed. + +# Usage + +## [Creating a template](https://copier.readthedocs.io/en/latest/creating/) + +A template is a directory: usually the root folder of a Git repository. + +The content of the files inside the project template is copied to the destination without changes, unless they end with `.jinja`. In that case, the templating engine will be used to render them. + +Jinja2 templating is used. Learn more about it by reading [Jinja2 documentation](https://jinja.palletsprojects.com/). + +If a YAML file named `copier.yml` or `copier.yaml` is found in the root of the project, the user will be prompted to fill in or confirm the default values. + +Minimal example: + +``` +📁 my_copier_template # your template project +├── 📄 copier.yml # your template configuration +├── 📁 .git/ # your template is a Git repository +├── 📁 {{project_name}} # a folder with a templated name +│ └── 📄 {{module_name}}.py.jinja # a file with a templated name +└── 📄 {{_copier_conf.answers_file}}.jinja # answers are recorded here +``` + +Where: + +- copier.yml + + ```yaml + # questions + project_name: + type: str + help: What is your project name? + + module_name: + type: str + help: What is your Python module name? + ``` + +- `{{project_name}}/{{module_name}}.py.jinja` + + ```python + print("Hello from {{module_name}}!") + ``` + +- `{{_copier_conf.answers_file}}.jinja` + + ``` + # Changes here will be overwritten by Copier + {{ _copier_answers|to_nice_yaml -}} + ``` + +Generating a project from this template using `copier copy my_copier_template generated_project` answering `super_project` and `world` for the `project_name` and `module_name` questions respectively would create in the following directory and files: + +``` +📁 generated_project +├── 📁 super_project +│ └── 📄 world.py +└── 📄 .copier-answers.yml +``` + +Where: + +- `super_project/world.py` + + ```python + print("Hello from world!") + ``` + +- `.copier-answers.yml` + + ```yaml + # Changes here will be overwritten by Copier + _commit: 0.1.0 + _src_path: gh:your_account/your_template + project_name: super_project + module_name: world + ``` + +### [Template helpers](https://copier.readthedocs.io/en/latest/creating/#template-helpers) + +In addition to [all the features Jinja supports](https://jinja.palletsprojects.com/en/3.1.x/templates/), Copier includes: + +- All functions and filters from [jinja2-ansible-filters](https://gitlab.com/dreamer-labs/libraries/jinja2-ansible-filters/). This includes the `to_nice_yaml` filter, which is used extensively in our context. 
+ +- `_copier_answers` includes the current answers dict, but slightly modified to make it suitable to autoupdate your project safely: + - It doesn't contain secret answers. + - It doesn't contain any data that is not easy to render to JSON or YAML. + - It contains special keys like `_commit` and `_src_path`, indicating how the last template update was done. +- `_copier_conf` includes a representation of the current Copier Worker object, also slightly modified: + - It only contains JSON-serializable data. + - You can serialize it with `{{ _copier_conf|to_json }}`. + - ⚠️ It contains secret answers inside its `.data` key. + - Modifying it doesn't alter the current rendering configuration. + - It contains the current commit hash from the template in `{{ _copier_conf.vcs_ref_hash }}`. + - Contains Operating System-specific directory separator under `sep` key. + +## [Configuring a template](https://copier.readthedocs.io/en/latest/configuring) + +### [The `copier.yaml` file](https://copier.readthedocs.io/en/latest/configuring/#the-copieryml-file) + +The `copier.yml` (or `copier.yaml`) file is found in the root of the template, and it is the main entrypoint for managing your template configuration. + +For each key found, Copier will prompt the user to fill or confirm the values before they become available to the project template. + +This `copier.yml` file: + +```yaml +name_of_the_project: My awesome project +number_of_eels: 1234 +your_email: "" +``` + +Will result in a questionary similar to: + +``` +🎤 name_of_the_project + My awesome project +🎤 number_of_eels (int) + 1234 +🎤 your_email +``` + +Apart from the simplified format, as seen above, Copier supports a more advanced format to ask users for data. To use it, the value must be a dict. + +Supported keys: + +- type: User input must match this type. Options are: `bool`, `float`, `int`, `json`, `str`, `yaml` (default). +- help: Additional text to help the user know what's this question for. +- choices: To restrict possible values. + + A choice can be validated by using the extended syntax with dict-style and tuple-style choices. For example: + + ```yaml + cloud: + type: str + help: Which cloud provider do you use? + choices: + - Any + - AWS + - Azure + - GCP + + iac: + type: str + help: Which IaC tool do you use? + choices: + Terraform: tf + Cloud Formation: + value: cf + validator: "{% if cloud != 'AWS' %}Requires AWS{% endif %}" + Azure Resource Manager: + value: arm + validator: "{% if cloud != 'Azure' %}Requires Azure{% endif %}" + Deployment Manager: + value: dm + validator: "{% if cloud != 'GCP' %}Requires GCP{% endif %}" + ``` + + When the rendered validator is a non-empty string, the choice is disabled and the message is shown. Choice validation is useful when the validity of a choice depends on the answer to a previous question. + +- default: Leave empty to force the user to answer. Provide a default to save them from typing it if it's quite common. When using choices, the default must be the choice value, not its key, and it must match its type. If values are quite long, you can use YAML anchors. +- secret: When true, it hides the prompt displaying asterisks (*****) and doesn't save the answer in the answers file +- placeholder: To provide a visual example for what would be a good value. It is only shown while the answer is empty, so maybe it doesn't make much sense to provide both default and placeholder. +- multiline: When set to `true`, it allows multiline input. This is especially useful when type is json or yaml. 
+- validator: Jinja template with which to validate the user input. This template will be rendered with the combined answers as variables; it should render nothing if the value is valid, and an error message to show to the user otherwise. +- when: Condition that, if false, skips the question. + + If it is a boolean, it is used directly, but it's a bit absurd in that case. + + If it is a string, it is converted to boolean using a parser similar to YAML, but only for boolean values. + + This is most useful when templated. + + If a question is skipped, its answer will be: + + - The default value, if you're generating the project for the first time. + - The last answer recorded, if you're updating the project. + + ```yaml + project_creator: + type: str + + project_license: + type: str + choices: + - GPLv3 + - Public domain + + copyright_holder: + type: str + default: |- + {% if project_license == 'Public domain' -%} + {#- Nobody owns public projects -#} + nobody + {%- else -%} + {#- By default, project creator is the owner -#} + {{ project_creator }} + {%- endif %} + # Only ask for copyright if project is not in the public domain + when: "{{ project_license != 'Public domain' }}" + ``` + + ```yaml + love_copier: + type: bool # This makes Copier ask for y/n + help: Do you love Copier? + default: yes # Without a default, you force the user to answer + + project_name: + type: str # Any value will be treated raw as a string + help: An awesome project needs an awesome name. Tell me yours. + default: paradox-specifier + validator: >- + {% if not (project_name | regex_search('^[a-z][a-z0-9\-]+$')) %} + project_name must start with a letter, followed one or more letters, digits or dashes all lowercase. + {% endif %} + + rocket_launch_password: + type: str + secret: true # This value will not be logged into .copier-answers.yml + placeholder: my top secret password + + # I'll avoid default and help here, but you can use them too + age: + type: int + validator: "{% if age <= 0 %}Must be positive{% endif %}" + + height: + type: float + + any_json: + help: Tell me anything, but format it as a one-line JSON string + type: json + multiline: true + + any_yaml: + help: Tell me anything, but format it as a one-line YAML string + type: yaml # This is the default type, also for short syntax questions + multiline: true + + your_favorite_book: + # User will choose one of these and your template will get the value + choices: + - The Bible + - The Hitchhiker's Guide to the Galaxy + + project_license: + # User will see only the dict key and choose one, but you will + # get the dict value in your template + choices: + MIT: &mit_text | + Here I can write the full text of the MIT license. + This will be a long text, shortened here for example purposes. + Apache2: | + Full text of Apache2 license. + # When using choices, the default value is the value, **not** the key; + # that's why I'm using the YAML anchor declared above to avoid retyping the + # whole license + default: *mit_text + # You can still define the type, to make sure answers that come from --data + # CLI argument match the type that your template expects + type: str + + close_to_work: + help: Do you live close to your work? 
+ # This format works just like the dict one + choices: + - [at home, I work at home] + - [less than 10km, quite close] + - [more than 10km, not so close] + - [more than 100km, quite far away] + ``` + +#### [Include other YAML files](https://copier.readthedocs.io/en/latest/configuring/#include-other-yaml-files) + +The `copier.yml` file supports multiple documents as well as using the `!include` tag to include settings and questions from other YAML files. This allows you to split up a larger `copier.yml` and enables you to reuse common partial sections from your templates. When multiple documents are used, care has to be taken with questions and settings that are defined in more than one document: + +- A question with the same name overwrites definitions from an earlier document. +- Settings given in multiple documents for `exclude`, `skip_if_exists`, `jinja_extensions` and `secret_questions` are concatenated. +- Other settings (such as `tasks` or `migrations`) overwrite previous definitions for these settings. + +You can use Git submodules to sanely include shared code into templates! + +```yaml +--- +# Copier will load all these files +!include shared-conf/common.*.yml + +# These 3 lines split the several YAML documents +--- +# These two documents include common questions for these kind of projects +!include common-questions/web-app.yml +--- +!include common-questions/python-project.yml +--- + +# Here you can specify any settings or questions specific for your template +_skip_if_exists: + - .password.txt +custom_question: default answer +``` + +that includes questions and settings from `common-questions/python-project.yml` + +```yaml +version: + type: str + help: What is the version of your Python project? + +# Settings like `_skip_if_exists` are merged +_skip_if_exists: + - "pyproject.toml" +``` + +### [Conditional files and directories](https://copier.readthedocs.io/en/latest/configuring/#conditional-files-and-directories) + +You can take advantage of the ability to template file and directory names to make them "conditional", i.e. to only generate them based on the answers given by a user. + +For example, you can ask users if they want to use pre-commit: + +```yaml +use_precommit: + type: bool + default: false + help: Do you want to use pre-commit? +``` + +And then, you can generate a `.pre-commit-config.yaml` file only if they answered "yes": + +``` +📁 your_template +├── 📄 copier.yml +└── 📄 {% if use_precommit %}.pre-commit-config.yaml{% endif %}.jinja +``` + +Note that the chosen template suffix must appear outside of the Jinja condition, otherwise the whole file won't be considered a template and will be copied as such in generated projects. + +You can even use the answers of questions with choices: + +```yaml +ci: + type: str + help: What Continuous Integration service do you want to use? + choices: + GitHub CI: github + GitLab CI: gitlab + default: github +``` + +``` +📁 your_template +├── 📄 copier.yml +├── 📁 {% if ci == 'github' %}.github{% endif %} +│ └── 📁 workflows +│ └── 📄 ci.yml +└── 📄 {% if ci == 'gitlab' %}.gitlab-ci.yml{% endif %}.jinja +``` + +Contrary to files, directories must not end with the template suffix. + +### [Generating a directory structure](https://copier.readthedocs.io/en/latest/configuring/#generating-a-directory-structure) + +You can use answers to generate file names as well as whole directory structures. 
+ +```yaml +package: + type: str + help: Package name +``` + +``` +📁 your_template +├── 📄 copier.yml +└── 📄 {{ package.replace('.', _copier_conf.sep) }}{{ _copier_conf.sep }}__main__.py.jinja +``` + +If you answer `your_package.cli.main` Copier will generate this structure: + +``` +📁 your_project +└── 📁 your_package + └── 📁 cli + └── 📁 main + └── 📄 __main__.py +``` + +You can either use any separator, like `.`, and replace it with `_copier_conf.sep`, like in the example above, or just use `/`. + +### [Importing Jinja templates and macros](https://copier.readthedocs.io/en/latest/configuring/#importing-jinja-templates-and-macros) + +You can [include templates](https://jinja.palletsprojects.com/en/3.1.x/templates/#include) and [import macros](https://jinja.palletsprojects.com/en/3.1.x/templates/#import) to reduce code duplication. A common scenario is the derivation of new values from answers, e.g. computing the slug of a human-readable name: + +- `copier.yaml`: + ```yaml + _exclude: + - name-slug + + name: + type: str + help: A nice human-readable name + + slug: + type: str + help: A slug of the name + default: "{% include 'name-slug.jinja' %}" + ``` + +- `name-slug.jinja` + + ```jinja2 + {# For simplicity ... -#} + {{ name|lower|replace(' ', '-') }} + ``` + +``` +📁 your_template +├── 📄 copier.yml +└── 📄 name-slug.jinja +``` + +It is also possible to include a template in a templated folder name + +``` +📁 your_template +├── 📄 copier.yml +├── 📄 name-slug.jinja +└── 📁 {% include 'name-slug.jinja' %} + └── 📄 __init__.py +``` + +or in a templated file name + +``` +📁 your_template +├── 📄 copier.yml +├── 📄 name-slug.jinja +└── 📄 {% include 'name-slug.jinja' %}.py +``` + +or in the templated content of a text file: + +```toml +# pyproject.toml.jinja + +[project] +name = "{% include 'name-slug.jinja' %}" +``` + +Similarly, a Jinja macro can be defined and imported, e.g. in copier.yml. + +```jinja +slugify.jinja + +{# For simplicity ... -#} +{% macro slugify(value) -%} +{{ value|lower|replace(' ', '-') }} +{%- endmacro %} +``` + +```yaml +# copier.yml + +_exclude: + - slugify + +name: + type: str + help: A nice human-readable name + +slug: + type: str + help: A slug of the name + default: "{% from 'slugify.jinja' import slugify %}{{ slugify(name) }}" +``` + +or in a templated folder name, in a templated file name, or in the templated content of a text file. + +As the number of imported templates and macros grows, you may want to place them in a dedicated directory such as `includes`: + +``` +📁 your_template +├── 📄 copier.yml +└── 📁 includes + ├── 📄 name-slug.jinja + ├── 📄 slugify.jinja + └── 📄 ... +``` + +Then, make sure to exclude this folder in `copier.yml` + +```yaml +_exclude: + - includes +``` + +or use a subdirectory, e.g.: + +```yaml +_subdirectory: template +``` + +To import it you can use either: + +``` +{% include pathjoin('includes', 'name-slug.jinja') %} +``` + +or + +``` +{% from pathjoin('includes', 'slugify.jinja') import slugify %} +``` + +### [Available settings](https://copier.readthedocs.io/en/latest/configuring/#available-settings) + +Remember that the key must be prefixed with an underscore if you use it in the `copier.yml` file. 
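+
+For instance, a minimal sketch mixing settings and questions (both settings shown here already appear earlier in this document):
+
+```yaml
+# Settings carry a leading underscore
+_subdirectory: template
+_exclude:
+  - includes
+
+# Questions don't
+project_name:
+  type: str
+  help: What is your project name?
+```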
+ +Check [the source for a complete list of settings](https://copier.readthedocs.io/en/latest/configuring/#available-settings) + +### [The `.copier.answers.yml` file](https://copier.readthedocs.io/en/latest/configuring/#the-copier-answersyml-file) + +If the destination path exists and a `.copier-answers.yml` file is present there, it will be used to load the last user's answers to the questions made in the `copier.yml` file. + +This makes projects easier to update because when the user is asked, the default answers will be the last ones they used. + +The file must be called exactly `{{ _copier_conf.answers_file }}.jinja` in your template's root folder to allow applying multiple templates to the same subproject. + +The file must have this content: + +```yaml +# Changes here will be overwritten by Copier; NEVER EDIT MANUALLY +{{ _copier_answers|to_nice_yaml -}} +``` + +### [Apply multiple templates to the same subproject](https://copier.readthedocs.io/en/latest/configuring/#applying-multiple-templates-to-the-same-subproject) + +Imagine this scenario: + +- You use one framework that has a public template to generate a project. It's available at https://github.com/example-framework/framework-template.git. +- You have a generic template that you apply to all your projects to use the same pre-commit configuration (formatters, linters, static type checkers...). You have published that in https://gitlab.com/my-stuff/pre-commit-template.git. +- You have a private template that configures your subproject to run in your internal CI. It's found in git@gitlab.example.com:my-company/ci-template.git. + +All 3 templates are completely independent: + +- Anybody can generate a project for the specific framework, no matter if they want to use pre-commit or not. +- You want to share the same pre-commit configurations, no matter if the subproject is for one or another framework. +- You want to have a centralized CI configuration for all your company projects, no matter their pre-commit configuration or the framework they rely on. + +You need to use a different answers file for each one. All of them contain a `{{ _copier_conf.answers_file }}.jinja` file as specified above. Then you apply all the templates to the same project: + +```bash +mkdir my-project +cd my-project +git init +# Apply framework template +copier copy -a .copier-answers.main.yml https://github.com/example-framework/framework-template.git . +git add . +git commit -m 'Start project based on framework template' +# Apply pre-commit template +copier copy -a .copier-answers.pre-commit.yml https://gitlab.com/my-stuff/pre-commit-template.git . +git add . +pre-commit run -a # Just in case 😉 +git commit -am 'Apply pre-commit template' +# Apply internal CI template +copier copy -a .copier-answers.ci.yml git@gitlab.example.com:my-company/ci-template.git . +git add . +git commit -m 'Apply internal CI template' +``` + +Done! 
+
+After a while, when templates get new releases, updates are handled separately for each template:
+
+```bash
+copier update -a .copier-answers.main.yml
+copier update -a .copier-answers.pre-commit.yml
+copier update -a .copier-answers.ci.yml
+```
+
+## [Generating a project](https://copier.readthedocs.io/en/latest/generating/)
+
+You can generate a project from a template using the copier command-line tool:
+
+```bash
+copier copy path/to/project/template path/to/destination
+```
+
+Or within Python code:
+
+```python
+copier.run_copy("path/to/project/template", "path/to/destination")
+```
+
+The "template" parameter can be a local path, a URL, or a shortcut URL:
+
+- GitHub: `gh:namespace/project`
+- GitLab: `gl:namespace/project`
+
+If Copier doesn't detect your remote URL as a Git repository, make sure it starts with one of `git+https://`, `git+ssh://`, `git@` or `git://`, or it ends with `.git`.
+
+Use the `--data` command-line argument or the `data` parameter of the `copier.run_copy()` function to pass whatever extra context you want to be available in the templates. The arguments can be any valid Python value, even a function.
+
+Use the `--vcs-ref` command-line argument to check out a particular Git ref before generating the project.
+
+All the available options are described with the `--help-all` option.
+
+## [Updating a project](https://copier.readthedocs.io/en/latest/updating/)
+
+The best way to update a project from its template is when all of these conditions are true:
+
+- The destination folder includes a valid `.copier-answers.yml` file.
+- The template is versioned with Git (with tags).
+- The destination folder is versioned with Git.
+
+If that's your case, then just enter the destination folder, make sure `git status` shows it clean, and run:
+
+```bash
+copier update
+```
+
+This will read all available Git tags, will compare them using PEP 440, and will check out the latest one before updating. To update to the latest commit, add `--vcs-ref=HEAD`. You can use any other Git ref you want.
+
+When updating, Copier will do its best to respect your project evolution by using the answers you provided the last time you copied. However, sometimes it's impossible for Copier to know what to do with a diff code hunk. In those cases, copier handles the conflict in one of two ways, controlled with the `--conflict` option:
+
+- `--conflict rej`: Creates a separate `.rej` file for each file with conflicts. These files contain the unresolved diffs.
+- `--conflict inline` (default): Updates the file with conflict markers. This is quite similar to the conflict markers created when a git merge command encounters a conflict.
+
+If the update results in conflicts, you should review those manually before committing.
+
+You probably don't want to lose important changes or to include merge conflicts in your Git history, but if you aren't careful, it's easy to make mistakes.
+
+That's why the recommended way to prevent these mistakes is to add a pre-commit (or equivalent) hook that forbids committing conflict files or markers (see the sketch at the end of this section). The recommended hook configuration depends on the `conflict` setting you use.
+
+Never update `.copier-answers.yml` manually!!!
+
+If you want to just reuse all previous answers use `copier update --force`.
+
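+As an illustration of that hook, a minimal `.pre-commit-config.yaml` sketch (the `check-merge-conflict` hook lives in the official pre-commit-hooks repo; the `rev` and the local `fail` hook for `.rej` files are assumptions to adapt):
+
+```yaml
+repos:
+  - repo: https://github.com/pre-commit/pre-commit-hooks
+    rev: v4.4.0
+    hooks:
+      # Reject commits that still contain merge conflict markers
+      - id: check-merge-conflict
+  - repo: local
+    hooks:
+      # Reject commits that still contain unresolved .rej files
+      - id: forbid-rej-files
+        name: Forbid .rej files
+        entry: Found .rej files from a copier update, resolve them first
+        language: fail
+        files: '\.rej$'
+```
+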
+ +### [Migration across Copier major versions](https://copier.readthedocs.io/en/latest/updating/#migration-across-copier-major-versions) + +When there's a new major release of Copier (for example from Copier 5.x to 6.x), there are chances that there's something that changed. Maybe your template will not work as it did before. + +Copier needs to make a copy of the template in its old state with its old answers so it can actually produce a diff with the new state and answers and apply the smart update to the project. To overcome this situation you can: + +- Write good [migrations](https://copier.readthedocs.io/en/latest/configuring/#migrations). +- Then you can test them on your template's CI on a matrix against several Copier versions. +- Or you can just [recopy the project](https://copier.readthedocs.io/en/latest/generating/#regenerating-a-project) when you update to a newer Copier major release. + +## Tasks and migrations + +[tasks](https://copier.readthedocs.io/en/latest/configuring/#tasks) are commands to execute after generating or updating a project from your template. They run ordered, and with the `$STAGE=task` variable in their environment. + +```yaml +# copier.yml + +_tasks: + # Strings get executed under system's default shell + - "git init" + - "rm {{ name_of_the_project }}/README.md" + # Arrays are executed without shell, saving you the work of escaping arguments + - [invoke, "--search-root={{ _copier_conf.src_path }}", after-copy] + # You are able to output the full conf to JSON, to be parsed by your script + - [invoke, end-process, "--full-conf={{ _copier_conf|to_json }}"] + # Your script can be run by the same Python environment used to run Copier + - ["{{ _copier_python }}", task.py] + # OS-specific task (supported values are "linux", "macos", "windows" and `None`) + - >- + {% if _copier_conf.os in ['linux', 'macos'] %} + rm {{ name_of_the_project }}/README.md + {% elif _copier_conf.os == 'windows' %} + Remove-Item {{ name_of_the_project }}/README.md + {% endif %} +``` + +Note: the example assumes you use Invoke as your task manager. But it's just an example. The point is that we're showing how to build and call commands. + +[Migrations](https://copier.readthedocs.io/en/latest/configuring/#migrations) are like tasks, but each item in the list is a dict with these keys: + +- `version`: Indicates the version that the template update has to go through to trigger this migration. It is evaluated using PEP 440. +- `before` (optional): Commands to execute before performing the update. The answers file is reloaded after running migrations in this stage, to let you migrate answer values. +- `after` (optional): Commands to execute after performing the update. + +Migrations will run in the same order as declared in the file (so you could even run a migration for a higher version before running a migration for a lower version if the higher one is declared before and the update passes through both). + +They will only run when new `version >= declared version > old version`. And only when updating (not when copying for the 1st time). + +If the migrations definition contains Jinja code, it will be rendered with the same context as the rest of the template. + +Migration processes will receive these environment variables: + +- `$STAGE`: Either before or after. +- `$VERSION_FROM`: Git commit description of the template as it was before updating. +- `$VERSION_TO`: Git commit description of the template as it will be after updating. 
+- `$VERSION_CURRENT`: The version detector as you indicated it when describing migration tasks. +- `$VERSION_PEP440_FROM`, `$VERSION_PEP440_TO`, `$VERSION_PEP440_CURRENT`: Same as the above, but normalized into a standard PEP 440 version string indicator. If your scripts use these environment variables to perform migrations, you probably will prefer to use these variables. + +```yaml +# copier.yml + +_migrations: + - version: v1.0.0 + before: + - rm ./old-folder + after: + # {{ _copier_conf.src_path }} points to the path where the template was + # cloned, so it can be helpful to run migration scripts stored there. + - invoke -r {{ _copier_conf.src_path }} -c migrations migrate $VERSION_CURRENT +``` + +# Developing a copier template + +## Avoid doing commits when developing + +While you're developing it's useful to see the changes before making a commit, to do so you can use `copier copy -r HEAD ./src ./dst`. Keep in mind that you won't be able to use `copier update` so the changes will be applied incrementally, not declaratively. So if you make a file in an old run that has been deleted in the source, it won't be removed in the destination. It's a good idea then to remove the destination directory often. + +## [Apply migrations only once](https://github.com/copier-org/copier/issues/240) + +Currently `copier` allows you to run two kind of commands: + +- Tasks: that run each time you either `copy` or `update` +- Migrations: That run only on `update`s if you're coming from a previous version + +But there [isn't yet a way](https://github.com/copier-org/copier/issues/240) to run a task only on the `copy` of a project. Until there is you can embed inside the generated project's Makefile an `init` target that runs the init script. The user will then need to: + +``` +copier copy src dest +cd dest +make init +``` + +Not ideal but it can be a workaround until we have the `pre-copy` tasks. + +Another solution I thought of is to: + +- Create a tag `0.0.0` on the first valid commit of the template +- Create an initial migration script for version `0.1.0`. + +That way instead of doing `copier copy src dest` you can do: + +```bash +copier copy -r 0.0.0 src dest +copier update +``` + +It will run over all the migrations steps you make in the future. A way to tackle this is to eventually release a `1.0.0` and move the `0.1.0` migration script to `1.1.0` using `copier copy -r 1.0.0 src dest`. + +However, @pawamoy thinks that this can eventually backfire because all the versions of the template will not be backward compatible with 0.0.0. If they are now, they probably won't be in the future. This might be because of the template itself, or because of the extensions it uses, or because of the version of Copier it required at the time of each version release. So this can be OK for existing projects, but not when trying to generate new ones. + +## [Create your own jinja extensions](https://github.com/pawamoy/copier-pdm/blob/main/copier.yml) + +You can create your own jinja filters. 
For example [creating an `extensions.py` file](https://github.com/pawamoy/copier-pdm/blob/main/extensions.py) with the contents: + +```python +import re +import subprocess +import unicodedata +from datetime import date + +from jinja2.ext import Extension + + +def git_user_name(default: str) -> str: + return subprocess.getoutput("git config user.name").strip() or default + + +def git_user_email(default: str) -> str: + return subprocess.getoutput("git config user.email").strip() or default + + +def slugify(value, separator="-"): + value = unicodedata.normalize("NFKD", str(value)).encode("ascii", "ignore").decode("ascii") + value = re.sub(r"[^\w\s-]", "", value.lower()) + return re.sub(r"[-_\s]+", separator, value).strip("-_") + + +class GitExtension(Extension): + def __init__(self, environment): + super().__init__(environment) + environment.filters["git_user_name"] = git_user_name + environment.filters["git_user_email"] = git_user_email + + +class SlugifyExtension(Extension): + def __init__(self, environment): + super().__init__(environment) + environment.filters["slugify"] = slugify + + +class CurrentYearExtension(Extension): + def __init__(self, environment): + super().__init__(environment) + environment.globals["current_year"] = date.today().year +``` + +Then you can [import it in your `copier.yaml` file](https://github.com/pawamoy/copier-pdm/blob/main/copier.yml): + +```yaml +_jinja_extensions: + - copier_templates_extensions.TemplateExtensionLoader + - extensions.py:CurrentYearExtension + - extensions.py:GitExtension + - extensions.py:SlugifyExtension + + +author_fullname: + type: str + help: Your full name + default: "{{ 'Timothée Mazzucotelli' | git_user_name }}" + +author_email: + type: str + help: Your email + default: "{{ 'pawamoy@pm.me' | git_user_email }}" + +repository_name: + type: str + help: Your repository name + default: "{{ project_name | slugify }}" +``` + +You'll need to install `copier-templates-extensions`, if you've installed `copier` with pipx you can: + +```bash +pipx inject copier copier-templates-extensions +``` + +# References + +- [Source](https://github.com/copier-org/copier) +- [Docs](https://copier.readthedocs.io/en/latest/) +- [Example templates](https://github.com/topics/copier-template) diff --git a/docs/devops/aws/aws.md b/docs/devops/aws/aws.md index 48808501884..e6c252e05f2 100644 --- a/docs/devops/aws/aws.md +++ b/docs/devops/aws/aws.md @@ -24,3 +24,7 @@ TBD ```bash aws ec2 stop-instances --instance-ids i-xxxxxxxx ``` + +# References + +- [Compare ec2 instance types](https://instances.vantage.sh/) diff --git a/docs/devops/kubectl/kubectl_commands.md b/docs/devops/kubectl/kubectl_commands.md index 2c2b81d7856..4fe3885ed92 100644 --- a/docs/devops/kubectl/kubectl_commands.md +++ b/docs/devops/kubectl/kubectl_commands.md @@ -405,6 +405,14 @@ kubectl exec {{ pod_name }} -it bash kubectl run --generator=run-pod/v1 -i --tty debian --image=debian -- bash ``` +## [Run a pod in a defined node](https://stackoverflow.com/questions/66972537/can-you-schedule-a-pod-on-a-specific-node-using-kubectl-run) + +Get the node hostnames with `kubectl get nodes`, then override the node with: + +```bash +kubectl run mypod --image ubuntu:18.04 --overrides='{"apiVersion": "v1", "spec": {"nodeSelector": { "kubernetes.io/hostname": "my-node.internal" }}}' --command -- sleep 100000000000000 +``` + ## [Get a root shell of a running container](http://stackoverflow.com/questions/42793382/exec-commands-on-kubernetes-pods-with-root-access) 1. 
Get the Node where the pod is and the docker ID
diff --git a/docs/diffview.md b/docs/diffview.md
new file mode 100644
index 00000000000..12ec27c30b4
--- /dev/null
+++ b/docs/diffview.md
@@ -0,0 +1,59 @@
+[Diffview](https://github.com/sindrets/diffview.nvim) is a single tabpage interface for easily cycling through diffs for all modified files for any git rev.
+
+# Installation
+
+If you're using it with NeoGit and Packer use:
+
+```lua
+  use {
+    'NeogitOrg/neogit',
+    requires = {
+      'nvim-lua/plenary.nvim',
+      'sindrets/diffview.nvim',
+      'nvim-tree/nvim-web-devicons'
+    }
+  }
+```
+
+# Usage
+
+## [DiffviewOpen](https://github.com/sindrets/diffview.nvim#diffviewopen-git-rev-options-----paths)
+
+Calling `:DiffviewOpen` with no args opens a new `Diffview` that compares against the current index. You can also provide any valid git rev to view only changes for that rev.
+
+Examples:
+
+- `:DiffviewOpen`
+- `:DiffviewOpen HEAD~2`
+- `:DiffviewOpen HEAD~4..HEAD~2`
+- `:DiffviewOpen d4a7b0d`
+- `:DiffviewOpen d4a7b0d^!`
+- `:DiffviewOpen d4a7b0d..519b30e`
+- `:DiffviewOpen origin/main...HEAD`
+
+You can also provide additional paths to narrow down what files are shown `:DiffviewOpen HEAD~2 -- lua/diffview plugin`.
+
+Additional commands for convenience:
+
+- `:DiffviewClose`: Close the current diffview. You can also use `:tabclose`.
+- `:DiffviewToggleFiles`: Toggle the file panel.
+- `:DiffviewFocusFiles`: Bring focus to the file panel.
+- `:DiffviewRefresh`: Update stats and entries in the file list of the current Diffview.
+
+With a Diffview open and the default key bindings, you can:
+
+- Cycle through changed files with `<Tab>` and `<S-Tab>`
+- Stage changes with `-`
+- Restore a file with `X`
+- Refresh the diffs with `R`
+- Go to the file panel with `e`
+
+# Troubleshooting
+
+## No valid VCS tool found
+
+It may be because you have an outdated version of git. To fix it, update to the latest one; if that's still not enough, [install it from the backports repo](linux_snippets.md#install-latest-version-of-package-from-backports).
+
+# References
+
+- [Source](https://github.com/sindrets/diffview.nvim)
diff --git a/docs/docker.md b/docs/docker.md
index 822aa94b8b0..c183ded515c 100644
--- a/docs/docker.md
+++ b/docs/docker.md
@@ -85,6 +85,62 @@ sudo docker run -it --entrypoint /bin/bash [docker_image]
 
 # Snippets
 
+## [Add healthcheck to your dockers](https://www.howtogeek.com/devops/how-and-why-to-add-health-checks-to-your-docker-containers/)
+
+Health checks allow a container to expose its workload’s availability. This stands apart from whether the container is running. If your database goes down, your API server won’t be able to handle requests, even though its Docker container is still running.
+
+This makes for unhelpful experiences during troubleshooting. A simple `docker ps` would report the container as available. Adding a health check extends the `docker ps` output to include the container’s true state.
+
+You configure container health checks in your Dockerfile. This accepts a command which the Docker daemon will execute every 30 seconds. Docker uses the command’s exit code to determine your container’s healthiness:
+
+- `0`: The container is healthy and working normally.
+- `1`: The container is unhealthy; the workload may not be functioning.
+
+Healthiness isn’t checked straightaway when containers are created. The status will show as starting before the first check runs. This gives the container time to execute any startup tasks. A container with a passing health check will show as healthy; an unhealthy container displays unhealthy.
+
+In docker-compose you can write the healthchecks like the following snippet:
+
+```yaml
+---
+version: '3.4'
+
+services:
+  jellyfin:
+    image: linuxserver/jellyfin:latest
+    container_name: jellyfin
+    restart: unless-stopped
+    healthcheck:
+      test: curl http://localhost:8096/health || exit 1
+      interval: 10s
+      retries: 5
+      start_period: 5s
+      timeout: 10s
+```
+
+## [List the dockers of a registry](https://stackoverflow.com/questions/31251356/how-to-get-a-list-of-images-on-docker-registry-v2)
+
+List all repositories (effectively images):
+
+```bash
+$: curl -X GET https://myregistry:5000/v2/_catalog
+> {"repositories":["redis","ubuntu"]}
+```
+
+List all tags for a repository:
+
+```bash
+$: curl -X GET https://myregistry:5000/v2/ubuntu/tags/list
+> {"name":"ubuntu","tags":["14.04"]}
+```
+
+If the registry needs authentication you have to specify the username and password in the curl command:
+
+```bash
+curl -X GET -u <user>:<password> https://myregistry:5000/v2/_catalog
+curl -X GET -u <user>:<password> https://myregistry:5000/v2/ubuntu/tags/list
+```
+
 ## Attach a docker to many networks
 
 You can't do it through the `docker run` command, there you can only specify one
@@ -231,6 +287,12 @@ RUN apt-get update && apt-get install -y \
 ADD ./path/to/directory /path/to/destination
 ```
 
+## [Append a new path to PATH](https://stackoverflow.com/questions/27093612/in-a-dockerfile-how-to-update-path-environment-variable)
+
+```dockerfile
+ENV PATH="${PATH}:/opt/gtk/bin"
+```
+
 # Troubleshooting
 
 If you are using a VPN and docker, you're going to have a hard time.
diff --git a/docs/git.md b/docs/git.md
index f9b55a945ff..23d64543cdb 100644
--- a/docs/git.md
+++ b/docs/git.md
@@ -733,6 +733,20 @@ git config --global --add push.autoSetupRemote true
 
 # Snippets
 
+## Remove tags
+
+To delete a tag locally you can run:
+
+```bash
+git tag -d {{ tag_name }}
+```
+
+To remove it remotely run:
+
+```bash
+git push --delete origin {{ tag_name }}
+```
+
 ## Revert a commit
 
 ```bash
diff --git a/docs/gitea.md b/docs/gitea.md
index 0f5cb8a0a6a..529e56ffae9 100644
--- a/docs/gitea.md
+++ b/docs/gitea.md
@@ -51,12 +51,20 @@ ENABLED=true
 
 Even if you enable it at configuration level you need to manually enable the actions on each repository [until this issue is solved](https://github.com/go-gitea/gitea/issues/23724).
 
-So far there is [only one possible runner](https://gitea.com/gitea/act_runner) which is based on docker and [`act`](https://github.com/nektos/act). Currently, the only way to install act runner is by compiling it yourself, or by using one of the [pre-built binaries](http://dl.gitea.com/act_runner). There is no Docker image or other type of package management yet. At the moment, act runner should be run from the command line. Of course, you can also wrap this binary in something like a system service, supervisord, or Docker container.
+So far there is [only one possible runner](https://gitea.com/gitea/act_runner) which is based on docker and [`act`](https://github.com/nektos/act). Currently, the only way to install act runner is by compiling it yourself, or by using one of the [pre-built binaries](https://dl.gitea.com/act_runner). There is no Docker image or other type of package management yet. At the moment, act runner should be run from the command line. Of course, you can also wrap this binary in something like a system service, supervisord, or Docker container.
+
+ +You can create the default configuration of the runner with: + +```bash +./act_runner generate-config > config.yaml +``` + +You can tweak there for example the `capacity` so you are able to run more than one workflow in parallel. Before running a runner, you should first register it to your Gitea instance using the following command: ```bash -./act_runner register --no-interactive --instance --token +./act_runner register --config config.yaml --no-interactive --instance --token ``` There are two arguments required, `instance` and `token`. @@ -70,7 +78,7 @@ After registering, a new file named `.runner` will appear in the current directo Finally, it’s time to start the runner. ```bash -./act_runner daemon +./act_runner --config config.yaml daemon ``` You can also create a systemd service so that it starts when the server boots. For example in `/etc/systemd/system/gitea_actions_runner.service: @@ -146,11 +154,193 @@ If you open that up, you’ll see that there is a section called labels, and it You can specify any other docker image. Adding new labels doesn't work yet. +You can start with this dockerfile: + +```dockerfile +FROM node:16-bullseye + +# Configure the labels +LABEL prune=false + +# Configure the AWS credentials +RUN mkdir /root/.aws +COPY files/config /root/.aws/config +COPY files/credentials /root/.aws/credentials + +# Install dependencies +RUN apt-get update && apt-get install -y \ + python3 \ + python3-pip \ + python3-venv \ + screen \ + vim \ + && python3 -m pip install --upgrade pip \ + && rm -rf /var/lib/apt/lists/* + +RUN pip install \ + molecule==5.0.1 \ + ansible==8.0.0 \ + ansible-lint \ + yamllint \ + molecule-plugins[ec2,docker,vagrant] \ + boto3 \ + botocore \ + testinfra \ + pytest + +RUN wget https://download.docker.com/linux/static/stable/x86_64/docker-24.0.2.tgz \ + && tar xvzf docker-24.0.2.tgz \ + && cp docker/* /usr/bin \ + && rm -r docker docker-* +``` + +It's prepared for: + +- Working within an AWS environment +- Run Ansible and molecule +- Build dockers + ### Things that are not ready yet * [Enable actions by default](https://github.com/go-gitea/gitea/issues/23724) * Kubernetes act runner * [Support cron jobs](https://github.com/go-gitea/gitea/pull/22751) +* [Badge for the CI jobs](https://github.com/go-gitea/gitea/issues/23688) + +### Build a docker within a gitea action + +Assuming you're using the custom gitea_runner docker proposed above you can build and upload a docker to a registry with this action: + +```yaml +--- +name: Publish Docker image + +"on": [push] + +jobs: + build-and-push: + runs-on: ubuntu-latest + steps: + - name: Checkout code + uses: https://github.com/actions/checkout@v3 + + - name: Login to Docker Registry + uses: https://github.com/docker/login-action@v2 + with: + registry: my_registry.org + username: ${{ secrets.REGISTRY_USERNAME }} + password: ${{ secrets.REGISTRY_PASSWORD }} + + - name: Set up QEMU + uses: https://github.com/docker/setup-qemu-action@v2 + + - name: Set up Docker Buildx + uses: https://github.com/docker/setup-buildx-action@v2 + + - name: Extract metadata (tags, labels) for Docker + id: meta + uses: https://github.com/docker/metadata-action@v4 + with: + images: my_registry.org/the_name_of_the_docker_to_build + + - name: Build and push + uses: docker/build-push-action@v2 + with: + context: . 
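+          # Build multi-arch images (needs the QEMU and Buildx setup steps above)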
+ platforms: linux/amd64,linux/arm64 + push: true + cache-from: type=registry,ref=my_registry.org/the_name_of_the_docker_to_build:buildcache + cache-to: type=registry,ref=my_registry.org/the_name_of_the_docker_to_build:buildcache,mode=max + tags: ${{ steps.meta.outputs.tags }} + labels: ${{ steps.meta.outputs.labels }} +``` + +It uses a pair of nice features: + +- Multi-arch builds +- [Cache](https://docs.docker.com/build/ci/github-actions/cache/) to speed up the builds + +As it reacts to all events it will build and push: + +- A tag with the branch name on each push to that branch +- a tag with the tag on tag push + +### Bump the version of a repository on commits on master + +- Create a SSH key for the CI to send commits to protected branches. +- Upload the private key to a repo or organization secret called `DEPLOY_SSH_KEY`. +- Upload the public key to the repo configuration deploy keys +- Create the `bump.yaml` file with the next contents: + + ```yaml + --- + name: Bump version + + "on": + push: + branches: + - main + + jobs: + bump_version: + if: "!startsWith(github.event.head_commit.message, 'bump:')" + runs-on: ubuntu-latest + name: "Bump version and create changelog" + steps: + - name: Check out + uses: actions/checkout@v3 + with: + fetch-depth: 0 # Fetch all history + + - name: Configure SSH + run: | + echo "${{ secrets.DEPLOY_SSH_KEY }}" > ~/.ssh/deploy_key + chmod 600 ~/.ssh/deploy_key + dos2unix ~/.ssh/deploy_key + ssh-agent -a $SSH_AUTH_SOCK > /dev/null + ssh-add ~/.ssh/deploy_key + + - name: Bump the version + run: cz bump --changelog --no-verify + + - name: Push changes + run: | + git remote add ssh git@gitea-production.cloud.icij.org:templates/ansible-role.git + git pull ssh main + git push ssh main + git push ssh --tags + ``` + + It assumes that you have `cz` (commitizen) and `dos2unix` installed in your runner. + +### Skip gitea actions job on changes of some files + +There are some expensive CI pipelines that don't need to be run for example if you changed a line in the `README.md`, to skip a pipeline on changes of certain files you can use the `paths-ignore` directive: + +```yaml +--- +name: Ansible Testing + +"on": + push: + paths-ignore: + - 'meta/**' + - Makefile + - README.md + - renovate.json + - CHANGELOG.md + - .cz.toml + - '.gitea/workflows/**' + +jobs: + test: + name: Test + runs-on: ubuntu-latest + steps: + ... +``` + +The only downside is that if you set this pipeline as required in the branch protection, the merge button will look yellow instead of green when the pipeline is skipped. ## [Disable the regular login, use only Oauth](https://discourse.gitea.io/t/solved-removing-default-login-interface/2740/2) @@ -298,6 +488,18 @@ Or you can change [the admin's password](https://discourse.gitea.io/t/how-to-cha gitea --config /etc/gitea/app.ini admin user change-password -u username -p password ``` +# [Gitea client command line tool](https://gitea.com/gitea/tea) + +`tea` is a command line tool to interact with Gitea servers. It still lacks some features but is usable. + +## [Installation](https://gitea.com/gitea/tea#installation) + +- Download the precompiled binary from https://dl.gitea.com/tea/ +- Until [#542](https://gitea.com/gitea/tea/issues/542) is fixed manually create a token with all the permissions +- Run `tea login add` to set your credentials. 
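
Once logged in you can drive the server from the terminal; for instance (subcommand names assumed from the project help, check `tea --help` on your version):

```bash
# List the issues of the repository in the current directory
tea issues

# List its open pull requests
tea pulls
```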
+ + + # References * [Home](https://gitea.io/en-us/) diff --git a/docs/gotify.md b/docs/gotify.md new file mode 100644 index 00000000000..40462d1bcb3 --- /dev/null +++ b/docs/gotify.md @@ -0,0 +1,9 @@ +[Gotify](https://github.com/gotify/server) is a simple server for sending and receiving messages in real-time per WebSocket. + +# Not there yet + +- [Reactions on the notifications](https://github.com/gotify/server/issues/494) + +# References + +- [Source](https://github.com/gotify/server) diff --git a/docs/grafana.md b/docs/grafana.md new file mode 100644 index 00000000000..202c33ca004 --- /dev/null +++ b/docs/grafana.md @@ -0,0 +1,99 @@ +[Grafana](https://grafana.com/grafana) is a web application to create dashboards. + +# [Installation](https://grafana.com/docs/grafana/latest/setup-grafana/installation/docker/#run-grafana-via-docker-compose) + +We're going to install it with docker-compose and connect it to [Authentik](authentik.md). + +## [Create the Authentik connection](https://goauthentik.io/integrations/services/grafana/) + +Assuming that you have [the terraform authentik provider configured](authentik.md), use the next terraform code: + +```hcl +# --------------- +# -- Variables -- +# --------------- + +variable "grafana_name" { + type = string + description = "The name shown in the Grafana application." + default = "Grafana" +} + +variable "grafana_redirect_uri" { + type = string + description = "The redirect url configured on Grafana." +} + +variable "grafana_icon" { + type = string + description = "The icon shown in the Grafana application" + default = "/application-icons/grafana.svg" +} + +# ----------------------- +# -- Application -- +# ----------------------- + +resource "authentik_application" "grafana" { + name = var.grafana_name + slug = "grafana" + protocol_provider = authentik_provider_oauth2.grafana.id + meta_icon = var.grafana_icon + lifecycle { + ignore_changes = [ + # The terraform provider is continuously changing the attribute even though it's set + meta_icon, + ] + } +} + +# -------------------------- +# -- Oauth provider -- +# -------------------------- + +resource "authentik_provider_oauth2" "grafana" { + name = var.grafana_name + client_id = "grafana" + authorization_flow = data.authentik_flow.default-authorization-flow.id + property_mappings = [ + data.authentik_scope_mapping.email.id, + data.authentik_scope_mapping.openid.id, + data.authentik_scope_mapping.profile.id, + ] + redirect_uris = [ + var.grafana_redirect_uri, + ] + signing_key = data.authentik_certificate_key_pair.default.id + access_token_validity = "minutes=120" +} + +data "authentik_certificate_key_pair" "default" { + name = "authentik Self-signed Certificate" +} + +data "authentik_flow" "default-authorization-flow" { + slug = "default-provider-authorization-implicit-consent" +} + +# ------------------- +# -- Outputs -- +# ------------------- + +output "grafana_oauth_id" { + value = authentik_provider_oauth2.grafana.client_id +} + +output "grafana_oauth_secret" { + value = authentik_provider_oauth2.grafana.client_secret +} +``` + +You'll need to upload the `grafana.svg` to your authentik application +you can use the next docker-compose file + +```yaml +``` + +# References + +- [Home](https://grafana.com/grafana) diff --git a/docs/jellyfin.md b/docs/jellyfin.md index fbe6b642344..cd130187b6f 100644 --- a/docs/jellyfin.md +++ b/docs/jellyfin.md @@ -12,12 +12,47 @@ ported to the .NET Core framework to enable full cross-platform support. 
There are no strings attached, no premium licenses or features, and no hidden agendas: just a team who want to build something better and work together to achieve it.

# Clients

## [Jellyfin Desktop](https://github.com/jellyfin/jellyfin-media-player)

### Installation

- Download the latest deb package from the [releases page](https://github.com/jellyfin/jellyfin-media-player/releases)
- Install the dependencies
- Run `dpkg -i` (see the sketch below)
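
A minimal sketch of those steps (the release version and asset name are illustrative, check the releases page for the current ones):

```bash
# Download the package (illustrative version and file name)
wget https://github.com/jellyfin/jellyfin-media-player/releases/download/v1.9.1/jellyfin-media-player_1.9.1-1_amd64-bullseye.deb
sudo dpkg -i jellyfin-media-player_1.9.1-1_amd64-bullseye.deb

# Pull in any missing dependencies that dpkg complained about
sudo apt-get install -f
```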
If you're on a TV you may want to [enable the TV mode](https://github.com/jellyfin/jellyfin-media-player/issues/11) so that the remote keys work as expected. The play/pause/next/prev keys won't work until [this issue is solved](https://github.com/jellyfin/jellyfin-media-player/issues/3), but it's not that bad to use the "OK" key and then navigate with the arrow keys.

## [Jellycon](https://github.com/jellyfin/jellycon)

JellyCon is a lightweight Kodi add-on that lets you browse and play media files directly from your Jellyfin server within the Kodi interface. It can be thought of as a thin frontend for a Jellyfin server.

It's not very pleasant to use though.

### [Installation](https://github.com/jellyfin/jellycon#installation)

- Add the Jellyfin kodi addon repository
  ```bash
  wget https://kodi.jellyfin.org/repository.jellyfin.kodi.zip
  ```
- Open Kodi, go to the settings menu, and navigate to "Add-on Browser"
- Select "Install from Zip File" and install the downloaded repository file
- Back in the "Add-on Browser", select "Install from Repository"
- Choose "Kodi Jellyfin Add-ons", followed by "Video Add-ons"
- Select the JellyCon add-on and choose install

# Missing features

- Hide movie or tv show from my gallery: Tracked by these feature requests [1](https://features.jellyfin.org/posts/1072/let-the-user-hide-a-movie-or-tv-show) and [2](https://features.jellyfin.org/posts/116/add-hide-ignore-for-series-seasons-episodes-as-an-alternative-to-favorite)

# Troubleshooting

## Transcode files are cleared frequently

By default they are cleared every day. If you want to keep them longer, go to Admin/Scheduled Tasks/Clean Transcode Directory and remove the scheduled task.

## [Deceptive site ahead](https://github.com/jellyfin/jellyfin-web/issues/4076)

It seems that Google is marking the domains that host Jellyfin as deceptive. If it happens to you, your users won't be able to access your instance with Firefox, Chrome nor the Android app. Nice uh? It's kind of scary how Google is able to control who can access what on the internet without you signing up for it.
@@ -245,6 +280,7 @@ not introduced again.

* [Jellyfin for apple tv](https://features.jellyfin.org/posts/612/jellyfin-apple-tv-support): tell the people that use the shitty device.
* [Pressing play on a TV show doesn't play the Next Up episode](https://github.com/jellyfin/jellyfin/issues/9998)

# References
diff --git a/docs/kodi.md b/docs/kodi.md
new file mode 100644
index 00000000000..16befe68b29
--- /dev/null
+++ b/docs/kodi.md
@@ -0,0 +1,31 @@

[Kodi](https://kodi.tv/) is an entertainment center application. It basically turns your device into a smart TV.

# [Installation](https://kodi.wiki/view/HOW-TO:Install_Kodi_for_Linux)

If you're trying to install it on a Debian based distro (not Ubuntu) check [the official docs](https://kodi.wiki/view/HOW-TO:Install_Kodi_for_Linux#Debian). On Ubuntu you can use the team-xbmc PPA:

```bash
sudo apt install software-properties-common
sudo add-apt-repository -y ppa:team-xbmc/ppa
sudo apt install kodi
```

# Troubleshooting

## [Movie not recognized by kodi](https://kodi.wiki/view/Incorrect_and_missing_videos)

Add your own .nfo file with the metadata.

## Import data from nfo files

If each movie has its own nfo file, you have to remove the movie from the library and import it again, as a rescan doesn't import the data from the nfos.

## TV show file naming

[The correct TV show file naming](https://kodi.wiki/view/Naming_video_files/TV_shows)

# References

- [Home](https://kodi.tv/)
diff --git a/docs/koel.md b/docs/koel.md
new file mode 100644
index 00000000000..ebb4b61162d
--- /dev/null
+++ b/docs/koel.md
@@ -0,0 +1,22 @@
[koel](https://koel.dev/) is a personal music streaming server.

Note: Use [`mopidy`](mopidy.md) instead

# Installation

There are [docker-compose files](https://github.com/koel/docker) to host the service, although they behave a little bit weird.

For example, you need to [specify the DB_PORT](https://github.com/koel/docker/issues/168). There have been several PRs to fix it, but they weren't merged [1](https://github.com/koel/docker/pull/165/files), [2](https://github.com/koel/docker/pull/162/files).

# API

The API is [not very well documented](https://github.com/koel/koel/issues/535):

- [Here you can see how to authenticate](https://github.com/X-Ryl669/kutr/wiki/Communication-API#authentication)
- [Here are the api docs](https://github.com/koel/koel/blob/master/api-docs/api.yaml#L763)

# References

- [Home](https://koel.dev/)
- [Docs](https://docs.koel.dev/#using-docker)
- [Source](https://github.com/koel/koel)
diff --git a/docs/linux/cookiecutter.md b/docs/linux/cookiecutter.md
index 845bbb00c63..ce193f2582d 100644
--- a/docs/linux/cookiecutter.md
+++ b/docs/linux/cookiecutter.md
@@ -4,6 +4,9 @@ date: 20200713
author: Lyz
---

+NOTE: Use [`copier`](copier.md) instead!

[Cookiecutter](https://github.com/cookiecutter/cookiecutter) is a command-line
utility that creates projects from cookiecutters (project templates).

diff --git a/docs/linux/cruft.md b/docs/linux/cruft.md
index 5e8bef453e0..164c71ccbf3 100644
--- a/docs/linux/cruft.md
+++ b/docs/linux/cruft.md
@@ -4,7 +4,7 @@ date: 20201016
author: Lyz
---

-Note: [copier](https://github.com/copier-org/copier) looks a more maintained solution nowadays.
+NOTE: Use [`copier`](copier.md) instead!

[cruft](https://cruft.github.io/cruft/) allows you to maintain all the necessary
boilerplate for packaging and building projects separate from the code
diff --git a/docs/linux/vim/vim_plugins.md b/docs/linux/vim/vim_plugins.md
index 68e09fbcb58..a4c04ce163d 100644
--- a/docs/linux/vim/vim_plugins.md
+++ b/docs/linux/vim/vim_plugins.md
@@ -307,6 +307,52 @@ Test.vim consists of a core which provides an abstraction over running any kind
of tests from the command-line. Concrete test runners are then simply plugged
in, so they all work in the same unified way.

# [DiffView](https://github.com/sindrets/diffview.nvim)

Single tabpage interface for easily cycling through diffs for all modified files for any git rev.

## Installation

If you use `Packer`, add this to your `plugins.lua` file:

```lua
  use {
    'sindrets/diffview.nvim',
    requires = {
      'nvim-tree/nvim-web-devicons'
    }
  }
```

Then configure it with:

```lua
-- Configure the diff viewer
require("diffview").setup({
  keymaps = {
    view = {
      -- Close the diff view from one of its buffers
      { { "n", "v" }, "<leader>dc", ":DiffviewClose<cr>" },
    }
  }
})

vim.cmd[[
  " Enter the diff window
  nmap <leader>dv :DiffviewOpen<cr>
]]
```

That way you can open the diff window with `<leader>dv` and close it with `<leader>dc` (the latter only works if you are in one of the diffview buffers).

Some nice keymaps of the diff window:

- `<tab>`: go to the next file
- `-`: Stage/unstage the changes
- `]x`: next conflict
- `[x`: previous conflict
- `X`: On the file panel, discard the changes

# Issues

## Vim-Abolish
diff --git a/docs/linux/zfs.md b/docs/linux/zfs.md
index 2ebcc6522af..3f399ebd04e 100644
--- a/docs/linux/zfs.md
+++ b/docs/linux/zfs.md
@@ -64,6 +64,140 @@ zfs get all {{ pool_name }}
zfs get compressratio {{ filesystem }}
```

## [Rename or move a dataset](https://docs.oracle.com/cd/E19253-01/819-5461/gamnq/index.html)

NOTE: if you want to rename the topmost dataset look at [rename the topmost dataset](#rename-the-topmost-dataset) instead.

File systems can be renamed by using the `zfs rename` command. You can perform the following operations:

- Change the name of a file system.
- Relocate the file system within the ZFS hierarchy.
- Change the name of a file system and relocate it within the ZFS hierarchy.

The following example uses the `rename` subcommand to rename a file system from `kustarz` to `kustarz_old`:

```bash
zfs rename tank/home/kustarz tank/home/kustarz_old
```

The following example shows how to use `zfs rename` to relocate a file system:

```bash
zfs rename tank/home/maybee tank/ws/maybee
```

In this example, the `maybee` file system is relocated from `tank/home` to `tank/ws`. When you relocate a file system through rename, the new location must be within the same pool and it must have enough disk space to hold this new file system. If the new location does not have enough disk space, possibly because it has reached its quota, the rename operation fails.

The rename operation attempts an unmount/remount sequence for the file system and any descendent file systems. The rename command fails if the operation is unable to unmount an active file system. If this problem occurs, you must forcibly unmount the file system.

You'll lose the snapshots though, as explained below.

### [Rename the topmost dataset](https://www.solaris-cookbook.eu/solaris/solaris-zpool-rename/)

If you want to rename the topmost dataset you [need to rename the pool too](https://github.com/openzfs/zfs/issues/4681), as these two are tied.

```bash
$: zpool status -v

  pool: tets
 state: ONLINE
 scrub: none requested
config:

        NAME      STATE    READ WRITE CKSUM
        tets      ONLINE      0     0     0
          c0d1    ONLINE      0     0     0
          c1d0    ONLINE      0     0     0
          c1d1    ONLINE      0     0     0

errors: No known data errors
```

To fix this, first export the pool:

```bash
$: zpool export tets
```

Then import it with the correct name:

```bash
$: zpool import tets test
```

After the import completes, the pool shows the correct name:

```bash
$: zpool status -v

  pool: test
 state: ONLINE
 scrub: none requested
config:

        NAME      STATE    READ WRITE CKSUM
        test      ONLINE      0     0     0
          c0d1    ONLINE      0     0     0
          c1d0    ONLINE      0     0     0
          c1d1    ONLINE      0     0     0

errors: No known data errors
```

Now you may need to fix the ZFS mountpoints for each dataset:

```bash
zfs set mountpoint="/opt/zones/<new_mountpoint>" <zpool>/<dataset>
```

## [Rename or move snapshots](https://docs.oracle.com/cd/E19253-01/819-5461/gbion/index.html)

If the dataset has snapshots you need to rename them too. They must be renamed within the same pool and dataset from which they were created though. For example:

```bash
zfs rename tank/home/cindys@083006 tank/home/cindys@today
```

In addition, the following shortcut syntax is equivalent to the preceding syntax:

```bash
zfs rename tank/home/cindys@083006 today
```

The following snapshot rename operation is not supported because the target pool and file system name are different from the pool and file system where the snapshot was created:

```bash
$: zfs rename tank/home/cindys@today pool/home/cindys@saturday
cannot rename to 'pool/home/cindys@today': snapshots must be part of same
dataset
```

You can recursively rename snapshots by using the `zfs rename -r` command. For example:

```bash
$: zfs list
NAME                         USED  AVAIL  REFER  MOUNTPOINT
users                        270K  16.5G    22K  /users
users/home                    76K  16.5G    22K  /users/home
users/home@yesterday            0      -    22K  -
users/home/markm              18K  16.5G    18K  /users/home/markm
users/home/markm@yesterday      0      -    18K  -
users/home/marks              18K  16.5G    18K  /users/home/marks
users/home/marks@yesterday      0      -    18K  -
users/home/neil               18K  16.5G    18K  /users/home/neil
users/home/neil@yesterday       0      -    18K  -
$: zfs rename -r users/home@yesterday @2daysago
$: zfs list -r users/home
NAME                          USED  AVAIL  REFER  MOUNTPOINT
users/home                     76K  16.5G    22K  /users/home
users/home@2daysago              0      -    22K  -
users/home/markm               18K  16.5G    18K  /users/home/markm
users/home/markm@2daysago        0      -    18K  -
users/home/marks               18K  16.5G    18K  /users/home/marks
users/home/marks@2daysago        0      -    18K  -
users/home/neil                18K  16.5G    18K  /users/home/neil
users/home/neil@2daysago         0      -    18K  -
```

# Installation

## Install the required programs
@@ -113,7 +247,6 @@ First read the [ZFS storage planning](zfs_storage_planning.md) article and then
zpool create \
    -o ashift=12 \
    -o autoexpand=on \
-    -o compression=lz4 \
main raidz /dev/sda /dev/sdb /dev/sdc /dev/sdd \
log mirror \
/dev/disk/by-id/nvme-eui.e823gqkwadgp32uhtpobsodkjfl2k9d0-part4 \
@@ -126,8 +259,6 @@ main raidz /dev/sda /dev/sdb /dev/sdc /dev/sdd \

Where:

* `-o ashift=12`: Adjusts the disk sector size to the disks in use.
-* `-o canmount=off`: Don't mount the main pool, we'll mount the filesystems.
-* `-o compression=lz4`: Enable compression by default
* `/dev/sda /dev/sdb /dev/sdc /dev/sdd` are the rotational data disks configured in RAIDZ1
* We set two partitions in mirror for the ZLOG
* We set two partitions in stripe for the L2ARC
@@ -157,6 +288,8 @@
zfs create \
main/lyz
```

If you want to use a passphrase instead, you can use the `zfs create -o encryption=on -o keylocation=prompt -o keyformat=passphrase` command.

I'm assuming that `compression` was set in the pool.

You can check the created filesystems with `zfs list`.
@@ -317,7 +450,7 @@ Additionally, deleting snapshots can increase the amount of space that is unique

Note: The value for a snapshot’s space referenced property is the same as that for the file system when the snapshot was created.

-You can display the amount of space that is consumed by snapshots and descendant file systems by using the `zfs list -o space` command.
+You can display the amount of space (or size) that is consumed by snapshots and descendant file systems by using the `zfs list -o space` command.

```bash
# zfs list -o space -r rpool
@@ -346,6 +479,65 @@
Other space properties are:

* LUSED: The amount of space that is "logically" consumed by this dataset and all its descendents. It ignores the effect of the `compression` and `copies` properties, giving a quantity closer to the amount of data that applications see. However it does include space consumed by metadata.
* REFER: The amount of data that is accessible by this dataset, which may or may not be shared with other datasets in the pool. When a snapshot or clone is created, it initially references the same amount of space as the filesystem or snapshot it was created from, since its contents are identical.

## [See the differences between two backups](https://docs.oracle.com/cd/E36784_01/html/E36835/gkkqz.html)

To identify the differences between two snapshots, use syntax similar to the following:

```bash
$ zfs diff tank/home/tim@snap1 tank/home/tim@snap2
M /tank/home/tim/
+ /tank/home/tim/fileB
```

The following table summarizes the file or directory changes that are identified by the `zfs diff` command.

| File or Directory Change | Identifier |
| --- | --- |
| File or directory has been modified or file or directory link has changed | M |
| File or directory is present in the older snapshot but not in the more recent snapshot | — |
| File or directory is present in the more recent snapshot but not in the older snapshot | + |
| File or directory has been renamed | R |

## Create a cold backup of a series of datasets

If you've used the `-o keyformat=raw -o keylocation=file:///etc/zfs/keys/home.key` arguments to encrypt your datasets you can't use a `keyformat=passphrase` encryption on the cold storage device. You need to copy those keys onto the disk. One way of doing it is to (see the sketch after this list):

- Create a 100M LUKS partition protected with a passphrase where you store the keys.
- Leave the rest of the space for a partition holding the zpool.
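
A sketch of that layout (device names, mountpoints and key file names are illustrative):

```bash
# Partition 1 (100M): LUKS volume protected with a passphrase that stores the raw keys
cryptsetup luksFormat /dev/sdf1
cryptsetup open /dev/sdf1 zfs_keys
mkfs.ext4 /dev/mapper/zfs_keys
mount /dev/mapper/zfs_keys /mnt/zfs_keys
cp /etc/zfs/keys/home.key /mnt/zfs_keys/

# Partition 2 (rest of the disk): the pool that will hold the cold backups
zpool create cold_backup /dev/sdf2
```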
# Troubleshooting

## [Clear a permanent ZFS error in a healthy pool](https://serverfault.com/questions/576898/clear-a-permanent-zfs-error-in-a-healthy-pool)

Sometimes when you do a `zpool status` you may see that the pool is healthy but that there are "Permanent errors" that may point to files themselves or directly to memory locations.

You can read [this long discussion](https://github.com/openzfs/zfs/discussions/9705) on what these permanent errors mean, but what solved the issue for me was to run a new scrub:

`zpool scrub my_pool`

It takes a long time to run, so be patient.

## ZFS pool is in suspended mode

Probably because you've unplugged a device without unmounting it.

If you want to remount the device [you can follow these steps](https://github.com/openzfsonosx/zfs/issues/104#issuecomment-30344347) to symlink the new devfs entries to where zfs thinks the vdev is. That way you can regain access to the pool without a reboot.

So if zpool status says the vdev is /dev/disk2s1, but the reattached drive is at disk4, then do the following:

```bash
cd /dev
sudo rm -f disk2s1
sudo ln -s disk4s1 disk2s1
sudo zpool clear -F WD_1TB
sudo zpool export WD_1TB
sudo rm disk2s1
sudo zpool import WD_1TB
```

If you don't care about the zpool anymore, sadly your only solution is to [reboot the server](https://github.com/openzfs/zfs/issues/5242). Real ugly, so be careful when you umount zpools.

# Learning

I've found that learning about ZFS was an interesting, intense and time
diff --git a/docs/linux_snippets.md b/docs/linux_snippets.md
index 95ff72b0c05..2245c0b1025 100644
--- a/docs/linux_snippets.md
+++ b/docs/linux_snippets.md
@@ -4,6 +4,75 @@ date: 20200826
author: Lyz
---

# [Get the current git branch](https://stackoverflow.com/questions/6245570/how-do-i-get-the-current-branch-name-in-git)

```bash
git branch --show-current
```

# Install latest version of package from backports

Add the backports repository:

```bash
vi /etc/apt/sources.list.d/bullseye-backports.list
```

```
deb http://deb.debian.org/debian bullseye-backports main contrib
deb-src http://deb.debian.org/debian bullseye-backports main contrib
```

Configure the package to be pulled from backports:

```bash
vi /etc/apt/preferences.d/90_zfs
```

```
Package: src:zfs-linux
Pin: release n=bullseye-backports
Pin-Priority: 990
```
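
With that pin in place a plain `apt install` already picks the backports version; you can also request it explicitly with `-t` (the package name here is just an example):

```bash
apt update
apt install -t bullseye-backports zfs-dkms
```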
# [Rename multiple files matching a pattern](https://stackoverflow.com/questions/6840332/rename-multiple-files-by-replacing-a-particular-pattern-in-the-filenames-using-a)

There is a `rename` utility that looks nice, but you need to install it. Using only `find` you can do:

```bash
find . -name '*yml' -exec bash -c 'echo mv $0 ${0/yml/yaml}' {} \;
```

If it shows what you expect, remove the `echo`.

# [Force ssh to use password authentication](https://superuser.com/questions/1376201/how-do-i-force-ssh-to-use-password-instead-of-key)

```bash
ssh -o PreferredAuthentications=password -o PubkeyAuthentication=no exampleUser@example.com
```

# [Do a tail -f with grep](https://stackoverflow.com/questions/23395665/tail-f-grep)

```bash
tail -f file | grep --line-buffered my_pattern
```

# [Check if a program exists in the user's PATH](https://stackoverflow.com/questions/592620/how-can-i-check-if-a-program-exists-from-a-bash-script)

```bash
command -v <the_command>
```

Example use:

```bash
if ! command -v <the_command> &> /dev/null
then
    echo "<the_command> could not be found"
    exit
fi
```

# [Reset failed systemd services](https://unix.stackexchange.com/questions/418792/systemctl-remove-unit-from-failed-list)

Use systemctl to remove the failed status. To reset all units with failed status:
@@ -42,6 +111,12 @@ This will make your machine display the boot options for 5 seconds before it boo

ssh -D 9090 -N -f user@host
```

If you need to forward an external port to a local one [you can use](https://linuxize.com/post/how-to-setup-ssh-tunneling/):

```bash
ssh -L LOCAL_PORT:DESTINATION:DESTINATION_PORT [USER@]SSH_SERVER
```

If you need a more powerful solution you can try [sshuttle](https://sshuttle.readthedocs.io/en/stable/overview.html).

# [Fix the SSH client kex_exchange_identification: read: Connection reset by peer error](https://stackoverflow.com/questions/69394001/how-can-i-fix-kex-exchange-identification-read-connection-reset-by-peer)
diff --git a/docs/mediatracker.md b/docs/mediatracker.md
new file mode 100644
index 00000000000..a6c1876d477
--- /dev/null
+++ b/docs/mediatracker.md
@@ -0,0 +1,59 @@
[MediaTracker](https://github.com/bonukai/MediaTracker) is a self hosted media tracker for movies, tv shows, video games, books and audiobooks.

# [Installation](https://github.com/bonukai/MediaTracker#installation)

With docker compose:

```yaml
version: "3"
services:
  mediatracker:
    container_name: mediatracker
    ports:
      - 7481:7481
    volumes:
      - /home/YOUR_HOME_DIRECTORY/.config/mediatracker/data:/storage
      - assetsVolume:/assets
    environment:
      SERVER_LANG: en
      TMDB_LANG: en
      AUDIBLE_LANG: us
      TZ: Europe/London
    image: bonukai/mediatracker:latest

volumes:
  assetsVolume: null
```

If you attach more than one docker network to the container it becomes unreachable :S.

## Install the jellyfin plugin

They created a [Jellyfin plugin](https://github.com/bonukai/jellyfin-plugin-mediatracker) so that all scrobbles are sent automatically to MediaTracker:

- Add a new Repository in Jellyfin (Dashboard -> Plugins -> Repositories -> +) from url `https://raw.githubusercontent.com/bonukai/jellyfin-plugin-mediatracker/main/manifest.json`
- Install the MediaTracker plugin from the Catalogue (Dashboard -> Plugins -> Catalogue)

# Usage

Some tips:

- Add the shows you want to watch to the watchlist so that it's easier to find them.
- When you're finishing an episode, click on the episode number on the watchlist element and then rate the episode itself.

## Lists

You can create public lists to share with the rest of the users, though the way to share them [is a bit archaic so far](https://github.com/bonukai/MediaTracker/issues/527): it's only through the list link, users won't be able to discover them in the interface.

# Troubleshooting

## Can't find a book

The metadata provider is not yet very usable :( I wouldn't recommend MediaTracker (as of July 2023) to track your book ratings.

[Ryot](https://github.com/IgnisDa/ryot) seems to work better with books, but it still doesn't have a [jellyfin scrobbler](https://github.com/IgnisDa/ryot/issues/187).

# References

- [Source](https://github.com/bonukai/MediaTracker)
- [Issues](https://github.com/bonukai/MediaTracker/issues)
diff --git a/docs/molecule.md b/docs/molecule.md
index b0b9cf06d44..5436f569416 100644
--- a/docs/molecule.md
+++ b/docs/molecule.md
@@ -67,6 +67,17 @@ This version is seen as a clean-up or refactoring release, not expected to requi

## To v3.0.0

# Troubleshooting

## [Molecule doesn't find the `molecule.yaml` file](https://github.com/ansible-community/molecule/discussions/3344)

This is expected default behavior since Molecule searches for scenarios using the `molecule/*/molecule.yml` glob.
But if you would like to change the suffix to `yaml`, you can set the `MOLECULE_GLOB` environment variable like this:

```bash
export MOLECULE_GLOB='molecule/*/molecule.yaml'
```

# References

- [Source](https://github.com/ansible-community/molecule)
diff --git a/docs/newsletter/2023_w08.md b/docs/newsletter/2023_w08.md
index 182929ba855..6062b0cb805 100644
--- a/docs/newsletter/2023_w08.md
+++ b/docs/newsletter/2023_w08.md
@@ -299,4 +299,4 @@ nnoremap gb
```

-    Defining `mapleader` and/or using `` may be useful if you change your mind often on what key to use a leader but it won't be of any use if your mappings are stable.
\ No newline at end of file
+    Defining `mapleader` and/or using `` may be useful if you change your mind often on what key to use a leader but it won't be of any use if your mappings are stable.
diff --git a/docs/python_jinja2.md b/docs/python_jinja2.md
index 675bb53d2fe..eb0155ac47a 100644
--- a/docs/python_jinja2.md
+++ b/docs/python_jinja2.md
@@ -316,6 +316,20 @@ Use the `none` test (not to be confused with Python's `None` object!):
{% endif %}
```

# Snippets

## [Escape jinja expansion on a jinja template](https://stackoverflow.com/questions/25359898/escape-jinja2-syntax-in-a-jinja2-template)

```jinja
{% raw %}

Anything in this block is treated as raw text,
including {{ curly braces }} and
{% other block-like syntax %}

{% endraw %}
```

# References

* [Docs](https://jinja.palletsprojects.com)
diff --git a/docs/qbittorrent.md b/docs/qbittorrent.md
index ba1005e9073..853f84921ca 100644
--- a/docs/qbittorrent.md
+++ b/docs/qbittorrent.md
@@ -84,6 +84,10 @@ this happens while you're away from your infrastructure it can be even worse.
Something you can do in these cases is to have another client configured so you
can spawn it fast and import the torrents that are under the Hit and Run threat.

# Tools

- [qbittools](https://github.com/buroa/qbittools): a feature rich CLI for the management of torrents in qBittorrent.
- [qbit_manage](https://github.com/StuffAnThings/qbit_manage): a tool that helps manage tedious tasks in qBittorrent and automates them.

# References

- [Home](https://www.qbittorrent.org/)
diff --git a/docs/sanoid.md b/docs/sanoid.md
index 6f7b4eb567b..0ec22cdd8bd 100644
--- a/docs/sanoid.md
+++ b/docs/sanoid.md
@@ -111,6 +111,14 @@ To check the logs use `journalctl -eu sanoid`.

To manage the snapshots look at the [`zfs`](zfs.md#restore-a-backup) article.

## Prune snapshots

If you want to manually prune the snapshots after you tweaked `sanoid.conf` you can run:

```bash
sanoid --prune-snapshots
```

# [Syncoid](https://github.com/jimsalterjrs/sanoid/wiki/Syncoid)

`Sanoid` also includes a replication tool, `syncoid`, which facilitates the asynchronous incremental replication of ZFS filesystems. A typical `syncoid` command might look like this:
@@ -131,7 +139,7 @@ Which would push-replicate the specified ZFS filesystem from the local host to
r
syncoid root@remotehost:data/images/vm backup/images/vm
```

-Which would pull-replicate the filesystem from the remote host to the local system over an SSH tunnel.
+Which would pull-replicate the filesystem from the remote host to the local system over an SSH tunnel.
In case of doubt [using the pull strategy is always desired](https://github.com/jimsalterjrs/sanoid/issues/666).

`Syncoid` supports recursive replication (replication of a dataset and all its child datasets) and uses mbuffer buffering, lzop compression, and pv progress bars if the utilities are available on the systems used. If ZFS supports resumeable send/receive streams on both the source and target those will be enabled as default. It also automatically supports and enables resume of interrupted replication when both source and target support this feature.
@@ -160,6 +168,70 @@ Also note that `post_snapshot_script` cannot be used with `syncoid` especially w

So this approach does not work and has to be done independently, it seems. The good news is that the SystemD service of `Type= oneshot` can have several `Execstart=` lines.

## Send encrypted backups to an encrypted dataset

`syncoid`'s default behaviour is to create the destination dataset without encryption, so the snapshots are transferred and can be read without encryption. You can check this with the `zfs get encryption,keylocation,keyformat` command both on source and destination.
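
For instance (the dataset name is taken from the example below):

```bash
zfs get encryption,keylocation,keyformat server_data/nextcloud
```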
To prevent this from happening you have to [pass the `--sendoptions=w` flag](https://github.com/jimsalterjrs/sanoid/issues/548) to `syncoid` so that it tells zfs to send a raw stream. If you do so, you also need to [transfer the key file](https://github.com/jimsalterjrs/sanoid/issues/648) to the destination server so that it can do a `zfs load-key` and then mount the dataset. For example:

```bash
server-host:$ sudo zfs list -t filesystem
NAME                    USED  AVAIL     REFER  MOUNTPOINT
server_data             232M  38.1G      230M  /var/server_data
server_data/log         111K  38.1G      111K  /var/server_data/log
server_data/mail        111K  38.1G      111K  /var/server_data/mail
server_data/nextcloud   111K  38.1G      111K  /var/server_data/nextcloud
server_data/postgres    111K  38.1G      111K  /var/server_data/postgres

server-host:$ sudo zfs get keylocation server_data/nextcloud
NAME                   PROPERTY     VALUE                                    SOURCE
server_data/nextcloud  keylocation  file:///root/zfs_dataset_nextcloud_pass  local

server-host:$ sudo syncoid --recursive --skip-parent --sendoptions=w server_data root@192.168.122.94:backup_pool
INFO: Sending oldest full snapshot server_data/log@autosnap_2021-06-18_18:33:42_yearly (~ 49 KB) to new target filesystem:
17.0KiB 0:00:00 [1.79MiB/s] [=================================================>                ] 34%
INFO: Updating new target filesystem with incremental server_data/log@autosnap_2021-06-18_18:33:42_yearly ... syncoid_caedrium.com_2021-06-22:10:12:55 (~ 15 KB):
41.2KiB 0:00:00 [78.4KiB/s] [===================================================================================================================================================] 270%
INFO: Sending oldest full snapshot server_data/mail@autosnap_2021-06-18_18:33:42_yearly (~ 49 KB) to new target filesystem:
17.0KiB 0:00:00 [ 921KiB/s] [=================================================>                ] 34%
INFO: Updating new target filesystem with incremental server_data/mail@autosnap_2021-06-18_18:33:42_yearly ... syncoid_caedrium.com_2021-06-22:10:13:14 (~ 15 KB):
41.2KiB 0:00:00 [49.4KiB/s] [===================================================================================================================================================] 270%
INFO: Sending oldest full snapshot server_data/nextcloud@autosnap_2021-06-18_18:33:42_yearly (~ 49 KB) to new target filesystem:
17.0KiB 0:00:00 [ 870KiB/s] [=================================================>                ] 34%
INFO: Updating new target filesystem with incremental server_data/nextcloud@autosnap_2021-06-18_18:33:42_yearly ... syncoid_caedrium.com_2021-06-22:10:13:42 (~ 15 KB):
41.2KiB 0:00:00 [50.4KiB/s] [===================================================================================================================================================] 270%
INFO: Sending oldest full snapshot server_data/postgres@autosnap_2021-06-18_18:33:42_yearly (~ 50 KB) to new target filesystem:
17.0KiB 0:00:00 [1.36MiB/s] [===============================================>                  ] 33%
INFO: Updating new target filesystem with incremental server_data/postgres@autosnap_2021-06-18_18:33:42_yearly ... syncoid_caedrium.com_2021-06-22:10:14:11 (~ 15 KB):
41.2KiB 0:00:00 [48.9KiB/s] [===================================================================================================================================================] 270%

server-host:$ sudo scp /root/zfs_dataset_nextcloud_pass 192.168.122.94:
```

```bash
backup-host:$ sudo zfs set keylocation=file:///root/zfs_dataset_nextcloud_pass backup_pool/nextcloud
backup-host:$ sudo zfs load-key backup_pool/nextcloud
backup-host:$ sudo zfs mount backup_pool/nextcloud
```

If you also want to keep the `encryptionroot` you need to [let zfs take care of the recursion instead of syncoid](https://github.com/jimsalterjrs/sanoid/issues/614). In this case you can't use syncoid's options such as `--exclude`; from the manpage of `zfs`:

```
-R, --replicate
  Generate a replication stream package, which will replicate the specified file system, and all descendent file systems, up to the named snapshot. When received, all properties, snap‐
  shots, descendent file systems, and clones are preserved.

  If the -i or -I flags are used in conjunction with the -R flag, an incremental replication stream is generated. The current values of properties, and current snapshot and file system
  names are set when the stream is received. If the -F flag is specified when this stream is received, snapshots and file systems that do not exist on the sending side are destroyed.
  If the -R flag is used to send encrypted datasets, then -w must also be specified.
```

In this case this should work:

```bash
/sbin/syncoid --recursive --force-delete --sendoptions="Rw" zpool/backups zfs-recv@10.29.3.27:zpool/backups
```

# Troubleshooting

## [Syncoid no tty present and no askpass program specified](https://sidhion.com/blog/posts/zfs-syncoid-slow/)
diff --git a/docs/terraform.md b/docs/terraform.md
index 767358a99f3..5ed2e79f244 100644
--- a/docs/terraform.md
+++ b/docs/terraform.md
@@ -172,7 +172,7 @@ We can automate all the above to be executed before we do a commit using the
[pre-commit](https://pre-commit.com/) framework.
```bash
-sudo pip install pre-commit
+pip install pre-commit
cd $proyectoConTerraform
echo """repos:
- repo: git://github.com/antonbabenko/pre-commit-terraform
@@ -903,6 +903,37 @@ You can set the `TF_LOG` environmental variable to one of the log levels
`TRACE`, `DEBUG`, `INFO`, `WARN` or `ERROR` to change the verbosity of the
logs.

To remove the debug traces run `unset TF_LOG`.

# Snippets

## [Create a list of resources based on a list of strings](https://developer.hashicorp.com/terraform/language/meta-arguments/count)

```hcl
variable "subnet_ids" {
  type = list(string)
}

resource "aws_instance" "server" {
  # Create one instance for each subnet
  count = length(var.subnet_ids)

  ami           = "ami-a1b2c3d4"
  instance_type = "t2.micro"
  subnet_id     = var.subnet_ids[count.index]

  tags = {
    Name = "Server ${count.index}"
  }
}
```

If you want to use this generated list in another resource, extracting for example the `id` of each instance, you can use:

```hcl
aws_instance.server.*.id
```

# References

* [Docs](https://www.terraform.io/docs/index.html)
diff --git a/docs/vim.md b/docs/vim.md
index 26d3c289901..438f4d42d23 100644
--- a/docs/vim.md
+++ b/docs/vim.md
@@ -661,7 +661,7 @@ I've been using `vim-fugitive` for some years now and it works very well but is
At a first look `lazygit` is too much and `neogit` a little more verbose than `vim-fugitive` but it looks closer to my current workflow. I'm going to try `neogit` then.

-## [Neogit](https://github.com/TimUntersberger/neogit)
+## [Neogit](https://github.com/Neogit/neogit)

### [Installation](https://github.com/TimUntersberger/neogit#installation)
@@ -686,11 +686,6 @@ neogit.setup({

### Improve the commit message window

-
-
-
-
-
[create custom keymaps with lua](https://blog.devgenius.io/create-custom-keymaps-in-neovim-with-lua-d1167de0f2c2)
[create specific bindings for a file type](https://stackoverflow.com/questions/72984648/neovim-lua-how-to-use-different-mappings-depending-on-file-type)
https://neovim.discourse.group/t/how-to-create-an-auto-command-for-a-specific-filetype-in-neovim-0-7/2404
@@ -949,6 +944,11 @@ require('telescope').setup{

# Tips

## [Run a command when opening vim](https://vi.stackexchange.com/questions/846/how-can-i-start-vim-and-then-execute-a-particular-command-that-includes-a-fro)

```bash
nvim -c ':DiffviewOpen'
```

## Run lua snippets

Run a lua snippet within neovim with `:lua <snippet>`. Useful to test commands before binding them to keys.
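
For example, to check what a snippet does before mapping it (the snippets themselves are illustrative):

```vim
:lua print(vim.api.nvim_buf_get_name(0))
:lua print(vim.inspect(vim.api.nvim_list_wins()))
```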
diff --git a/docs/zfs_exporter.md b/docs/zfs_exporter.md index 8c0ea40cb00..6e4144d81c8 100644 --- a/docs/zfs_exporter.md +++ b/docs/zfs_exporter.md @@ -126,7 +126,7 @@ The people of [Awesome Prometheus Alerts](https://samber.github.io/awesome-prome ```yaml - alert: ZfsPoolOutOfSpace expr: zfs_pool_free_bytes * 100 / zfs_pool_size_bytes < 10 and ON (instance, device, mountpoint) zfs_pool_readonly == 0 - for: 0m + for: 5m labels: severity: warning annotations: @@ -135,7 +135,7 @@ The people of [Awesome Prometheus Alerts](https://samber.github.io/awesome-prome - alert: ZfsPoolUnhealthy expr: zfs_pool_health > 0 - for: 0m + for: 5m labels: severity: critical annotations: @@ -144,12 +144,13 @@ The people of [Awesome Prometheus Alerts](https://samber.github.io/awesome-prome - alert: ZfsCollectorFailed expr: zfs_scrape_collector_success != 1 - for: 0m + for: 5m labels: severity: warning annotations: summary: ZFS collector failed (instance {{ $labels.instance }}) description: "ZFS collector for {{ $labels.instance }} has failed to collect information\n VALUE = {{ $value }}\n LABELS = {{ $labels }}" + ``` ### Snapshot alerts @@ -157,14 +158,9 @@ The people of [Awesome Prometheus Alerts](https://samber.github.io/awesome-prome You can also monitor the status of the snapshots. ```yaml - - record: zfs_dataset_snapshot_bytes - # This expression is not real for datasets that have children, so we're going to create this metric only for those datasets that don't have children - # I'm also going to assume that the datasets that have children don't hold data - expr: zfs_dataset_used_bytes - zfs_dataset_used_by_dataset_bytes and zfs_dataset_used_by_dataset_bytes > 200e3 - - alert: ZfsDatasetWithNoSnapshotsError expr: zfs_dataset_used_by_dataset_bytes{type="filesystem"} > 200e3 unless on (hostname,filesystem) count by (hostname, filesystem, job) (zfs_dataset_used_bytes{type="snapshot"}) > 1 - for: 0m + for: 5m labels: severity: error annotations: @@ -172,8 +168,8 @@ You can also monitor the status of the snapshots. description: "There might be an error on the snapshot system\n VALUE = {{ $value }}\n LABELS = {{ $labels }}" - alert: ZfsSnapshotTypeFrequentlySizeError - expr: increase(sum by (hostname, filesystem, job) (zfs_dataset_used_bytes{type='snapshot',snapshot_type='frequently'})[60m:15m]) == 0 - for: 0m + expr: increase(sum by (hostname, filesystem, job) (zfs_dataset_used_bytes{type='snapshot',snapshot_type='frequently'})[60m:15m]) == 0 and count_over_time(zfs_dataset_used_bytes{type="filesystem"}[60m:15m]) == 4 + for: 5m labels: severity: error annotations: @@ -181,8 +177,8 @@ You can also monitor the status of the snapshots. description: "There might be an error on the snapshot system or the data has not changed in the last hour\n VALUE = {{ $value }}\n LABELS = {{ $labels }}" - alert: ZfsSnapshotTypeHourlySizeError - expr: increase(sum by (hostname, filesystem, job) (zfs_dataset_used_bytes{type='snapshot',snapshot_type='hourly'})[2h:30m]) == 0 - for: 0m + expr: increase(sum by (hostname, filesystem, job) (zfs_dataset_used_bytes{type='snapshot',snapshot_type='hourly'})[2h:30m]) == 0 and count_over_time(zfs_dataset_used_bytes{type="filesystem"}[2h:30m]) == 4 + for: 5m labels: severity: error annotations: @@ -190,8 +186,8 @@ You can also monitor the status of the snapshots. 
description: "There might be an error on the snapshot system or the data has not changed in the last hour\n VALUE = {{ $value }}\n LABELS = {{ $labels }}" - alert: ZfsSnapshotTypeDailySizeError - expr: increase(sum by (hostname, filesystem, job) (zfs_dataset_used_bytes{type='snapshot',snapshot_type='daily'})[2d:8h]) == 0 - for: 0m + expr: increase(sum by (hostname, filesystem, job) (zfs_dataset_used_bytes{type='snapshot',snapshot_type='daily'})[2d:12h]) == 0 and count_over_time(zfs_dataset_used_bytes{type="filesystem"}[2d:12h]) == 4 + for: 5m labels: severity: error annotations: @@ -199,8 +195,8 @@ You can also monitor the status of the snapshots. description: "There might be an error on the snapshot system or the data has not changed in the last hour\n VALUE = {{ $value }}\n LABELS = {{ $labels }}" - alert: ZfsSnapshotTypeMonthlySizeError - expr: increase(sum by (hostname, filesystem, job) (zfs_dataset_used_bytes{type='snapshot',snapshot_type='monthly'})[2d:8h]) == 0 - for: 0m + expr: increase(sum by (hostname, filesystem, job) (zfs_dataset_used_bytes{type='snapshot',snapshot_type='monthly'})[60d:15d]) == 0 and count_over_time(zfs_dataset_used_bytes{type="filesystem"}[60d:15d]) == 4 + for: 5m labels: severity: error annotations: @@ -208,8 +204,8 @@ You can also monitor the status of the snapshots. description: "There might be an error on the snapshot system or the data has not changed in the last hour\n VALUE = {{ $value }}\n LABELS = {{ $labels }}" - alert: ZfsSnapshotTypeFrequentlyUnexpectedNumberError - expr: increase((count by (hostname, filesystem, job) (zfs_dataset_used_bytes{snapshot_type="frequently",type="snapshot"}) < 4)[16m:1m]) < 1 - for: 0m + expr: increase((count by (hostname, filesystem, job) (zfs_dataset_used_bytes{snapshot_type="frequently",type="snapshot"}) < 4)[16m:8m]) < 1 and count_over_time(zfs_dataset_used_bytes{type="filesystem"}[16m:8m]) == 2 + for: 5m labels: severity: error annotations: @@ -217,8 +213,8 @@ You can also monitor the status of the snapshots. description: "There might be an error on the snapshot system\n VALUE = {{ $value }}\n LABELS = {{ $labels }}" - alert: ZfsSnapshotTypeHourlyUnexpectedNumberError - expr: increase((count by (hostname, filesystem, job) (zfs_dataset_used_bytes{snapshot_type="hourly",type="snapshot"}) < 24)[1h10m:15m]) < 1 - for: 0m + expr: increase((count by (hostname, filesystem, job) (zfs_dataset_used_bytes{snapshot_type="hourly",type="snapshot"}) < 24)[1h10m:10m]) < 1 and count_over_time(zfs_dataset_used_bytes{type="filesystem"}[1h10m:10m]) == 7 + for: 5m labels: severity: error annotations: @@ -226,8 +222,8 @@ You can also monitor the status of the snapshots. description: "There might be an error on the snapshot system\n VALUE = {{ $value }}\n LABELS = {{ $labels }}" - alert: ZfsSnapshotTypeDailyUnexpectedNumberError - expr: increase((count by (hostname, filesystem, job) (zfs_dataset_used_bytes{type='snapshot',snapshot_type='daily'}) < 30)[25h:5h]) < 1 - for: 0m + expr: increase((count by (hostname, filesystem, job) (zfs_dataset_used_bytes{type='snapshot',snapshot_type='daily'}) < 30)[1d2h:2h]) < 1 and count_over_time(zfs_dataset_used_bytes{type="filesystem"}[1d2h:2h]) == 13 + for: 5m labels: severity: error annotations: @@ -235,17 +231,21 @@ You can also monitor the status of the snapshots. 
description: "There might be an error on the snapshot system\n VALUE = {{ $value }}\n LABELS = {{ $labels }}" - alert: ZfsSnapshotTypeMonthlyUnexpectedNumberError - expr: increase((count by (hostname, filesystem, job) (zfs_dataset_used_bytes{type='snapshot',snapshot_type='monthly'}) < 6)[30d1h:10d]) < 1 - for: 0m + expr: increase((count by (hostname, filesystem, job) (zfs_dataset_used_bytes{type='snapshot',snapshot_type='monthly'}) < 6)[31d:1d]) < 1 and count_over_time(zfs_dataset_used_bytes{type="filesystem"}[31d:1d]) == 31 + for: 5m labels: severity: error annotations: summary: The number of the monthly snapshots has not changed for the dataset {{ $labels.filesystem }} at {{ $labels.hostname }}. description: "There might be an error on the snapshot system\n VALUE = {{ $value }}\n LABELS = {{ $labels }}" + - record: zfs_dataset_snapshot_bytes + # This expression is not real for datasets that have children, so we're going to create this metric only for those datasets that don't have children + # I'm also going to assume that the datasets that have children don't hold data + expr: zfs_dataset_used_bytes - zfs_dataset_used_by_dataset_bytes and zfs_dataset_used_by_dataset_bytes > 200e3 - alert: ZfsSnapshotTooMuchSize - expr: zfs_dataset_snapshot_bytes / zfs_dataset_used_by_dataset_bytes > 2 and zfs_dataset_snapshot_bytes > 100e6 - for: 0m + expr: zfs_dataset_snapshot_bytes / zfs_dataset_used_by_dataset_bytes > 2 and zfs_dataset_snapshot_bytes > 10e9 + for: 5m labels: severity: warning annotations: @@ -253,6 +253,31 @@ You can also monitor the status of the snapshots. description: "The snapshots of the dataset {{ $labels.filesystem }} at {{ $labels.hostname }} use more than two times the data space\n VALUE = {{ $value }}\n LABELS = {{ $labels }}" ``` +### Useful inhibits + +Some you may want to inhibit some of these rules for some of your datasets. These subsections should be added to the `alertmanager.yml` file under the `inhibit_rules` field. + +#### Ignore snapshots on some datasets + +Sometimes you don't want to do snapshots on a dataset + +```yaml +- target_matchers: + - alertname = ZfsDatasetWithNoSnapshotsError + - hostname = my_server_1 + - filesystem = tmp +``` + +#### Ignore snapshots growth + +Sometimes you don't mind if the size of the data saved in the filesystems doesn't change too much between snapshots doesn't change much specially in the most frequent backups because you prefer to keep the backup cadence. It's interesting to have the alert though so that you can get notified of the datasets that don't change that much so you can tweak your backup policy (even if zfs snapshots are almost free). 
+ +```yaml + - target_matchers: + - alertname =~ "ZfsSnapshotType(Frequently|Hourly)SizeError" + - filesystem =~ "(media/(docs|music))" +``` + # References - [Source](https://github.com/pdf/zfs_exporter) diff --git a/mkdocs.yml b/mkdocs.yml index 408ed0c0daa..a0dcb6bd643 100644 --- a/mkdocs.yml +++ b/mkdocs.yml @@ -331,6 +331,7 @@ nav: - Dependency managers: - Pip-tools: devops/pip_tools.md - Automating Processes: + - copier: copier.md - cookiecutter: linux/cookiecutter.md - cruft: linux/cruft.md - renovate: renovate.md @@ -398,7 +399,9 @@ nav: - Github cli: gh.md - goaccess: goaccess.md - google chrome: linux/google_chrome.md + - Gotify: gotify.md - Chromium: chromium.md + - Grafana: grafana.md - Graylog: graylog.md - HAProxy: linux/haproxy.md - Hard drive health: hard_drive_health.md @@ -409,8 +412,11 @@ nav: - ffmpeg: ffmpeg.md - Khal: khal.md - Kitty: kitty.md + - Kodi: kodi.md + - Koel: koel.md - LUKS: linux/luks/luks.md - mbsync: mbsync.md + - Mediatracker: mediatracker.md - mkdocs: linux/mkdocs.md - Mopidy: mopidy.md - monica: linux/monica.md @@ -437,6 +443,7 @@ nav: - Vim Plugins: linux/vim/vim_plugins.md - Write Neovim Plugins: write_neovim_plugins.md - Treesitter: treesitter.md + - Diffview: diffview.md - Tridactyl: tridactyl.md - VSCodium: vscodium.md - Wake on Lan: wake_on_lan.md