Allow users to upload/crawl new files only #34

Open
sunu opened this issue Sep 1, 2021 · 0 comments
Labels
feature-request New feature or request

sunu commented Sep 1, 2021

alephclient crawldir sends the entire directory to Aleph even if most of the content is already processed in a previous crawl. It should be possible to upload only the newer files to Aleph without uploading the rest.

@Rosencrantz Rosencrantz added this to Aleph Nov 2, 2022
@Rosencrantz Rosencrantz moved this to 🏷️ Triage in Aleph Nov 2, 2022
@Rosencrantz Rosencrantz moved this from 🏷️ Triage to Feature Backlog in Aleph Nov 2, 2022
lyz-code added a commit to lyz-code/blue-book that referenced this issue Sep 7, 2023
Gain early map control with scouts, then switch into steppe lancers and forward siege, and finally castle them in the face once you've clicked up to imperial.

- [Example Hera vs Mr.Yo in TCI](https://yewtu.be/watch?v=20bktCBldcw)

feat(aleph#Ingest gets stuck): Ingest gets stuck

It seems that Aleph doesn't yet give an easy way to debug it. It can be seen in the following issues:

- [Improve the UX for bulk uploading and processing of large number of files](alephdata/aleph#2124)
- [Document ingestion gets stuck effectively at 100%](alephdata/aleph#1839)
- [Display detailed ingestion status to see if everything is alright and when the collection is ready](alephdata/aleph#1525)

Some interesting ideas I've extracted while diving into these issues are:

- You can also upload files using the [`alephclient` python command line tool](https://github.com/alephdata/alephclient)
- Some of the files might fail to be processed without leaving any hint to the uploader or the viewer.
  - This results in an incomplete dataset and the users don't get to know that the dataset is incomplete. This is problematic if the completeness of the dataset is crucial for an investigation.
  - There is no way to upload only the files that failed to be processed without re-uploading the entire set of documents or manually making a list of the failed documents and re-uploading them.
  - There is no way for uploaders or Aleph admins to see an overview of processing errors to figure out why some files are failing to be processed, without going through the docker logs (which is not very user-friendly).
- There was an attempt to [improve the way ingest-files manages the pending tasks](alephdata/aleph#2127); it's merged into the [release/4.0.0](https://github.com/alephdata/ingest-file/tree/release/4.0.0) branch, but it has [not yet reached `main`](alephdata/ingest-file#423).

There are some tickets that attempt to address these issues on the command line:

- [Allow users to upload/crawl new files only](alephdata/alephclient#34)
- [Check if alephclient crawldir was 100% successful or not](alephdata/alephclient#35)

I think it's interesting either to contribute to `alephclient` to solve those issues, or, if that's too complicated, to create a small python script that detects which files were not uploaded and re-uploads them, and/or to open issues that will prevent future ingests from failing.
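A minimal sketch of the "upload only newer files" idea, assuming a marker file is touched after each successful crawl; the demo layout, the staging step, and the `--foreign-id` flag are illustrative assumptions, not taken from the issue:

```shell
# Sketch: crawl only files modified since the last successful run.
# Everything runs in a throwaway directory so it's self-contained.
demo=$(mktemp -d)
cd "$demo"
mkdir -p docs
touch -d '1 hour ago' docs/old.txt .last-crawl   # state before this run
touch docs/new.txt                               # file added after the last crawl
# Select only the files newer than the marker
find docs -type f -newer .last-crawl > new-files.txt
cat new-files.txt
# Stage them preserving paths and crawl only the staging directory
# (the alephclient invocation is an assumption, check `alephclient crawldir --help`):
#   rsync -R --files-from=new-files.txt . staging/
#   alephclient crawldir --foreign-id my-dataset staging/
#   touch .last-crawl                            # record this run for next time
```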

feat(ansible_snippets#Ansible condition that uses a regexp): Ansible condition that uses a regexp

```yaml
- name: Check if an instance name or hostname matches a regex pattern
  when: inventory_hostname is not match('molecule-.*')
  fail:
    msg: "not a molecule instance"
```

feat(ansible_snippets#Ansible-lint doesn't find requirements): Ansible-lint doesn't find requirements

It may be because you're using `requirements.yaml` instead of `requirements.yml`. Create a temporary link from one file to the other, run the command, and then remove the link.

It will work from then on even if you remove the link. `¯\(°_o)/¯`
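The workaround, sketched on a scratch directory; the `roles: []` content is just a placeholder for your real requirements file, and the actual `ansible-lint` run is left as a comment:

```shell
# Reproduce the workaround in a scratch directory
tmp=$(mktemp -d)
cd "$tmp"
echo 'roles: []' > requirements.yaml       # stand-in for your real requirements file
ln -s requirements.yaml requirements.yml   # temporary link that ansible-lint will find
# ansible-lint                             # run the linter here
readlink requirements.yml                  # prints requirements.yaml
rm requirements.yml                        # remove the link afterwards
```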

feat(ansible_snippets#Run task only once): Run task only once

Add `run_once: true` on the task definition:

```yaml
- name: Do a thing on the first host in a group.
  debug:
    msg: "Yay only prints once"
  run_once: true
```

feat(aws_snippets#Invalidate a cloudfront distribution): Invalidate a cloudfront distribution

```bash
aws cloudfront create-invalidation --paths "/pages/about" --distribution-id my-distribution-id
```

feat(bash_snippets#Remove the lock screen in ubuntu): Remove the lock screen in ubuntu

Create the `/usr/share/glib-2.0/schemas/90_ubuntu-settings.gschema.override` file with the next content:

```ini
[org.gnome.desktop.screensaver]
lock-enabled = false
[org.gnome.settings-daemon.plugins.power]
idle-dim = false
```

Then reload the schemas with:

```bash
sudo glib-compile-schemas /usr/share/glib-2.0/schemas/
```

feat(bash_snippets#How to deal with HostContextSwitching alertmanager alert): How to deal with HostContextSwitching alertmanager alert

A context switch happens when the kernel suspends execution of one process on the CPU and resumes execution of some other process that had previously been suspended. A context switch is required for every interrupt and for every task that the scheduler picks.

Context switching can be due to multitasking, interrupt handling, or user/kernel mode switching. The interrupt rate will naturally go up if there is higher network or disk traffic. It also depends on how often the application invokes system calls.

If the cores/CPUs are not sufficient to handle the load of the threads created by the application, that will also result in context switching.

It is not a cause for concern until performance breaks down: it is expected that the CPU does context switching. Don't jump straight into kernel activity; there is a lot of statistical data that should be analyzed first. Verify the CPU, memory and network usage during this time.
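Before diving into per-process data you can check the system-wide rate; on Linux the kernel exposes the cumulative context-switch counter in `/proc/stat`, and `vmstat`'s `cs` column gives it per second:

```shell
# Cumulative context switches since boot (Linux-only interface)
grep '^ctxt' /proc/stat
# Per-second rate, sampled every second for five samples
# (requires the procps package):
#   vmstat 1 5
```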

You can see which process is causing the issue with the next command:

```bash
pidstat -w 3 10

10:15:24 AM     UID     PID     cswch/s         nvcswch/s       Command
10:15:27 AM     0       1       162656.7        16656.7         systemd
10:15:27 AM     0       9       165451.04       15451.04        ksoftirqd/0
10:15:27 AM     0       10      158628.87       15828.87        rcu_sched
10:15:27 AM     0       11      156147.47       15647.47        migration/0
10:15:27 AM     0       17      150135.71       15035.71        ksoftirqd/1
10:15:27 AM     0       23      129769.61       12979.61        ksoftirqd/2
10:15:27 AM     0       29      2238.38         238.38          ksoftirqd/3
10:15:27 AM     0       43      1753            753             khugepaged
10:15:27 AM     0       443     1659            165             usb-storage
10:15:27 AM     0       456     1956.12         156.12          i915/signal:0
10:15:27 AM     0       465     29550           29550           kworker/3:1H-xfs-log/dm-3
10:15:27 AM     0       490     164700          14700           kworker/0:1H-kblockd
10:15:27 AM     0       506     163741.24       16741.24        kworker/1:1H-xfs-log/dm-3
10:15:27 AM     0       594     154742          154742          dmcrypt_write/2
10:15:27 AM     0       629     162021.65       16021.65        kworker/2:1H-kblockd
10:15:27 AM     0       715     147852.48       14852.48        xfsaild/dm-1
10:15:27 AM     0       886     150706.86       15706.86        irq/131-iwlwifi
10:15:27 AM     0       966     135597.92       13597.92        xfsaild/dm-3
10:15:27 AM     81      1037    2325.25         225.25          dbus-daemon
10:15:27 AM     998     1052    118755.1        11755.1         polkitd
10:15:27 AM     70      1056    158248.51       15848.51        avahi-daemon
10:15:27 AM     0       1061    133512.12       455.12          rngd
10:15:27 AM     0       1110    156230          16230           cupsd
10:15:27 AM     0       1192    152298.02       1598.02         sssd_nss
10:15:27 AM     0       1247    166132.99       16632.99        systemd-logind
10:15:27 AM     0       1265    165311.34       16511.34        cups-browsed
10:15:27 AM     0       1408    10556.57        1556.57         wpa_supplicant
10:15:27 AM     0       1687    3835            3835            splunkd
10:15:27 AM     42      1773    3728            3728            Xorg
10:15:27 AM     42      1996    3266.67         266.67          gsd-color
10:15:27 AM     0       3166    32036.36        3036.36         sssd_kcm
10:15:27 AM     119349  3194    151763.64       11763.64        dbus-daemon
10:15:27 AM     119349  3199    158306          18306           Xorg
10:15:27 AM     119349  3242    15.28           5.8             gnome-shell

pidstat -wt 3 10  > /tmp/pidstat-t.out

Linux 4.18.0-80.11.2.el8_0.x86_64 (hostname)    09/08/2020  _x86_64_    (4 CPU)

10:15:15 AM   UID      TGID       TID   cswch/s   nvcswch/s  Command
10:15:19 AM     0         1         -   152656.7   16656.7   systemd
10:15:19 AM     0         -         1   152656.7   16656.7   |__systemd
10:15:19 AM     0         9         -   165451.04  15451.04  ksoftirqd/0
10:15:19 AM     0         -         9   165451.04  15451.04  |__ksoftirqd/0
10:15:19 AM     0        10         -   158628.87  15828.87  rcu_sched
10:15:19 AM     0         -        10   158628.87  15828.87  |__rcu_sched
10:15:19 AM     0        23         -   129769.61  12979.61  ksoftirqd/2
10:15:19 AM     0         -        23   129769.61  12979.33  |__ksoftirqd/2
10:15:19 AM     0        29         -   32424.5    2445      ksoftirqd/3
10:15:19 AM     0         -        29   32424.5    2445      |__ksoftirqd/3
10:15:19 AM     0        43         -   334        34        khugepaged
10:15:19 AM     0         -        43   334        34        |__khugepaged
10:15:19 AM     0       443         -   11465      566       usb-storage
10:15:19 AM     0         -       443   6433       93        |__usb-storage
10:15:19 AM     0       456         -   15.41      0.00      i915/signal:0
10:15:19 AM     0         -       456   15.41      0.00      |__i915/signal:0
10:15:19 AM     0       715         -   19.34      0.00      xfsaild/dm-1
10:15:19 AM     0         -       715   19.34      0.00      |__xfsaild/dm-1
10:15:19 AM     0       886         -   23.28      0.00      irq/131-iwlwifi
10:15:19 AM     0         -       886   23.28      0.00      |__irq/131-iwlwifi
10:15:19 AM     0       966         -   19.67      0.00      xfsaild/dm-3
10:15:19 AM     0         -       966   19.67      0.00      |__xfsaild/dm-3
10:15:19 AM    81      1037         -   6.89       0.33      dbus-daemon
10:15:19 AM    81         -      1037   6.89       0.33      |__dbus-daemon
10:15:19 AM     0      1038         -   11567.31   4436      NetworkManager
10:15:19 AM     0         -      1038   1.31       0.00      |__NetworkManager
10:15:19 AM     0         -      1088   0.33       0.00      |__gmain
10:15:19 AM     0         -      1094   1340.66    0.00      |__gdbus
10:15:19 AM   998      1052         -   118755.1   11755.1   polkitd
10:15:19 AM   998         -      1052   32420.66   25545     |__polkitd
10:15:19 AM   998         -      1132   0.66       0.00      |__gdbus
```

Then, with the help of the PID that is causing the issue, you can get the details of all the system calls it makes, for example with:

```bash
strace -f -p <PID> -o /tmp/strace.out
```

Let this command run for a few minutes while the load/context-switch rates are high. It is safe to run this on a production system, so you could also run it on a healthy system to get a comparative baseline. Through `strace` one can debug and troubleshoot the issue by looking at the system calls the process has made.

feat(bash_snippets#Redirect stderr of all subsequent commands of a script to a file): Redirect stderr of all subsequent commands of a script to a file

```bash
{
    somecommand
    somecommand2
    somecommand3
} 2>&1 | tee -a $DEBUGLOG
```

feat(diffview#Use the same binding to open and close the diffview windows): Use the same binding to open and close the diffview windows

```lua
vim.keymap.set('n', 'dv', function()
  if next(require('diffview.lib').views) == nil then
    vim.cmd('DiffviewOpen')
  else
    vim.cmd('DiffviewClose')
  end
end)
```

fix(gitea#Using `paths-filter` custom action): Using `paths-filter` custom action to skip job actions

```yaml
jobs:
  test:
    if: "!startsWith(github.event.head_commit.message, 'bump:')"
    name: Test
    runs-on: ubuntu-latest
    steps:
      - name: Checkout the codebase
        uses: https://github.com/actions/checkout@v3

      - name: Check if we need to run the molecule tests
        uses: https://github.com/dorny/paths-filter@v2
        id: filter
        with:
          filters: |
            molecule:
              - 'defaults/**'
              - 'tasks/**'
              - 'handlers/**'
              - 'templates/**'
              - 'molecule/**'
              - 'requirements.yaml'
              - '.github/workflows/tests.yaml'

      - name: Run Molecule tests
        if: steps.filter.outputs.molecule == 'true'
        run: make molecule
```

You can find more examples on how to use `paths-filter` [here](https://github.com/dorny/paths-filter#examples).

feat(gitsigns): Introduce gitsigns

[Gitsigns](https://github.com/lewis6991/gitsigns.nvim) is a neovim plugin to create git decorations similar to the vim plugin [gitgutter](https://github.com/airblade/vim-gitgutter) but written purely in Lua.

Installation:

Add to your `plugins.lua` file:

```lua
  use {'lewis6991/gitsigns.nvim'}
```

Install it with `:PackerInstall`.

Configure it in your `init.lua` with:

```lua
-- Configure gitsigns
require('gitsigns').setup({
  on_attach = function(bufnr)
    local gs = package.loaded.gitsigns

    local function map(mode, l, r, opts)
      opts = opts or {}
      opts.buffer = bufnr
      vim.keymap.set(mode, l, r, opts)
    end

    -- Navigation
    map('n', ']c', function()
      if vim.wo.diff then return ']c' end
      vim.schedule(function() gs.next_hunk() end)
      return '<Ignore>'
    end, {expr=true})

    map('n', '[c', function()
      if vim.wo.diff then return '[c' end
      vim.schedule(function() gs.prev_hunk() end)
      return '<Ignore>'
    end, {expr=true})

    -- Actions
    map('n', '<leader>gs', gs.stage_hunk)
    map('n', '<leader>gr', gs.reset_hunk)
    map('v', '<leader>gs', function() gs.stage_hunk {vim.fn.line('.'), vim.fn.line('v')} end)
    map('v', '<leader>gr', function() gs.reset_hunk {vim.fn.line('.'), vim.fn.line('v')} end)
    map('n', '<leader>gS', gs.stage_buffer)
    map('n', '<leader>gu', gs.undo_stage_hunk)
    map('n', '<leader>gR', gs.reset_buffer)
    map('n', '<leader>gp', gs.preview_hunk)
    map('n', '<leader>gb', function() gs.blame_line{full=true} end)
    map('n', '<leader>gB', gs.toggle_current_line_blame)
    map('n', '<leader>gd', gs.diffthis)
    map('n', '<leader>gD', function() gs.diffthis('~') end)
    map('n', '<leader>ge', gs.toggle_deleted)

    -- Text object
    map({'o', 'x'}, 'ih', ':<C-U>Gitsigns select_hunk<CR>')
  end
})
```

Usage:

Some interesting bindings:

- `]c`: Go to next diff chunk
- `[c`: Go to previous diff chunk
- `<leader>gs`: Stage chunk, it works both in normal and visual mode
- `<leader>gr`: Restore chunk from index, it works both in normal and visual mode
- `<leader>gp`: Preview diff, you can use it with `]c` and `[c` to see all the chunk diffs
- `<leader>gb`: Show the git blame of the line as a shadowed comment

fix(grafana): Install grafana

```yaml
---
version: "3.8"
services:
  grafana:
    image: grafana/grafana-oss:${GRAFANA_VERSION:-latest}
    container_name: grafana
    restart: unless-stopped
    volumes:
      - data:/var/lib/grafana
    networks:
      - grafana
      - monitorization
      - swag
    env_file:
      - .env
    depends_on:
      - db
  db:
    image: postgres:${DATABASE_VERSION:-15}
    restart: unless-stopped
    container_name: grafana-db
    environment:
      - POSTGRES_DB=${GF_DATABASE_NAME:-grafana}
      - POSTGRES_USER=${GF_DATABASE_USER:-grafana}
      - POSTGRES_PASSWORD=${GF_DATABASE_PASSWORD:?database password required}
    networks:
      - grafana
    volumes:
      - db-data:/var/lib/postgresql/data
    env_file:
      - .env

networks:
  grafana:
    external:
      name: grafana
  monitorization:
    external:
      name: monitorization
  swag:
    external:
      name: swag

volumes:
  data:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: /data/grafana/app
  db-data:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: /data/grafana/database
```

Here the `monitorization` network is the one where prometheus and the rest of the monitoring stack listen, and `swag` is the network to the gateway proxy.

It uses the `.env` file to store the required [configuration](#configure-grafana), to connect grafana with authentik you need to add the next variables:

```bash
GF_AUTH_GENERIC_OAUTH_ENABLED="true"
GF_AUTH_GENERIC_OAUTH_NAME="authentik"
GF_AUTH_GENERIC_OAUTH_CLIENT_ID="<Client ID from above>"
GF_AUTH_GENERIC_OAUTH_CLIENT_SECRET="<Client Secret from above>"
GF_AUTH_GENERIC_OAUTH_SCOPES="openid profile email"
GF_AUTH_GENERIC_OAUTH_AUTH_URL="https://authentik.company/application/o/authorize/"
GF_AUTH_GENERIC_OAUTH_TOKEN_URL="https://authentik.company/application/o/token/"
GF_AUTH_GENERIC_OAUTH_API_URL="https://authentik.company/application/o/userinfo/"
GF_AUTH_SIGNOUT_REDIRECT_URL="https://authentik.company/application/o/<Slug of the application from above>/end-session/"
GF_AUTH_OAUTH_AUTO_LOGIN="true"
GF_AUTH_GENERIC_OAUTH_ROLE_ATTRIBUTE_PATH="contains(groups[*], 'Grafana Admins') && 'Admin' || contains(groups[*], 'Grafana Editors') && 'Editor' || 'Viewer'"
```

In the configuration above you can see an example of a role mapping. Upon login, this configuration looks at the groups of which the current user is a member. If any of the specified group names are found, the user will be granted the resulting role in Grafana.

In the example shown above, one of the specified group names is "Grafana Admins". If the user is a member of this group, they will be granted the "Admin" role in Grafana. If not, it moves on to check whether the user is a member of the "Grafana Editors" group. If they are, they are granted the "Editor" role. Finally, if the user is a member of neither group, it falls back to granting the "Viewer" role.

Also make sure that `root_url` is set correctly in your configuration, otherwise your redirect URL might get processed incorrectly. For example, if your grafana instance is running with the default configuration and is accessible behind a reverse proxy at https://grafana.company, your redirect URL will end up looking like https://grafana.company/. If you get a `user does not belong to org` error when trying to log into grafana for the first time via OAuth, check if you have an organization with the ID of 1; if not, add the following to your grafana config:

```ini
[users]
auto_assign_org = true
auto_assign_org_id = <id-of-your-default-organization>
```

Once you've made sure that the oauth works, go to `/admin/users` and remove the `admin` user.

feat(grafana#Configure grafana): Configure grafana

Grafana has default and custom configuration files. You can customize your Grafana instance by modifying the custom configuration file or by using environment variables. To see the list of settings for a Grafana instance, refer to [View server settings](https://grafana.com/docs/grafana/latest/administration/stats-and-license/#view-server-settings).

To override an option use `GF_<SectionName>_<KeyName>`, where the section name is the text within the brackets. Everything should be uppercase, and `.` and `-` should be replaced by `_`. For example, if you have these configuration settings:

```ini
instance_name = ${HOSTNAME}

[security]
admin_user = admin

[auth.google]
client_secret = 0ldS3cretKey

[plugin.grafana-image-renderer]
rendering_ignore_https_errors = true

[feature_toggles]
enable = newNavigation
```

You can override variables on Linux machines with:

```bash
export GF_DEFAULT_INSTANCE_NAME=my-instance
export GF_SECURITY_ADMIN_USER=owner
export GF_AUTH_GOOGLE_CLIENT_SECRET=newS3cretKey
export GF_PLUGIN_GRAFANA_IMAGE_RENDERER_RENDERING_IGNORE_HTTPS_ERRORS=true
export GF_FEATURE_TOGGLES_ENABLE=newNavigation
```

And in the docker compose you can edit the `.env` file. Mine looks similar to:

```bash
GRAFANA_VERSION=latest
GF_DEFAULT_INSTANCE_NAME="production"
GF_SERVER_ROOT_URL="https://your.domain.org"

GF_DATABASE_TYPE=postgres
DATABASE_VERSION=15
GF_DATABASE_HOST=grafana-db:5432
GF_DATABASE_NAME=grafana
GF_DATABASE_USER=grafana
GF_DATABASE_PASSWORD="change-for-a-long-password"
GF_DATABASE_SSL_MODE=disable

GF_AUTH_GENERIC_OAUTH_ENABLED="true"
GF_AUTH_GENERIC_OAUTH_NAME="authentik"
GF_AUTH_GENERIC_OAUTH_CLIENT_ID="<Client ID from above>"
GF_AUTH_GENERIC_OAUTH_CLIENT_SECRET="<Client Secret from above>"
GF_AUTH_GENERIC_OAUTH_SCOPES="openid profile email"
GF_AUTH_GENERIC_OAUTH_AUTH_URL="https://authentik.company/application/o/authorize/"
GF_AUTH_GENERIC_OAUTH_TOKEN_URL="https://authentik.company/application/o/token/"
GF_AUTH_GENERIC_OAUTH_API_URL="https://authentik.company/application/o/userinfo/"
GF_AUTH_SIGNOUT_REDIRECT_URL="https://authentik.company/application/o/<Slug of the application from above>/end-session/"
GF_AUTH_OAUTH_AUTO_LOGIN="true"
GF_AUTH_GENERIC_OAUTH_ROLE_ATTRIBUTE_PATH="contains(groups[*], 'Grafana Admins') && 'Admin' || contains(groups[*], 'Grafana Editors') && 'Editor' || 'Viewer'"
```

feat(grafana#Configure datasources): Configure datasources

You can manage data sources in Grafana by adding YAML configuration files in the `provisioning/datasources` directory. Each config file can contain a list of datasources to add or update during startup. If the data source already exists, Grafana reconfigures it to match the provisioned configuration file.

The configuration file can also list data sources to automatically delete, called `deleteDatasources`. Grafana deletes the data sources listed in `deleteDatasources` before adding or updating those in the datasources list.

For example to [configure a Prometheus datasource](https://grafana.com/docs/grafana/latest/datasources/prometheus/) use:

```yaml
apiVersion: 1

datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    # Access mode - proxy (server in the UI) or direct (browser in the UI).
    url: http://prometheus:9090
    jsonData:
      httpMethod: POST
      manageAlerts: true
      prometheusType: Prometheus
      prometheusVersion: 2.44.0
      cacheLevel: 'High'
      disableRecordingRules: false
      incrementalQueryOverlapWindow: 10m
      exemplarTraceIdDestinations: []
```

feat(grafana#Configure dashboards): Configure dashboards

You can manage dashboards in Grafana by adding one or more YAML config files in the `provisioning/dashboards` directory. Each config file can contain a list of dashboard providers that load dashboards into Grafana from the local filesystem.

Create one file called `dashboards.yaml` with the next contents:

```yaml
---
apiVersion: 1
providers:
  - name: default # A uniquely identifiable name for the provider
    type: file
    options:
      path: /etc/grafana/provisioning/dashboards/definitions
```

Then inside the config directory of your docker compose create the directory `provisioning/dashboards/definitions` and add the json of the dashboards themselves. You can download them from the dashboard pages. For example:

- [Node Exporter](https://grafana.com/grafana/dashboards/1860-node-exporter-full/)
- [Blackbox Exporter](https://grafana.com/grafana/dashboards/13659-blackbox-exporter-http-prober/)
- [Alertmanager](https://grafana.com/grafana/dashboards/9578-alertmanager/)

feat(grafana#Configure the plugins): Configure the plugins

To install plugins in the Docker container, complete the following steps:

- Pass the plugins you want to be installed to Docker with the `GF_INSTALL_PLUGINS` environment variable as a comma-separated list.
- This sends each plugin name to `grafana-cli plugins install ${plugin}` and installs them when Grafana starts.

For example:

```bash
docker run -d -p 3000:3000 --name=grafana \
  -e "GF_INSTALL_PLUGINS=grafana-clock-panel, grafana-simple-json-datasource" \
  grafana/grafana-oss
```

To specify the version of a plugin, add the version number to the `GF_INSTALL_PLUGINS` environment variable. For example: `GF_INSTALL_PLUGINS=grafana-clock-panel 1.0.1`.

To install a plugin from a custom URL, use the following convention to specify the URL: `<url to plugin zip>;<plugin install folder name>`. For example: `GF_INSTALL_PLUGINS=https://github.com/VolkovLabs/custom-plugin.zip;custom-plugin`.

feat(jellyfin#Forgot Password. Please try again within your home network to initiate the password reset process.): Forgot Password. Please try again within your home network to initiate the password reset process.

If you're an external jellyfin user you can't reset your password unless you are part of the LAN. This is done because the password reset process is simple and insecure.

If you don't care about that and still think that the internet is a happy and safe place, [here](https://wiki.jfa-go.com/docs/password-resets/) and [here](hrfee/jellyfin-accounts#12) are some instructions on how to bypass the security measure.

For more information also read [1](jellyfin/jellyfin#2282) and [2](jellyfin/jellyfin#2869).

feat(lindy): New Charleston, lindy and solo jazz videos

Charleston:

- The DecaVita Sisters:
   - [Freestyle Lindy Hop & Charleston](https://www.youtube.com/watch?v=OV6ZDuczkag)
   - [Moby "Honey"](https://www.youtube.com/watch?v=ciMFQnwfp50)

Solo Jazz:

- [Pedro Vieira at Little Big Swing Camp 2022](https://yewtu.be/watch?v=pmxn2uIVuUY)

Lindy Hop:

- The DecaVita Sisters:
   - [Compromise - agreement in the moment](https://youtu.be/3DhD2u5Eyv8?si=2WKisSvEB3Z8TVMy)
   - [Lindy hop improv](https://www.youtube.com/watch?v=qkdxcdeicLE)

feat(matrix): How to install matrix

```bash
sudo apt install -y wget apt-transport-https
sudo wget -O /usr/share/keyrings/element-io-archive-keyring.gpg https://packages.element.io/debian/element-io-archive-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/element-io-archive-keyring.gpg] https://packages.element.io/debian/ default main" | sudo tee /etc/apt/sources.list.d/element-io.list
sudo apt update
sudo apt install element-desktop
```

fix(mediatracker#alternatives): Update ryot comparison with mediatracker

[Ryot](https://github.com/IgnisDa/ryot) has a better web design, and it also has a [jellyfin scrobbler](IgnisDa/ryot#195), although it's not [yet stable](IgnisDa/ryot#187). There are other UI tweaks preventing me from migrating to ryot, such as [the easier media rating](IgnisDa/ryot#284) and [the percentage over five stars rating system](IgnisDa/ryot#283).

feat(molecule#Get variables from the environment): Get variables from the environment

You can configure your `molecule.yaml` file to read variables from the environment with:

```yaml
provisioner:
  name: ansible
  inventory:
    group_vars:
      all:
        my_secret: ${MY_SECRET}
```

It's useful to have a task that checks if this secret exists:

```yaml
- name: Verify that the secret is set
  fail:
    msg: 'Please export my_secret: export MY_SECRET=$(pass show my_secret)'
  run_once: true
  when: my_secret == None
```

In the CI you can set it as a secret in the repository.

feat(retroarch): Install retroarch instructions

To add the stable branch to your system type:

```bash
sudo add-apt-repository ppa:libretro/stable
sudo apt-get update
sudo apt-get install retroarch
```

Go to Main Menu/Online Updater and then update everything you can:

- Update Core Info Files
- Update Assets
- Update controller Profiles
- Update Databases
- Update Overlays
- Update GLSL Shaders

feat(vim): Update treesitter language definitions

To do so you need to run:

```vim
:TSInstall <language>
```

To update the parsers run:

```vim
:TSUpdate
```

feat(vim#Telescope changes working directory when opening a file): Telescope changes working directory when opening a file

In my case it was due to a snippet I use to remember the folds:

```lua
vim.cmd[[
  augroup remember_folds
    autocmd!
    autocmd BufWinLeave * silent! mkview
    autocmd BufWinEnter * silent! loadview
  augroup END
]]
```

It seems that it had saved a view with the other working directory, so when a file was loaded the `cwd` changed. To solve it I created a new `mkview` in the correct directory.