docs: update admonitions to new shortcode format (#14735)
(cherry picked from commit 3d6d35e)
JStickler committed Nov 4, 2024
1 parent a6bcd39 commit eb00928
Showing 39 changed files with 419 additions and 176 deletions.
12 changes: 6 additions & 6 deletions docs/sources/alert/_index.md
@@ -202,22 +202,22 @@ Another great use case is alerting on high cardinality sources. These are things

Creating these alerts in LogQL is attractive because these metrics can be extracted at _query time_, meaning we don't suffer the cardinality explosion in our metrics store.

{{% admonition type="note" %}}
{{< admonition type="note" >}}
As an example, we can use LogQL v2 to help Loki monitor _itself_, alerting us when specific tenants have queries that take longer than 10s to complete! To do so, we'd use the following query: `sum by (org_id) (rate({job="loki-prod/query-frontend"} |= "metrics.go" | logfmt | duration > 10s [1m]))`.
{{% /admonition %}}
{{< /admonition >}}
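
For illustration, here is a minimal sketch of how that query could be wired into a Prometheus-style rule file for the Ruler. The group name, alert name, threshold, and `for` duration are assumptions, not part of the Loki docs:

```yaml
groups:
  - name: loki-self-monitoring          # hypothetical group name
    rules:
      - alert: SlowTenantQueries        # hypothetical alert name
        # Per-tenant rate of query-frontend requests that took longer than 10s.
        expr: |
          sum by (org_id) (rate({job="loki-prod/query-frontend"} |= "metrics.go" | logfmt | duration > 10s [1m])) > 0
        for: 5m
        labels:
          severity: warning
```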

## Interacting with the Ruler

### Cortextool
Because the rule files are identical to Prometheus rule files, we can interact with the Loki Ruler via [`cortextool`](https://github.com/grafana/cortex-tools#rules). The CLI is in early development, but it works with both Loki and Cortex. Pass the `--backend=loki` option when using it with Loki.

{{% admonition type="note" %}}
{{< admonition type="note" >}}
Not all commands in cortextool currently support Loki.
{{% /admonition %}}
{{< /admonition >}}

{{% admonition type="note" %}}
{{< admonition type="note" >}}
Because cortextool was intended to run against multi-tenant Loki, commands need an `--id=` flag set to the Loki instance ID, or the environment variable `CORTEX_TENANT_ID` must be set. If Loki is running in single tenant mode, the required ID is `fake`.
{{% /admonition %}}
{{< /admonition >}}
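
As a minimal sketch before the fuller workflow below (the server address, tenant ID, and rule file name here are assumptions):

```bash
# Load a rule group into the Loki Ruler; single tenant mode uses the ID "fake".
cortextool rules load --backend=loki --address=http://localhost:3100 --id=fake rules.yaml

# List the rule groups the Ruler currently knows about.
cortextool rules list --backend=loki --address=http://localhost:3100 --id=fake
```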

An example workflow is included below:

4 changes: 2 additions & 2 deletions docs/sources/community/contributing.md
@@ -30,10 +30,10 @@ $ git commit -m "docs: fix spelling error"
$ git push -u fork HEAD
```

{{% admonition type="note" %}}
{{< admonition type="note" >}}
If you downloaded Loki using `go get`, the message `package github.com/grafana/loki: no Go files in /go/src/github.com/grafana/loki`
is normal and requires no action to resolve.
{{% /admonition %}}
{{< /admonition >}}

### Building

@@ -129,9 +129,9 @@ The performance losses against the current approach include:

Loki regularly combines multiple blocks into a chunk and "flushes" it to storage. In order to ensure that reads over flushed chunks remain as performant as possible, we will re-order a possibly-overlapping set of blocks into a set of blocks that maintain monotonically increasing order between them. From the perspective of the rest of Loki’s components (queriers/rulers fetching chunks from storage), nothing has changed.

{{% admonition type="note" %}}
{{< admonition type="note" >}}
**In the case that data for a stream is ingested in order, this is effectively a no-op, making it well optimized for in-order writes (which is both the requirement and default in Loki currently). Thus, this should have little performance impact on ordered data while enabling Loki to ingest unordered data.**
{{% /admonition %}}
{{< /admonition >}}


#### Chunk Durations
@@ -153,9 +153,9 @@ The second is simple to implement and an effective way to ensure Loki can ingest

We also cut chunks according to the `sync_period`. The first timestamp ingested past this bound will trigger a cut. This process aids in increasing chunk determinism and therefore our deduplication ratio in object storage because chunks are [content addressed](https://en.wikipedia.org/wiki/Content-addressable_storage). With the removal of our ordering constraint, it's possible that in some cases the synchronization method will not be as effective, such as during concurrent writes to the same stream across this bound.

{{% admonition type="note" %}}
{{< admonition type="note" >}}
**It's important to mention that this is possible today with the current ordering constraint, but we'll be increasing the likelihood by removing it.**
{{% /admonition %}}
{{< /admonition >}}
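
For reference, a sketch of the ingester settings involved, assuming the standard `ingester` configuration block; the values are illustrative, not recommendations:

```yaml
ingester:
  # Cut a new chunk at the first timestamp past each synchronization boundary,
  # which makes chunks more deterministic and improves deduplication in object storage.
  sync_period: 1h
  # Only force the cut if the chunk has reached at least this utilization.
  sync_min_utilization: 0.3
```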

```
Figure 5
@@ -21,18 +21,18 @@ branch is then used for all the Stable Releases, and all Patch Releases for that

The name of the release branch should be `release-VERSION_PREFIX`, such as `release-2.9.x`.

{{% admonition type="note" %}}
{{< admonition type="note" >}}
Branches are only made for VERSION_PREFIX; do not create branches for the full VERSION such as `release-v2.9.1` or `release-2.9.1`.
{{% /admonition %}}
{{< /admonition >}}

{{% admonition type="note" %}}
{{< admonition type="note" >}}
Don't create any other branches that are prefixed with `release` when creating PRs, or those branches will collide with our automated release build publish rules.
{{% /admonition %}}
{{< /admonition >}}

1. Create a label to make backporting PRs to this branch easy.

The name of the label should be `backport release-VERSION_PREFIX`, such as `backport release-2.9.x`.

{{% admonition type="note" %}}
{{< admonition type="note" >}}
Note that there is a space in the label name. The label name must follow this naming convention to trigger CI-related jobs.
{{% /admonition %}}
{{< /admonition >}}
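
A rough sketch of the corresponding commands; the version, remote name, and use of the GitHub CLI are assumptions:

```bash
# Create and push the release branch.
git checkout -b release-2.9.x
git push origin release-2.9.x

# Create the backport label; note the space in the label name.
gh label create "backport release-2.9.x"
```
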
@@ -24,6 +24,6 @@ Upgrade the Loki version to the new release version in documents, examples, json
LOKI_NEW_VERSION=$VERSION ./tools/release_update_tags.sh
```

{{% admonition type="note" %}}
{{< admonition type="note" >}}
Do not upgrade the version numbers in the `operator/` directory as @periklis and team have a different process to upgrade the Operator version.
{{% /admonition %}}
{{< /admonition >}}
8 changes: 4 additions & 4 deletions docs/sources/get-started/deployment-modes.md
@@ -32,9 +32,9 @@ Query parallelization is limited by the number of instances and the setting `max

The simple scalable deployment is the default configuration installed by the [Loki Helm Chart]({{< relref "../setup/install/helm" >}}). This deployment mode is the easiest way to deploy Loki at scale. It strikes a balance between deploying in [monolithic mode](#monolithic-mode) or deploying each component as a [separate microservice](#microservices-mode).

{{% admonition type="note" %}}
{{< admonition type="note" >}}
This deployment mode is sometimes referred to by the acronym SSD, for simple scalable deployment; this is not to be confused with solid-state drives (Loki uses an object store).
{{% /admonition %}}
{{< /admonition >}}

Loki’s simple scalable deployment mode separates execution paths into read, write, and backend targets. These targets can be scaled independently, letting you customize your Loki deployment to meet your business needs for log ingestion and log query so that your infrastructure costs better match how you use Loki.

@@ -75,13 +75,13 @@ For release 2.9 the components are:
- Ruler
- Table Manager (deprecated)

{{% admonition type="tip" %}}
{{< admonition type="tip" >}}
You can see the complete list of targets for your version of Loki by running Loki with the flag `-list-targets`, for example:

```bash
docker run docker.io/grafana/loki:2.9.2 -config.file=/etc/loki/local-config.yaml -list-targets
```
{{% /admonition %}}
{{< /admonition >}}

![Microservices mode diagram](../microservices-mode.png "Microservices mode")

4 changes: 2 additions & 2 deletions docs/sources/get-started/labels/_index.md
@@ -19,10 +19,10 @@ Labels in Loki perform a very important task: They define a stream. More specifi

If you are familiar with Prometheus, the term used there is series; however, Prometheus has an additional dimension: metric name. Loki simplifies this in that there are no metric names, just labels, and we decided to use streams instead of series.

{{% admonition type="note" %}}
{{< admonition type="note" >}}
Structured metadata does not define a stream; it is metadata attached to a log line.
See [structured metadata]({{< relref "./structured-metadata" >}}) for more information.
{{% /admonition %}}
{{< /admonition >}}
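
For example, each of the following selectors refers to a different stream, because each label set is unique (the label names and values are illustrative):

```
{app="nginx", env="prod"}
{app="nginx", env="dev"}
{app="api", env="prod"}
```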

## Format

13 changes: 7 additions & 6 deletions docs/sources/get-started/labels/structured-metadata.md
@@ -5,9 +5,9 @@ description: Describes how to enable structure metadata for logs and how to quer
---
# What is structured metadata

{{% admonition type="warning" %}}
Structured metadata was added to chunk format V4 which is used if the schema version is greater or equal to `13`. (See [Schema Config](https://grafana.com/docs/loki/<LOKI_VERSION>/configure/storage/#schema-config)) for more details about schema versions. )
{{% /admonition %}}
{{< admonition type="warning" >}}
Structured metadata was added to chunk format V4, which is used if the schema version is greater than or equal to `13`. See [Schema Config](https://grafana.com/docs/loki/<LOKI_VERSION>/configure/storage/#schema-config) for more details about schema versions.
{{< /admonition >}}

Selecting proper, low-cardinality labels is critical to operating and querying Loki effectively. Some metadata, especially infrastructure-related metadata, can be difficult to embed in log lines, and is too high cardinality to store effectively as indexed labels (doing so would reduce the performance of the index).

@@ -36,8 +36,9 @@ See the [Promtail: Structured metadata stage](https://grafana.com/docs/loki/<LOK

With Loki version 1.2.0, support for structured metadata has been added to the Logstash output plugin. For more information, see [logstash](https://grafana.com/docs/loki/<LOKI_VERSION>/send-data/logstash/).

{{% admonition type="warning" %}}
There are defaults for how much structured metadata can be attached per log line.
{{< admonition type="warning" >}}
Structured metadata size is taken into account when enforcing ingestion rate limits.
In addition, there are separate limits on how much structured metadata can be attached per log line:
```
# Maximum size accepted for structured metadata per log line.
# CLI flag: -limits.max-structured-metadata-size
@@ -47,7 +47,7 @@ There are defaults for how much structured metadata can be attached per log line
# CLI flag: -limits.max-structured-metadata-entries-count
[max_structured_metadata_entries_count: <int> | default = 128]
```
{{% /admonition %}}
{{< /admonition >}}
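
A sketch of how these limits could be set per tenant under `limits_config`; the values shown are illustrative, not recommendations:

```yaml
limits_config:
  # Maximum size accepted for structured metadata per log line.
  max_structured_metadata_size: 64KB
  # Maximum number of structured metadata entries per log line.
  max_structured_metadata_entries_count: 128
```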

## Querying structured metadata

4 changes: 2 additions & 2 deletions docs/sources/operations/authentication.md
@@ -18,11 +18,11 @@ A list of open-source reverse proxies you can use:
- [OAuth2 proxy](https://github.com/oauth2-proxy/oauth2-proxy)
- [HAProxy](https://www.haproxy.org/)

{{% admonition type="note" %}}
{{< admonition type="note" >}}
When using Loki in multi-tenant mode, Loki requires the HTTP header
`X-Scope-OrgID` to be set to a string identifying the tenant; the responsibility
of populating this value should be handled by the authenticating reverse proxy.
For more information, read the [multi-tenancy]({{< relref "./multi-tenancy" >}}) documentation.{{% /admonition %}}
For more information, read the [multi-tenancy]({{< relref "./multi-tenancy" >}}) documentation.{{< /admonition >}}
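
As a quick illustration of the header itself, here is what a request looks like once the header is set; the tenant name and address are assumptions, and in practice the reverse proxy injects the header rather than the client:

```bash
# Simulate the header a reverse proxy would add in front of Loki.
curl -G -H "X-Scope-OrgID: tenant1" \
  --data-urlencode 'query={job="varlogs"}' \
  http://localhost:3100/loki/api/v1/query_range
```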

For information on authenticating Promtail, see the documentation for [how to
configure Promtail]({{< relref "../send-data/promtail/configuration" >}}).
4 changes: 2 additions & 2 deletions docs/sources/operations/automatic-stream-sharding.md
@@ -30,9 +30,9 @@ per-stream rate limit.
```

1. Optionally enable `logging_enabled` for debugging stream sharding.
{{% admonition type="note" %}}
{{< admonition type="note" >}}
This may affect the ingestion performance of Loki.
{{% /admonition %}}
{{< /admonition >}}

```yaml
limits_config:
8 changes: 4 additions & 4 deletions docs/sources/operations/blocking-queries.md
@@ -35,9 +35,9 @@ overrides:
- hash: 2943214005 # hash of {stream="stdout",pod="loki-canary-9w49x"}
types: filter,limited
```
{{% admonition type="note" %}}
{{< admonition type="note" >}}
Changes to these configurations **do not require a restart**; they are defined in the [runtime configuration file](https://grafana.com/docs/loki/<LOKI_VERSION>/configure/#runtime-configuration-file).
{{% /admonition %}}
{{< /admonition >}}
The available query types are:
@@ -53,9 +53,9 @@ is logged with every query request in the `query-frontend` and `querier` logs, f
level=info ts=2023-03-30T09:08:15.2614555Z caller=metrics.go:152 component=frontend org_id=29 latency=fast
query="{stream=\"stdout\",pod=\"loki-canary-9w49x\"}" query_hash=2943214005 query_type=limited range_type=range ...
```
{{% admonition type="note" %}}
{{< admonition type="note" >}}
The order of patterns is preserved, so the first matching pattern will be used.
{{% /admonition %}}
{{< /admonition >}}
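
A fuller sketch of a per-tenant override combining both approaches; the tenant ID and the regex pattern are illustrative assumptions:

```yaml
overrides:
  "29":
    blocked_queries:
      # Block one specific query by its hash, but only for filter and limited queries.
      - hash: 2943214005
        types: filter,limited
      # Block anything matching a regex; the first matching entry wins.
      - pattern: ".*slow_service.*"
        regex: true
```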

## Observing blocked queries
