docs: update admonitions to new shortcode format (#14735) (#14759)
JStickler authored Nov 4, 2024
1 parent 651db67 commit 9195c3d
Showing 37 changed files with 173 additions and 172 deletions.
8 changes: 4 additions & 4 deletions docs/sources/alert/_index.md
@@ -202,18 +202,18 @@ Another great use case is alerting on high cardinality sources. These are things

Creating these alerts in LogQL is attractive because these metrics can be extracted at _query time_, meaning we don't suffer the cardinality explosion in our metrics store.

{{% admonition type="note" %}}
{{< admonition type="note" >}}
As an example, we can use LogQL v2 to help Loki monitor _itself_, alerting us when specific tenants have queries that take longer than 10s to complete! To do so, we'd use the following query: `sum by (org_id) (rate({job="loki-prod/query-frontend"} |= "metrics.go" | logfmt | duration > 10s [1m]))`.
{{% /admonition %}}
{{< /admonition >}}
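
As a hedged sketch of how such a query could be wired into an alerting rule (the group name, alert name, threshold, and `for` duration are illustrative assumptions; the rule file format is the Prometheus format described below):

```yaml
groups:
  - name: loki-slow-queries           # illustrative group name
    rules:
      - alert: TenantQueriesTooSlow   # illustrative alert name
        expr: |
          sum by (org_id)
            (rate({job="loki-prod/query-frontend"} |= "metrics.go" | logfmt | duration > 10s [1m]))
          > 0
        for: 5m                       # illustrative duration
        labels:
          severity: warning
```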

## Interacting with the Ruler

### Lokitool
Because the rule files are identical to Prometheus rule files, we can interact with the Loki Ruler via `lokitool`.

{{% admonition type="note" %}}
{{< admonition type="note" >}}
lokitool is intended to run against multi-tenant Loki. The commands need the `--id=` flag set to the Loki instance ID, or you can set the environment variable `LOKI_TENANT_ID` instead. If Loki is running in single tenant mode, the required ID is `fake`.
{{% /admonition %}}
{{< /admonition >}}

An example workflow is included below:

4 changes: 2 additions & 2 deletions docs/sources/community/contributing.md
@@ -30,10 +30,10 @@ $ git commit -m "docs: fix spelling error"
$ git push -u fork HEAD
```

{{% admonition type="note" %}}
{{< admonition type="note" >}}
If you downloaded Loki using `go get`, the message `package github.com/grafana/loki: no Go files in /go/src/github.com/grafana/loki`
is normal and requires no action to resolve.
{{% /admonition %}}
{{< /admonition >}}

### Building

@@ -129,9 +129,9 @@ The performance losses against the current approach include:

Loki regularly combines multiple blocks into a chunk and "flushes" it to storage. In order to ensure that reads over flushed chunks remain as performant as possible, we will re-order a possibly-overlapping set of blocks into a set of blocks that maintain monotonically increasing order between them. From the perspective of the rest of Loki’s components (queriers/rulers fetching chunks from storage), nothing has changed.

{{% admonition type="note" %}}
{{< admonition type="note" >}}
**In the case that data for a stream is ingested in order, this is effectively a no-op, making it well optimized for in-order writes (which is both the requirement and default in Loki currently). Thus, this should have little performance impact on ordered data while enabling Loki to ingest unordered data.**
{{% /admonition %}}
{{< /admonition >}}


#### Chunk Durations
@@ -153,9 +153,9 @@ The second is simple to implement and an effective way to ensure Loki can ingest

We also cut chunks according to the `sync_period`. The first timestamp ingested past this bound will trigger a cut. This process aids in increasing chunk determinism and therefore our deduplication ratio in object storage because chunks are [content addressed](https://en.wikipedia.org/wiki/Content-addressable_storage). With the removal of our ordering constraint, it's possible that in some cases the synchronization method will not be as effective, such as during concurrent writes to the same stream across this bound.

{{% admonition type="note" %}}
{{< admonition type="note" >}}
**It's important to mention that this is possible today with the current ordering constraint, but we'll be increasing the likelihood by removing it.**
{{% /admonition %}}
{{< /admonition >}}

```
Figure 5
@@ -21,18 +21,18 @@ branch is then used for all the Stable Releases, and all Patch Releases for that

The name of the release branch should be `release-VERSION_PREFIX`, such as `release-2.9.x`.

{{% admonition type="note" %}}
{{< admonition type="note" >}}
Branches are only made for VERSION_PREFIX; do not create branches for the full VERSION such as `release-2.9.1`.
{{% /admonition %}}
{{< /admonition >}}

{{% admonition type="note" %}}
{{< admonition type="note" >}}
Don't create any other branches that are prefixed with `release` when creating PRs, or those branches will collide with our automated release build publish rules.
{{% /admonition %}}
{{< /admonition >}}

1. Create a label to make backporting PRs to this branch easy.

The name of the label should be `backport release-VERSION_PREFIX`, such as `backport release-2.9.x`.

{{% admonition type="note" %}}
{{< admonition type="note" >}}
Note that there is a space in the label name. The label name must follow this naming convention to trigger CI-related jobs.
{{% /admonition %}}
{{< /admonition >}}
@@ -24,6 +24,6 @@ Upgrade the Loki version to the new release version in documents, examples, json
LOKI_NEW_VERSION=$VERSION ./tools/release_update_tags.sh
```

{{% admonition type="note" %}}
{{< admonition type="note" >}}
Do not upgrade the version numbers in the `operator/` directory as @periklis and team have a different process to upgrade the Operator version.
{{% /admonition %}}
{{< /admonition >}}
8 changes: 4 additions & 4 deletions docs/sources/get-started/deployment-modes.md
@@ -32,9 +32,9 @@ Query parallelization is limited by the number of instances and the setting `max

The simple scalable deployment is the default configuration installed by the [Loki Helm Chart]({{< relref "../setup/install/helm" >}}). This deployment mode is the easiest way to deploy Loki at scale. It strikes a balance between deploying in [monolithic mode](#monolithic-mode) or deploying each component as a [separate microservice](#microservices-mode).

{{% admonition type="note" %}}
{{< admonition type="note" >}}
This deployment mode is sometimes referred to by the acronym SSD for simple scalable deployment, not to be confused with solid state drives. Loki uses an object store.
{{% /admonition %}}
{{< /admonition >}}

Loki’s simple scalable deployment mode separates execution paths into read, write, and backend targets. These targets can be scaled independently, letting you customize your Loki deployment to meet your business needs for log ingestion and log query so that your infrastructure costs better match how you use Loki.
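
As a rough sketch (reusing the image tag and config path from the microservices example below; the exact target names assume a recent Loki release), each execution path can be started with its own `-target` value:

```bash
# Sketch: run each simple scalable target as a separate process.
docker run docker.io/grafana/loki:2.9.2 -config.file=/etc/loki/local-config.yaml -target=write
docker run docker.io/grafana/loki:2.9.2 -config.file=/etc/loki/local-config.yaml -target=read
docker run docker.io/grafana/loki:2.9.2 -config.file=/etc/loki/local-config.yaml -target=backend
```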

@@ -75,13 +75,13 @@ For release 2.9 the components are:
- Ruler
- Table Manager (deprecated)

{{% admonition type="tip" %}}
{{< admonition type="tip" >}}
You can see the complete list of targets for your version of Loki by running Loki with the flag `-list-targets`, for example:

```bash
docker run docker.io/grafana/loki:2.9.2 -config.file=/etc/loki/local-config.yaml -list-targets
```
{{% /admonition %}}
{{< /admonition >}}

![Microservices mode diagram](../microservices-mode.png "Microservices mode")

4 changes: 2 additions & 2 deletions docs/sources/get-started/labels/_index.md
@@ -19,10 +19,10 @@ Labels in Loki perform a very important task: They define a stream. More specifi

If you are familiar with Prometheus, the term used there is series; however, Prometheus has an additional dimension: metric name. Loki simplifies this in that there are no metric names, just labels, and we decided to use streams instead of series.

{{% admonition type="note" %}}
{{< admonition type="note" >}}
Structured metadata does not define a stream; it is metadata attached to a log line.
See [structured metadata]({{< relref "./structured-metadata" >}}) for more information.
{{% /admonition %}}
{{< /admonition >}}
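
As a brief illustration (the label names and values here are invented for the example), any change to the label set starts a new stream:

```
{app="nginx", env="prod"}  # one stream
{app="nginx", env="dev"}   # a different value for env, therefore a different stream
```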

## Format

11 changes: 6 additions & 5 deletions docs/sources/get-started/labels/structured-metadata.md
@@ -5,9 +5,9 @@ description: Describes how to enable structured metadata for logs and how to quer
---
# What is structured metadata

{{% admonition type="warning" %}}
{{< admonition type="warning" >}}
Structured metadata was added to chunk format V4 which is used if the schema version is greater or equal to `13`. See [Schema Config](https://grafana.com/docs/loki/<LOKI_VERSION>/configure/storage/#schema-config) for more details about schema versions.
{{% /admonition %}}
{{< /admonition >}}
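
For reference, a minimal `schema_config` sketch for a period that uses schema `v13` (the date and object store are placeholders for your own setup):

```yaml
schema_config:
  configs:
    - from: 2024-04-01      # placeholder start date for the new period
      store: tsdb
      object_store: s3      # placeholder; use your configured object store
      schema: v13           # chunk format V4 requires schema v13 or later
      index:
        prefix: index_
        period: 24h
```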

Selecting proper, low cardinality labels is critical to operating and querying Loki effectively. Some metadata, especially infrastructure-related metadata, can be difficult to embed in log lines, and is too high cardinality to effectively store as indexed labels (doing so would reduce the performance of the index).

@@ -36,8 +36,9 @@ See the [Promtail: Structured metadata stage](https://grafana.com/docs/loki/<LOK

With Loki version 1.2.0, support for structured metadata has been added to the Logstash output plugin. For more information, see [logstash](https://grafana.com/docs/loki/<LOKI_VERSION>/send-data/logstash/).

{{% admonition type="warning" %}}
There are defaults for how much structured metadata can be attached per log line.
{{< admonition type="warning" >}}
Structured metadata size is taken into account when enforcing ingestion rate limits.
In addition, there are separate limits on how much structured metadata can be attached per log line.
```
# Maximum size accepted for structured metadata per log line.
# CLI flag: -limits.max-structured-metadata-size
@@ -47,7 +48,7 @@ There are defaults for how much structured metadata can be attached per log line
# CLI flag: -limits.max-structured-metadata-entries-count
[max_structured_metadata_entries_count: <int> | default = 128]
```
{{% /admonition %}}
{{< /admonition >}}

## Querying structured metadata

4 changes: 2 additions & 2 deletions docs/sources/operations/authentication.md
@@ -18,11 +18,11 @@ A list of open-source reverse proxies you can use:
- [OAuth2 proxy](https://github.com/oauth2-proxy/oauth2-proxy)
- [HAProxy](https://www.haproxy.org/)

{{% admonition type="note" %}}
{{< admonition type="note" >}}
When using Loki in multi-tenant mode, Loki requires the HTTP header
`X-Scope-OrgID` to be set to a string identifying the tenant; the responsibility
of populating this value should be handled by the authenticating reverse proxy.
For more information, read the [multi-tenancy]({{< relref "./multi-tenancy" >}}) documentation.{{% /admonition %}}
For more information, read the [multi-tenancy]({{< relref "./multi-tenancy" >}}) documentation.{{< /admonition >}}
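
For example, a minimal sketch of what the proxy (or a test client) must add to each request, assuming a tenant named `tenant1` and a placeholder Loki address:

```bash
# Sketch: push a log line to a multi-tenant Loki with the tenant header set.
curl -H "X-Scope-OrgID: tenant1" \
     -H "Content-Type: application/json" \
     -X POST "http://loki:3100/loki/api/v1/push" \
     --data-raw '{"streams": [{"stream": {"job": "test"}, "values": [["'"$(date +%s%N)"'", "hello world"]]}]}'
```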

For information on authenticating Promtail, see the documentation for [how to
configure Promtail]({{< relref "../send-data/promtail/configuration" >}}).
4 changes: 2 additions & 2 deletions docs/sources/operations/automatic-stream-sharding.md
@@ -30,9 +30,9 @@ per-stream rate limit.
```

1. Optionally enable `logging_enabled` for debugging stream sharding.
{{% admonition type="note" %}}
{{< admonition type="note" >}}
This may affect the ingestion performance of Loki.
{{% /admonition %}}
{{< /admonition >}}

```yaml
limits_config:
8 changes: 4 additions & 4 deletions docs/sources/operations/blocking-queries.md
@@ -35,9 +35,9 @@ overrides:
- hash: 2943214005 # hash of {stream="stdout",pod="loki-canary-9w49x"}
types: filter,limited
```
{{% admonition type="note" %}}
{{< admonition type="note" >}}
Changes to these configurations **do not require a restart**; they are defined in the [runtime configuration file](https://grafana.com/docs/loki/<LOKI_VERSION>/configure/#runtime-configuration-file).
{{% /admonition %}}
{{< /admonition >}}
The available query types are:
@@ -53,9 +53,9 @@ is logged with every query request in the `query-frontend` and `querier` logs, f
level=info ts=2023-03-30T09:08:15.2614555Z caller=metrics.go:152 component=frontend org_id=29 latency=fast
query="{stream=\"stdout\",pod=\"loki-canary-9w49x\"}" query_hash=2943214005 query_type=limited range_type=range ...
```
{{% admonition type="note" %}}
{{< admonition type="note" >}}
The order of patterns is preserved, so the first matching pattern will be used.
{{% /admonition %}}
{{< /admonition >}}

## Observing blocked queries

8 changes: 4 additions & 4 deletions docs/sources/operations/recording-rules.md
@@ -30,11 +30,11 @@ is that Prometheus will, for example, reject a remote-write request with 100 samples
When the `ruler` starts up, it will load the WALs for the tenants who have recording rules. These WAL files are stored
on disk and are loaded into memory.

{{% admonition type="note" %}}
{{< admonition type="note" >}}
WALs are loaded one at a time upon start-up. This is a current limitation of the Loki ruler.
For this reason, it is advisable that the number of rule groups serviced by a ruler be kept to a reasonable size, since
_no rule evaluation occurs while WAL replay is in progress (this includes alerting rules)_.
{{% /admonition %}}
{{< /admonition >}}


### Truncation
@@ -56,10 +56,10 @@ excessively large due to truncation.

See Mimir's guide for [configuring Grafana Mimir hash rings](/docs/mimir/latest/configure/configure-hash-rings/) for scaling the ruler using a ring.

{{% admonition type="note" %}}
{{< admonition type="note" >}}
The `ruler` shards by rule _group_, not by individual rules. This is an artifact of the fact that Prometheus
recording rules need to run in order since one recording rule can reuse another, but this is not possible in Loki.
{{% /admonition %}}
{{< /admonition >}}

## Deployment

4 changes: 2 additions & 2 deletions docs/sources/operations/scalability.md
@@ -66,9 +66,9 @@ this will result in far lower `ruler` resource usage because the majority of the
The LogQL queries coming from the `ruler` will be executed against the given `query-frontend` service.
Requests will be load-balanced across all `query-frontend` IPs if the `dns:///` prefix is used.

{{% admonition type="note" %}}
{{< admonition type="note" >}}
Queries that fail to execute are _not_ retried.
{{% /admonition %}}
{{< /admonition >}}
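
A hedged configuration sketch for pointing the `ruler` at a `query-frontend` service (the address is a placeholder, and the exact key names can vary between Loki versions):

```yaml
ruler:
  evaluation:
    mode: remote
    query_frontend:
      # Placeholder address; the dns:/// prefix enables client-side load balancing
      # across all query-frontend IPs.
      address: dns:///query-frontend.loki.svc.cluster.local:9095
```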

### Limits and Observability

4 changes: 2 additions & 2 deletions docs/sources/operations/storage/_index.md
@@ -77,9 +77,9 @@ See the [AWS deployment section](https://grafana.com/docs/loki/<LOKI_VERSION>/co

### DynamoDB

{{% admonition type="note" %}}
{{< admonition type="note" >}}
DynamoDB support is deprecated and will be removed in a future release.
{{% /admonition %}}
{{< /admonition >}}

When using DynamoDB for the index, the following permissions are needed:

20 changes: 10 additions & 10 deletions docs/sources/operations/storage/boltdb-shipper.md
@@ -6,19 +6,19 @@ weight: 200
---
# Single Store BoltDB (boltdb-shipper)

{{% admonition type="note" %}}
{{< admonition type="note" >}}
Single store BoltDB Shipper is a legacy storage option recommended for Loki 2.0 through 2.7.x and is not recommended for new deployments. The [TSDB](https://grafana.com/docs/loki/<LOKI_VERSION>/operations/storage/tsdb/) is the recommended index for Loki 2.8 and newer.
{{% /admonition %}}
{{< /admonition >}}

BoltDB Shipper lets you run Grafana Loki without any dependency on NoSQL stores for storing index.
It locally stores the index in BoltDB files instead and keeps shipping those files to a shared object store i.e the same object store which is being used for storing chunks.
It also keeps syncing BoltDB files from shared object store to a configured local directory for getting index entries created by other services of same Loki cluster.
This helps run Loki with one less dependency and also saves costs in storage since object stores are likely to be much cheaper compared to cost of a hosted NoSQL store or running a self hosted instance of Cassandra.

{{% admonition type="note" %}}
{{< admonition type="note" >}}
BoltDB shipper works best with 24h periodic index files. It is a requirement to have the index period set to 24h for either active or upcoming usage of boltdb-shipper.
If boltdb-shipper has already created index files with a 7-day period, and you want to retain previous data, add a new schema config using boltdb-shipper with a future date and the index files period set to 24h.
{{% /admonition %}}
{{< /admonition >}}

## Example Configuration

@@ -76,9 +76,9 @@ they both having shipped files for day `18371` and `18372` with prefix `loki_ind
...
```

{{% admonition type="note" %}}
{{< admonition type="note" >}}
Loki also adds a timestamp to the file names to randomize them and avoid overwriting files when running Ingesters with the same name without persistent storage. Timestamps are not shown here for simplicity.
{{% /admonition %}}
{{< /admonition >}}

Let us talk in more depth about how both Ingesters and Queriers work when running them with BoltDB Shipper.

@@ -89,9 +89,9 @@ and the BoltDB Shipper looks for new and updated files in that directory at 1 mi
When running Loki in microservices mode, there could be multiple ingesters serving write requests.
Each ingester generates BoltDB files locally.

{{% admonition type="note" %}}
{{< admonition type="note" >}}
To avoid any loss of index when an ingester crashes, we recommend running ingesters as a StatefulSet (when using Kubernetes) with a persistent storage for storing index files.
{{% /admonition %}}
{{< /admonition >}}

When chunks are flushed, they are available for reads in the object store instantly. The index is not available instantly, since we upload every 15 minutes with the BoltDB shipper.
Ingesters expose a new RPC for letting queriers query the ingester's local index for chunks which were recently flushed but whose index might not yet be available to queriers.
@@ -137,9 +137,9 @@ While using `boltdb-shipper` avoid configuring WriteDedupe cache since it is use
Compactor is a BoltDB Shipper specific service that reduces the index size by deduping the index and merging all the files to a single file per table.
We recommend running a Compactor since a single Ingester creates 96 files per day, which include a lot of duplicate index entries, and querying multiple files per table adds to the overall query latency.

{{% admonition type="note" %}}
{{< admonition type="note" >}}
There should be only one compactor instance running at a time; otherwise it could create problems and may lead to data loss.
{{% /admonition %}}
{{< /admonition >}}

Example compactor configuration with GCS:

8 changes: 4 additions & 4 deletions docs/sources/operations/storage/legacy-storage.md
@@ -6,11 +6,11 @@ weight: 1000
---
# Legacy storage

{{% admonition type="warning" %}}
{{< admonition type="warning" >}}
The concepts described on this page are considered legacy and pre-date the single store storage introduced in Loki 2.0.
Using the legacy storage for new installations is highly discouraged; this documentation is meant for informational
purposes in case of an upgrade to a single store.
{{% /admonition %}}
{{< /admonition >}}

The **chunk store** is the Loki long-term data store, designed to support
interactive querying and sustained writing without the need for background
Expand All @@ -27,11 +27,11 @@ maintenance tasks. It consists of:
- [Amazon S3](https://aws.amazon.com/s3)
- [Google Cloud Storage](https://cloud.google.com/storage/)

{{% admonition type="note" %}}
{{< admonition type="note" >}}
Unlike the other core components of Loki, the chunk store is not a separate
service, job, or process, but rather a library embedded in the two services
that need to access Loki data: the [ingester]({{< relref "../../get-started/components#ingester" >}}) and [querier]({{< relref "../../get-started/components#querier" >}}).
{{% /admonition %}}
{{< /admonition >}}

The chunk store relies on a unified interface to the
"[NoSQL](https://en.wikipedia.org/wiki/NoSQL)" stores (DynamoDB, Bigtable, and
4 changes: 2 additions & 2 deletions docs/sources/operations/storage/logs-deletion.md
@@ -22,9 +22,9 @@ Log entry deletion relies on configuration of the custom logs retention workflow
Enable log entry deletion by setting `retention_enabled` to true in the compactor's configuration and setting `deletion_mode` to `filter-only` or `filter-and-delete` in the runtime config.
`delete_request_store` also needs to be configured when retention is enabled to process delete requests; this determines the storage bucket that stores the delete requests.

{{% admonition type="warning" %}}
{{< admonition type="warning" >}}
Be very careful when enabling retention. It is strongly recommended that you also enable versioning on your objects in object storage to allow you to recover from accidental misconfiguration of a retention setting. If you want to enable deletion but do not want to enforce retention, configure the `retention_period` setting with a value of `0s`.
{{% /admonition %}}
{{< /admonition >}}
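
A minimal sketch of the settings named above (the object store type and tenant name are placeholders); the compactor settings live in the Loki configuration, while `deletion_mode` goes in the runtime configuration file:

```yaml
# Loki configuration (sketch)
compactor:
  retention_enabled: true
  delete_request_store: s3   # placeholder; the store that holds delete requests

# Runtime configuration file (sketch)
overrides:
  "tenant1":                 # placeholder tenant ID
    deletion_mode: filter-and-delete
```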

Because it is a runtime configuration, `deletion_mode` can be set per-tenant, if desired.
