Merge branch 'main' into addDesc
JStickler authored Jan 5, 2024
2 parents 837298e + e1a8141 commit b538f8c
Showing 109 changed files with 5,579 additions and 1,174 deletions.
4 changes: 3 additions & 1 deletion CHANGELOG.md
@@ -6,7 +6,8 @@

##### Enhancements

* [11571](https://github.com/grafana/loki/pull/11571) **MichelHollands**: Add a metrics.go log line for requests from querier to ingester
* [11477](https://github.com/grafana/loki/pull/11477) **MichelHollands**: support GET for /ingester/shutdown
* [11363](https://github.com/grafana/loki/pull/11363) **kavirajk**: bugfix(memcached): Make memcached batch fetch truly context aware.
* [11319](https://github.com/grafana/loki/pull/11319) **someStrangerFromTheAbyss**: Helm: Add extraContainers to the write pods.
* [11243](https://github.com/grafana/loki/pull/11243) **kavirajk**: Inflight-logging: Add extra metadata to inflight requests logging.
@@ -44,6 +45,7 @@
* [11284](https://github.com/grafana/loki/pull/11284) **ashwanthgoli** Config: Adds `frontend.max-query-capacity` to tune per-tenant query capacity.
* [11539](https://github.com/grafana/loki/pull/11539) **kaviraj,ashwanthgoli** Support caching /series and /labels query results
* [11545](https://github.com/grafana/loki/pull/11545) **dannykopping** Force correct memcached timeout when fetching chunks.
* [11589](https://github.com/grafana/loki/pull/11589) **ashwanthgoli** Results Cache: Adds `query_length_served` cache stat to measure the length of the query served from cache.

##### Fixes
* [11074](https://github.com/grafana/loki/pull/11074) **hainenber** Fix panic in lambda-promtail due to mishandling of empty DROP_LABELS env var.
12 changes: 12 additions & 0 deletions docs/sources/reference/api.md
@@ -152,6 +152,10 @@ The API accepts several formats for timestamps:
* A floating point number is a Unix timestamp with fractions of a second.
* A string in `RFC3339` and `RFC3339Nano` format, as supported by Go's [time](https://pkg.go.dev/time) package.

{{% admonition type="note" %}}
When using `/api/v1/push`, you must send the timestamp as a string, not a number; otherwise the endpoint returns a 400 error.
{{% /admonition %}}
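For illustration, a push payload with a correctly string-encoded timestamp can be built like this (a minimal sketch using only Python's standard library; the label set and log line are placeholders, and the resulting body would be POSTed to `<loki>/loki/api/v1/push` with `Content-Type: application/json`):

```python
import json
import time

# Build a /loki/api/v1/push payload. Loki expects the timestamp as a
# *string* of Unix epoch nanoseconds; a bare number is rejected with 400.
ts_ns = str(time.time_ns())  # e.g. "1704447600000000000"
payload = {
    "streams": [
        {
            "stream": {"job": "varlogs"},     # label set (placeholder)
            "values": [[ts_ns, "log line"]],  # [<string timestamp>, <line>]
        }
    ]
}
body = json.dumps(payload)
# POST `body` to <loki>/loki/api/v1/push, e.g. with urllib.request or curl.
print(type(payload["streams"][0]["values"][0][0]).__name__)  # str
```

Note that `time.time_ns()` returns an integer; converting it with `str()` is what keeps the endpoint from rejecting the request.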

### Statistics

Query endpoints such as `/loki/api/v1/query` and `/loki/api/v1/query_range` return a set of statistics about the query execution. Those statistics allow users to understand the amount of data processed and at which speed.
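As a sketch of what consuming those statistics can look like (the response below is abbreviated and the numbers are invented; a `data.stats.summary` object with fields such as `totalBytesProcessed` and `execTime` is assumed here):

```python
import json

# Abbreviated /loki/api/v1/query_range response; the stats values are made up.
response = json.loads("""
{
  "status": "success",
  "data": {
    "result": [],
    "stats": {
      "summary": {
        "totalBytesProcessed": 1048576,
        "execTime": 0.25
      }
    }
  }
}
""")

summary = response["data"]["stats"]["summary"]
mb = summary["totalBytesProcessed"] / (1024 * 1024)
print(f"processed {mb:.1f} MiB in {summary['execTime']}s")  # processed 1.0 MiB in 0.25s
```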
@@ -404,6 +408,14 @@ curl -u "Tenant1|Tenant2|Tenant3:$API_TOKEN" \
--data-urlencode 'query=sum(rate({job="varlogs"}[10m])) by (level)' | jq
```


To query against your hosted log tenant in Grafana Cloud, use the **User** and **URL** values provided in the Loki logging service details of your Grafana Cloud stack. You can find this information in the [Cloud Portal](https://grafana.com/docs/grafana-cloud/account-management/cloud-portal/#your-grafana-cloud-stack). Use an access policy token in your queries for authentication. The password in this example is an access policy token that has been defined in the `API_TOKEN` environment variable:
```bash
curl -u "User:$API_TOKEN" \
-G -s "<URL-PROVIDED-IN-LOKI-DATA-SOURCE-SETTINGS>/loki/api/v1/query" \
--data-urlencode 'query=sum(rate({job="varlogs"}[10m])) by (level)' | jq
```

## Query logs within a range of time

```
7 changes: 4 additions & 3 deletions go.mod
@@ -51,7 +51,7 @@ require (
github.com/gorilla/mux v1.8.0
github.com/gorilla/websocket v1.5.0
github.com/grafana/cloudflare-go v0.0.0-20230110200409-c627cf6792f2
github.com/grafana/dskit v0.0.0-20240104111617-ea101a3b86eb
github.com/grafana/go-gelf/v2 v2.0.1
github.com/grafana/gomemcache v0.0.0-20231204155601-7de47a8c3cb0
github.com/grafana/regexp v0.0.0-20221122212121-6b5c0a4cb7fd
@@ -66,7 +66,7 @@
github.com/jmespath/go-jmespath v0.4.0
github.com/joncrlsn/dque v0.0.0-20211108142734-c2ef48c5192a
github.com/json-iterator/go v1.1.12
github.com/klauspost/compress v1.17.3
github.com/klauspost/pgzip v1.2.5
github.com/mattn/go-ieproxy v0.0.1
github.com/minio/minio-go/v7 v7.0.61
@@ -115,7 +115,7 @@

require (
github.com/Azure/go-autorest/autorest v0.11.29
github.com/DataDog/sketches-go v1.4.4
github.com/DmitriyVTitov/size v1.5.0
github.com/IBM/go-sdk-core/v5 v5.13.1
github.com/IBM/ibm-cos-sdk-go v1.10.0
@@ -235,6 +235,7 @@
github.com/googleapis/enterprise-certificate-proxy v0.2.5 // indirect
github.com/googleapis/gax-go/v2 v2.12.0 // indirect
github.com/gophercloud/gophercloud v1.5.0 // indirect
github.com/grafana/pyroscope-go/godeltaprof v0.1.6 // indirect
github.com/hailocab/go-hostpool v0.0.0-20160125115350-e80d13ce29ed // indirect
github.com/hashicorp/errwrap v1.1.0 // indirect
github.com/hashicorp/go-cleanhttp v0.5.2 // indirect
14 changes: 8 additions & 6 deletions go.sum
@@ -231,8 +231,8 @@ github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03
github.com/BurntSushi/xgb v0.0.0-20160522181843-27f122750802/go.mod h1:IVnqGOEym/WlBOVXweHU+Q+/VP0lqqI8lqeDx9IjBqo=
github.com/DataDog/datadog-go v2.2.0+incompatible/go.mod h1:LButxg5PwREeZtORoXG3tL4fMGNddJ+vMq1mwgfaqoQ=
github.com/DataDog/datadog-go v3.2.0+incompatible/go.mod h1:LButxg5PwREeZtORoXG3tL4fMGNddJ+vMq1mwgfaqoQ=
github.com/DataDog/sketches-go v1.4.4 h1:dF52vzXRFSPOj2IjXSWLvXq3jubL4CI69kwYjJ1w5Z8=
github.com/DataDog/sketches-go v1.4.4/go.mod h1:XR0ns2RtEEF09mDKXiKZiQg+nfZStrq1ZuL1eezeZe0=
github.com/DataDog/zstd v1.3.5/go.mod h1:1jcaCB/ufaK+sKp1NBhlGmpz41jOoPQ35bpF36t7BBo=
github.com/DmitriyVTitov/size v1.5.0 h1:/PzqxYrOyOUX1BXj6J9OuVRVGe+66VL4D9FlUaW515g=
github.com/DmitriyVTitov/size v1.5.0/go.mod h1:le6rNI4CoLQV1b9gzp1+3d7hMAD/uu2QcJ+aYbNgiU0=
@@ -995,8 +995,8 @@ github.com/gorilla/websocket v1.5.0 h1:PPwGk2jz7EePpoHN/+ClbZu8SPxiqlu12wZP/3sWm
github.com/gorilla/websocket v1.5.0/go.mod h1:YR8l580nyteQvAITg2hZ9XVh4b55+EU/adAjf1fMHhE=
github.com/grafana/cloudflare-go v0.0.0-20230110200409-c627cf6792f2 h1:qhugDMdQ4Vp68H0tp/0iN17DM2ehRo1rLEdOFe/gB8I=
github.com/grafana/cloudflare-go v0.0.0-20230110200409-c627cf6792f2/go.mod h1:w/aiO1POVIeXUQyl0VQSZjl5OAGDTL5aX+4v0RA1tcw=
github.com/grafana/dskit v0.0.0-20240104111617-ea101a3b86eb h1:AWE6+kvtE18HP+lRWNUCyvymyrFSXs6TcS2vXIXGIuw=
github.com/grafana/dskit v0.0.0-20240104111617-ea101a3b86eb/go.mod h1:kkWM4WUV230bNG3urVRWPBnSJHs64y/0RmWjftnnn0c=
github.com/grafana/go-gelf/v2 v2.0.1 h1:BOChP0h/jLeD+7F9mL7tq10xVkDG15he3T1zHuQaWak=
github.com/grafana/go-gelf/v2 v2.0.1/go.mod h1:lexHie0xzYGwCgiRGcvZ723bSNyNI8ZRD4s0CLobh90=
github.com/grafana/gocql v0.0.0-20200605141915-ba5dc39ece85 h1:xLuzPoOzdfNb/RF/IENCw+oLVdZB4G21VPhkHBgwSHY=
@@ -1005,6 +1005,8 @@ github.com/grafana/gomemcache v0.0.0-20231204155601-7de47a8c3cb0 h1:aLBiDMjTtXx2
github.com/grafana/gomemcache v0.0.0-20231204155601-7de47a8c3cb0/go.mod h1:PGk3RjYHpxMM8HFPhKKo+vve3DdlPUELZLSDEFehPuU=
github.com/grafana/memberlist v0.3.1-0.20220714140823-09ffed8adbbe h1:yIXAAbLswn7VNWBIvM71O2QsgfgW9fRXZNR0DXe6pDU=
github.com/grafana/memberlist v0.3.1-0.20220714140823-09ffed8adbbe/go.mod h1:MS2lj3INKhZjWNqd3N0m3J+Jxf3DAOnAH9VT3Sh9MUE=
github.com/grafana/pyroscope-go/godeltaprof v0.1.6 h1:nEdZ8louGAplSvIJi1HVp7kWvFvdiiYg3COLlTwJiFo=
github.com/grafana/pyroscope-go/godeltaprof v0.1.6/go.mod h1:Tk376Nbldo4Cha9RgiU7ik8WKFkNpfds98aUzS8omLE=
github.com/grafana/regexp v0.0.0-20221122212121-6b5c0a4cb7fd h1:PpuIBO5P3e9hpqBD0O/HjhShYuM6XE0i/lbE6J94kww=
github.com/grafana/regexp v0.0.0-20221122212121-6b5c0a4cb7fd/go.mod h1:M5qHK+eWfAv8VR/265dIuEpL3fNfeC21tXXp9itM24A=
github.com/grafana/tail v0.0.0-20230510142333-77b18831edf0 h1:bjh0PVYSVVFxzINqPFYJmAmJNrWPgnVjuSdYJGHmtFU=
@@ -1239,8 +1241,8 @@ github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+o
github.com/klauspost/compress v1.9.5/go.mod h1:RyIbtBH6LamlWaDj8nUwkbUhJ87Yi3uG0guNDohfE1A=
github.com/klauspost/compress v1.11.0/go.mod h1:aoV0uJVorq1K+umq18yTdKaF57EivdYsUV+/s2qKfXs=
github.com/klauspost/compress v1.13.6/go.mod h1:/3/Vjq9QcHkK5uEr5lBEmyoZ1iFhe47etQ6QUkpK6sk=
github.com/klauspost/compress v1.17.3 h1:qkRjuerhUU1EmXLYGkSH6EZL+vPSxIrYjLNAK4slzwA=
github.com/klauspost/compress v1.17.3/go.mod h1:/dCuZOvVtNoHsyb+cuJD3itjs3NbnF6KH9zAO4BDxPM=
github.com/klauspost/cpuid v1.2.0/go.mod h1:Pj4uuM528wm8OyEC2QMXAi2YiTZ96dNQPGgoMS4s3ek=
github.com/klauspost/cpuid/v2 v2.0.1/go.mod h1:FInQzS24/EEf25PyTYn52gqo7WaD8xa0213Md/qVLRg=
github.com/klauspost/cpuid/v2 v2.2.5 h1:0E5MSMDEoAulmXNFquVs//DdoomxaoTY1kUhbc/qbZg=
60 changes: 44 additions & 16 deletions pkg/bloomcompactor/bloomcompactor.go
@@ -28,6 +28,7 @@ import (
"context"
"fmt"
"math"
"math/rand"
"os"
"time"

@@ -303,7 +304,7 @@ func (c *Compactor) compactUsers(ctx context.Context, logger log.Logger, sc stor
continue
}

// Skip this table if it is too old for the tenant limits.
now := model.Now()
tableMaxAge := c.limits.BloomCompactorMaxTableAge(tenant)
if tableMaxAge > 0 && tableInterval.Start.Before(now.Add(-tableMaxAge)) {
@@ -352,20 +353,24 @@ func (c *Compactor) compactTenant(ctx context.Context, logger log.Logger, sc sto
}

// Tokenizer is not thread-safe so we need one per goroutine.
nGramLen := c.limits.BloomNGramLength(tenant)
nGramSkip := c.limits.BloomNGramSkip(tenant)
bt := v1.NewBloomTokenizer(nGramLen, nGramSkip, c.btMetrics)

errs := multierror.New()
rs, err := c.sharding.GetTenantSubRing(tenant).GetAllHealthy(RingOp)
if err != nil {
return err
}
tokenRanges := bloomutils.GetInstanceWithTokenRange(c.cfg.Ring.InstanceID, rs.Instances)
for _, tr := range tokenRanges {
level.Debug(logger).Log("msg", "got token range for instance", "id", tr.Instance.Id, "min", tr.MinToken, "max", tr.MaxToken)
}

_ = sc.indexShipper.ForEach(ctx, tableName, tenant, func(isMultiTenantIndex bool, idx shipperindex.Index) error {
if isMultiTenantIndex {
// Skip multi-tenant indexes
level.Debug(logger).Log("msg", "skipping multi-tenant index", "table", tableName, "index", idx.Name())
return nil
}

@@ -396,13 +401,19 @@
// All seriesMetas for a given table within the fingerprint range of this compactor shard
seriesMetas = append(seriesMetas, seriesMeta{seriesFP: fingerprint, seriesLbs: labels, chunkRefs: temp})
},
labels.MustNewMatcher(labels.MatchEqual, "", ""),
)

if err != nil {
errs.Add(err)
return nil
}

if len(seriesMetas) == 0 {
level.Debug(logger).Log("msg", "skipping index because it does not have any matching series", "table", tableName, "index", idx.Name())
return nil
}

job := NewJob(tenant, tableName, idx.Path(), seriesMetas)
jobLogger := log.With(logger, "job", job.String())
c.metrics.compactionRunJobStarted.Inc()
@@ -486,12 +497,13 @@ func (c *Compactor) runCompact(ctx context.Context, logger log.Logger, job Job,
localDst := createLocalDirName(c.cfg.WorkingDirectory, job)
blockOptions := v1.NewBlockOptions(bt.GetNGramLength(), bt.GetNGramSkip())

// TODO(poyzannur) enable once debugging is over
//defer func() {
// //clean up the bloom directory
// if err := os.RemoveAll(localDst); err != nil {
// level.Error(logger).Log("msg", "failed to remove block directory", "dir", localDst, "err", err)
// }
//}()

var resultingBlock bloomshipper.Block
defer func() {
@@ -507,6 +519,7 @@
return nil
} else if len(metasMatchingJob) == 0 {
// No matching existing blocks for this job, compact all series from scratch
level.Info(logger).Log("msg", "No matching existing blocks for this job, compact all series from scratch")

builder, err := NewPersistentBlockBuilder(localDst, blockOptions)
if err != nil {
@@ -522,6 +535,7 @@

} else if len(blocksMatchingJob) > 0 {
// When compacted metas already exist, we need to merge all blocks, amending blooms with new series
level.Info(logger).Log("msg", "compacted metas already exist, using mergeBlockBuilder")

var populate = createPopulateFunc(ctx, logger, job, storeClient, bt)

@@ -560,12 +574,14 @@ func (c *Compactor) runCompact(ctx context.Context, logger log.Logger, job Job,
level.Error(logger).Log("msg", "failed compressing bloom blocks into tar file", "err", err)
return err
}

// TODO(poyzannur) enable once debugging is over
//defer func() {
// err = os.Remove(archivePath)
// if err != nil {
// level.Error(logger).Log("msg", "failed removing archive file", "err", err, "file", archivePath)
// }
//}()

// Do not change the signature of PutBlocks yet.
// Once block size is limited potentially, compactNewChunks will return multiple blocks, hence a list is appropriate.
@@ -583,9 +599,21 @@
// TODO delete old metas in later compactions
// After all is done, create one meta file and upload to storage
meta := bloomshipper.Meta{
MetaRef: bloomshipper.MetaRef{
Ref: bloomshipper.Ref{
TenantID: job.tenantID,
TableName: job.tableName,
MinFingerprint: uint64(job.minFp),
MaxFingerprint: uint64(job.maxFp),
StartTimestamp: job.from,
EndTimestamp: job.through,
Checksum: rand.Uint32(), // TODO: discuss whether a checksum is needed for Metas; why should we read all the data again?
},
},
Tombstones: blocksMatchingJob,
Blocks: activeBloomBlocksRefs,
}

err = c.bloomShipperClient.PutMeta(ctx, meta)
if err != nil {
level.Error(logger).Log("msg", "failed uploading meta.json to storage", "err", err)
14 changes: 10 additions & 4 deletions pkg/bloomcompactor/chunkcompactor.go
Expand Up @@ -7,6 +7,8 @@ import (
"os"
"path/filepath"

"github.com/google/uuid"

"github.com/go-kit/log"
"github.com/go-kit/log/level"
"github.com/prometheus/common/model"
@@ -148,7 +150,7 @@ func buildBlockFromBlooms(
}

func createLocalDirName(workingDir string, job Job) string {
dir := fmt.Sprintf("bloomBlock-%s-%s-%s-%s-%d-%d-%s", job.tableName, job.tenantID, job.minFp, job.maxFp, job.from, job.through, uuid.New().String())
return filepath.Join(workingDir, dir)
}

@@ -167,7 +169,7 @@
return bloomshipper.Block{}, err
}

bloomIter := newLazyBloomBuilder(ctx, job, storeClient, bt, fpRate, logger)

// Build and upload bloomBlock to storage
block, err := buildBlockFromBlooms(ctx, logger, builder, bloomIter, job)
@@ -186,6 +188,7 @@ type lazyBloomBuilder struct {
client chunkClient
bt compactorTokenizer
fpRate float64
logger log.Logger

cur v1.SeriesWithBloom // returned by At()
err error // returned by Err()
@@ -195,21 +198,22 @@
// which are used by the blockBuilder to write a bloom block.
// We use an iterator to avoid loading all blooms into memory first, before
// building the block.
func newLazyBloomBuilder(ctx context.Context, job Job, client chunkClient, bt compactorTokenizer, fpRate float64, logger log.Logger) *lazyBloomBuilder {
return &lazyBloomBuilder{
ctx: ctx,
metas: v1.NewSliceIter(job.seriesMetas),
client: client,
tenant: job.tenantID,
bt: bt,
fpRate: fpRate,
logger: logger,
}
}

func (it *lazyBloomBuilder) Next() bool {
if !it.metas.Next() {
it.err = io.EOF
it.cur = v1.SeriesWithBloom{}
level.Debug(it.logger).Log("msg", "No seriesMeta")
return false
}
meta := it.metas.At()
@@ -219,13 +223,15 @@
if err != nil {
it.err = err
it.cur = v1.SeriesWithBloom{}
level.Debug(it.logger).Log("msg", "error in getChunks", "err", err)
return false
}

it.cur, err = buildBloomFromSeries(meta, it.fpRate, it.bt, chks)
if err != nil {
it.err = err
it.cur = v1.SeriesWithBloom{}
level.Debug(it.logger).Log("msg", "error in buildBloomFromSeries", "err", err)
return false
}
return true
4 changes: 3 additions & 1 deletion pkg/bloomcompactor/chunkcompactor_test.go
@@ -127,6 +127,8 @@ func TestChunkCompactor_CompactNewChunks(t *testing.T) {
}

func TestLazyBloomBuilder(t *testing.T) {
logger := log.NewNopLogger()

label := labels.FromStrings("foo", "bar")
fp1 := model.Fingerprint(100)
fp2 := model.Fingerprint(999)
@@ -167,7 +169,7 @@
mbt := &mockBloomTokenizer{}
mcc := &mockChunkClient{}

it := newLazyBloomBuilder(context.Background(), job, mcc, mbt, fpRate, logger)

// first seriesMeta has 1 chunks
require.True(t, it.Next())
5 changes: 5 additions & 0 deletions pkg/bloomgateway/client_test.go
@@ -426,3 +426,8 @@ func (*mockRing) ShuffleShardWithLookback(_ string, _ int, _ time.Duration, _ ti
func (*mockRing) CleanupShuffleShardCache(_ string) {
panic("unimplemented")
}

func (r *mockRing) GetTokenRangesForInstance(_ string) (ring.TokenRanges, error) {
tr := ring.TokenRanges{0, math.MaxUint32}
return tr, nil
}
4 changes: 2 additions & 2 deletions pkg/compactor/deletion/request_handler_test.go
@@ -245,7 +245,7 @@ func TestCancelDeleteRequestHandler(t *testing.T) {
store.getErr = errors.New("something bad")
h := NewDeleteRequestHandler(store, 0, nil)

req := buildRequest("orgid", ``, "", "")
params := req.URL.Query()
params.Set("request_id", "test-request")
req.URL.RawQuery = params.Encode()
@@ -411,7 +411,7 @@ func TestGetAllDeleteRequestsHandler(t *testing.T) {
store.getAllErr = errors.New("something bad")
h := NewDeleteRequestHandler(store, 0, nil)

req := buildRequest("orgid", ``, "", "")
params := req.URL.Query()
params.Set("request_id", "test-request")
req.URL.RawQuery = params.Encode()
