diff --git a/website/docs/getting-started/installation.md b/website/docs/getting-started/installation.md
index 4c491f7d9..8fa691932 100644
--- a/website/docs/getting-started/installation.md
+++ b/website/docs/getting-started/installation.md
@@ -86,6 +86,6 @@ public void Configure(
 }
 ```
 
-Now you can create Message Handlers or access Producers and start exchanging events trouh Kafka.
+Now you can create Message Handlers or access Producers and start exchanging events through Kafka.
diff --git a/website/docs/guides/admin/web-api.md b/website/docs/guides/admin/web-api.md
index bc10878fe..de4381d83 100644
--- a/website/docs/guides/admin/web-api.md
+++ b/website/docs/guides/admin/web-api.md
@@ -177,7 +177,7 @@ await consumerAdmin.RestartConsumerAsync(consumerName);
 ```
 
 #### Reset Offsets
-Reset the offset of all topics listening by the Kafka consumers with the name and groupId informed. To achieve this, KafkaFlow needs to stop the consumers, search for the lowest offset value in each topic/partition, commit these offsets, and restart the consumers. This operation causes a rebalance between the consumers. ** All topic messages will be reprocessed **
+Reset the offset of all topics listening by the Kafka consumers with the name and groupId informed. To achieve this, KafkaFlow needs to stop the consumers, search for the lowest offset value in each topic/partition, commit these offsets, and restart the consumers. This operation causes a rebalance between the consumers. **All topic messages will be reprocessed**
 
 Endpoint
diff --git a/website/docs/guides/consumers/dynamic-workers-configuration.md b/website/docs/guides/consumers/dynamic-workers-configuration.md
index 3941d13bb..4720f01ca 100644
--- a/website/docs/guides/consumers/dynamic-workers-configuration.md
+++ b/website/docs/guides/consumers/dynamic-workers-configuration.md
@@ -39,7 +39,7 @@ Configuring Dynamic Worker Configuration is straightforward with the fluent inte
     )
 ```
 
-In this example, the number of worker threads is adjusted dynamically based on whether it's a peak hour or off-peak hour. You can implement your custom logic in the `WithWorkersCount`` method to suit your application's specific requirements.
+In this example, the number of worker threads is adjusted dynamically based on whether it's a peak hour or off-peak hour. You can implement your custom logic in the `WithWorkersCount` method to suit your application's specific requirements.
 
 That's it! Your KafkaFlow consumer will now dynamically adjust the number of worker threads based on your custom logic and the specified evaluation interval.
diff --git a/website/docs/guides/middlewares/middlewares.md b/website/docs/guides/middlewares/middlewares.md
index 5ba0011d5..1008b25bf 100644
--- a/website/docs/guides/middlewares/middlewares.md
+++ b/website/docs/guides/middlewares/middlewares.md
@@ -44,7 +44,7 @@ The message will be delivered as a byte array to the first middleware; you will
 
 ## When Producing
 
-The middlewares are called when the `Produce` or `PoduceAsync` of the `IMessageProducer` is called. After all the middlewares execute, the message will be published to Kafka.
+The middlewares are called when the `Produce` or `ProduceAsync` of the `IMessageProducer` is called. After all the middlewares execute, the message will be published to Kafka.
 
 ## Creating a middleware
diff --git a/website/docs/guides/middlewares/serializer-middleware.md b/website/docs/guides/middlewares/serializer-middleware.md
index 8005240c5..fded57bf8 100644
--- a/website/docs/guides/middlewares/serializer-middleware.md
+++ b/website/docs/guides/middlewares/serializer-middleware.md
@@ -43,7 +43,7 @@ services.AddKafka(kafka => kafka
         resolver => new JsonMessageSerializer(...),
         resolver => new YourTypeResolver(...))
     // or
-    .AddSingleTypeSerializer()
+    .AddSingleTypeSerializer()
     // or
     .AddSingleTypeSerializer(resolver => new JsonMessageSerializer(...))
     ...