A flexible and configurable Go package for automatically retrying operations that may fail intermittently.
- Supports various backoff strategies:
  - Linear
  - Constant
  - Exponential with jitter
  - Random interval
  - Custom backoff strategy
- Generic result handling with DoWithResult
- Context cancellation support
- Custom logging capabilities
- Conditional retries based on error types
- Configurable through functional options
- Comprehensive error handling with standard errors package integration
- Error wrapping with attempt information
To install the retry package (Go 1.18 or later is required for the generics used by DoWithResult), run:
go get github.com/simp-lee/retry
package main

import (
	"fmt"
	"time"

	"github.com/simp-lee/retry"
)

func main() {
	// Retry the operation up to 5 times with a 2-second linear backoff
	err := retry.Do(someFunction,
		retry.WithTimes(5),
		retry.WithLinearBackoff(2*time.Second))
	if err != nil {
		if retryErr, ok := err.(*retry.Error); ok {
			fmt.Printf("Operation failed after %d attempts. Errors: %v\n", retryErr.MaxRetries, retryErr.Errors)
		} else {
			fmt.Printf("Operation failed: %v\n", err)
		}
	} else {
		fmt.Println("Operation succeeded")
	}
}

func someFunction() error {
	// Your operation that might fail
	return nil
}
For functions that return both a value and an error, use DoWithResult. This function uses Go 1.18+ generics to provide type-safe handling of any return type:
result, err := retry.DoWithResult(func() (string, error) {
	// Function that returns a string result and possibly an error
	return "success", nil
}, retry.WithTimes(3))
if err != nil {
	fmt.Printf("Operation failed: %v\n", err)
} else {
	fmt.Printf("Operation succeeded with result: %s\n", result)
}
Linear backoff increases the wait by the base interval after each attempt:

retry.Do(someFunction, retry.WithTimes(5), retry.WithLinearBackoff(2*time.Second))
// Retry intervals: 2s, 4s, 6s, 8s, 10s
Constant backoff waits the same interval between every attempt:

retry.Do(someFunction, retry.WithTimes(5), retry.WithConstantBackoff(2*time.Second))
// Retry intervals: 2s, 2s, 2s, 2s, 2s
Exponential backoff doubles the interval after each attempt, adds up to the configured jitter, and caps at the maximum interval:

retry.Do(someFunction, retry.WithTimes(4), retry.WithExponentialBackoff(1*time.Second, 10*time.Second, 500*time.Millisecond))
// Retry intervals: 1s (+jitter), 2s (+jitter), 4s (+jitter), 8s (+jitter)
Random interval backoff picks each wait at random between the minimum and maximum interval:

retry.Do(someFunction, retry.WithTimes(5), retry.WithRandomIntervalBackoff(1*time.Second, 3*time.Second))
// Retry intervals: random values between 1s and 3s
To supply your own strategy, implement a type with CalculateInterval and Name methods and pass it to WithCustomBackoff:

type CustomBackoffStrategy struct {
	MaxInterval time.Duration
}

func (c *CustomBackoffStrategy) CalculateInterval(attempt int) time.Duration {
	interval := time.Duration(attempt*attempt) * time.Second // quadratic backoff
	if interval > c.MaxInterval {
		return c.MaxInterval
	}
	return interval
}

func (c *CustomBackoffStrategy) Name() string {
	return "Custom"
}

customBackoff := &CustomBackoffStrategy{
	MaxInterval: 10 * time.Second,
}

retry.Do(someFunction, retry.WithTimes(5), retry.WithCustomBackoff(customBackoff))
// Retry intervals: 0s, 1s, 4s, 9s, 10s (quadratic growth with cap)
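For reference, the Backoff interface that WithCustomBackoff expects presumably mirrors the two methods implemented above; the following is an inferred sketch, not the package's actual source:

type Backoff interface {
	// CalculateInterval returns the delay before a given attempt
	// (the intervals listed above suggest the attempt index starts at 0).
	CalculateInterval(attempt int) time.Duration
	// Name identifies the strategy, e.g. in log output.
	Name() string
}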
Cancel or time-limit the whole retry loop with a context:

ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
defer cancel()

err := retry.Do(someFunction,
	retry.WithTimes(5),
	retry.WithLinearBackoff(2*time.Second),
	retry.WithContext(ctx))
Route the package's log output through your own logger (here the standard library's log/slog):

logFunc := func(format string, args ...interface{}) {
	slog.Warn(fmt.Sprintf(format, args...))
}

err := retry.Do(someFunction,
	retry.WithTimes(5),
	retry.WithConstantBackoff(2*time.Second),
	retry.WithLogger(logFunc))
Retry only when specific errors occur:
// Only retry on network errors, not on validation errors
condition := func(err error) bool {
	var netErr *net.OpError
	return errors.As(err, &netErr)
}

err := retry.Do(someFunction,
	retry.WithTimes(5),
	retry.WithConstantBackoff(2*time.Second),
	retry.WithRetryCondition(condition))
Add attempt information to errors:
err := retry.Do(someFunction,
	retry.WithTimes(3),
	retry.WithConstantBackoff(1*time.Second),
	retry.WithErrorWrapping(true))
// Errors will be wrapped as: "attempt 1 failed: original error"
The package provides robust error handling with standard library compatibility:
err := retry.Do(someFunction)
if err != nil {
	if retry.IsRetryError(err) {
		fmt.Printf("All %d retry attempts failed\n", retry.GetAttemptsCount(err))

		// Get all errors from retry attempts
		allErrors := retry.GetRetryErrors(err)
		fmt.Printf("Errors encountered: %d\n", len(allErrors))

		// Check for specific error types
		if errors.Is(err, io.EOF) {
			fmt.Println("One of the retry attempts encountered EOF")
		}

		// Handle network errors differently
		var netErr *net.OpError
		if errors.As(err, &netErr) {
			fmt.Printf("Network error occurred: %v\n", netErr.Err)
		}
	} else if errors.Is(err, context.Canceled) {
		fmt.Println("Retry was canceled")
	} else if errors.Is(err, context.DeadlineExceeded) {
		fmt.Println("Retry timed out")
	} else {
		fmt.Printf("Other error occurred: %v\n", err)
	}
	return
}

fmt.Println("Operation succeeded")
- retry.Do(retryFunc RetryFunc, options ...Option) error - Execute a function with retries
- retry.DoWithResult[T any](retryFunc func() (T, error), options ...Option) (T, error) - Execute a function that returns a value, with retries

- retry.WithTimes(maxRetries int) Option - Set the maximum number of retry attempts
- retry.WithLinearBackoff(interval time.Duration) Option - Use the linear backoff strategy
- retry.WithConstantBackoff(interval time.Duration) Option - Use the constant backoff strategy
- retry.WithExponentialBackoff(initialInterval, maxInterval, maxJitter time.Duration) Option - Use exponential backoff with jitter
- retry.WithRandomIntervalBackoff(minInterval, maxInterval time.Duration) Option - Use the random interval backoff strategy
- retry.WithCustomBackoff(backoff Backoff) Option - Use a custom backoff strategy
- retry.WithContext(ctx context.Context) Option - Set a context for cancellation
- retry.WithLogger(logFunc func(format string, args ...interface{})) Option - Set a custom logger
- retry.WithRetryCondition(condition RetryConditionFunc) Option - Set a condition for selective retries
- retry.WithErrorWrapping(wrap bool) Option - Enable or disable error wrapping with attempt information

- retry.IsRetryError(err error) bool - Check whether an error is a retry error
- retry.GetAttemptsCount(err error) int - Get the number of attempts made
- retry.GetRetryErrors(err error) []error - Get all errors from the retry attempts
- Use retries for transient failures, not for business logic errors.
- Choose appropriate retry counts and backoff strategies based on your specific use case.
- Always set a maximum retry time or count to prevent infinite loops.
- Use context for timeouts to ensure your retries don't run indefinitely.
- Be mindful of the impact of retries on the system you're interacting with.
- Use custom logging to monitor and debug retry behavior.
- For APIs or remote services, consider using exponential backoff with jitter to prevent thundering herd problems (see the sketch after this list).
- Use conditional retries to avoid retrying on permanent errors.
- When dealing with specific error types, use errors.Is and errors.As with retry errors to check for specific error conditions.
- Monitor the retry count and duration to identify frequent failures that might need broader system investigation.
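As a rough sketch of how these guidelines combine, using only the options documented above (callRemoteAPI and isTransient are hypothetical placeholders for your own operation and retry condition):

ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
defer cancel()

err := retry.Do(callRemoteAPI, // hypothetical remote call that may fail transiently
	retry.WithTimes(5), // bounded attempt count to avoid retrying forever
	retry.WithExponentialBackoff(500*time.Millisecond, 10*time.Second, 200*time.Millisecond), // jittered backoff against thundering herds
	retry.WithContext(ctx),                // overall deadline across all attempts
	retry.WithRetryCondition(isTransient), // hypothetical func(error) bool that skips permanent errors
	retry.WithLogger(func(format string, args ...interface{}) {
		slog.Warn(fmt.Sprintf(format, args...)) // surface retry activity for monitoring
	}))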
Contributions are welcome! Please open an issue or submit a pull request with your changes. Make sure to include tests for new features or bug fixes.
This project is licensed under the MIT License.