Summary
This issue proposes performance optimizations for @octokit/auth-app to reduce crypto overhead and improve efficiency for high-volume GitHub Apps. The current implementation performs expensive cryptographic operations on every JWT generation, which can become a bottleneck in production environments with frequent authentication requests.
Current State Analysis
What's Already Optimized ✅
- Installation access tokens: Cached using toad-cache with a 59-minute TTL and 15,000-token capacity (sketched below)
- Token deduplication: Concurrent requests for the same installation are deduplicated
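For reference, a minimal sketch of how that existing cache is typically set up, assuming toad-cache's Lru class and its (max, ttl) constructor; the real wiring lives in src/cache.ts:

// Sketch of the existing installation-token cache (values from the description above)
import { Lru } from "toad-cache";

// 15,000 entries, 59-minute TTL (installation tokens are valid for 60 minutes)
const installationTokenCache = new Lru(15000, 1000 * 60 * 59);

// Illustrative key shape; the real key also encodes permissions/repositories
const installationCacheKey = (installationId) => `${installationId}`;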
What's Missing ⚠️
- JWT generation caching: No caching for app-level JWTs (10-minute lifetime)
- Private key parsing: PEM/PKCS#8 conversion happens on every JWT generation
- Crypto key import: subtle.importKey() is called repeatedly for the same private key
- Proactive token refresh: No early refresh before expiration
Optimization Opportunities
1. JWT Token Caching (High Impact)
Current behavior:
// Every call to auth({ type: "app" }) generates a new JWT
const appAuth = await auth({ type: "app" });
Issue: get-app-authentication.ts generates a new JWT on every call, even though JWTs have a 10-minute lifetime. This involves:
- Private key string manipulation (newline replacement)
- PEM to DER conversion via getDERfromPEM()
- PKCS#8 format validation and conversion
- RSA key import via subtle.importKey()
- RSA-SHA256 signature generation via subtle.sign()
Recommendation: Cache JWT tokens with TTL-based invalidation
// Cache JWT tokens similar to installation tokens
const jwtCache = new Map();
const JWT_CACHE_TTL = 9 * 60 * 1000; // 9 minutes (1 minute buffer)

async function getCachedJWT(appId, privateKey) {
  const cacheKey = `jwt:${appId}`;
  const cached = jwtCache.get(cacheKey);
  if (cached && cached.expiresAt > Date.now()) {
    return cached.token;
  }
  const jwt = await generateJWT(appId, privateKey);
  jwtCache.set(cacheKey, {
    token: jwt.token,
    expiresAt: Date.now() + JWT_CACHE_TTL
  });
  return jwt.token;
}
Estimated impact: 95%+ reduction in crypto operations for repeated app-level authentications
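One caveat with the sketch above: two concurrent callers can both miss the cache and each generate a JWT. A hedged extension (reusing the hypothetical getCachedJWT, generateJWT, jwtCache, and JWT_CACHE_TTL names from above) that deduplicates in-flight generation, mirroring the installation-token deduplication already in place:

// Deduplicate concurrent JWT generation per app
const inflightJWT = new Map();

async function getCachedJWTDeduped(appId, privateKey) {
  const cacheKey = `jwt:${appId}`;
  const cached = jwtCache.get(cacheKey);
  if (cached && cached.expiresAt > Date.now()) {
    return cached.token;
  }
  // Reuse an in-flight generation if one exists, otherwise start one
  let pending = inflightJWT.get(cacheKey);
  if (!pending) {
    pending = generateJWT(appId, privateKey)
      .then((jwt) => {
        jwtCache.set(cacheKey, { token: jwt.token, expiresAt: Date.now() + JWT_CACHE_TTL });
        return jwt.token;
      })
      .finally(() => inflightJWT.delete(cacheKey));
    inflightJWT.set(cacheKey, pending);
  }
  return pending;
}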
2. Private Key Parsing Cache (Medium-High Impact)
Current behavior in universal-github-app-jwt:
// lib/get-token.js - runs on EVERY JWT generation
const privateKeyWithNewlines = privateKey.replace(/\\n/g, '\n');
const convertedKey = await convertPrivateKey(privateKeyWithNewlines);
const der = getDERfromPEM(convertedKey);
const cryptoKey = await crypto.subtle.importKey(/* ... */);
Issue: Private key processing involves:
- String manipulation (newline replacement)
- PKCS#8 format conversion
- PEM to DER conversion
- WebCrypto key import (expensive async crypto operation)
Recommendation: Cache the imported CryptoKey object
// WeakMap keys must be objects and the private key is a string, so use a regular Map;
// a process rarely holds more than a handful of distinct keys
const cryptoKeyCache = new Map();

async function getCachedCryptoKey(privateKey) {
  if (cryptoKeyCache.has(privateKey)) {
    return cryptoKeyCache.get(privateKey);
  }
  const processedKey = privateKey.replace(/\\n/g, '\n');
  const converted = await convertPrivateKey(processedKey);
  const der = getDERfromPEM(converted);
  const cryptoKey = await crypto.subtle.importKey(
    'pkcs8',
    der,
    { name: 'RSASSA-PKCS1-v1_5', hash: 'SHA-256' },
    false,
    ['sign']
  );
  cryptoKeyCache.set(privateKey, cryptoKey);
  return cryptoKey;
}
Estimated impact: Eliminates repeated expensive crypto operations (key import can be 5-10ms per call)
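A hedged usage sketch showing where the cached key would plug in. getCachedCryptoKey comes from the block above; toBase64Url and signJWT are illustrative helpers, not part of the library:

// Sign an app JWT with the cached CryptoKey instead of re-importing the key each time
function toBase64Url(input) {
  return Buffer.from(input).toString('base64url'); // Node >= 16
}

async function signJWT(payload, privateKey) {
  const cryptoKey = await getCachedCryptoKey(privateKey); // cached import, no repeated PEM/DER work
  const header = { alg: 'RS256', typ: 'JWT' };
  const signingInput = `${toBase64Url(JSON.stringify(header))}.${toBase64Url(JSON.stringify(payload))}`;
  const signature = await crypto.subtle.sign(
    'RSASSA-PKCS1-v1_5',
    cryptoKey,
    new TextEncoder().encode(signingInput)
  );
  return `${signingInput}.${toBase64Url(Buffer.from(signature))}`;
}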
3. Proactive Token Refresh (Low-Medium Impact)
Current behavior: Tokens are refreshed only when expired or explicitly requested with refresh: true
Recommendation: Implement proactive refresh before expiration
const REFRESH_THRESHOLD = 5 * 60 * 1000; // Refresh 5 min before expiry

async function getTokenWithProactiveRefresh(cache, key, generator) {
  const cached = await cache.get(key);
  if (!cached) {
    return generator();
  }
  const timeUntilExpiry = cached.expiresAt - Date.now();
  // Proactively refresh if close to expiration
  if (timeUntilExpiry < REFRESH_THRESHOLD) {
    // Return the current token but trigger a background refresh;
    // swallow background errors so they don't become unhandled rejections
    generator()
      .then((newToken) => cache.set(key, newToken))
      .catch(() => {});
  }
  return cached;
}
Benefits:
- Reduces risk of using expired tokens
- Spreads token refresh load over time
- Prevents thundering herd on expiration (a jittered variant is sketched below)
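To make the background refresh less likely to fire on many workers at once, a hedged variant could jitter the threshold per call (names reused from the sketch above; the one-minute jitter window is an arbitrary choice):

// Randomize the refresh point so concurrent workers don't all refresh at the same moment
const REFRESH_JITTER = 60 * 1000; // up to 1 extra minute

function shouldRefresh(expiresAt) {
  const threshold = REFRESH_THRESHOLD + Math.random() * REFRESH_JITTER;
  return expiresAt - Date.now() < threshold;
}

// Inside getTokenWithProactiveRefresh:
//   if (shouldRefresh(cached.expiresAt)) { /* trigger background refresh */ }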
4. Signature Result Memoization (Low Impact, Edge Cases)
For scenarios where the same payload is signed repeatedly within the token lifetime:
const signatureCache = new Map();

function getCacheKey(appId, iat, exp) {
  return `${appId}:${iat}:${exp}`;
}
Implementation Considerations
Memory Management
- Bound the private-key/CryptoKey cache; note that WeakMap cannot use string keys, so a small Map (cleared if credentials rotate) is the practical choice
- Implement size limits for the JWT cache (similar to the installation token cache), as in the sketch below
- Consider LRU eviction for long-running processes
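As a concrete option for the JWT cache limits, a hedged sketch using toad-cache's Lru class (already a dependency for installation tokens); the capacity and TTL values are illustrative:

// Bounded JWT cache: at most 100 entries, each expiring after 9 minutes
import { Lru } from "toad-cache";

const jwtLru = new Lru(100, 9 * 60 * 1000);

// Same lookup shape as the Map-based sketch in section 1, with automatic TTL/LRU eviction
const getJwtFromCache = (appId) => jwtLru.get(`jwt:${appId}`); // undefined on miss or after expiry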
Configuration Options
Allow users to configure caching behavior:
createAppAuth({
  appId,
  privateKey,
  cache: {
    // Existing installation token cache
    get: async (key) => { /* ... */ },
    set: async (key, value) => { /* ... */ }
  },
  jwtCache: {
    enabled: true, // Default: true
    ttl: 9 * 60 * 1000, // 9 minutes
    maxSize: 100 // Limit number of cached JWTs
  }
});
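A minimal sketch of how the proposed option might be defaulted inside createAppAuth (both the jwtCache option and the resolveJwtCacheOptions helper are hypothetical):

// Apply defaults for the proposed jwtCache option
function resolveJwtCacheOptions(options = {}) {
  const { enabled = true, ttl = 9 * 60 * 1000, maxSize = 100 } = options.jwtCache ?? {};
  return { enabled, ttl, maxSize };
}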
Backward Compatibility
- Keep existing behavior as default
- Make optimizations opt-in via configuration
- Ensure no breaking changes to public API
Performance Impact for High-Volume Apps
For a GitHub App making 1000 requests/minute with app-level auth:
- Current: 1,000 JWT generations per minute, each performing its own key import and RSA signature
- With JWT cache: roughly 6-7 JWT generations per hour (one per ~9-minute TTL window), a >99.9% reduction
- With key cache: an additional 50%+ improvement on the remaining key-import overhead
Real-world scenario: A webhook processor handling 10k events/hour could see:
- Reduction from 10k crypto operations → ~60 operations
- Latency improvement: 5-10ms saved per auth call
- Total time saved: 50-100 seconds/hour in crypto overhead
References
- Current implementation: src/get-app-authentication.ts
- JWT library: universal-github-app-jwt
- Existing cache implementation: src/cache.ts
- Installation token caching: src/get-installation-authentication.ts
Related Work
- Installation tokens already use efficient caching (59-minute TTL, 15,000-token capacity)
- The toad-cache LRU implementation provides good memory-management patterns to follow
Priority: Medium-High for production apps with high request volumes
Effort: Medium - requires careful implementation to maintain security and correctness
Impact: High for high-volume scenarios, minimal for low-volume usage