Mastering Hybrid Caching in .NET: Complete Guide for 2025
Discover how hybrid caching revolutionizes .NET application performance with memory and distributed cache layers, resilience patterns, and production-ready solutions.

Building fast and reliable systems in .NET requires more than just writing clean code; it demands intelligent caching strategies that can handle real-world production challenges. Hybrid caching has emerged as the gold standard for modern .NET applications, combining the speed of memory caching with the scalability of distributed caching while adding critical resilience patterns that keep your applications running smoothly even when things go wrong.
Introduction
Caching is fundamental to building performant applications, but traditional approaches often force developers to choose between the simplicity of memory caches and the scalability of distributed caches. This compromise leaves applications vulnerable to common production issues like cache stampedes, database overload during restarts, and cascading failures when backends become unavailable.
Modern .NET offers powerful hybrid caching solutions that eliminate these tradeoffs. By combining multiple cache levels with intelligent coordination, resilience patterns, and automatic synchronization, hybrid caching delivers both exceptional performance and production-grade reliability. Whether you're building microservices handling millions of requests or scaling a monolith horizontally, understanding hybrid caching is essential for .NET developers in 2025. Combined with smart architecture decisions like choosing Channels over RabbitMQ where appropriate, you can build highly performant applications without unnecessary complexity.
This comprehensive guide explores the ins and outs of caching in .NET, from memory to distributed to hybrid implementations. We'll examine both FusionCache (the world's first production-ready hybrid cache implementation) and Microsoft's new HybridCache introduced in .NET 9, with a focus on pragmatic solutions you can implement immediately in real-world applications.
Understanding the Three Types of Caches
Memory Caches: Speed at a Cost
Memory caches store data in the same memory space as the application using them: essentially glorified dictionaries with eviction policies. This provides exceptional performance characteristics:
// Using Microsoft's built-in MemoryCache
var product = cache.GetOrCreate($"product:{id}", entry =>
{
entry.SetAbsoluteExpiration(TimeSpan.FromMinutes(5));
return database.GetProduct(id);
});
Advantages of Memory Caches:
- Exceptional locality - Data is a memory access away
- Near-zero cost - No network calls or serialization
- 100% availability - Always present (it's a dictionary)
- Dead simple to use - Minimal configuration required
Critical Limitations:
- Cold start vulnerability - Every restart wipes the cache completely
- Horizontal scaling problems - Each node maintains its own isolated cache
- No cross-application sharing - Multiple services can't share cached data
- Memory constraints - Limited by available RAM on a single machine
Popular memory cache implementations in .NET include Microsoft's MemoryCache, LazyCache, BitFaster.Caching, and FastCache.
Distributed Caches: Scalability with Complexity
Distributed caches represent remote key-value stores (like Redis or Memcached) accessed over the network. They solve the scaling problems of memory caches but introduce significant complexity:
// Using IDistributedCache requires manual serialization
var cachedData = await cache.GetAsync($"product:{id}");
Product product;
if (cachedData != null)
{
// Cache hit - deserialize the data
product = JsonSerializer.Deserialize<Product>(cachedData);
}
else
{
// Cache miss - fetch from database
product = await database.GetProductAsync(id);
// Serialize and store in cache
var serialized = JsonSerializer.SerializeToUtf8Bytes(product);
await cache.SetAsync($"product:{id}", serialized, new DistributedCacheEntryOptions
{
AbsoluteExpirationRelativeToNow = TimeSpan.FromMinutes(5)
});
}
Advantages of Distributed Caches:
- Survives restarts - Cache persists independently of application lifecycle
- Horizontal scaling - Multiple nodes share the same cache
- Cross-application support - Different services can access shared data
- Large capacity - Not constrained by single-machine memory
Key Challenges:
- Complex to use - Manual serialization/deserialization required
- Network latency - Every access incurs a network round trip
- Lower availability - External dependency that can fail
- Resource intensive - CPU and memory costs for serialization
Common implementations include StackExchange.Redis, MongoDB, Memcached, SQLite, and Amazon DynamoDB providers for IDistributedCache.
Hybrid Caches: Best of Both Worlds
Hybrid caches combine a memory level (L1) with an optional distributed level (L2), automatically coordinating between them to deliver optimal performance with production-grade resilience:
// Simple, powerful hybrid cache usage
var product = await cache.GetOrSetAsync(
$"product:{id}",
async ct => await database.GetProductAsync(id, ct),
options => options.SetDuration(TimeSpan.FromMinutes(5))
);
The Hybrid Advantage:
- Easy to use - Clean API abstracts complexity
- Local performance - L1 provides memory-speed access
- Scalable - L2 enables horizontal scaling
- Resilient - Survives restarts and handles failures gracefully
- Transparent - Same code works with L1 only or L1+L2
- Advanced features - Stampede protection, fail-safe, eager refresh built-in
Importantly, the L2 distributed level is optional. You can start with L1-only (essentially a supercharged memory cache) and transparently enable L2 later with a single configuration change-no code modifications required.
Why Hybrid Caches Are Game-Changers
The true power of hybrid caching becomes apparent when you examine real-world production scenarios:
Development to Production Workflow
// During setup/configuration
if (env.IsDevelopment())
{
// Local development - L1 only, no complexity
services.AddFusionCache();
}
else
{
// Production/Staging - L1 + L2 for full power
services.AddFusionCache()
.WithDistributedCache(/* Redis config */)
.WithBackplane(/* Backplane for multi-node sync */);
}
// Application code remains IDENTICAL in both environments
var data = await cache.GetOrSetAsync(key, factory, options);
Transparent Cache Level Management
The hybrid cache automatically manages the dance between levels:
- Cache miss on L1 → Check L2 → If found, copy to L1 and return
- Cache set → Update both L1 and L2 automatically
- Cache remove → Invalidate on both levels
- All subsequent requests → Served directly from L1 (memory speed)
This coordination happens automatically, transparently, with no manual intervention required.
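This read path can be pictured with a BCL-only sketch, using a dictionary as a stand-in for L1 and an async delegate for L2. Note that TwoLevelCache and its members are illustrative names for the mechanism, not any library's internals; real hybrid caches add stampede locking, serialization, and failure handling on top:

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

// Stand-ins for the two levels: a local dictionary for L1 and an
// async key/value lookup for L2 (Redis in real deployments).
public sealed class TwoLevelCache
{
    private readonly ConcurrentDictionary<string, object> _l1 = new();
    private readonly Func<string, Task<object?>> _l2Get;

    public TwoLevelCache(Func<string, Task<object?>> l2Get) => _l2Get = l2Get;

    // The read path: L1 first, then L2, promoting L2 hits into L1 so
    // subsequent reads for the same key are served at memory speed.
    public async Task<T?> GetAsync<T>(string key)
    {
        if (_l1.TryGetValue(key, out var local))
            return (T)local; // L1 hit: no network, no deserialization

        var remote = await _l2Get(key); // L2 lookup (one network round trip)
        if (remote is null)
            return default; // miss on both levels: the caller runs the factory

        _l1[key] = remote; // promote the L2 hit into L1
        return (T)remote;
    }
}
```

A second read for the same key never touches L2 again until the L1 entry is evicted, which is exactly the behavior described above.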
Supercharged Memory Cache Alternative
Even using only L1 (no distributed cache), hybrid caches provide significant advantages over basic memory caches:
- Stampede protection - Prevents redundant database queries
- Advanced features - Eager refresh, factory timeouts, fail-safe mechanisms
- Built-in observability - Logging, tracing, metrics via OpenTelemetry
- Future-proof - Enable L2 later with zero code changes
FusionCache: Production-Ready Hybrid Caching
FusionCache is a free, open-source .NET hybrid caching library created by Jody Donetti in 2020 to solve real-world production caching challenges. It has been validated by extensive production use, including by Microsoft itself in its Data API Builder product, and serves applications handling millions of daily requests.
Key Features Overview
FusionCache organizes its capabilities into four categories:
Resiliency:
- Cache stampede protection
- Fail-safe mechanisms
- Auto-recovery patterns
- Soft/hard timeouts
Performance & Scalability:
- Two-level architecture (L1 + optional L2)
- Backplane support for multi-node synchronization
- Eager refresh (background pre-loading)
- Factory timeouts
- Conditional refresh
Flexibility:
- Multiple named caches
- Advanced key/tag-based invalidation
- Dependency injection with fluent builder
- Support for both sync and async operations
- Adaptive caching capabilities
Observability:
- Full OpenTelemetry integration (logs, traces, metrics)
- Event-driven architecture
- Rich diagnostic information
Basic Setup and Usage
// Register FusionCache in dependency injection
services.AddFusionCache();
// Or with advanced configuration
services.AddFusionCache()
.WithDefaultEntryOptions(new FusionCacheEntryOptions
{
Duration = TimeSpan.FromMinutes(5),
Priority = CacheItemPriority.Normal
})
.WithDistributedCache(/* optional L2 config */)
.WithBackplane(/* optional multi-node sync */);
// Usage in your application
public class ProductService
{
private readonly IFusionCache _cache;
public ProductService(IFusionCache cache)
{
_cache = cache;
}
public async Task<Product> GetProductAsync(int id)
{
return await _cache.GetOrSetAsync(
$"product:{id}",
async ct => await _database.GetProductAsync(id, ct),
options => options.SetDuration(TimeSpan.FromMinutes(10))
);
}
}
Enabling Distributed Cache (L2)
Adding distributed caching requires minimal configuration:
// With Redis and protobuf-net serialization
services.AddFusionCache()
.WithSerializer(new FusionCacheProtoBufNetSerializer())
.WithDistributedCache(new RedisCache(Options.Create(new RedisCacheOptions
{
Configuration = "localhost:6379"
})));
// Or with Memcached and JSON serialization
services.AddFusionCache()
.WithSerializer(new FusionCacheSystemTextJsonSerializer())
.WithDistributedCache(/* Memcached configuration */);
The beauty is that your application code doesn't change at all. The same GetOrSetAsync calls work identically whether L2 is enabled or not.
Critical Production Issues and Solutions
Issue #1: Cache Stampede
The Problem:
When multiple concurrent requests arrive for the same data that's not in cache (expired or first request), without protection each request independently:
- Checks cache → Miss
- Queries database
- Writes result to cache
For 100 concurrent requests for the same product, this generates 100 identical database queries, a massive waste that can overwhelm your database during traffic spikes.
Visual Impact:
Product 1: Request 1 → DB Query
Product 1: Request 2 → DB Query
Product 1: Request 3 → DB Query
Product 1: Request 4 → DB Query
Product 2: Request 1 → DB Query
Product 2: Request 2 → DB Query
Product 2: Request 3 → DB Query
Product 2: Request 4 → DB Query
= 8 database queries for 2 products
The Wrong Approach:
// WRONG - Separate get/set calls don't allow stampede protection
var product = cache.Get<Product>($"product:{id}");
if (product == null)
{
product = database.GetProduct(id);
cache.Set($"product:{id}", product);
}
The Correct Solution:
// CORRECT - Single GetOrSet call enables coordination
var product = await cache.GetOrSetAsync(
$"product:{id}",
async ct => await database.GetProductAsync(id, ct),
options => options.SetDuration(TimeSpan.FromMinutes(5))
);
How It Works:
Hybrid caches with stampede protection coordinate concurrent requests:
- First request for a key → Executes factory (DB query)
- Concurrent requests for same key → Wait for first request
- All requests receive the same result
- Database queried only once per unique key
Result:
- 100 requests for Product 1 → 1 database query
- 100 requests for Product 2 → 1 database query
- Total: 2 queries instead of 200
Important Note: Not all caches provide stampede protection! Both FusionCache and Microsoft's HybridCache provide stampede protection, though FusionCache has more proven production usage and richer configuration.
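Internally, stampede protection reduces to a single-flight pattern: all concurrent callers for the same key await one shared task. A minimal BCL-only sketch of the idea (SingleFlight is an illustrative name; real implementations also handle faulted tasks and per-key lifetime more carefully):

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

public static class SingleFlight
{
    // One in-flight task per key; Lazy guarantees the factory starts exactly once
    private static readonly ConcurrentDictionary<string, Lazy<Task<object>>> InFlight = new();

    // Every concurrent caller for the same key awaits the same factory task,
    // so the database is queried once per key instead of once per caller.
    public static async Task<T> GetOrSetAsync<T>(string key, Func<Task<T>> factory)
    {
        var lazy = InFlight.GetOrAdd(
            key,
            _ => new Lazy<Task<object>>(async () => (object)(await factory())!));
        try
        {
            return (T)await lazy.Value;
        }
        finally
        {
            InFlight.TryRemove(key, out _); // allow the next refresh cycle
        }
    }
}
```

Ten simultaneous callers of `GetOrSetAsync("product:1", ...)` all receive the same result while the factory executes only once, matching the behavior described above.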
Issue #2: Database Failures and Service Reliability
The Problem:
When the database becomes unavailable (timeout, restart, crash, network issue), traditional caches propagate the error directly to users:
Database Error → Cache Error → Service Error → User Error
Even if cached data just expired seconds ago, without the database your application fails completely.
The Fail-Safe Solution:
FusionCache's fail-safe mechanism provides a "second chance" when things go wrong. The concept is elegantly simple: if data was good enough to cache for 10 minutes, it's better to use it for a few extra seconds during a database outage than to return errors to users.
How Fail-Safe Works:
var product = await cache.GetOrSetAsync(
$"product:{id}",
async ct => await database.GetProductAsync(id, ct),
options => options
.SetDuration(TimeSpan.FromMinutes(5))
.SetFailSafe(true, TimeSpan.FromHours(1), TimeSpan.FromHours(24))
);
Behavior:
- Normal operation: After 5 minutes, entry expires and triggers refresh
- With fail-safe: After 5 minutes, entry is logically expired but physically retained
- On database error:
- Return the "expired" but still-good data
- Temporarily re-save it to cache (1 hour in this example)
- Give database time to recover
- Maximum duration: After 24 hours, stop using old data (too stale)
Visual Timeline:
Time: 0s → Data cached (5 min duration)
Time: 300s → Logical expiration, refresh triggered
Time: 300s → Database error detected
Time: 300s → Old data returned + re-cached (1 hour)
Time: 301s+ → Subsequent requests use cached data, database gets breathing room
Time: 4200s → Retry database, succeed, fresh data cached
The Result:
Without fail-safe: Database down = Service down = Users see errors
With fail-safe: Database down = Service continues = Users experience no disruption
This pattern has proven invaluable in production environments with millions of daily users, eliminating error cascades during database maintenance, failovers, or unexpected issues.
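Stripped to its essence, fail-safe is a try/catch around the refresh that falls back to the last known good value. A hand-rolled, BCL-only sketch of the idea (FailSafeCache is an illustrative name; FusionCache adds throttling, max-age enforcement across both cache levels, and much more):

```csharp
using System;
using System.Threading.Tasks;

// Keep the last good value beyond its logical TTL; if the refresh fails,
// serve the stale copy instead of propagating the error to the caller.
public sealed class FailSafeCache<T>
{
    private (T Value, DateTimeOffset StoredAt)? _last;

    public async Task<T> GetAsync(
        Func<Task<T>> factory, TimeSpan duration, TimeSpan maxStale)
    {
        if (_last is { } hit && DateTimeOffset.UtcNow - hit.StoredAt < duration)
            return hit.Value; // still logically fresh

        try
        {
            var fresh = await factory();
            _last = (fresh, DateTimeOffset.UtcNow);
            return fresh;
        }
        catch when (_last is { } stale &&
                    DateTimeOffset.UtcNow - stale.StoredAt < maxStale)
        {
            return stale.Value; // refresh failed: second chance with stale data
        }
    }
}
```

Once the data is older than maxStale, the exception filter no longer matches and the error propagates, which mirrors the "too stale" cutoff described above.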
Issue #3: Database Overload and Slow Queries
The Problem:
Databases aren't just up or down; they can also be slow. High workload, missing indexes, complex queries, or resource contention can cause significant latency. Traditional caching makes your service speed directly reflect database speed:
Slow Database → Slow Cache Refresh → Slow Service Response
Two-Part Solution:
FusionCache provides two complementary features to proactively handle database slowness:
Eager Refresh: Proactive Background Updates
Eager refresh triggers background cache updates before expiration, preventing users from experiencing slow refreshes:
var product = await cache.GetOrSetAsync(
$"product:{id}",
async ct => await database.GetProductAsync(id, ct),
options => options
.SetDuration(TimeSpan.FromMinutes(10))
.SetEagerRefresh(0.9f) // Refresh at 90% of duration
);
How It Works:
Time: 0s → Data cached (10 min duration = 600s)
Time: 540s → 90% threshold reached (eager refresh window)
Time: 541s → First request arrives, triggers background refresh
Time: 541s → Request immediately receives cached data (still valid)
Time: 541s+ → Background refresh executes
Time: 548s → Background refresh completes, cache updated
Time: 549s+ → Future requests use fresh data
The key insight: requests never wait for the refresh. Data is updated in the background while serving the still-valid cached version.
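That behavior reduces to a threshold check on entry age plus a background task. A minimal BCL-only sketch (EagerEntry and the other names are illustrative, not FusionCache's internals):

```csharp
using System;
using System.Threading.Tasks;

public sealed class EagerEntry<T>
{
    public T Value { get; set; }
    public DateTimeOffset StoredAt { get; set; }
    public bool Refreshing { get; set; } // guards against duplicate refreshes

    public EagerEntry(T value)
    {
        Value = value;
        StoredAt = DateTimeOffset.UtcNow;
    }
}

public static class EagerRefresh
{
    // Serve the cached value immediately; if the entry is past the threshold
    // (e.g. 90% of its duration), kick off a background refresh.
    public static T Get<T>(
        EagerEntry<T> entry, TimeSpan duration, double threshold,
        Func<Task<T>> factory)
    {
        var age = DateTimeOffset.UtcNow - entry.StoredAt;
        if (age >= duration * threshold && !entry.Refreshing)
        {
            entry.Refreshing = true;
            _ = Task.Run(async () =>
            {
                entry.Value = await factory(); // refresh off the request path
                entry.StoredAt = DateTimeOffset.UtcNow;
                entry.Refreshing = false;
            });
        }
        return entry.Value; // the caller never waits for the refresh
    }
}
```

The Refreshing flag is what prevents a burst of requests inside the eager window from all triggering their own background refresh.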
Factory Timeouts: Graceful Degradation
Factory timeouts allow you to limit how long you'll wait for slow database operations:
var product = await cache.GetOrSetAsync(
$"product:{id}",
async ct => await database.GetProductAsync(id, ct),
options => options
.SetDuration(TimeSpan.FromMinutes(5))
.SetFactoryTimeouts(TimeSpan.FromMilliseconds(100), TimeSpan.FromSeconds(1))
);
Two Timeout Types:
- Soft timeout (100ms) - Used when stale data is available as fallback
- Hard timeout (1s) - Used when no fallback exists (fail-fast)
Behavior:
Scenario 1: Fast database (50ms response)
→ Returns fresh data normally
Scenario 2: Slow database (150ms) + stale data available
→ After 100ms soft timeout, return stale data
→ Continue database query in background
→ Update cache when complete
Scenario 3: Slow database (2s) + no stale data
→ After 1s hard timeout, throw exception or return default
→ Fail fast rather than indefinite waiting
Combined Power:
When you combine eager refresh and factory timeouts, you get exceptional resilience:
options
.SetDuration(TimeSpan.FromMinutes(10))
.SetEagerRefresh(0.9f) // Refresh early
.SetFactoryTimeouts(
TimeSpan.FromMilliseconds(100), // Soft timeout with fallback
TimeSpan.FromSeconds(1) // Hard timeout without fallback
)
.SetFailSafe(true) // Keep old data as fallback
Real-World Impact:
This combination transformed a production environment handling millions of daily requests. Before implementation, occasional database slow queries caused noticeable latency spikes. After adding these features, response times became consistently fast regardless of database performance fluctuations.
Issue #4: Cold Starts and Cache Invalidation
The Problem:
With memory-only caching, every application restart wipes the cache completely, forcing re-population from the database:
App Restart → Empty Cache → Database Stampede
Multiply this across multiple nodes in a Kubernetes cluster or auto-scaling group, and you have a serious problem.
The Distributed Cache Solution:
Enabling L2 (distributed cache) solves cold starts because the cache persists independently:
services.AddFusionCache()
.WithDistributedCache(new RedisCache(Options.Create(redisOptions)))
.WithSerializer(new FusionCacheProtoBufNetSerializer());
Cold Start Behavior:
With L1 only (memory cache):
App Start → Cache empty → Every request queries database
With L1 + L2 (hybrid):
App Start → L1 empty, L2 populated → First request fetches from L2
First request → Copies data from L2 to L1 → Subsequent requests use L1
Result → Database queries minimized, performance optimal
Issue #5: Multi-Node Cache Coherence
The Problem:
With multiple application instances (horizontal scaling), each has its own L1 memory cache. When one node updates cached data:
- Node 1 updates cache → L1 and L2 updated
- Nodes 2-10 → Still have old data in their L1 caches
- Result → Cache incoherence across the cluster
Users might see different data depending on which node handles their request, which is unacceptable for most applications.
The Backplane Solution:
A backplane is a lightweight message bus that synchronizes cache operations across nodes:
services.AddFusionCache()
.WithDistributedCache(redisCache)
.WithBackplane(new RedisBackplane(Options.Create(new RedisBackplaneOptions
{
Configuration = "localhost:6379"
})));
How It Works:
Node 1: Update product:123
↓
1. Update L1 (local memory)
2. Update L2 (Redis)
3. Broadcast message via backplane → "product:123 updated"
↓
Nodes 2-10 receive message
↓
Each node invalidates product:123 from their L1
↓
Next request on any node → Fetches fresh data from L2 → Copies to L1
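Mechanically, a backplane is publish-on-write, evict-on-message. A self-contained sketch with an in-process stand-in for Redis pub/sub (IMessageBus, InProcessBus, and BackplaneAwareCache are illustrative names, not FusionCache's internals):

```csharp
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;

// Hypothetical pub/sub abstraction standing in for Redis pub/sub.
public interface IMessageBus
{
    void Publish(string channel, string message);
    void Subscribe(string channel, Action<string> handler);
}

// A trivial in-process bus, good enough to show the mechanism.
public sealed class InProcessBus : IMessageBus
{
    private readonly ConcurrentDictionary<string, List<Action<string>>> _subs = new();
    public void Publish(string channel, string message)
    {
        if (_subs.TryGetValue(channel, out var handlers))
            foreach (var h in handlers) h(message);
    }
    public void Subscribe(string channel, Action<string> handler) =>
        _subs.GetOrAdd(channel, _ => new()).Add(handler);
}

public sealed class BackplaneAwareCache
{
    private readonly ConcurrentDictionary<string, object> _l1 = new(); // stand-in for L1
    private readonly IMessageBus _bus;
    private readonly string _nodeId = Guid.NewGuid().ToString("N");

    public BackplaneAwareCache(IMessageBus bus)
    {
        _bus = bus;
        // Every node listens for invalidations published by its peers
        bus.Subscribe("cache-events", msg =>
        {
            var parts = msg.Split(':', 2); // "sender:key"; keys may contain ':'
            if (parts[0] != _nodeId)
                _l1.TryRemove(parts[1], out _); // evict only the stale local copy
        });
    }

    public void Set(string key, object value)
    {
        _l1[key] = value;
        // ... the real thing also writes to L2 (Redis) here ...
        _bus.Publish("cache-events", $"{_nodeId}:{key}"); // notify other nodes
    }

    public bool TryGet(string key, out object? value) => _l1.TryGetValue(key, out value);
}
```

Each node ignores its own messages and evicts on everyone else's, so the next read on a notified node falls through to L2 and picks up the fresh value.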
The Magic:
// Enabling backplane requires ONE LINE
.WithBackplane(new RedisBackplane(Options.Create(backplaneOptions)));
// No changes to application code needed
// Works automatically across 1, 10, or 200 nodes
Real-World Validation:
A production microservices environment with 20+ services and 200+ pod instances implemented FusionCache with backplane. The entire integration took one developer two hours. Cache coherence issues disappeared completely across the entire cluster.
Code Examples and Best Practices
Basic Caching Pattern
public class ProductRepository
{
private readonly IFusionCache _cache;
private readonly ApplicationDbContext _db;
public ProductRepository(IFusionCache cache, ApplicationDbContext db)
{
_cache = cache;
_db = db;
}
public async Task<Product> GetProductAsync(int id, CancellationToken ct = default)
{
return await _cache.GetOrSetAsync(
$"product:{id}",
async (ctx, token) =>
{
// Factory: only executed on cache miss
return await _db.Products
.Include(p => p.Category)
.FirstOrDefaultAsync(p => p.Id == id, token);
},
options => options
.SetDuration(TimeSpan.FromMinutes(5))
.SetPriority(CacheItemPriority.Normal),
ct
);
}
}
Production-Ready Caching with Full Resilience
public async Task<Product> GetProductWithResilienceAsync(int id)
{
return await _cache.GetOrSetAsync(
$"product:{id}",
async (ctx, ct) => await _db.Products.FindAsync(new object[] { id }, ct),
options => options
// Core caching
.SetDuration(TimeSpan.FromMinutes(10))
.SetPriority(CacheItemPriority.High)
// Resilience patterns
.SetFailSafe(true,
throttleDuration: TimeSpan.FromMinutes(30),
maxDuration: TimeSpan.FromHours(24))
// Performance optimization
.SetEagerRefresh(0.9f)
.SetFactoryTimeouts(
softTimeout: TimeSpan.FromMilliseconds(100),
hardTimeout: TimeSpan.FromSeconds(2))
);
}
Conditional Refresh Based on Business Logic
public async Task<Product> GetProductWithConditionalRefreshAsync(int id, CancellationToken ct = default)
{
return await _cache.GetOrSetAsync<Product>(
$"product:{id}",
async (ctx, token) =>
{
var product = await _db.Products.FindAsync(new object[] { id }, token);
// Conditional refresh: if we hold a stale value and the row hasn't
// changed since we cached it, keep what we already have
if (ctx.HasStaleValue && ctx.LastModified == product.UpdatedAt)
{
return ctx.NotModified();
}
// Record the version we are caching for the next refresh check
return ctx.Modified(product, lastModified: product.UpdatedAt);
},
options => options.SetDuration(TimeSpan.FromHours(1)),
ct
);
}
Tag-Based Cache Invalidation
public class CatalogService
{
private readonly IFusionCache _cache;
// Cache products with category tag
public async Task<Product> GetProductAsync(int productId, int categoryId)
{
return await _cache.GetOrSetAsync(
$"product:{productId}",
async ct => await _db.Products.FindAsync(new object[] { productId }, ct),
options => options.SetDuration(TimeSpan.FromMinutes(15)),
// Associate the cache entry with a category tag
tags: new[] { $"category:{categoryId}" }
);
}
// Invalidate all products in a category at once
public async Task InvalidateCategoryAsync(int categoryId)
{
await _cache.RemoveByTagAsync($"category:{categoryId}");
}
// Example: After updating category, invalidate all related products
public async Task UpdateCategoryAsync(Category category)
{
await _db.SaveChangesAsync();
// Single call invalidates all products with this category tag
await _cache.RemoveByTagAsync($"category:{category.Id}");
}
}
Multiple Named Caches for Different Scenarios
// During registration - create specialized caches
services.AddFusionCache("fast-changing")
.WithDefaultEntryOptions(opt => opt.SetDuration(TimeSpan.FromMinutes(1)));
services.AddFusionCache("stable")
.WithDefaultEntryOptions(opt => opt.SetDuration(TimeSpan.FromHours(24)));
services.AddFusionCache("critical")
.WithDefaultEntryOptions(opt => opt
.SetDuration(TimeSpan.FromMinutes(30))
.SetFailSafe(true)
.SetEagerRefresh(0.8f));
// Usage - inject by name
public class ProductService
{
private readonly IFusionCache _stableCache;
private readonly IFusionCache _fastChangingCache;
public ProductService(
[FromKeyedServices("stable")] IFusionCache stableCache,
[FromKeyedServices("fast-changing")] IFusionCache fastChangingCache)
{
_stableCache = stableCache;
_fastChangingCache = fastChangingCache;
}
public async Task<Category> GetCategoryAsync(int id)
{
// Categories rarely change - use stable cache
return await _stableCache.GetOrSetAsync(
$"category:{id}",
async ct => await _db.Categories.FindAsync(new object[] { id }, ct)
);
}
public async Task<decimal> GetProductPriceAsync(int id)
{
// Prices change frequently - use fast-changing cache
return await _fastChangingCache.GetOrSetAsync(
$"price:{id}",
async ct => (await _db.Products.FindAsync(new object[] { id }, ct))?.Price ?? 0m
);
}
}
Adaptive Caching Based on Usage Patterns
public class AdaptiveCatalogCache
{
private readonly IFusionCache _cache;
private readonly ConcurrentDictionary<string, int> _accessCount = new();
public async Task<Product> GetProductAdaptiveAsync(int id)
{
var key = $"product:{id}";
// Track access frequency
_accessCount.AddOrUpdate(key, 1, (k, count) => count + 1);
// Adapt cache duration based on popularity
var accessCount = _accessCount.GetValueOrDefault(key, 0);
var duration = accessCount switch
{
> 100 => TimeSpan.FromHours(2), // Very popular
> 50 => TimeSpan.FromMinutes(30), // Popular
> 10 => TimeSpan.FromMinutes(10), // Moderate
_ => TimeSpan.FromMinutes(5) // Low traffic
};
return await _cache.GetOrSetAsync(
key,
async ct => await _db.Products.FindAsync(new object[] { id }, ct),
options => options.SetDuration(duration)
);
}
}
Microsoft HybridCache in .NET 9
Microsoft introduced its own HybridCache implementation in .NET 9, influenced significantly by the patterns established by FusionCache and other community libraries.
Key Features
- Built directly into the .NET platform
- L1 (memory) + optional L2 (distributed) architecture
- Unified serialization abstraction
- Stampede protection built-in
- Global and per-entry options
- Tag-based invalidation support
Basic Usage
// Registration
services.AddHybridCache();
// The L2 level is whatever IDistributedCache is registered in DI
services.AddStackExchangeRedisCache(o => o.Configuration = "localhost:6379"); // or AddDistributedMemoryCache()
// Custom serializers are registered on the builder, per type
services.AddHybridCache()
.AddSerializer<Product, MyCustomProductSerializer>();
// Usage
public class ProductService
{
private readonly HybridCache _cache;
public ProductService(HybridCache cache)
{
_cache = cache;
}
public async Task<Product> GetProductAsync(int id, CancellationToken ct)
{
return await _cache.GetOrCreateAsync(
$"product:{id}",
async cancel => await _db.Products.FindAsync(new object[] { id }, cancel),
cancellationToken: ct
);
}
}
Comparison: FusionCache vs Microsoft HybridCache
FusionCache Advantages:
- Mature and battle-tested - In production since 2020
- Rich resilience features - Fail-safe, factory timeouts, eager refresh
- Advanced invalidation - Tag-based, pattern-based, conditional
- Multiple named caches - Different strategies for different data
- Comprehensive observability - Full OpenTelemetry integration
- Broader compatibility - Targets .NET Standard 2.0 (runs everywhere)
- Both sync and async - Critical for legacy codebases
- Extensive documentation - Detailed guides, diagrams, examples
Microsoft HybridCache Advantages:
- First-party support - Official Microsoft implementation
- Integrated with .NET 9+ - Part of the framework
- Simpler for basic scenarios - Minimal configuration needed
- Future platform evolution - Will evolve with .NET
Recommendation:
For new projects on .NET 9+ with straightforward caching needs, Microsoft's HybridCache provides excellent out-of-the-box functionality. For production applications requiring advanced resilience patterns, extensive observability, or running on older .NET versions, FusionCache offers a more comprehensive solution with proven reliability at scale.
Many organizations use both: FusionCache for critical paths requiring maximum resilience, and HybridCache for simpler scenarios.
Best Practices
Cache Key Design
// GOOD - Clear, hierarchical, versioned
$"api:v2:product:{productId}"
$"user:{userId}:preferences"
$"catalog:category:{categoryId}:products"
// BAD - Ambiguous, collision-prone
$"data_{id}"
$"{userId}{productId}"
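A small helper can centralize these conventions so ad-hoc string interpolation doesn't drift back in (the api prefix and v2 version scheme here are illustrative, not from any library):

```csharp
public static class CacheKeys
{
    // Bump the version to invalidate every key at once after a schema change
    private const string Version = "v2";

    // Hierarchical, collision-resistant keys like "api:v2:product:42"
    public static string For(string entity, object id) =>
        $"api:{Version}:{entity}:{id}";

    // User-scoped keys like "api:v2:user:7:preferences"
    public static string ForUser(int userId, string section) =>
        $"api:{Version}:user:{userId}:{section}";
}
```

Centralizing key construction also gives you one place to change prefixes when you introduce named caches or multi-tenant key scoping.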
Appropriate Duration Selection
// Reference data - cache long
_cache.GetOrSetAsync("countries", factory, TimeSpan.FromDays(30));
// User-specific data - moderate duration
_cache.GetOrSetAsync($"user:{id}:profile", factory, TimeSpan.FromMinutes(15));
// Real-time data - short duration with eager refresh
_cache.GetOrSetAsync($"price:{id}", factory, opt => opt
.SetDuration(TimeSpan.FromMinutes(2))
.SetEagerRefresh(0.8f));
Graceful Cache Failures
public async Task<Product> GetProductSafeAsync(int id)
{
try
{
return await _cache.GetOrSetAsync(
$"product:{id}",
async ct => await _db.Products.FindAsync(new object[] { id }, ct),
opt => opt.SetFailSafe(true)
);
}
catch (Exception ex)
{
_logger.LogError(ex, "Cache failure for product {Id}", id);
// Fallback to direct database access
return await _db.Products.FindAsync(id);
}
}
Observability and Monitoring
// Tune event handling and hook cache events for custom metrics
services.AddFusionCache()
.WithOptions(opt =>
{
// Run event handlers asynchronously for performance
opt.EnableSyncEventHandlersExecution = false;
})
.WithPostSetup((sp, cache) =>
{
// Subscribe to cache events for custom metrics
cache.Events.Hit += (sender, e) => metrics.RecordCacheHit(e.Key);
cache.Events.Miss += (sender, e) => metrics.RecordCacheMiss(e.Key);
cache.Events.FailSafeActivate += (sender, e) =>
alerts.SendAlert($"Fail-safe activated: {e.Key}");
});
Common Pitfalls to Avoid
1. Using Get/Set Instead of GetOrSet
// WRONG - No stampede protection, race conditions
var product = await cache.GetAsync<Product>($"product:{id}");
if (product == null)
{
product = await database.GetProductAsync(id);
await cache.SetAsync($"product:{id}", product);
}
// CORRECT - Atomic operation with stampede protection
var product = await cache.GetOrSetAsync(
$"product:{id}",
async ct => await database.GetProductAsync(id, ct)
);
2. Ignoring Cancellation Tokens
// WRONG - No cancellation support
var data = await cache.GetOrSetAsync(key, async _ => await LongOperation());
// CORRECT - Proper cancellation handling
var data = await cache.GetOrSetAsync(
key,
async (ctx, ct) => await LongOperation(ct),
token: cancellationToken
);
3. Over-Caching Everything
Not all data benefits from caching. Avoid caching:
- Data that changes constantly (millisecond-level updates)
- Truly user-specific data with no reuse (one-time tokens)
- Data with trivial retrieval cost (in-memory configurations)
- Highly sensitive data requiring strict access control
4. Insufficient Cache Duration
// PROBLEM - Too short, constant database pressure
options.SetDuration(TimeSpan.FromSeconds(1)); // Cache churn
// BETTER - Balanced with eager refresh
options
.SetDuration(TimeSpan.FromMinutes(10))
.SetEagerRefresh(0.9f); // Refresh at 90%, users never wait
5. Forgetting About Cache Warming
// Warm critical caches at startup
public class CacheWarmupHostedService : IHostedService
{
private readonly IFusionCache _cache;
private readonly IProductRepository _products;
public CacheWarmupHostedService(IFusionCache cache, IProductRepository products)
{
_cache = cache;
_products = products;
}
public async Task StartAsync(CancellationToken cancellationToken)
{
// Pre-load frequently accessed data
var popularProducts = await _products.GetPopularProductIdsAsync();
foreach (var id in popularProducts)
{
await _cache.GetOrSetAsync(
$"product:{id}",
async ct => await _products.GetByIdAsync(id, ct),
token: cancellationToken
);
}
}
public Task StopAsync(CancellationToken cancellationToken) => Task.CompletedTask;
}
Performance Considerations
Memory Management
// Set appropriate default entry options
services.AddFusionCache()
.WithDefaultEntryOptions(new FusionCacheEntryOptions
{
Duration = TimeSpan.FromMinutes(10),
Priority = CacheItemPriority.Normal,
Size = 1 // Relative size for eviction policies
});
// Configure underlying MemoryCache limits
services.Configure<MemoryCacheOptions>(options =>
{
options.SizeLimit = 1024; // Total cache size units
options.CompactionPercentage = 0.25; // Evict 25% when full
});
Serialization Performance
// Choose a serializer based on requirements
// Fastest, smallest payloads - great for hot paths (MemoryPack)
services.AddFusionCache()
.WithSerializer(new FusionCacheCysharpMemoryPackSerializer());
// Fast, compact, widely compatible (protobuf-net)
services.AddFusionCache()
.WithSerializer(new FusionCacheProtoBufNetSerializer());
// Human-readable, easy debugging
services.AddFusionCache()
.WithSerializer(new FusionCacheSystemTextJsonSerializer());
Network Optimization
// Batch operations when possible
var tasks = productIds.Select(id =>
cache.GetOrSetAsync(
$"product:{id}",
async ct => await db.Products.FindAsync(id, ct)
)
);
var products = await Task.WhenAll(tasks);
// Use backplane efficiently - it uses Redis pub/sub
// Single backplane instance serves entire cluster
services.AddFusionCache()
.WithDistributedCache(redisCache)
.WithBackplane(new RedisBackplane(Options.Create(backplaneOptions))); // Lightweight messaging
Conclusion
Hybrid caching represents a fundamental shift in how we approach caching in .NET applications. By combining the performance of memory caching with the scalability of distributed caching and adding critical resilience patterns, hybrid caches deliver production-grade reliability that traditional approaches simply cannot match.
The key insights for modern .NET development:
Start Simple, Scale Transparently - Begin with L1-only hybrid caching for the enhanced features and stampede protection. Enable L2 later with zero code changes when you need horizontal scaling.
Embrace Resilience Patterns - Fail-safe, eager refresh, and factory timeouts transform caching from a performance optimization into a resilience mechanism that keeps applications running smoothly even during database issues.
Think Multi-Node from Day One - With backplanes, cache coherence across clusters becomes trivial. Don't architect around single-node limitations that you'll outgrow.
Choose the Right Tool - FusionCache excels in production environments requiring maximum resilience and observability. Microsoft's HybridCache provides excellent baseline functionality for simpler scenarios on .NET 9+. Both are valid choices depending on your requirements.
Whether you're building microservices handling millions of requests, scaling monoliths horizontally, or simply want to improve application performance, hybrid caching should be a fundamental part of your .NET architecture toolkit in 2025.
Additional Resources
FusionCache:
- Official Documentation - Comprehensive guides with diagrams and examples
- GitHub Repository - Source code, issues, discussions
- NuGet Package - Production-ready package
Microsoft HybridCache:
- Official .NET 9 Documentation - Microsoft's implementation guide
- Performance Caching Overview - Caching fundamentals in .NET
Related Topics:
- Distributed Caching in .NET - IDistributedCache interface details
- Memory Caching in .NET - IMemoryCache fundamentals
- Redis for .NET Developers - Redis client libraries and best practices
Want to implement hybrid caching in your .NET application? Contact Hrishi Digital Solutions for expert consulting on performance optimization, distributed systems architecture, and cloud-native application development. Our enterprise web application development services include comprehensive performance optimization.
Hrishi Digital Solutions
Expert digital solutions provider specializing in modern web development, cloud architecture, and digital transformation.
Contact Us →


