Creating Custom Cache Stores

htmgo supports pluggable cache stores, allowing you to use any caching backend or implement custom caching strategies.

This feature enables better control over memory usage, distributed caching support, and protection against memory exhaustion attacks.

The Cache Store Interface

All cache stores implement the following interface:

type Store[K comparable, V any] interface {
    // Set adds or updates an entry in the cache with the given TTL
    Set(key K, value V, ttl time.Duration)

    // GetOrCompute atomically gets an existing value or computes and stores a new value.
    // This is the primary method for cache retrieval and prevents duplicate computation.
    GetOrCompute(key K, compute func() V, ttl time.Duration) V

    // Delete removes an entry from the cache
    Delete(key K)

    // Purge removes all items from the cache
    Purge()

    // Close releases any resources used by the cache
    Close()
}

The interface is generic, supporting any comparable key type and any value type.
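
For example, a store keyed by user ID that caches rendered HTML fragments could be declared like this (profileCache is an arbitrary name; NewLRUStore is the built-in constructor described later on this page):

var profileCache cache.Store[int64, string] = cache.NewLRUStore[int64, string](1000)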

Important: The GetOrCompute method provides atomic guarantees. When multiple goroutines request the same key simultaneously, only one will execute the compute function, preventing duplicate expensive operations like database queries or complex computations.

Technical: The Race Condition Fix

The previous implementation had a time-of-check to time-of-use (TOCTOU) race condition:
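
In sketch form, the old flow looked like this (illustrative pseudocode, not the framework's literal code; Get stands in for the old non-atomic lookup):

// Check, then compute, then store - three separate steps
if cached, ok := store.Get(key); ok { // time of check
    return cached
}
value := compute()         // several goroutines can reach this line for the same key
store.Set(key, value, ttl) // time of use: duplicate work, last write wins
return value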

With GetOrCompute, the entire check-compute-store operation happens atomically while holding the lock, eliminating the race window completely.

The Close() method allows for cleanup of resources when the cache is no longer needed.

Using Custom Cache Stores

You can use custom cache stores in two ways:

1. Per-Component Configuration

// Create a custom cache store
var lruCache = cache.NewLRUStore[string, string](10000) // Max 10k items

// Use it with a cached component
var CachedUserProfile = h.CachedPerKeyT(
    15*time.Minute,
    getUserProfile,
    h.WithCacheStore(lruCache), // Pass the custom store
)

2. Global Default Configuration

// Set a global default cache provider
func init() {
    h.DefaultCacheProvider = func() cache.Store[any, string] {
        return cache.NewLRUStore[any, string](50000)
    }
}

// All cached components will now use LRU caching by default
var CachedData = h.Cached(5*time.Minute, getData) // Uses LRU store

Implementing a Custom Cache Store

Here's a complete example of implementing a Redis-based cache store:

package cache

import (
    "context"
    "encoding/json"
    "fmt"
    "time"

    "github.com/redis/go-redis/v9"
)

type RedisStore[K comparable, V any] struct {
    client *redis.Client
    prefix string
    ttl    time.Duration // default TTL, used when a call passes ttl <= 0
}

func NewRedisStore[K comparable, V any](client *redis.Client, prefix string, ttl time.Duration) *RedisStore[K, V] {
    return &RedisStore[K, V]{
        client: client,
        prefix: prefix,
        ttl:    ttl,
    }
}

func (r *RedisStore[K, V]) key(key K) string {
    return fmt.Sprintf("%s:%v", r.prefix, key)
}

func (r *RedisStore[K, V]) Set(key K, value V, ttl time.Duration) {
    ctx := context.Background()
    if ttl <= 0 {
        ttl = r.ttl // fall back to the store's default TTL
    }

    // Serialize value; skip the write on failure rather than crash
    data, err := json.Marshal(value)
    if err != nil {
        return
    }

    // Set in Redis with TTL
    r.client.Set(ctx, r.key(key), data, ttl)
}

func (r *RedisStore[K, V]) GetOrCompute(key K, compute func() V, ttl time.Duration) V {
    ctx := context.Background()
    redisKey := r.key(key)

    // Try to get from Redis first
    data, err := r.client.Get(ctx, redisKey).Bytes()
    if err == nil {
        // Found in cache, deserialize
        var value V
        if err := json.Unmarshal(data, &value); err == nil {
            return value
        }
    }

    // Not in cache (or unreadable): compute a new value.
    // Note: unlike an in-memory store, this sketch holds no cross-process
    // lock, so concurrent misses on different instances may each run
    // compute. Add singleflight or a Redis lock if that matters.
    value := compute()

    // Serialize and store
    if ttl <= 0 {
        ttl = r.ttl
    }
    if data, err := json.Marshal(value); err == nil {
        r.client.Set(ctx, redisKey, data, ttl)
    }

    return value
}

func (r *RedisStore[K, V]) Purge() {
    ctx := context.Background()
    // Delete all keys with our prefix
    iter := r.client.Scan(ctx, 0, r.prefix+":*", 0).Iterator()
    for iter.Next(ctx) {
        r.client.Del(ctx, iter.Val())
    }
}

func (r *RedisStore[K, V]) Delete(key K) {
    r.client.Del(context.Background(), r.key(key))
}

func (r *RedisStore[K, V]) Close() {
    r.client.Close()
}

// Usage
var redisClient = redis.NewClient(&redis.Options{
    Addr: "localhost:6379",
})

var redisCache = NewRedisStore[string, string](
    redisClient,
    "myapp:cache",
    15*time.Minute,
)

var CachedUserData = h.CachedPerKeyT(
    15*time.Minute,
    getUserData,
    h.WithCacheStore(redisCache),
)

Built-in Cache Stores

htmgo provides two built-in cache implementations:

TTL Store (Default)

The default cache store maintains backward compatibility with existing htmgo applications and automatically removes expired entries based on their TTL.

// Create a TTL-based cache (this is the default)
var ttlCache = cache.NewTTLStore[string, string]()

// Use explicitly if needed
var CachedData = h.Cached(
    5*time.Minute,
    getData,
    h.WithCacheStore(ttlCache),
)

LRU Store

A memory-bounded cache that evicts the least recently used items when the size limit is reached. This is useful for preventing memory exhaustion attacks.

// Create an LRU cache with max 1000 items
// (the value type is string because cached components store rendered HTML)
var lruCache = cache.NewLRUStore[int, string](1000)

// Use with per-key caching
var CachedUserProfile = h.CachedPerKeyT(
    30*time.Minute,
    func(userID int) (int, h.GetElementFunc) {
        return userID, func() *h.Element {
            return renderUserProfile(userID)
        }
    },
    h.WithCacheStore(lruCache),
)

Migration Guide

Good news! Existing htmgo applications require no changes to work with the new cache system. The default behavior remains exactly the same, with improved concurrency guarantees: the framework now uses the atomic GetOrCompute method internally, preventing race conditions that could cause duplicate renders.

If you want to take advantage of custom cache stores:

Before (existing code):

// Existing code - continues to work without changes
var CachedDashboard = h.Cached(10*time.Minute, func() *h.Element {
    return renderDashboard()
})

var CachedUserData = h.CachedPerKeyT(15*time.Minute, func(userID string) (string, h.GetElementFunc) {
    return userID, func() *h.Element {
        return renderUserData(userID)
    }
})

After (with custom cache):

// Enhanced with custom cache store
var memoryCache = cache.NewLRUStore[any, string](10000)

var CachedDashboard = h.Cached(10*time.Minute, func() *h.Element {
    return renderDashboard()
}, h.WithCacheStore(memoryCache))

var CachedUserData = h.CachedPerKeyT(15*time.Minute, func(userID string) (string, h.GetElementFunc) {
    return userID, func() *h.Element {
        return renderUserData(userID)
    }
}, h.WithCacheStore(memoryCache))

Best Practices

1. Resource Management: Always implement the Close() method if your cache uses external resources.

2. Thread Safety: The GetOrCompute method must be thread-safe and provide atomic guarantees: when multiple goroutines call GetOrCompute with the same key simultaneously, only one should execute the compute function (see the sketch after this list).

3. Memory Bounds: Consider implementing size limits to prevent unbounded memory growth.

4. Error Handling: Cache operations should be resilient to failures and not crash the application.

5. Monitoring: Consider adding metrics to track cache hit rates and performance.

6. Atomic Operations: Always use GetOrCompute for cache retrieval to ensure proper concurrency handling and prevent cache stampedes.
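
To make the resource management and atomicity points concrete, here is a minimal in-memory store satisfying the interface. This is an illustrative sketch, not htmgo's built-in TTL store; mutexStore and NewMutexStore are hypothetical names:

package cache

import (
    "sync"
    "time"
)

// mutexStore guards the whole check-compute-store sequence with one
// mutex, so GetOrCompute is atomic and cache stampedes are impossible.
type mutexStore[K comparable, V any] struct {
    mu      sync.Mutex
    entries map[K]entry[V]
}

type entry[V any] struct {
    value     V
    expiresAt time.Time
}

func NewMutexStore[K comparable, V any]() *mutexStore[K, V] {
    return &mutexStore[K, V]{entries: make(map[K]entry[V])}
}

func (s *mutexStore[K, V]) Set(key K, value V, ttl time.Duration) {
    s.mu.Lock()
    defer s.mu.Unlock()
    s.entries[key] = entry[V]{value: value, expiresAt: time.Now().Add(ttl)}
}

func (s *mutexStore[K, V]) GetOrCompute(key K, compute func() V, ttl time.Duration) V {
    s.mu.Lock()
    defer s.mu.Unlock()
    if e, ok := s.entries[key]; ok && time.Now().Before(e.expiresAt) {
        return e.value // hit: no computation
    }
    value := compute() // miss: runs exactly once while the lock is held
    s.entries[key] = entry[V]{value: value, expiresAt: time.Now().Add(ttl)}
    return value
}

func (s *mutexStore[K, V]) Delete(key K) {
    s.mu.Lock()
    defer s.mu.Unlock()
    delete(s.entries, key)
}

func (s *mutexStore[K, V]) Purge() {
    s.mu.Lock()
    defer s.mu.Unlock()
    s.entries = make(map[K]entry[V])
}

// Close releases memory; this store holds no external resources.
func (s *mutexStore[K, V]) Close() {
    s.Purge()
}

Holding one lock across compute is the simplest correct approach, but it serializes all keys: a slow computation for one key blocks every other caller. A production store would typically use per-key locking (or singleflight) instead.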

Common Use Cases

Distributed Caching

Use Redis or Memcached for sharing cache across multiple application instances:

// Initialize Redis client
var redisClient = redis.NewClient(&redis.Options{
    Addr:     "redis-cluster:6379",
    Password: os.Getenv("REDIS_PASSWORD"),
    DB:       0,
})

// Create a distributed cache; it must be a Store[any, string]
// to match the DefaultCacheProvider signature
var distributedCache = NewRedisStore[any, string](
    redisClient,
    "webapp:cache",
    30*time.Minute,
)

// Set as global default
func init() {
    h.DefaultCacheProvider = func() cache.Store[any, string] {
        return distributedCache
    }
}

Memory-Bounded Caching

Prevent memory exhaustion by limiting cache size:

// Limit cache to 5000 items to prevent memory exhaustion
var boundedCache = cache.NewLRUStore[string, string](5000)

// Use for user-generated content where keys might be unpredictable
var CachedSearchResults = h.CachedPerKeyT(
    5*time.Minute,
    func(query string) (string, h.GetElementFunc) {
        // Normalize and validate query to prevent cache poisoning
        normalized := normalizeSearchQuery(query)
        return normalized, func() *h.Element {
            return performSearch(normalized)
        }
    },
    h.WithCacheStore(boundedCache),
)

Tiered Caching

Implement a multi-level cache with fast local storage and slower distributed storage:

type TieredCache[K comparable, V any] struct {
    l1 cache.Store[K, V] // Fast local cache
    l2 cache.Store[K, V] // Slower distributed cache
}

func NewTieredCache[K comparable, V any](local, distributed cache.Store[K, V]) *TieredCache[K, V] {
    return &TieredCache[K, V]{l1: local, l2: distributed}
}

func (t *TieredCache[K, V]) Set(key K, value V, ttl time.Duration) {
    // Write through to both tiers
    t.l1.Set(key, value, ttl)
    t.l2.Set(key, value, ttl)
}

func (t *TieredCache[K, V]) GetOrCompute(key K, compute func() V, ttl time.Duration) V {
    // Check L1 first; on a miss, fall back to L2, and only compute
    // if both tiers miss. An L2 hit populates L1 for next time.
    return t.l1.GetOrCompute(key, func() V {
        return t.l2.GetOrCompute(key, compute, ttl)
    }, ttl)
}

func (t *TieredCache[K, V]) Delete(key K) {
    t.l1.Delete(key)
    t.l2.Delete(key)
}

func (t *TieredCache[K, V]) Purge() {
    t.l1.Purge()
    t.l2.Purge()
}

func (t *TieredCache[K, V]) Close() {
    t.l1.Close()
    t.l2.Close()
}

// Usage
var tieredCache = NewTieredCache[string, string](
    cache.NewLRUStore[string, string](1000),                           // L1: 1k items in memory
    NewRedisStore[string, string](redisClient, "tiered", 1*time.Hour), // L2: Redis
)

Security Note: The pluggable cache system helps mitigate memory exhaustion attacks by allowing you to implement bounded caches. Always consider using size-limited caches in production environments where untrusted input could influence cache keys.

Concurrency Note: The GetOrCompute method eliminates race conditions that could occur in the previous implementation. When multiple goroutines request the same uncached key via GetOrCompute simultaneously, only one will execute the expensive render operation while the others wait for the result. This prevents "cache stampedes", where many goroutines simultaneously compute the same expensive value.
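
You can observe this guarantee directly. The following sketch hammers one key from many goroutines and counts how often the compute function actually runs (it uses fmt, sync, sync/atomic, and time from the standard library; the cache package is the one shown throughout this page, whose import path depends on your module layout):

store := cache.NewTTLStore[string, string]()

var computations int32
var wg sync.WaitGroup
for i := 0; i < 100; i++ {
    wg.Add(1)
    go func() {
        defer wg.Done()
        store.GetOrCompute("dashboard", func() string {
            atomic.AddInt32(&computations, 1) // count actual computations
            time.Sleep(50 * time.Millisecond) // simulate an expensive render
            return "<div>expensive result</div>"
        }, time.Minute)
    }()
}
wg.Wait()
fmt.Println(computations) // prints 1: the other 99 goroutines waited for the result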