Merge pull request #15 from hyp3rd/feat/multi-backend-support

Feat/multi backend support

hyp3rd authored Jan 15, 2023
2 parents f9e9a73 + 1350acb commit 5c2b46d
Showing 26 changed files with 214 additions and 211 deletions.
34 changes: 17 additions & 17 deletions README.md
@@ -6,7 +6,7 @@

HyperCache is a **thread-safe**, **high-performance** cache implementation in Go that supports multiple backends, with expiration and eviction of items handled by custom algorithms alongside the defaults. It can be used as a standalone cache or as a cache middleware for a service. It can implement a [service interface](./service.go) to intercept cache methods and decorate them with middleware (default or custom).
It is optimized for performance and flexibility, allowing you to specify the expiration and eviction intervals and to provide and register new eviction algorithms, stats collectors, and middleware(s).
It ships with a default [histogram stats collector](./stats/statscollector.go) and several eviction algorithms, but you can develop and register your own as long as it implements the [EvictionAlgorithm interface](./eviction/eviction.go):
It ships with a default [histogram stats collector](./stats/statscollector.go) and several eviction algorithms, but you can develop and register your own as long as it implements the [Eviction Algorithm interface](./eviction/eviction.go):

- [The Least Recently Used (LRU) eviction algorithm](./eviction/lru.go)
- [The Least Frequently Used (LFU) algorithm](./eviction/lfu.go)
@@ -27,7 +27,7 @@ It ships with a default [histogram stats collector](./stats/statscollector.go)
- Clear the cache of all items
- Evict items in the background based on the cache capacity and item access, leveraging several custom eviction algorithms
- Expire items in the background based on their duration
- [EvictionAlgorithm interface](./eviction.go) to implement custom eviction algorithms.
- [Eviction Algorithm interface](./eviction.go) to implement custom eviction algorithms.
- Stats collection with a default [stats collector](./stats/statscollector.go) or a custom one that implements the StatsCollector interface.
- [Service interface implementation](./service.go) to allow intercepting cache methods and decorate them with custom or default middleware(s).

@@ -70,37 +70,37 @@ For a full list of examples, refer to the [examples](./examples/README.md) direc

## API

The `NewHyperCacheInMemoryWithDefaults` function creates a new `HyperCache` instance with the defaults:
The `NewInMemoryWithDefaults` function creates a new `HyperCache` instance with the defaults:

1. The eviction interval is set to 10 minutes.
2. The eviction algorithm is set to LRU.
3. The expiration interval is set to 30 minutes.
4. The capacity of the in-memory backend is set to 1000 items.

To create a new cache with a given capacity, use the NewHyperCache function as described below:
To create a new cache with a given capacity, use the New function as described below:

```golang
cache, err := hypercache.NewHyperCacheInMemoryWithDefaults(100)
cache, err := hypercache.NewInMemoryWithDefaults(100)
if err != nil {
// handle error
}
```
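
As a quick, hypothetical follow-up to the example above, the sketch below stores and reads back a value with the cache just created. The `Set`/`Get` signatures (key, value, and TTL; value plus a found flag) are assumptions inferred from the feature list rather than verbatim API, so refer to the [examples](./examples/README.md) for the exact calls.

```golang
// Hypothetical usage: Set is assumed to take a key, a value, and a TTL and to
// return an error; Get is assumed to return the stored value and a found flag.
err = cache.Set("session:42", "some value", 5*time.Minute)
if err != nil {
    // handle error
}

if value, ok := cache.Get("session:42"); ok {
    fmt.Println(value)
}
```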

For fine-grained control over the cache configuration, use the `NewHyperCache` function, for instance:
For fine-grained control over the cache configuration, use the `New` function, for instance:

```golang
config := hypercache.NewConfig[backend.InMemoryBackend]()
config.HyperCacheOptions = []hypercache.HyperCacheOption[backend.InMemoryBackend]{
hypercache.WithEvictionInterval[backend.InMemoryBackend](time.Minute * 10),
hypercache.WithEvictionAlgorithm[backend.InMemoryBackend]("cawolfu"),
config := hypercache.NewConfig[backend.InMemory]()
config.HyperCacheOptions = []hypercache.HyperCacheOption[backend.InMemory]{
hypercache.WithEvictionInterval[backend.InMemory](time.Minute * 10),
hypercache.WithEvictionAlgorithm[backend.InMemory]("cawolfu"),
}

config.InMemoryBackendOptions = []backend.BackendOption[backend.InMemoryBackend]{
config.InMemoryOptions = []backend.Option[backend.InMemory]{
backend.WithCapacity(10),
}

// Create a new HyperCache with a capacity of 10
cache, err := hypercache.NewHyperCache(config)
cache, err := hypercache.New(config)
if err != nil {
fmt.Println(err)
return
@@ -155,8 +155,8 @@ The `Remove` function takes a variadic number of keys as arguments and returns a
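
The paragraph above is truncated by the diff, so purely as a hedged illustration, a variadic `Remove` call might look like the following; treating the return value as an error is an assumption.

```golang
// Hypothetical call: Remove accepts a variadic list of keys, and the return
// value is assumed to be an error since the description above is cut off.
err := cache.Remove("key-1", "key-2", "key-3")
if err != nil {
    // handle error
}
```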
The `Service` interface allows intercepting cache methods and decorating them with custom or default middleware(s).

```golang
var svc hypercache.HyperCacheService
hyperCache, err := hypercache.NewHyperCacheInMemoryWithDefaults(10)
var svc hypercache.Service
hyperCache, err := hypercache.NewInMemoryWithDefaults(10)

if err != nil {
fmt.Println(err)
@@ -181,10 +181,10 @@ defer logger.Sync()
// apply middleware in the same order as you want to execute them
svc = hypercache.ApplyMiddleware(svc,
// middleware.YourMiddleware,
func(next hypercache.HyperCacheService) hypercache.HyperCacheService {
func(next hypercache.Service) hypercache.Service {
return middleware.NewLoggingMiddleware(next, sugar)
},
func(next hypercache.HyperCacheService) hypercache.HyperCacheService {
func(next hypercache.Service) hypercache.Service {
return middleware.NewStatsCollectorMiddleware(next, statsCollector)
},
)
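
// Hypothetical follow-up (not part of the original snippet): once the chain is
// applied, route calls through `svc` so each request passes the middleware,
// for example something like:
//
//	_ = svc.Set("key", "value", time.Hour)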
@@ -219,7 +219,7 @@ import (

func main() {
// Create a new HyperCache with a capacity of 10
cache, err := hypercache.NewHyperCacheInMemoryWithDefaults(10)
cache, err := hypercache.NewInMemoryWithDefaults(10)
if err != nil {
fmt.Println(err)
return
22 changes: 11 additions & 11 deletions backend/backend.go
@@ -7,15 +7,15 @@ import (

// IBackendConstrain is the interface that defines the constrain type that must be implemented by cache backends.
type IBackendConstrain interface {
InMemoryBackend | RedisBackend
InMemory | RedisBackend
}

// IInMemoryBackend is the interface that must be implemented by in-memory cache backends.
type IInMemoryBackend[T IBackendConstrain] interface {
// IInMemory is the interface that must be implemented by in-memory cache backends.
type IInMemory[T IBackendConstrain] interface {
// IBackend[T] is the interface that must be implemented by cache backends.
IBackend[T]
// List the items in the cache that meet the specified criteria.
List(options ...FilterOption[InMemoryBackend]) ([]*models.Item, error)
List(options ...FilterOption[InMemory]) ([]*models.Item, error)
// Clear removes all items from the cache.
Clear()
}
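
As a minimal sketch of how the `IBackendConstrain` type parameter ties a concrete backend to these generic interfaces, the hypothetical helper below (not part of this change) is written against the renamed types:

```golang
// clearInMemory is a hypothetical helper: it accepts the in-memory backend
// through the generic IInMemory interface and clears it. The Clear method is
// available here because IInMemory declares it on top of the base IBackend.
func clearInMemory(b backend.IInMemory[backend.InMemory]) {
	b.Clear()
}
```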
@@ -48,21 +48,21 @@ type IBackend[T IBackendConstrain] interface {
}

// NewBackend creates a new cache backend.
// Deprecated: Use specific backend constructors instead, e.g. NewInMemoryBackend or NewRedisBackend.
// Deprecated: Use specific backend constructors instead, e.g. NewInMemory or NewRedisBackend.
func NewBackend[T IBackendConstrain](backendType string, opts ...any) (IBackend[T], error) {
switch backendType {
case "memory":
backendOptions := make([]BackendOption[InMemoryBackend], len(opts))
Options := make([]Option[InMemory], len(opts))
for i, option := range opts {
backendOptions[i] = option.(BackendOption[InMemoryBackend])
Options[i] = option.(Option[InMemory])
}
return NewInMemoryBackend(backendOptions...)
return NewInMemory(Options...)
case "redis":
backendOptions := make([]BackendOption[RedisBackend], len(opts))
Options := make([]Option[RedisBackend], len(opts))
for i, option := range opts {
backendOptions[i] = option.(BackendOption[RedisBackend])
Options[i] = option.(Option[RedisBackend])
}
return NewRedisBackend(backendOptions...)
return NewRedisBackend(Options...)
default:
return nil, errors.ErrInvalidBackendType
}
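
To contrast the two construction paths referenced by the deprecation note above, a hedged sketch of both calls follows; the type arguments are written explicitly because they cannot be inferred, and error handling is omitted:

```golang
// Deprecated, string-keyed factory kept for backwards compatibility.
mem, err := backend.NewBackend[backend.InMemory]("memory")

// Preferred, type-safe constructor introduced by this rename.
mem2, err2 := backend.NewInMemory[backend.InMemory]()
```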
36 changes: 18 additions & 18 deletions backend/inmemory.go
@@ -11,32 +11,32 @@ import (
"github.com/hyp3rd/hypercache/types"
)

// InMemoryBackend is a cache backend that stores the items in memory, leveraging a custom `ConcurrentMap`.
type InMemoryBackend struct {
// InMemory is a cache backend that stores the items in memory, leveraging a custom `ConcurrentMap`.
type InMemory struct {
items datastructure.ConcurrentMap[string, *models.Item] // map to store the items in the cache
capacity int // capacity of the cache, limits the number of items that can be stored in the cache
mutex sync.RWMutex // mutex to protect the cache from concurrent access
SortFilters // filters applied when listing the items in the cache
}

// NewInMemoryBackend creates a new in-memory cache with the given options.
func NewInMemoryBackend[T InMemoryBackend](opts ...BackendOption[InMemoryBackend]) (backend IInMemoryBackend[T], err error) {
// NewInMemory creates a new in-memory cache with the given options.
func NewInMemory[T InMemory](opts ...Option[InMemory]) (backend IInMemory[T], err error) {

inMemoryBackend := &InMemoryBackend{
InMemory := &InMemory{
items: datastructure.New[*models.Item](),
}

ApplyBackendOptions(inMemoryBackend, opts...)
ApplyOptions(InMemory, opts...)

if inMemoryBackend.capacity < 0 {
if InMemory.capacity < 0 {
return nil, errors.ErrInvalidCapacity
}

return inMemoryBackend, nil
return InMemory, nil
}

// SetCapacity sets the capacity of the cache.
func (cacheBackend *InMemoryBackend) SetCapacity(capacity int) {
func (cacheBackend *InMemory) SetCapacity(capacity int) {
if capacity < 0 {
return
}
@@ -45,12 +45,12 @@ func (cacheBackend *InMemoryBackend) SetCapacity(capacity int) {
}

// itemCount returns the number of items in the cache.
func (cacheBackend *InMemoryBackend) itemCount() int {
func (cacheBackend *InMemory) itemCount() int {
return cacheBackend.items.Count()
}

// Get retrieves the item with the given key from the cacheBackend. If the item is not found, it returns nil.
func (cacheBackend *InMemoryBackend) Get(key string) (item *models.Item, ok bool) {
func (cacheBackend *InMemory) Get(key string) (item *models.Item, ok bool) {
item, ok = cacheBackend.items.Get(key)
if !ok {
return nil, false
@@ -61,7 +61,7 @@ func (cacheBackend *InMemoryBackend) Get(key string) (item *models.Item, ok bool
}

// Set adds a Item to the cache.
func (cacheBackend *InMemoryBackend) Set(item *models.Item) error {
func (cacheBackend *InMemory) Set(item *models.Item) error {
// Check for invalid key, value, or duration
if err := item.Valid(); err != nil {
models.ItemPool.Put(item)
@@ -76,7 +76,7 @@ func (cacheBackend *InMemoryBackend) Set(item *models.Item) error {
}

// List the items in the cache that meet the specified criteria.
// func (cacheBackend *InMemoryBackend) List(options ...FilterOption[InMemoryBackend]) ([]*models.Item, error) {
// func (cacheBackend *InMemory) List(options ...FilterOption[InMemory]) ([]*models.Item, error) {
// // Apply the filter options
// ApplyFilterOptions(cacheBackend, options...)

@@ -129,7 +129,7 @@ func (cacheBackend *InMemoryBackend) Set(item *models.Item) error {
// }

// List returns a list of all items in the cache filtered and ordered by the given options
func (cacheBackend *InMemoryBackend) List(options ...FilterOption[InMemoryBackend]) ([]*models.Item, error) {
func (cacheBackend *InMemory) List(options ...FilterOption[InMemory]) ([]*models.Item, error) {
// Apply the filter options
ApplyFilterOptions(cacheBackend, options...)

@@ -169,7 +169,7 @@ func (cacheBackend *InMemoryBackend) List(options ...FilterOption[InMemoryBacken
}

// Remove removes items with the given key from the cacheBackend. If an item is not found, it does nothing.
func (cacheBackend *InMemoryBackend) Remove(keys ...string) (err error) {
func (cacheBackend *InMemory) Remove(keys ...string) (err error) {
//TODO: determine if handling the error or not
for _, key := range keys {
cacheBackend.items.Remove(key)
@@ -178,18 +178,18 @@ func (cacheBackend *InMemoryBackend) Remove(keys ...string) (err error) {
}

// Clear removes all items from the cacheBackend.
func (cacheBackend *InMemoryBackend) Clear() {
func (cacheBackend *InMemory) Clear() {
for item := range cacheBackend.items.IterBuffered() {
cacheBackend.items.Remove(item.Key)
}
}

// Capacity returns the capacity of the cacheBackend.
func (cacheBackend *InMemoryBackend) Capacity() int {
func (cacheBackend *InMemory) Capacity() int {
return cacheBackend.capacity
}

// Size returns the number of items in the cacheBackend.
func (cacheBackend *InMemoryBackend) Size() int {
func (cacheBackend *InMemory) Size() int {
return cacheBackend.itemCount()
}
22 changes: 11 additions & 11 deletions backend/options.go
@@ -6,25 +6,25 @@ import (
"github.com/hyp3rd/hypercache/types"
)

// BackendOption is a function type that can be used to configure the `HyperCache` struct.
type BackendOption[T IBackendConstrain] func(*T)
// Option is a function type that can be used to configure the `HyperCache` struct.
type Option[T IBackendConstrain] func(*T)

// ApplyBackendOptions applies the given options to the given backend.
func ApplyBackendOptions[T IBackendConstrain](backend *T, options ...BackendOption[T]) {
// ApplyOptions applies the given options to the given backend.
func ApplyOptions[T IBackendConstrain](backend *T, options ...Option[T]) {
for _, option := range options {
option(backend)
}
}

// WithCapacity is an option that sets the capacity of the cache.
func WithCapacity[T InMemoryBackend](capacity int) BackendOption[InMemoryBackend] {
return func(backend *InMemoryBackend) {
func WithCapacity[T InMemory](capacity int) Option[InMemory] {
return func(backend *InMemory) {
backend.capacity = capacity
}
}

// WithRedisClient is an option that sets the redis client to use.
func WithRedisClient[T RedisBackend](client *redis.Client) BackendOption[RedisBackend] {
func WithRedisClient[T RedisBackend](client *redis.Client) Option[RedisBackend] {
return func(backend *RedisBackend) {
backend.client = client
}
@@ -45,7 +45,7 @@ func ApplyFilterOptions[T any](backend *T, options ...FilterOption[T]) {
func WithSortBy[T any](field types.SortingField) FilterOption[T] {
return func(a *T) {
switch filter := any(a).(type) {
case *InMemoryBackend:
case *InMemory:
filter.SortBy = field.String()
}
}
@@ -56,7 +56,7 @@ func WithSortBy[T any](field types.SortingField) FilterOption[T] {
func WithSortAscending[T any]() FilterOption[T] {
return func(a *T) {
switch filter := any(a).(type) {
case *InMemoryBackend:
case *InMemory:
filter.SortAscending = true
}
}
@@ -67,7 +67,7 @@ func WithSortAscending[T any]() FilterOption[T] {
func WithSortDescending[T any]() FilterOption[T] {
return func(a *T) {
switch filter := any(a).(type) {
case *InMemoryBackend:
case *InMemory:
filter.SortAscending = false
}
}
@@ -78,7 +78,7 @@ func WithSortDescending[T any]() FilterOption[T] {
func WithFilterFunc[T any](fn func(item *models.Item) bool) FilterOption[T] {
return func(a *T) {
switch filter := any(a).(type) {
case *InMemoryBackend:
case *InMemory:
filter.FilterFunc = fn
}
}
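
Putting the filter options together, a hedged sketch of listing items from an in-memory backend (`mem` is assumed to come from `NewInMemory`) might read as follows; the explicit type arguments are required because the type parameter appears only in the return type, and the predicate is a placeholder:

```golang
// Hypothetical List call combining the filter options defined above.
items, err := mem.List(
	backend.WithSortAscending[backend.InMemory](),
	backend.WithFilterFunc[backend.InMemory](func(item *models.Item) bool {
		return item != nil // placeholder predicate
	}),
)
```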
4 changes: 2 additions & 2 deletions backend/redis.go
@@ -18,10 +18,10 @@ type RedisBackend struct {
}

// NewRedisBackend creates a new redis cache with the given options.
func NewRedisBackend[T RedisBackend](redisOptions ...BackendOption[RedisBackend]) (backend IRedisBackend[T], err error) {
func NewRedisBackend[T RedisBackend](redisOptions ...Option[RedisBackend]) (backend IRedisBackend[T], err error) {
rb := &RedisBackend{}
// Apply the backend options
ApplyBackendOptions(rb, redisOptions...)
ApplyOptions(rb, redisOptions...)

// Check if the client is nil
if rb.client == nil {