This plugin provides a reverse proxy cache implementation for IAM. It caches response entities based on configurable response code and content type, as well as request method. It can cache per-Consumer or per-API. Cache entities are stored for a configurable period of time, after which subsequent requests to the same resource will re-fetch and re-store the resource. Cache entities can also be forcefully purged via the Admin API prior to their expiration time.
Terminology
plugin
: a plugin executing actions inside IAM before or after a request has been proxied to the upstream API.

Service
: the IAM entity representing an external upstream API or microservice.

Route
: the IAM entity representing a way to map downstream requests to upstream services.

Consumer
: the IAM entity representing a developer or machine using the API. When using IAM, a Consumer only communicates with IAM, which proxies every call to the said upstream API.

Credential
: a unique string associated with a Consumer, also referred to as an API key.

upstream service
: this refers to your own API/service sitting behind IAM, to which client requests are forwarded.

API
: a legacy entity used to represent your upstream services. Deprecated in favor of Services.
Configuration
Enabling the plugin on a Service
With a database
Configure this plugin on a Service by making the following request:
$ curl -X POST http://localhost:8001/services/{service}/plugins \
--data "name=proxy-cache" \
--data "config.strategy=memory"
Without a database
Configure this plugin on a Service by adding this section to your declarative configuration file:
plugins:
- name: proxy-cache
service: {service}
config:
strategy: memory
In both cases, {service} is the id or name of the Service that this plugin configuration will target.
Enabling the plugin on a Route
With a database
Configure this plugin on a Route with:
$ curl -X POST http://localhost:8001/routes/{route}/plugins \
--data "name=proxy-cache" \
--data "config.strategy=memory"
Without a database
Configure this plugin on a Route by adding this section to your declarative configuration file:
plugins:
- name: proxy-cache
route: {route}
config:
strategy: memory
In both cases, {route} is the id or name of the Route that this plugin configuration will target.
Enabling the plugin on a Consumer
With a database
Configure this plugin on a Consumer by making the following request:
$ curl -X POST http://localhost:8001/consumers/{consumer}/plugins \
--data "name=proxy-cache" \
--data "config.strategy=memory"
Without a database
Configure this plugin on a Consumer by adding this section to your declarative configuration file:
plugins:
- name: proxy-cache
consumer: {consumer}
config:
strategy: memory
In both cases, {consumer} is the id or username of the Consumer that this plugin configuration will target.
You can combine consumer_id and service_id in the same request, to further narrow the scope of the plugin.
Global plugins
- Using a database, all plugins can be configured using the http://localhost:8001/plugins/ endpoint.
- Without a database, all plugins can be configured via the plugins: entry in the declarative configuration file.
A plugin which is not associated to any Service, Route or Consumer (or API, if you are using an older version of IAM) is considered "global", and will be run on every request. Read the Plugin Reference and the Plugin Precedence sections for more information.
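For example, a declarative configuration entry that omits any service, route, or consumer association makes the plugin global:

```yaml
plugins:
- name: proxy-cache
  config:
    strategy: memory
```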
Parameters
Here's a list of all the parameters which can be used in this plugin's configuration:
form parameter | description |
---|---|
name | The name of the plugin to use, in this case proxy-cache |
service_id | The id of the Service which this plugin will target. |
route_id | The id of the Route which this plugin will target. |
enabled (default value: true) | Whether this plugin will be applied. |
consumer_id | The id of the Consumer which this plugin will target. |
api_id | The id of the API which this plugin will target. Note: The API Entity is deprecated in favor of Services. |
config.response_code (default value: 200, 301, 404) | Upstream response status codes considered cacheable |
config.request_method (default value: GET, HEAD) | Downstream request methods considered cacheable |
config.content_type (default value: text/plain, application/json) | Upstream response content types considered cacheable. The plugin performs an exact match against each specified value; for example, if the upstream is expected to respond with a content type of application/json; charset=utf-8, the plugin configuration must contain that exact value, or a Bypass cache status will be returned |
config.vary_headers (optional) | Relevant headers considered for the cache key. If undefined, none of the headers are taken into consideration. |
config.vary_query_params (optional) | Relevant query parameters considered for the cache key. If undefined, all params are taken into consideration. |
config.cache_ttl (default value: 300) | TTL, in seconds, of cache entities |
config.cache_control (default value: false) | When enabled, respect the Cache-Control behaviors defined in RFC7234 |
config.storage_ttl (optional) | Number of seconds to keep resources in the storage backend. This value is independent of cache_ttl and of TTLs defined by Cache-Control behaviors |
config.strategy | The backing data store in which to hold cache entities. Accepted values are memory and redis |
config.memory.dictionary_name (default value: kong_cache) | The name of the shared dictionary in which to hold cache entities when the memory strategy is selected. Note that this dictionary currently must be defined manually in the IAM Nginx template. |
config.redis.host (semi-optional) | Host to use for Redis connection when the redis strategy is defined |
config.redis.port (semi-optional) | Port to use for Redis connection when the redis strategy is defined |
config.redis.timeout (semi-optional, default value: 2000) | Connection timeout to use for Redis connection when the redis strategy is defined |
config.redis.password (semi-optional) | Password to use for Redis connection when the redis strategy is defined. If undefined, no AUTH commands are sent to Redis. |
config.redis.database (semi-optional, default value: 0) | Database to use for Redis connection when the redis strategy is defined |
config.redis.sentinel_master (semi-optional) | Sentinel master to use for Redis connection when the redis strategy is defined. Defining this value implies using Redis Sentinel. |
config.redis.sentinel_role (semi-optional) | Sentinel role to use for Redis connection when the redis strategy is defined. Defining this value implies using Redis Sentinel. |
config.redis.sentinel_addresses (semi-optional) | Sentinel addresses to use for Redis connection when the redis strategy is defined. Defining this value implies using Redis Sentinel. |
config.redis.cluster_addresses (semi-optional) | Cluster addresses to use for Redis connection when the redis strategy is defined. Defining this value implies using Redis Cluster. |
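The parameters above can be combined in a declarative configuration entry. The following sketch enables the redis strategy with a vary header; the Redis host and port values and the Accept-Language header are illustrative placeholders, not required settings:

```yaml
plugins:
- name: proxy-cache
  service: {service}
  config:
    strategy: redis
    cache_ttl: 300
    vary_headers:
    - Accept-Language        # placeholder: vary cache entries by language
    redis:
      host: 127.0.0.1        # placeholder Redis host
      port: 6379
      database: 0
```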
Strategies
kong-plugin-enterprise-proxy-cache is designed to support storing proxy cache data in different backend formats. Currently the following strategies are provided:

memory
: A lua_shared_dict. Note that the default dictionary, kong_cache, is also used by other plugins and elements of IAM to store unrelated database cache entities. Using this dictionary is an easy way to bootstrap the proxy-cache plugin, but it is not recommended for large-scale installations, as significant usage will put pressure on other facets of IAM's database caching operations. It is recommended to define a separate lua_shared_dict via a custom Nginx template at this time.

redis
: Supports Redis and Redis Sentinel deployments.
Cache Key
IAM keys each cache element based on the request method, the full client request (e.g., the request path and query parameters), and the UUID of either the API or Consumer associated with the request. This also implies that caches are distinct between APIs and/or Consumers. Currently the cache key format is hard-coded and cannot be adjusted. Internally, cache keys are represented as a hexadecimal-encoded MD5 sum of the concatenation of the constituent parts. This is calculated as follows:
key = md5(UUID | method | request)
Where method is defined via the OpenResty ngx.req.get_method() call, and request is defined via the Nginx $request variable. IAM will return the cache key associated with a given request as the X-Cache-Key response header. It is also possible to precalculate the cache key for a given request as noted above.
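The key derivation can be sketched as follows. Note that the exact delimiter and byte layout used when concatenating the parts are internal details not specified here, so this is an illustrative approximation rather than a byte-for-byte reproduction of IAM's implementation:

```python
import hashlib

def cache_key(uuid: str, method: str, request: str) -> str:
    """Approximate the proxy-cache key: md5(UUID | method | request).

    The real delimiter between fields is an internal detail of IAM;
    this sketch simply concatenates the parts in the documented order.
    """
    raw = uuid + method + request
    return hashlib.md5(raw.encode("utf-8")).hexdigest()

# Hypothetical inputs: a Consumer UUID, the method, and the $request value.
key = cache_key("0cae84ba-0000-0000-0000-placeholder", "GET", "/users?limit=10")
print(key)  # a 32-character hexadecimal digest, as seen in X-Cache-Key
```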
Cache Control
When the cache_control configuration option is enabled, IAM will respect request and response Cache-Control headers as defined by RFC7234, with a few exceptions:

- Cache revalidation is not yet supported, and so directives such as proxy-revalidate are ignored.
- Similarly, the behavior of no-cache is simplified to exclude the entity from being cached entirely.
- Secondary key calculation via Vary is not yet supported.
Cache Status
IAM identifies the status of the request's proxy cache behavior via the X-Cache-Status
header. There are several possible values for this header:
Miss
: The request could be satisfied in cache, but an entry for the resource was not found in cache, and the request was proxied upstream.

Hit
: The request was satisfied and served from cache.

Refresh
: The resource was found in cache, but could not satisfy the request, due to Cache-Control behaviors or reaching its hard-coded cache_ttl threshold.

Bypass
: The request could not be satisfied from cache based on plugin configuration.
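A client can inspect this header to tell whether a response came from cache. The helper below is hypothetical (not part of the plugin); it simply classifies responses by the documented X-Cache-Status values:

```python
# Hypothetical client-side helper: only a "Hit" indicates the response
# body was served from cache; Miss, Refresh, and Bypass all involved
# the upstream service.
def served_from_cache(headers: dict) -> bool:
    return headers.get("X-Cache-Status", "") == "Hit"

print(served_from_cache({"X-Cache-Status": "Hit"}))   # True
print(served_from_cache({"X-Cache-Status": "Miss"}))  # False
```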
Storage TTL
IAM can store resource entities in the storage engine longer than the prescribed cache_ttl
or Cache-Control
values indicate. This allows IAM to maintain a cached copy of a resource past its expiration. This allows clients capable of using max-age
and max-stale
headers to request stale copies of data if necessary.
Upstream Outages
Due to a limitation in IAM's core request processing model, the proxy-cache plugin cannot currently be used to serve stale cache data when an upstream is unreachable. To equip IAM to serve cache data in place of returning an error when an upstream is unreachable, we recommend defining a very large storage_ttl
(on the order of hours or days) in order to keep stale data in the cache. In the event of an upstream outage, stale data can be considered “fresh” by increasing the cache_ttl
plugin configuration value. By doing so, data that would have been previously considered stale is now served to the client, before IAM attempts to connect to a failed upstream service.
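The interplay between cache_ttl and storage_ttl described above can be sketched as a freshness check: an entry older than cache_ttl is stale (triggering a Refresh) but remains in storage until storage_ttl elapses, so raising cache_ttl during an outage makes stored-but-stale entries fresh again. A minimal sketch, assuming age is measured in seconds since the entry was stored:

```python
def entry_state(age: int, cache_ttl: int, storage_ttl: int) -> str:
    """Classify a cached entry by its age, in seconds, since storage."""
    if age >= storage_ttl:
        return "evicted"  # no longer held in the storage backend
    if age >= cache_ttl:
        return "stale"    # still stored, but would trigger a Refresh
    return "fresh"        # served directly as a Hit

# An entry 600s old with cache_ttl=300 is stale...
print(entry_state(600, cache_ttl=300, storage_ttl=86400))   # stale
# ...but raising cache_ttl to 3600 during an outage makes it fresh again.
print(entry_state(600, cache_ttl=3600, storage_ttl=86400))  # fresh
```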
Admin API
This plugin provides several endpoints to manage cache entities. These endpoints are assigned to the proxy-cache
RBAC resource.
The following endpoints are provided on the Admin API to examine and purge cache entities:
Retrieve a Cache Entity
Two separate endpoints are available: one to look up a known plugin instance, and another that searches all proxy-cache plugins data stores for the given cache key. Both endpoints have the same return value.
Endpoint
Attributes | Description |
---|---|
plugin_id | The UUID of the proxy-cache plugin |
cache_id | The cache entity key as reported by the X-Cache-Key response header |
Endpoint
Attributes | Description |
---|---|
cache_id | The cache entity key as reported by the X-Cache-Key response header |
Response
If the cache entity exists:
HTTP 200 OK
If the entity with the given key does not exist:
HTTP 404 Not Found
Delete Cache Entity
Two separate endpoints are available: one to look up a known plugin instance, and another that searches all proxy-cache plugins data stores for the given cache key. Both endpoints have the same return value.
Endpoint
Attributes | Description |
---|---|
plugin_id | The UUID of the proxy-cache plugin |
cache_id | The cache entity key as reported by the X-Cache-Key response header |
Endpoint
Attributes | Description |
---|---|
cache_id | The cache entity key as reported by the X-Cache-Key response header |
Response
If the cache entity exists:
HTTP 204 No Content
If the entity with the given key does not exist:
HTTP 404 Not Found
Purge All Cache Entities
Endpoint
Response
HTTP 204 No Content
Note that this endpoint purges all cache entities across all proxy-cache
plugins.