Amazon MemoryDB Features

Why MemoryDB?

Amazon MemoryDB is a Redis OSS-compatible, durable, in-memory database service that delivers ultra-fast performance. It is purpose-built for modern applications with microservices architectures.

MemoryDB is compatible with Redis OSS, allowing customers to quickly build applications using the same flexible and friendly Redis OSS data structures, APIs, and commands that they already use today. With MemoryDB, all of your data is stored in memory, which allows you to achieve microsecond read and single-digit millisecond write latency and high throughput. MemoryDB also stores data durably across multiple Availability Zones (AZs) using a distributed transactional log to enable fast failover, database recovery, and node restarts. Delivering both in-memory performance and Multi-AZ durability, MemoryDB can be used as a high-performance primary database for your microservices applications, removing the need to separately manage both a cache and durable database.

Redis OSS compatibility

Redis OSS is a fast, open source, in-memory, key-value data store. Developers use Redis OSS to achieve sub-millisecond response times, enabling millions of requests per second for real-time applications in industries like gaming, ad tech, financial services, healthcare, and IoT. 

Redis OSS offers flexible APIs, commands, and data structures like streams, sets, and lists, to build agile and versatile applications. MemoryDB maintains compatibility with Redis OSS and supports the same set of Redis OSS data types, parameters, and commands that you are familiar with. This means that the code, applications, drivers, and tools you already use today with Redis OSS can be used with MemoryDB, so you can quickly build applications.
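
As a hedged illustration of that compatibility, the sketch below connects to a MemoryDB cluster endpoint with the open source redis-py client and runs ordinary Redis OSS commands; the endpoint, user name, and password are placeholders, not values from this page.

```python
# Minimal sketch (assumed setup): connecting to a MemoryDB cluster endpoint
# with the open source redis-py client. Endpoint, username, and password
# below are placeholders.
from redis.cluster import RedisCluster

client = RedisCluster(
    host="clustercfg.my-cluster.example.memorydb.us-east-1.amazonaws.com",  # placeholder endpoint
    port=6379,
    username="app-user",                   # placeholder ACL user
    password="example-password-16chars",   # placeholder credential
    ssl=True,                              # MemoryDB clusters encrypt traffic with TLS
    decode_responses=True,
)

# The same Redis OSS data structures and commands work unchanged.
client.set("session:42", "active")
client.lpush("recent:42", "page:home", "page:cart")
print(client.get("session:42"), client.lrange("recent:42", 0, -1))
```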

Ultra-fast performance

MemoryDB stores your entire dataset in memory to deliver microsecond read latency, single-digit millisecond write latency, and high throughput. It can handle more than 13 trillion requests per day and support peaks of 160 million requests per second. Developers building with microservices architectures require ultra-high performance, as these applications can involve interactions with many service components per user interaction or API call. With MemoryDB, you get extremely low latency to deliver real-time performance for end users.

MemoryDB includes Enhanced IO Multiplexing, which delivers significant improvements to throughput and latency at scale. Enhanced IO Multiplexing is ideal for throughput-bound workloads with multiple client connections, where a node's network IO processing can become a limiting factor in the ability to scale, and its benefits grow with the level of workload concurrency. As an example, when using an r6g.4xlarge node and running 5,200 concurrent clients, you can achieve up to 46% higher throughput (read and write operations per second) and up to 21% lower P99 latency compared with MemoryDB version 6 compatible with Redis OSS.

With Enhanced IO Multiplexing, each dedicated network IO thread pipelines commands from multiple clients into the Redis OSS engine, taking advantage of the engine's ability to efficiently process commands in batches.
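
Enhanced IO Multiplexing is transparent to clients and requires no code changes. As a loosely related client-side illustration of batching, the sketch below pipelines commands with redis-py so many operations share one network round trip; the endpoint is a placeholder, and this demonstrates client pipelining rather than the server-side feature itself.

```python
# Client-side pipelining sketch: batch several commands into one round trip.
# Enhanced IO Multiplexing itself is server-side and needs no client changes;
# this only illustrates how batched commands reach the engine together.
import redis

r = redis.Redis(
    host="clustercfg.my-cluster.example.memorydb.us-east-1.amazonaws.com",  # placeholder
    port=6379,
    ssl=True,
)

# For a multi-shard cluster, use a cluster-aware client's pipeline instead.
pipe = r.pipeline(transaction=False)   # plain pipeline, no MULTI/EXEC
for i in range(100):
    pipe.set(f"metric:{i}", i)
results = pipe.execute()               # one network round trip for 100 SETs
```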

Multi-AZ durability

In addition to storing your entire data set in memory, MemoryDB uses a distributed transactional log to provide data durability, consistency, and recoverability. MemoryDB stores data across multiple AZs so you can achieve fast database recovery and restart. You can use MemoryDB as a single, primary database service for workloads that require low latency and high throughput, instead of separately managing a cache for speed and an additional relational or nonrelational database for reliability.

Scalability

You can scale your MemoryDB cluster to meet fluctuating application demands: horizontally by adding or removing nodes or vertically by moving to larger or smaller node types. MemoryDB supports write scaling with sharding and read scaling by adding replicas. Your cluster continues to stay online and support read and write operations during resizing operations.
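
A hedged sketch of these resizing operations through the MemoryDB API follows; the cluster name is a placeholder, and the parameter names reflect the author's understanding of the boto3 memorydb client (each resize typically needs to complete before the next one starts).

```python
# Sketch: scaling a MemoryDB cluster with boto3 (cluster name is a placeholder).
import boto3

memorydb = boto3.client("memorydb", region_name="us-east-1")

# Scale out horizontally by increasing the shard count.
memorydb.update_cluster(
    ClusterName="my-memorydb",
    ShardConfiguration={"ShardCount": 4},
)

# Scale read capacity by adding replicas per shard.
memorydb.update_cluster(
    ClusterName="my-memorydb",
    ReplicaConfiguration={"ReplicaCount": 2},
)

# Scale vertically by moving to a larger node type.
memorydb.update_cluster(
    ClusterName="my-memorydb",
    NodeType="db.r6g.xlarge",
)
```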

Fully managed

Getting started with MemoryDB is easy. Just launch a new MemoryDB cluster using the AWS Management Console, the AWS CLI, or an SDK. MemoryDB database instances are preconfigured with parameters and settings appropriate for the node type you select. You can launch a cluster and connect your application within minutes without additional configuration.
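
As one hedged sketch of the SDK path, the example below creates a cluster with boto3 (the AWS SDK for Python); the cluster name, subnet group, and ACL are placeholders you would create beforehand.

```python
# Sketch: creating a MemoryDB cluster with boto3. Names, subnet group, and
# ACL are placeholders created ahead of time.
import boto3

memorydb = boto3.client("memorydb", region_name="us-east-1")

memorydb.create_cluster(
    ClusterName="my-memorydb",
    NodeType="db.r6g.large",
    ACLName="my-acl",                   # placeholder ACL
    NumShards=2,
    NumReplicasPerShard=1,
    SubnetGroupName="my-subnet-group",  # placeholder subnet group
    TLSEnabled=True,
)

# Check cluster status until it becomes available, then read its endpoint.
response = memorydb.describe_clusters(ClusterName="my-memorydb")
print(response["Clusters"][0]["Status"])
```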

MemoryDB provides Amazon CloudWatch metrics for your database instances. You can use the console to view over 35 key operational metrics for your cluster including compute, memory, storage, throughput, active connections, and more.
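
A hedged sketch of pulling one such metric programmatically is shown below; the CurrConnections metric name and ClusterName dimension are the author's reading of the AWS/MemoryDB CloudWatch namespace, and the cluster name is a placeholder.

```python
# Sketch: reading a MemoryDB CloudWatch metric with boto3. Metric and
# dimension names are assumptions about the AWS/MemoryDB namespace;
# the cluster name is a placeholder.
from datetime import datetime, timedelta, timezone
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/MemoryDB",
    MetricName="CurrConnections",  # active client connections (assumed metric name)
    Dimensions=[{"Name": "ClusterName", "Value": "my-memorydb"}],
    StartTime=datetime.now(timezone.utc) - timedelta(hours=1),
    EndTime=datetime.now(timezone.utc),
    Period=300,
    Statistics=["Average", "Maximum"],
)

for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], point["Average"], point["Maximum"])
```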

MemoryDB automatically keeps your clusters up to date by applying new updates, and you can easily upgrade your clusters to the latest versions of Redis OSS.

Security

MemoryDB runs in Amazon Virtual Private Cloud (Amazon VPC), which allows you to isolate your database in your own virtual network and connect to your on-premises IT infrastructure using industry-standard, encrypted IPsec VPNs. In addition, using VPC configuration in MemoryDB, you can configure firewall settings and control network access to your database instances.

With MemoryDB, data at rest is encrypted using keys you create and control through AWS Key Management Service (AWS KMS). Clusters created with AWS Graviton2 node types also include always-on 256-bit DRAM encryption. MemoryDB supports encryption in transit using Transport Layer Security (TLS).
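
As a hedged sketch, both encryption settings are chosen at cluster creation; the KMS key ARN and resource names below are placeholders.

```python
# Sketch: enabling encryption at rest (customer managed KMS key) and
# encryption in transit (TLS) at cluster creation. Key ARN and names are
# placeholders.
import boto3

memorydb = boto3.client("memorydb", region_name="us-east-1")

memorydb.create_cluster(
    ClusterName="my-secure-memorydb",
    NodeType="db.r6g.large",
    ACLName="my-acl",                   # placeholder
    SubnetGroupName="my-subnet-group",  # placeholder
    TLSEnabled=True,                    # encryption in transit
    KmsKeyId="arn:aws:kms:us-east-1:111122223333:key/your-key-id",  # placeholder; encryption at rest
)
```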

Using the AWS Identity and Access Management (IAM) features integrated with MemoryDB, you can control the actions that your IAM users and groups can take on MemoryDB resources. For example, you can configure your IAM policies so that certain users have read-only access, while an administrator can create, modify, and delete resources. For more information about API-level permissions, refer to Using IAM Policies for MemoryDB.
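
The example below is an illustrative read-only policy expressed with boto3; the policy name and the wildcarded memorydb: actions are the author's example, not a canonical AWS-managed policy.

```python
# Sketch: an illustrative read-only IAM policy for MemoryDB, created via boto3.
import json
import boto3

iam = boto3.client("iam")

read_only_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            # Allow describe/list actions only; no create, modify, or delete.
            "Action": ["memorydb:Describe*", "memorydb:List*"],
            "Resource": "*",
        }
    ],
}

iam.create_policy(
    PolicyName="MemoryDBReadOnly",  # placeholder name
    PolicyDocument=json.dumps(read_only_policy),
)
```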

MemoryDB uses Redis OSS Access Control Lists (ACLs) to control both authentication and authorization for your cluster. ACLs allow you to define different permissions for different users in the same cluster.
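
A hedged sketch of defining two users with different access strings and grouping them into an ACL through the MemoryDB API follows; user names, passwords, and access strings are illustrative.

```python
# Sketch: defining ACL users with different permissions via the MemoryDB API.
# User names, passwords, and access strings are illustrative placeholders.
import boto3

memorydb = boto3.client("memorydb", region_name="us-east-1")

# A read-only user limited to read commands on all keys.
memorydb.create_user(
    UserName="reporting-user",
    AccessString="on ~* +@read",
    AuthenticationMode={"Type": "password", "Passwords": ["example-password-16chars"]},
)

# A full-access user for the application.
memorydb.create_user(
    UserName="app-user",
    AccessString="on ~* +@all",
    AuthenticationMode={"Type": "password", "Passwords": ["another-example-password"]},
)

# Group both users into an ACL that a cluster can reference at creation time.
memorydb.create_acl(ACLName="my-acl", UserNames=["reporting-user", "app-user"])
```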

Integration with Kubernetes

AWS Controllers for Kubernetes (ACK) for MemoryDB allows you to define and use MemoryDB resources directly from your Kubernetes cluster. This lets you take advantage of MemoryDB to support your Kubernetes applications without needing to define MemoryDB resources outside of the cluster or run and manage in-memory database capabilities within the cluster. You can download the MemoryDB ACK container image from Amazon Elastic Container Registry (Amazon ECR) and refer to the documentation for installation guidance. You can also visit the blog for more detailed information.

Note: ACK for MemoryDB is now generally available. Send us your feedback on our GitHub page.


JSON support

MemoryDB provides native support for JavaScript Object Notation (JSON) documents, in addition to the data structures included in Redis OSS, at no additional cost. You can simplify application development by using the built-in commands designed and optimized for JSON documents. MemoryDB supports partial JSON document updates, as well as powerful searching and filtering using the JSONPath query language. JSON support is available when using Redis OSS 6.2 and above. For more information, see the MemoryDB documentation.
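
A hedged sketch using redis-py's JSON commands follows; the endpoint is a placeholder, and a cluster-aware client may be preferable for multi-shard clusters.

```python
# Sketch: working with JSON documents using redis-py's JSON commands and
# JSONPath, against a cluster running Redis OSS 6.2+ compatibility.
# The endpoint is a placeholder.
import redis

r = redis.Redis(
    host="clustercfg.my-cluster.example.memorydb.us-east-1.amazonaws.com",  # placeholder
    port=6379,
    ssl=True,
    decode_responses=True,
)

# Store a JSON document under one key.
r.json().set("order:1001", "$", {
    "customer": "alice",
    "items": [{"sku": "A-1", "qty": 2}, {"sku": "B-7", "qty": 1}],
    "status": "pending",
})

# Partial update: change a single field without rewriting the document.
r.json().set("order:1001", "$.status", "shipped")

# JSONPath query: fetch only the SKUs of the ordered items.
print(r.json().get("order:1001", "$.items[*].sku"))
```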

Cost optimization

MemoryDB offers data tiering as a lower-cost way to scale your clusters up to hundreds of terabytes of capacity. Data tiering provides a price-performance option for MemoryDB by using lower-cost solid state drives (SSDs) in each cluster node in addition to storing data in memory. It is ideal for workloads that access up to 20% of their overall dataset regularly and for applications that can tolerate additional latency when accessing data on SSDs.

When using clusters with data tiering, MemoryDB is designed to automatically and transparently move the least recently used items from memory to locally attached NVMe SSDs when available memory capacity is consumed. When you access an item stored on SSD, MemoryDB moves it back to memory before serving the request. MemoryDB data tiering is available on Graviton2-based R6gd nodes. R6gd nodes have nearly 5x more total capacity (memory + SSD) and can help you achieve over 60% storage cost savings when running at maximum utilization compared to R6g nodes (memory only). Assuming 500-byte String values, you can typically expect an additional 450µs latency for read requests to data stored on SSD compared to read requests to data in memory.
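
A hedged sketch of enabling data tiering at cluster creation follows; the DataTiering flag and r6gd node type reflect the author's understanding of the CreateCluster API, and the names are placeholders.

```python
# Sketch: creating a data-tiered cluster on an r6gd node type. Data tiering
# is chosen at cluster creation; names are placeholders.
import boto3

memorydb = boto3.client("memorydb", region_name="us-east-1")

memorydb.create_cluster(
    ClusterName="my-tiered-memorydb",
    NodeType="db.r6gd.xlarge",          # Graviton2 node with local NVMe SSD
    DataTiering=True,                   # least recently used items move to SSD
    ACLName="my-acl",                   # placeholder
    SubnetGroupName="my-subnet-group",  # placeholder
    TLSEnabled=True,
)
```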

MemoryDB offers reserved nodes that allow you to save up to 55% over on-demand node prices in exchange for a usage commitment over a one- or three-year term. Reserved nodes are complementary to MemoryDB on-demand nodes and give businesses flexibility to help reduce costs. MemoryDB provides three reserved node payment options—No Upfront, Partial Upfront, and All Upfront—that allow you to balance the amount you pay upfront with your effective hourly price.

MemoryDB reserved nodes offer size flexibility within a node family and AWS Region. This means that the discounted reserved node rate is automatically applied to usage of all sizes in the same node family. Size flexibility reduces the time you need to spend managing your reserved nodes, and since you're no longer tied to a specific database node size, you can get the most out of your discount even if your database node size needs to change.
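
A hedged sketch of finding and purchasing a reserved node offering through boto3 follows; the method and parameter names reflect the author's understanding of the MemoryDB reserved-nodes API and should be verified against the current SDK documentation.

```python
# Sketch: browsing and purchasing a reserved node offering via boto3.
# Method/parameter names are the author's understanding of the MemoryDB
# reserved-nodes API; verify before use. Identifiers are placeholders.
import boto3

memorydb = boto3.client("memorydb", region_name="us-east-1")

# Find one-year offerings for a given node type.
offerings = memorydb.describe_reserved_nodes_offerings(
    NodeType="db.r6g.large",
    Duration="31536000",   # one year, expressed in seconds
)
offering_id = offerings["ReservedNodesOfferings"][0]["ReservedNodesOfferingId"]

# Purchase the offering for two nodes.
memorydb.purchase_reserved_nodes_offering(
    ReservedNodesOfferingId=offering_id,
    NodeCount=2,
    ReservationId="my-reservation",  # placeholder identifier
)
```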