What if you had very important information that you needed to safeguard against corruption? What if you couldn’t afford to lose access to it? How should you go about storing it so that your data maintains its integrity whilst always being available when you need it? To answer these questions, we at KeyDB turned to the Raft consensus algorithm, which we plan to implement in KeyDB in a future release.
A long-requested feature that Redis does not implement is the ability to expire individual members of data types with submembers, such as SET and HASH. Redis' rationale for omitting this feature makes sense for Redis, but KeyDB is focused on delivering a high-performance product that is easy to use. Implementing this functionality without a built-in command is hard, so adding it just made sense.
KeyDB's initial attempt at adding subkey expires was straightforward: for each key with an expire, we added a vector of potential subkey expires. However, this led to certain performance issues. In this blog post we look at the root cause of those issues and how we used more complex data structures, such as hashtables, to solve them.
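To illustrate the core of the problem (a toy Python sketch, not KeyDB's actual C++ internals; the names here are hypothetical): finding a subkey's expire in a vector requires a linear scan over every entry, while a hashtable lookup takes constant time on average.

```python
# Hypothetical illustration of the two storage layouts for subkey expires.

# Layout 1: a vector (list) of (subkey, expire_timestamp) pairs.
# Looking up one subkey's expire is O(n) — a full linear scan.
vector_expires = [(f"member{i}", 1000.0 + i) for i in range(100_000)]

def vector_lookup(subkey):
    for member, ts in vector_expires:  # scans entry by entry
        if member == subkey:
            return ts
    return None

# Layout 2: a hashtable (dict) keyed by subkey.
# Lookup is O(1) on average, regardless of how many subkeys exist.
hash_expires = {f"member{i}": 1000.0 + i for i in range(100_000)}

def hash_lookup(subkey):
    return hash_expires.get(subkey)

# Both layouts hold the same data; only the access cost differs.
assert vector_lookup("member99999") == hash_lookup("member99999")
```

With many subkey expires per key, the vector's linear scan dominates hot paths like expiration checks, which is exactly where a hashtable pays off.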
Have you ever worried about how your database will react when you get that major traffic spike? Or whether you can sustain high performance for a growing number of daily active users? When you have software that's capable of reaching blazing speeds, every part of your setup needs to work in tandem to support it.
KeyDB is fast, which means we often run into these issues. It can be difficult to reach optimal performance due to various hardware bottlenecks, which is frustrating to users. Hardware issues can be tough to debug, though, and we need a way to diagnose these issues consistently. In our pursuit of a general solution to this problem, here's what we found.
When you're setting up your database, Cron is often not far away... so for low latency calls, why not have the database handle this?
We recently introduced the KEYDB.CRON command, which enables users to execute Lua scripts via a scheduler at a specific time and/or interval. This functionality is persisted and stored locally, bringing scheduling into the database instead of relying on a third-party tool.
We were extremely excited about TLS (Transport Layer Security) support, which arrived in the ‘6.0’ versions of Redis and KeyDB. TLS database connections are part of a continuing trend towards defense in depth that has been a long time in the making, beginning with Google encrypting links between its datacenters in 2013.
Unfortunately with Redis, TLS came with a big hit to performance, ranging from 36-61%. While security is important, trading it off against performance may not always be a viable compromise. We thought carefully about the TLS implementation in KeyDB to try and prevent our users from experiencing this. By taking advantage of KeyDB’s multithreaded architecture we were able to maintain performance, achieving nearly 1M ops/sec, over 7X faster than the Redis TLS implementation.
SCAN is a powerful tool for querying data, but its blocking nature can destroy performance when used heavily. KeyDB has changed the nature of this command in its Enterprise/Cloud edition, allowing orders of magnitude performance improvement!
This article looks at the limitations of using the SCAN command and the effects it has on performance. It demonstrates the major performance degradation that occurs with Redis, and it also looks at how KeyDB Enterprise solved this by implementing an MVCC (multi-version concurrency control) architecture that allows SCAN to be non-blocking, avoiding any negative impact on performance.
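To illustrate the idea behind non-blocking scans (a toy Python sketch under MVCC-style semantics, not KeyDB Enterprise's actual implementation; all names here are hypothetical): a scan iterates over an immutable snapshot of the keyspace, so concurrent writes neither block the scan nor change what it sees.

```python
class MVCCStore:
    """Toy multi-version store: scans read a frozen snapshot of the data."""

    def __init__(self):
        self._data = {}

    def set(self, key, value):
        # Writers mutate the live version freely; no reader lock needed.
        self._data[key] = value

    def snapshot(self):
        # A real MVCC engine would share structure copy-on-write rather
        # than copying; a shallow copy is enough to show the behavior.
        return dict(self._data)

store = MVCCStore()
for i in range(3):
    store.set(f"key{i}", i)

snap = store.snapshot()      # a "scan" begins against this version
store.set("key99", 99)       # a concurrent write lands after the snapshot

scanned = sorted(snap.keys())  # the in-flight scan never observes key99
```

The design choice is that readers pay a small cost for versioning in exchange for never holding up writers, which is what turns SCAN from a blocking operation into one that can run alongside live traffic.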
People often ask which is faster: ElastiCache, Redis, or KeyDB. With Redis 6 on the horizon with multithreaded I/O, we felt it was a good time to do a full comparison! This blog compares single-node performance of ElastiCache, open source KeyDB, and open source Redis v5.9.103 (6.0 branch). We will take a look at throughput and latency under different loads and with different tools.
I still remember driving two hours away to pick up the only Ryzen 3900X in stock “nearby”. The excitement of AMD finally breaking Intel’s monopoly on high-end CPUs was contagious. Since then it’s handled pretty much everything I’ve thrown at it, but I can’t help but feel most software I run has still been optimized for Intel only. These CPUs perform extremely well, but how much better could they be if software was optimized specifically for them?
We’ve always been excited about Arm, so when Amazon offered us early access to their new Arm-based instances we jumped at the chance to see what they could do. We are of course referring to the Amazon EC2 M6g instances powered by AWS Graviton2 processors. The performance claims made and the hype surrounding the Graviton2 had us itching to see how our high-performance database would perform.
This article compares KeyDB running on several different M5 & M6g EC2 instances to get some insight into cost, performance, and use case benefits. The numbers were quite exciting, with the AWS Graviton2 living up to the hype. We hope you enjoy!
Open source databases have a monetization problem; even the companies themselves will admit it. In announcing their new proprietary license, MongoDB claimed it was necessary or cloud vendors could “capture all the value”. Redis Labs was so concerned they felt licensing changes were necessary to maintain a “sustainable business in the cloud era”. As these companies abandon open source for proprietary licenses and direct blame elsewhere, we couldn’t shake the feeling that the problem lay not with external forces like licensing and cloud vendors, but instead with the approach taken towards monetization.