(comments)

Original link: https://news.ycombinator.com/item?id=40030746

This thread discusses a project that reimplements the core of Redis on top of SQLite. Commenters share tips for getting more write throughput out of SQLite in Go (lower-level bindings, WAL mode, one connection per reader goroutine, a dedicated writer), and note that SQLite file locking still applies but that NFS mounts should be avoided because their locking is weak. The author explains that maximum throughput is not the goal at this stage; reasonable SQLite defaults (including write-ahead logging, WAL) are used, and the aim is a convenient API for common data structures backed by SQL, with a small memory footprint, ACID transactions, and an SQL interface for introspection. Compatibility questions come up: it works with Python libraries such as redis-py (and therefore partially with python-rq), but Lua scripting is not planned for 1.0. Other commenters debate whether a disk-backed Redis makes sense at all, point out Redis's strengths (single-threaded simplicity, append-only file persistence) and weaknesses (data loss under memory pressure), suggest alternatives such as a translation layer mapping Redis commands to SQL on PostgreSQL, RocksDB, ScyllaDB, or sharded Redis, and discuss use cases like caching millions of rarely changing API responses.

Related articles

Original text


I’m not sure to what degree you want to follow the Redis no concurrency “everything serialized on one thread” model.

You can get substantially better performance out of sqlite by using the lower level https://github.com/crawshaw/sqlite, turning on WAL etc, using a connection per goroutine for reads, and sending batches of writes over a buffered channel / queue to a dedicated writer thread. That way you can turn off SQLite’s built in per-connection mutex but still be thread safe since each connection is only used on a single thread at a time.
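
For reference, a minimal sketch of the write side of that pattern. It uses the standard database/sql with the mattn/go-sqlite3 driver for brevity rather than the lower-level crawshaw bindings the comment recommends, and the kv table, setOp type, and channel size are made up for illustration:

    package main

    import (
        "database/sql"
        "log"
        "time"

        _ "github.com/mattn/go-sqlite3"
    )

    // setOp is one queued write (illustrative: a simple key/value upsert).
    type setOp struct{ key, value string }

    // writer drains the channel and applies queued writes in batched
    // transactions; only this goroutine ever touches the write connection,
    // so SQLite's single-writer rule is respected without extra locking.
    func writer(db *sql.DB, ops <-chan setOp) {
        for op := range ops {
            batch := []setOp{op}
        drain:
            for {
                select {
                case next := <-ops:
                    batch = append(batch, next)
                default:
                    break drain
                }
            }
            tx, err := db.Begin()
            if err != nil {
                log.Println("begin:", err)
                continue
            }
            for _, b := range batch {
                if _, err := tx.Exec(
                    `INSERT INTO kv (key, value) VALUES (?, ?)
                     ON CONFLICT (key) DO UPDATE SET value = excluded.value`,
                    b.key, b.value); err != nil {
                    log.Println("exec:", err)
                }
            }
            if err := tx.Commit(); err != nil {
                log.Println("commit:", err)
            }
        }
    }

    func main() {
        db, err := sql.Open("sqlite3", "file:kv.db?_journal_mode=WAL")
        if err != nil {
            log.Fatal(err)
        }
        db.SetMaxOpenConns(1) // one dedicated write connection
        if _, err := db.Exec(`CREATE TABLE IF NOT EXISTS kv (key TEXT PRIMARY KEY, value TEXT)`); err != nil {
            log.Fatal(err)
        }

        ops := make(chan setOp, 1024) // buffered write queue
        go writer(db, ops)
        ops <- setOp{"hello", "world"}
        time.Sleep(100 * time.Millisecond) // demo only: let the writer flush
    }

Read connections (one per goroutine, as the comment suggests) would be opened separately and never passed to the writer.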

For this use-case you will also probably save a lot of time if you use some large arena-style buffers (probably N per conn?) and copy incoming parameter bytes from the network request/socket to the buffer, or copy straight from sqlite out to the socket, instead of allocating and passing around a bunch of individual strings. Boxing those strings in interface{} (as done by the high level sql stdlib) slows things down even more.

None of this is necessary to get usable perf, even decently good perf, just sharing some tips from my experience trying to get absolutely maximum write throughput from SQLite in Golang.



> You can get substantially better performance out of sqlite by using the lower level https://github.com/crawshaw/sqlite, turning on WAL etc, using a connection per goroutine for reads, and sending batches of writes over a buffered channel / queue to a dedicated writer thread. That way you can turn off SQLite’s built in per-connection mutex but still be thread safe since each connection is only used on a single thread at a time.

Would this protect against a row update in the middle of a read? e.g. would a row at least be internally consistent at the time it was read?



Yes you would still get SQLite file locking; the thing you aren't using is SQLite thread locking around its own structures (according to GP, I don't know myself). But be sure not to use NFS mount for the files, the locking there is not that great.


Great tips, thank you! The thing is, getting maximum throughput is not the goal of the project (at least not at this stage). I'm using reasonable SQLite defaults (including WAL), but that's it for now.


What are the project goals? I assume it's a drop-in replacement for Redis that is supposed to be better in certain cases? If yes, then what cases do you have in mind?


The goal is to have a convenient API to work with common data structures, with an SQL backend and all the benefits it provides. Such as:

— Small memory footprint even for large datasets.

— ACID transactions.

— SQL interface for introspection and reporting.



I can tell you my use case. I have a lot of systems which I'd like to use on a single computer AND at scale.

That's sometimes even just for development work.

A lot of these use a common API to a more complex distributed store, as well as to something simple like files on disk, in memory, or SQLite.

In most cases, it's one user at a time, so performance doesn't matter, but simplicity does.

It can also be for the project which has a 1 percent chance of going viral.

Etc. But I find relatively few cases between truly small scale and large scale.



I love this, it’s the solution that makes sense for 90% of the times I have used redis with python.

I’ve made several versions of this, and to be honest, it ended up being so straightforward that I assumed it was a trivial solution.

This is pretty well-planned. This is 100% the way to go.

Heh. I took a detour into making my idea of “streams” also solve event sourcing in native python; dumb idea, if interesting. Mission creep probably killed my effort!

Nice work



That's pretty cool. Reckon it would work with existing code that calls Redis over the wire for RQ?
  https://python-rq.org
This RQ stuff has been a pain with a recent project because only Python seems to use it, so once an RQ job has been submitted only Python based things can do anything with it. :(

If Redka works as a backend replacement, we could potentially have non-Python things check the SQLite database instead.



It works with redis-py (which python-rq uses), but I doubt it will be any good in this case. python-rq seems to use Lua scripting in Redis, which is not planned for 1.0. I'd rather not add it at all, but we'll see.


At that point, why use Redis at all? You can use any DBMS you want, relational or NoSQL. The advantage of Redis is that it is an in-memory cache; if you take the memory out of it, just use Postgres or whatever DBMS you are already using (I say Postgres because it has all the features of Redis).


Postgres has nowhere near all the features of Redis. Go and have a look at the Redis command documentation. They're not even really similar at all, once you get past basic GET/SET stuff.


I feel like one could implement most Redis commands as functions or PL/SQL using native Postgres hstore and json. Could be an interesting translation layer.
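
Not PL/pgSQL, but a rough sketch of the same translation-layer idea done client-side in Go, mapping GET/SET onto SQL against a plain key/value table. The kv table, DSN, and Store type are invented for illustration, and a real layer would also have to speak the Redis wire protocol:

    package main

    import (
        "database/sql"
        "log"

        _ "github.com/lib/pq" // Postgres driver
    )

    // Store maps a few Redis-style commands onto SQL statements.
    type Store struct{ db *sql.DB }

    // Set behaves roughly like Redis SET: upsert the value for a key.
    func (s *Store) Set(key, value string) error {
        _, err := s.db.Exec(
            `INSERT INTO kv (key, value) VALUES ($1, $2)
             ON CONFLICT (key) DO UPDATE SET value = EXCLUDED.value`,
            key, value)
        return err
    }

    // Get behaves roughly like Redis GET: return the value, or "" if missing.
    func (s *Store) Get(key string) (string, error) {
        var v string
        err := s.db.QueryRow(`SELECT value FROM kv WHERE key = $1`, key).Scan(&v)
        if err == sql.ErrNoRows {
            return "", nil
        }
        return v, err
    }

    func main() {
        db, err := sql.Open("postgres", "postgres://localhost/demo?sslmode=disable")
        if err != nil {
            log.Fatal(err)
        }
        if _, err := db.Exec(`CREATE TABLE IF NOT EXISTS kv (key TEXT PRIMARY KEY, value TEXT)`); err != nil {
            log.Fatal(err)
        }
        s := &Store{db: db}
        if err := s.Set("greeting", "hello"); err != nil {
            log.Fatal(err)
        }
        v, _ := s.Get("greeting")
        log.Println(v) // hello
    }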


This is silly, Postgres doesn’t speak the Redis wire protocol. You will need a large army of connection proxies to get a Postgres database to handle the number of connections a single Redis shrugs off with no sweat.

Maybe you like this answer more: At the end of the day you can embed a bunch of Turing-complete programming languages in Postgres, and Postgres can store binary blobs, so Postgres can do literally anything. Can it do it performantly, and for low cost? Probably not. But if you put in enough time and money I'm sure you can re-implement Redis on Postgres using BLOB columns alone.

Here's a simpler answer: a cuckoo filter is available out of the box in Redis; after 2 seconds of Googling I didn't find one for Postgres: https://redis.io/docs/latest/develop/data-types/probabilisti...



No, that's a completely different thing - an index access method, building a bloom filter on multiple columns of a single row. Which means a query then can have an equality condition on any of the columns.

That being said, building a bloom/cuckoo filter as a data type would be quite trivial - a basic version might take an hour or two, the more advanced stuff (to support partial aggregates etc) might take a bit more. Ultimately, this is not that different from what postgresql-hll does.



As usual, there is a spectrum of data safety vs. performance. Redis is at the "very fast, but unsafe" side of the scale.

ScyllaDB, for me, is in the middle: a high-performance key-value store, but without real transaction support. FoundationDB is another one that I would consider.



Depends on the kind of safety you’re looking for. Redis is entirely safe from concurrency issues because it’s single-threaded. It supports an append-only file for persistence to disk.


Redis drops data semi-randomly when under memory pressure.

If you use Redis for queue tasks (this is popular in Rails and Django/Python web services), that means that during an incident where your queue jobs are getting added faster than they’re removed, you’re going to lose jobs if the incident goes on long enough.



Well, of course! Redis is not (and has never been) a database, it's a data structure server, at best described as a cache. If jobs are added faster than they're removed, this is straight queueing theory 101 -- ideally you'd reject jobs at add-time, but otherwise you have to drop them.


Right. I think Redis hitting the disk would be a terrible tradeoff compared to making a new backend call. It probably wouldn't save you much time, and I imagine it would lead to very strange and unpredictable behavior on the front end, or when trying to debug latency or data issues downstream.


Since Redis is an in-memory cache, and already doesn't guarantee the data, would it make sense to set PRAGMA synchronous to OFF in Redka to boost performance to something closer to stock Redis?
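
For context, synchronous is a per-connection SQLite pragma, so in principle a store like this could expose it as a configuration option. A hypothetical Go snippet showing the tradeoff (not an option Redka is documented to have):

    package main

    import (
        "database/sql"
        "log"

        _ "github.com/mattn/go-sqlite3"
    )

    func main() {
        db, err := sql.Open("sqlite3", "cache.db")
        if err != nil {
            log.Fatal(err)
        }
        // synchronous = OFF skips fsync entirely: much faster writes, but a
        // crash or power loss can drop recent data, which may be acceptable
        // for cache-like workloads that can be rebuilt.
        // With a connection pool you would set this per connection, e.g.
        // via the driver DSN, rather than on a single connection as here.
        if _, err := db.Exec("PRAGMA synchronous = OFF"); err != nil {
            log.Fatal(err)
        }
    }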


The use case is caching 20 million API responses that almost never change, each about 20 KB of JSON, for a high-traffic site.

Yes, I can pay for a 400 GB RAM instance of Redis, but it's expensive.

I can also cache it on disk, but then I need to think about cache expiration myself.

Or I can use something appropriate like a document database, but then I need additional code & additional configuration because we otherwise don't need that piece of infrastructure in our stack.

It would be a lot easier if I could just store it in Redis with the other (more reasonably sized) things that I need to cache.



In other abuses of SQLite, I wrote a tool [0] that exposes blobs in SQLite via an Amazon S3 API. It doesn't do expiry (but that would be easy enough to add if S3 does it).

We were using it to manage millions of images for machine learning, as many tools support S3 and the ability to add custom metadata to objects is useful (harder with files). It is one SQLite database per bucket, but at the bucket level it is transactional.

0: https://github.com/seddonm1/s3ite



A few things:

Redis Data Tiering - Redis Enterprise and AWS Elasticache for Redis support data tiering (using SSD for 80% of the dataset and moving things in and out). On AWS, a cache.r6gd.4xlarge with 100GB of memory can handle 500GB of data.

Local Files

> I can also cache it on disk, but then I need to think about cache expiration myself.

Is the challenge that you need it shared among many machines? On a single machine you can put 20 million files in a directory hierarchy and let the fs cache keep things hot in memory as needed. Or use SQLite which will only load the pages needed for each query and also rely on the fs cache.
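
A sketch of that single-machine, files-on-disk option; the two-level hash-prefix directory layout is just one common convention, and all names here are illustrative:

    package main

    import (
        "crypto/sha256"
        "encoding/hex"
        "fmt"
        "os"
        "path/filepath"
    )

    // cachePath spreads keys over a two-level directory tree so no single
    // directory holds millions of entries, e.g. cache/ab/cd/abcd1234....json
    func cachePath(root, key string) string {
        sum := sha256.Sum256([]byte(key))
        h := hex.EncodeToString(sum[:])
        return filepath.Join(root, h[:2], h[2:4], h+".json")
    }

    func put(root, key string, body []byte) error {
        p := cachePath(root, key)
        if err := os.MkdirAll(filepath.Dir(p), 0o755); err != nil {
            return err
        }
        return os.WriteFile(p, body, 0o644)
    }

    func get(root, key string) ([]byte, error) {
        return os.ReadFile(cachePath(root, key))
    }

    func main() {
        if err := put("cache", "/api/v1/items/42", []byte(`{"id": 42}`)); err != nil {
            panic(err)
        }
        body, err := get("cache", "/api/v1/items/42")
        if err != nil {
            panic(err)
        }
        fmt.Println(string(body))
    }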

S3 - An interesting solution is one of the SQLite S3 VFS's. Those will query S3 fairly efficiently for specific data in a large dataset.



This looks like a good use case for ScyllaDB with Compression and TTL. It is pretty simple to set up a single-node instance.

If you rather have something in-process and writes to disk, to avoid extra infrastructure, I would also recommend RocksDB with Compression and TTL.



Or shard it - divide your objects up based on some criteria (hash the name of the object, use the first N digits of the hash to assign to a shard), and distribute them across multiple redis instances. Yes, you then need to maintain some client code to pick the right redis instance to fetch from, but you can now pick the most $/memory efficient instance types to run redis, and you don't have to worry about introducing disk read latency and the edge cases that brings with it.

Edit: looks like redis has some built-in support for data sharding when used as a cluster (https://redis.io/docs/latest/commands/cluster-shards/) - I haven't used that, so not sure how easy it is to apply, and exactly what you'd have to change.
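
A sketch of the client-side shard picking described above (instance addresses and hash choice are illustrative; as the edit notes, Redis Cluster can also do this for you, and plain modulo hashing remaps most keys whenever the instance list changes):

    package main

    import (
        "fmt"
        "hash/fnv"
    )

    // shardFor picks one of the Redis instances for a given key by hashing
    // the key and taking the result modulo the number of shards. Consistent
    // hashing would avoid remapping most keys when instances are added.
    func shardFor(key string, addrs []string) string {
        h := fnv.New32a()
        h.Write([]byte(key))
        return addrs[h.Sum32()%uint32(len(addrs))]
    }

    func main() {
        addrs := []string{
            "redis-1.internal:6379",
            "redis-2.internal:6379",
            "redis-3.internal:6379",
        }
        fmt.Println(shardFor("api:response:42", addrs))
    }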



You're trying to get redis to be what it isn't. Use a thing that has the properties you want: a document or relational database. If you insist on this then running a system that allows a ton of swap onto a reasonably fast disk might work, but is still gonna perform worse than a system that's designed for concurrently serving queries of wildly differing latencies.


Have you looked at Varnish for caching API responses? Varnish lets you back it with disk and relies on the page cache to keep more-accessed items in memory.

If the reverse proxy thing doesn't work, I think memcached has two-level storage like that now, IIRC.



This is state of the art. It will take time before TigerBeetle themselves release their state machine as a library.

They have done some incredible work, implementing things I didn't know existed in software/hardware.



The entire value proposition of Redis is that it operates out of memory, and therefore has memory-like performance. (edit: And doesn't provide the benefit of, and therefore pay the costs related to, ACID-like consistency guarantees.) If you move it to disk (edit: Or try to assert ACID-like consistency or transactional guarantees) there's little reason to use Redis any more.


>The entire value proposition of Redis is that it operates out of memory

Not familiar with Redis specifically, but I doubt this idea. You can run anything on top of a ramdisk (granted, you can save a few pointer additions and get rid of some safety checks if you know you're working with memory)



Sure you can run things off a ramdisk, but the way you lay out data to achieve high performance from disk vs from RAM is different (disk assumes that you read pages of data at once, and tries to avoid reading extra pages, while RAM assumes that you read cache lines of data at once).


By the way, I noticed you're using SetMaxConnections(1); however, in WAL mode (which you're using) SQLite does support writes that don't block reads, so you might benefit from allowing read concurrency (in theory).


Yeah, it's explained in the code[1]

SQLite only allows one writer at a time, so concurrent writes will fail with a "database is locked" (SQLITE_BUSY) error.

There are two ways to enforce the single writer rule:

1. Use a mutex for write operations.

2. Set the maximum number of DB connections to 1.

Intuitively, the mutex approach seems better, because it does not limit the number of concurrent read operations. The benchmarks show the following results:

- GET: 2% better rps and 25% better p50 response time with mutex

- SET: 2% better rps and 60% worse p50 response time with mutex

Due to the significant p50 response time mutex penalty for SET, I've decided to use the max connections approach for now.

[1]: https://github.com/nalgeon/redka/blob/main/internal/sqlx/db....



How about having two pools, one for writes only and the other for reads? SQLite allows you to open the DB from more than one thread per application, so you can have a read pool and a write pool with SetMaxConnections(1) for better performance. This of course also means that reads should be handled separately from writes in the API layer too.
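
A minimal sketch of the two-pool idea with database/sql, assuming SetMaxConnections in the comment corresponds to database/sql's SetMaxOpenConns and using the mattn/go-sqlite3 DSN conventions (file name and pool sizes are illustrative):

    package main

    import (
        "database/sql"
        "log"

        _ "github.com/mattn/go-sqlite3"
    )

    func main() {
        // Write pool: a single connection, so SQLite's one-writer-at-a-time
        // rule is enforced by the pool and concurrent writers simply queue up.
        writeDB, err := sql.Open("sqlite3", "file:redka.db?_journal_mode=WAL")
        if err != nil {
            log.Fatal(err)
        }
        writeDB.SetMaxOpenConns(1)

        // Read pool: several connections; in WAL mode readers do not block
        // the writer and vice versa.
        readDB, err := sql.Open("sqlite3", "file:redka.db?_journal_mode=WAL")
        if err != nil {
            log.Fatal(err)
        }
        readDB.SetMaxOpenConns(8)

        // The API layer then routes SELECTs to readDB and everything else to writeDB.
    }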


Well I agree, that's a good starting point. You probably won't be able to beat Redis with SQLite anyway :), although given that WAL mode allows for concurrent reads it might give it a large enough performance boost to match Redis in terms of QPS if the concurrency is high enough.


A lot of Redis's fantastic performance evaporates when network IO is added to the equation. Of course, you can host your own Redis, but as soon as people start using a SaaS-hosted Redis service, performance usually takes a huge hit. If this makes running your own Redis-compatible key/value store in your own cluster easier for people, that's a win.


Back when Foursquare made MongoDB famous someone posted a PoC of a NoSQL DB implemented in MySQL. It did not seem to have caught on, but it did make me think of how much performance is traded for helping us not to reinvent SQL every time we need a DB. I like experiments like this one, they sometimes lead to new projects.


Please do not post replies that are trivial spelling or grammatical corrections; they add little to the discourse.

To parent: I think all datastores can boil down to a SQL interface eventually. Someday a SQL interface may dynamically choose a bespoke backend for your schema (or even sub-schema!).



Does this or Garnet have more coverage of Redis commands? I'm looking to embed a Redis-compatible subset of functionality in a program for local debugging purposes, and there will be an API in between that can protect against Redis commands that lack coverage.


Potentially many things like session management, queues, document graphs, etc., can be done right with simple facilities like tables. Tables represent sets, and set algebra seems very common in data representations. The thing is how the sets are combined, i.e. related. This is essentially API-to-SQL in Redis clothing. Kudos to the author.


Thank you! I also think that the relational model can get you pretty far if you don't need to squeeze every last bit of performance out of the program. And the added benefit of using a battle-tested SQL engine is far fewer storage-related bugs.


> reimplement the good parts of Redis

Seems to be missing streams, hyperloglog and pubsub though, so mostly just the kv part of the side protocol with a different backend?



This is a great idea and I am glad it is BSD licensed. Unfortunately the execution is somewhat lacking. SQLite is best suited for embedded / clientside applications with minimal dependencies. The author of this project decided to use Go and make it a service.


Redis is a KV store.

With this you can run a full SQL query.

I did something similar for my small baseball statistics website switchfielder.com/baseball. On server startup, I copy the relevant tables into memory.



Because you are guaranteeing everything is in memory, so it's very fast and you aren't relying on the DB's memory caching. Also it is in-process, which (supposedly) has some benefits as well. Theoretically DBs such as Postgres can be set up similarly, although they will be out of process, and in the case of Postgres, behind at least a UNIX domain socket. This may be a difference of maybe ~ms; I'd love to see some benchmarks... maybe I'll try that when I have time. Could make a decent Medium article...

For OLAP (i.e., no writes) on small databases with complicated queries (joins, grouping, partitioning, ordering, etc.), this works very well. Yes, this is somewhat of an edge case, but I'd bet small shops could fit their entire DB into one of the bigger (several hundred GB) instances.



Hmm, am I the only one who is not worried?

Although I don't really see anything in the license change that would prevent me from using it at both home and business, Redis seems "complete" functionality-wise, so using a pre-license-change version can't hurt even long-term, I think.



I would personally not recommend implementing a Redis protocol on top of SQLite, as I've seen too many failed cases like this. Users may perceive your product as a drop-in Redis replacement, and it may work fine in a PoC, but once it hits production, SQLite's performance and scalability will be severely challenged.

It's much better to use RocksDB as the underlying storage engine for such a solution. RocksDB can provide the performance and scalability required. If you need a distributed solution, I would suggest looking at TiKV or FoundationDB. These are both excellent distributed storage systems that can handle production workloads much better than a SQLite-based approach.
