This is great news. Now I can use any CDN provider that supports S3, like bunny.net [1], which has image optimization just like Supabase does, but with better pricing and features. I have been developing with Supabase for the past two months. I would say there are still some rough corners in general, and some basic features are missing. For example, Supabase Storage has no direct support for metadata [2][3]. Overall I like the launch week and the development they are doing, but more attention to basic features and little details is needed, because implementing workarounds for basic stuff is not ideal.

[1] https://bunny.net/
[2] https://github.com/orgs/supabase/discussions/5479
[3] https://github.com/supabase/storage/issues/439

I like to Lob my BLOBs into PG's storage. You need that 1-2TB of RDS storage for the IOPS anyway; might as well fill it up. Large object crew, who's with me?!

A hero appears! The client I use currently, npgsql, supports proper streaming so I've created a FS->BLOB->PG storage abstraction. Streamy, dreamy goodness. McStreamy.
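For anyone curious what that abstraction looks like outside of .NET, here is a minimal Python sketch of the same idea, with psycopg2's large-object support standing in for npgsql (the chunked-copy helper is generic and works on any pair of binary streams):

```python
import io


def stream_copy(src, dst, chunk_size=64 * 1024):
    """Copy one file-like object to another in fixed-size chunks.

    Generic over binary streams, so the same helper serves
    FS -> BLOB -> PG and the reverse direction.
    """
    total = 0
    while True:
        chunk = src.read(chunk_size)
        if not chunk:
            break
        dst.write(chunk)
        total += len(chunk)
    return total


def upload_to_large_object(conn, path):
    # Assumption: psycopg2 is installed and `conn` is an open connection.
    # conn.lobject(0, "wb") creates a new large object for writing.
    lo = conn.lobject(0, "wb")
    with open(path, "rb") as f:
        stream_copy(f, lo)
    lo.close()
    conn.commit()
    return lo.oid  # store this OID in a table to find the blob later
```

Because `stream_copy` never materializes the whole file, memory use stays flat regardless of blob size.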

> neutered

Can you share more? There is nothing missing; it's all the same code that we run in production (besides some login changes to keep it decoupled from the platform).

Off the top of my head: reports and multiple projects. Also, I kept getting weird "fetch failed" errors when creating rules or adding users from the GUI while working behind a reverse proxy, so I gave up in the end.

Yes indeed! I would call it S3 on steroids!

Currently it happens to be S3-to-S3, but you could write an adapter, say for Google Cloud Storage, and it becomes S3 -> GoogleCloudStorage, or any other kind of underlying storage. Additionally, we provide a special way of authenticating to Supabase S3 using the SessionToken, which allows you to scope S3 operations to your user's specific access control: https://supabase.com/docs/guides/storage/s3/authentication#s...
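As a sketch of what that session-token flow could look like from a client, here is a minimal boto3 example. The endpoint URL and credential values below are placeholders, not the real scheme - consult the linked docs for the actual values your project uses:

```python
def s3_client_kwargs(endpoint, region, access_key, secret_key, session_token=None):
    """Assemble keyword arguments for an S3 client.

    Passing the user's JWT as `session_token` is what (per the linked
    Supabase docs) scopes S3 operations to that user's access rules.
    """
    kwargs = {
        "endpoint_url": endpoint,
        "region_name": region,
        "aws_access_key_id": access_key,
        "aws_secret_access_key": secret_key,
    }
    if session_token is not None:
        kwargs["aws_session_token"] = session_token
    return kwargs


def make_user_scoped_client(user_jwt):
    # Assumption: boto3 is installed; endpoint and keys are placeholders.
    import boto3
    return boto3.client("s3", **s3_client_kwargs(
        endpoint="https://PROJECT_REF.supabase.co/storage/v1/s3",
        region="us-east-1",
        access_key="PLACEHOLDER_ACCESS_KEY",
        secret_key="PLACEHOLDER_SECRET_KEY",
        session_token=user_jwt,
    ))
```

The appeal of this design is that the same RLS policies guarding the Postgres-mapped storage paths also govern raw S3 calls, so you don't maintain two authorization systems.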

Gentle reminder here that S3 compatibility is a sliding window, and without further couching of the term it's more of a marketing term than anything for vendors.

What do I mean by this? You can go to cloud vendor Foo and they will tell you they offer S3-compatible APIs or clients, but then you find out they only support the most basic operations, maybe 30% of the API. Vendor Bar might support 50% of the API, and Baz 80%. In a lot of cases, if your use case is simple, 30% is enough - you're mostly doing the common GET and PUT operations. But all it takes is one unsupported call in your desired workflow to rule out that vendor as an option until such time as that API is supported.

My main beef with this is that there's usually no easy way to tell, unless the vendor provides a support matrix that you can map to the operations you need, like this one: https://docs.storj.io/dcs/api/s3/s3-compatibility. If no such matrix is provided on both the client side and the server side, you have no easy way of knowing whether it will even work without wiring things up and actually executing the code.

One thing to note: it's quite unrealistic for vendors to strive for 100% compatibility - there's some AWS-specific stuff in the API that will basically never be relevant for anyone other than AWS. But the current Wild West situation could stand some significant improvement.
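Absent a vendor-provided support matrix, one pragmatic workaround is to probe the operations you need empirically. A minimal sketch - the set of "unsupported" error codes below is an assumption, since vendors vary even in how they report unimplemented calls:

```python
def is_unsupported(error_code):
    # Codes S3-compatible servers commonly return for unimplemented calls;
    # this set is an assumption, not part of any standard.
    return error_code in {"NotImplemented", "MethodNotAllowed", "OperationNotSupported"}


def probe(operations):
    """Build a crude, empirical support matrix.

    `operations` maps a name to a zero-argument callable that performs one
    S3 call via your client. A call that fails for some other reason
    (e.g. AccessDenied) still proves the operation exists, so it counts
    as supported.
    """
    matrix = {}
    for name, call in operations.items():
        try:
            call()
            matrix[name] = True
        except Exception as exc:
            code = getattr(exc, "response", {}).get("Error", {}).get("Code", "")
            matrix[name] = not is_unsupported(code)
    return matrix
```

Running this once per candidate vendor against the exact calls in your workflow turns "wiring things in and hoping" into a repeatable check.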

We don't have any plans to get bought. We only have plans to keep pushing open standards/tools - hopefully we have enough of a track record here that it doesn't feel like lip service.

That's a neat way of thinking about it. Thanks for an awesome product. Please also never get bought or make plans to in the future - or if you really, really, really have to, then please not by Google.

We know, but it screws over your existing customers when a very helpful tool is turned over to a greedy investment firm that's going to gut the product seeking the highest return.

Firebase was such a terrible loss. I had been following it quite closely on its mailing list until the Google takeover; then it seemed like progress slowed to a halt. Also, having big brother watching a common bootstrap framework's data like that, used by countless MVP apps, doesn't exactly inspire confidence - but that's of course why they bought it.

At the time, the most requested feature was a push notification mechanism, because implementing that on iOS had a steep learning curve and was not cross-platform. Then probably some more advanced rules, to enable more functional-style permissions, possibly with variables, although they had just rolled out an upgraded rules syntax. A symlink metaphor for nodes might also have been nice, so that subtrees could reflect changes to others like a spreadsheet, for auto-normalization without duplicate logic. And they hadn't yet implemented an incremental/diff mechanism to only download what's needed at app startup, so larger databases could be slow to load. I don't remember if writes were durable enough to survive driving through a tunnel and relaunching the app while disconnected from the internet, either. I'm going from memory and am surely forgetting something.

Does anyone know if any/all of these issues have been fixed yet? I'd bet money that the more obvious ones from a first-principles approach have not, because ensh!ttification. Nobody's at the wheel to implement these things, and of course there's no budget for them anyway, because the trillions of dollars go to bowing to advertisers or training AI or whatnot.

IMHO, the one true mature web database will:

- be distributed via something like Raft
- have rich access rules
- be log-based, with (at least) SQL/HTTP/JSON interfaces to the last-known state and access to the underlying set selection/filtering/aggregation logic/language
- support nested transactions, or have all equivalent use cases provided by atomic operations with examples
- be fully indexed by default, with no penalty for row- or column-based queries (to support both table- and document-oriented patterns, and even software transactional memories - STMs)
- have column and possibly even row views (not just table views)
- use a copy-on-write mechanism internally, like Clojure's STM, for mostly O(1) speed
- be evented, with smart merge conflicts to avoid excessive callbacks
- preferably use a synchronized clock timestamp ordered lexicographically: https://firebase.blog/posts/2015/02/the-2120-ways-to-ensure-...

I'm not even sure that the newest UUID formats get that last point right: https://uuid6.github.io/uuid6-ietf-draft/

Loosely, this next-gen web database would be ACID enough for business and realtime enough for gaming - probably through an immediate event callback for dead reckoning, with an optional "final" argument to know when the data has reached consensus and was committed, with visibility based on the rules system. Basically as fast as Redis, but durable.

A runner-up was the now (possibly) defunct RethinkDB. Also PouchDB/PouchBase, a web interface for CouchDB. I haven't had time to play with Supabase yet, so any insights into whether it can do these things would be much appreciated!

I've never used Supabase before, but I'm quite comfortable with their underlying stack. I use a combination of Postgres, PostgREST, PLv8, and Auth0 to achieve nearly the same thing.
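For illustration, a stack like that mostly boils down to authenticated HTTP calls against PostgREST. A minimal stdlib-only sketch - the base URL and JWT are placeholders, while the `column=op.value` filter encoding is PostgREST's own convention:

```python
import json
from urllib.parse import urlencode
from urllib.request import Request, urlopen


def postgrest_url(base, table, **filters):
    """Build a PostgREST query URL.

    PostgREST encodes filters as column=op.value query parameters,
    e.g. /todos?status=eq.active
    """
    if not filters:
        return f"{base}/{table}"
    return f"{base}/{table}?{urlencode(filters)}"


def fetch_rows(base, table, jwt, **filters):
    # Assumption: the PostgREST instance is configured to validate
    # Auth0-issued JWTs; `base` and `jwt` are placeholders.
    req = Request(
        postgrest_url(base, table, **filters),
        headers={"Authorization": f"Bearer {jwt}"},
    )
    with urlopen(req) as resp:
        return json.load(resp)
```

With RLS (or PostgREST's role switching) on the database side, the JWT's claims decide which rows come back, which is the same shape of guarantee Supabase provides out of the box.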

Presigned URLs are useful because the client app can upload/download directly from S3, saving the app server from that traffic. Does Row-Level Security achieve the same benefit?
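For reference, this is roughly how a presigned GET is minted with boto3 against any S3-compatible endpoint - the signing happens locally with no network call, so it costs the app server almost nothing (bucket and key below are placeholders):

```python
def presign_params(bucket, key, expires=3600):
    """Parameters for a presigned GET, kept separate so they are easy to test."""
    return {
        "ClientMethod": "get_object",
        "Params": {"Bucket": bucket, "Key": key},
        "ExpiresIn": expires,
    }


def presigned_get(client, bucket, key, expires=3600):
    # Assumption: `client` is a boto3 S3 client pointed at an
    # S3-compatible endpoint. generate_presigned_url signs locally,
    # so the heavy byte transfer happens client<->storage, not via
    # your app server.
    p = presign_params(bucket, key, expires)
    return client.generate_presigned_url(
        p["ClientMethod"], Params=p["Params"], ExpiresIn=p["ExpiresIn"]
    )
```

The trade-off versus RLS-gated access is that a presigned URL bakes authorization into the URL for its lifetime, whereas a per-request policy check can be revoked immediately.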

I just finished implementing S3 file upload in Next.js to Cloudflare R2 with a Supabase backend. Wish I had been lazy and waited a day!

You specifically say "for large files". What's your bandwidth and latency like for small files (e.g. 20-20480 bytes), and how does it compare to raw S3's bandwidth and latency for small files?
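A fair comparison needs more than a single request, since small-object latency is dominated by per-request overhead rather than bandwidth. A rough harness one could point at both endpoints (the p95 is a simple index-based approximation):

```python
import statistics
import time


def summarize(samples_ms):
    """Median and approximate p95 of latency samples, in milliseconds."""
    xs = sorted(samples_ms)
    p95 = xs[int(0.95 * (len(xs) - 1))]  # index-based approximation
    return {"median": statistics.median(xs), "p95": p95}


def time_small_gets(get_fn, n=50):
    # `get_fn` should fetch one small object; pass a closure over
    # whichever client/endpoint you want to compare (raw S3 vs. the
    # Supabase S3 endpoint) and run the same object size through both.
    samples = []
    for _ in range(n):
        t0 = time.perf_counter()
        get_fn()
        samples.append((time.perf_counter() - t0) * 1000.0)
    return summarize(samples)
```

Comparing medians rather than means keeps one slow outlier (a cold cache, a TLS handshake) from skewing the verdict.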

At $0.10/GB of egress it's not super attractive compared to B2 or R2 for anything but trivial projects. I wish they would offer a plan with just the Postgres database. Any news on pricing of Fly PG?

I think "protocol" is appropriate here, since S3 resources are often represented by an s3:// URL, where the scheme part of the URL is conventionally used to name the protocol.

Is there any request pricing? (I could not find a mention of it on the pricing page.) It could be quite compelling for some use cases if requests are free.

I wish supabase had more default integrations with CDNs, transactional email services and domain registrars. I'd happily pay a 50% markup for the sake of having everything in one place.

> Parquet. But, I'd also like to store PDFs, CSVs, images

Yes, you can store all of these in Supabase Storage, and it will probably "just work" with the tools that you already use (since most tools are S3-compatible). Here is an example of one of our data engineers querying parquet with DuckDB: https://www.youtube.com/watch?v=diL00ZZ-q50

We're very open to feedback here - if you find any rough edges, let us know and we can work on it (GitHub issues are easiest).
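For readers who prefer code to video, querying parquet straight out of an S3-compatible store with DuckDB looks roughly like this. The endpoint and credentials are placeholders, and the httpfs settings shown are DuckDB's own, not anything Supabase-specific:

```python
def parquet_query(path, where=""):
    """Build a DuckDB query over a parquet file (local path or s3:// URL)."""
    sql = f"SELECT * FROM read_parquet('{path}')"
    if where:
        sql += f" WHERE {where}"
    return sql


def query_storage(path):
    # Assumptions: the duckdb package and its httpfs extension are
    # installed; endpoint and keys are placeholders for your project.
    import duckdb
    con = duckdb.connect()
    con.execute("INSTALL httpfs; LOAD httpfs;")
    con.execute("SET s3_endpoint='PROJECT_REF.supabase.co/storage/v1/s3';")
    con.execute("SET s3_access_key_id='ACCESS_KEY';")
    con.execute("SET s3_secret_access_key='SECRET_KEY';")
    con.execute("SET s3_url_style='path';")
    return con.execute(parquet_query(path)).fetchall()
```

Because DuckDB reads parquet column chunks over ranged requests, a query touching two columns of a wide table never downloads the whole file.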

Yes, absolutely! You can download files as streams and make use of Range requests too. The good news is that the standard API supports streams as well!
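A sketch of what a ranged read looks like through boto3 against any S3-compatible endpoint (bucket and key are placeholders):

```python
def range_header(start, end=None):
    """HTTP Range header value for bytes start..end inclusive (RFC 9110).

    Omitting `end` requests everything from `start` to the end of the object.
    """
    if end is None:
        return f"bytes={start}-"
    return f"bytes={start}-{end}"


def read_slice(client, bucket, key, start, length):
    # Assumption: `client` is a boto3 S3 client. get_object accepts a
    # Range argument and returns a StreamingBody you can read lazily.
    resp = client.get_object(
        Bucket=bucket,
        Key=key,
        Range=range_header(start, start + length - 1),
    )
    return resp["Body"].read()
```

Ranged reads are what make it practical to seek inside large videos or parquet files without downloading them whole.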

That would be cool, at least for someone building a small app with just one tiny SaaS bill to worry about. Even better: using the free tier for an app serving a very, very small niche.

Friendly reminder that Supabase is really cool, and if you haven't tried it out you should do it (everything can be self-hosted and they have generous free tiers!)

For background: we have a storage product for large files (like photos, videos, etc). The storage paths are mapped into your Postgres database so that you can create per-user access rules (using Postgres RLS)
This update adds S3 compatibility, which means that you can use it with thousands of tools that already support the protocol.
I'm also pretty excited about the possibilities for data scientists/engineers. We can do neat things like dump Postgres tables into Storage (as parquet), and you can connect DuckDB/ClickHouse directly to them. We have a few ideas that we'll experiment with to make this easy.
Let us know if you have any questions - the engineers will also monitor the discussion