They have been removing features from the open source version for a while.
The closest alternative seems to be RustFS. Has anyone tried it? I was waiting until they support site replication before switching.
Garage is a popular alternative to Minio. https://garagehq.deuxfleurs.fr
I hadn't heard of RustFS and it looks interesting, although I nearly clicked away based on the sheer volume of marketing wank on their main page. The GitHub repo is here: https://github.com/rustfs/rustfs
Might be coming soon based on this: https://docs.rustfs.com/features/replication/
If it's not an Apache/CNCF/Linux Foundation project, it can be a rug pull that uses open source only to get people in the door. They were never open to outside commits, and now they have abandoned open source altogether.
Sad to see these same people were behind GlusterFS.
Well, maybe they are using that experience to build something better this time around? One can hope...
Shocker... they abandoned POSIX compatibility, built a massively over-complicated product, then failed to compete with things like Ceph on the metal side or ubiquitous S3/R2/B2 on the cloud side.
I've been working on https://github.com/uroni/hs5 as a replacement with similar goals to early minio.
The core is stable at this point, but the user/policy management and the web interface are still in the works.
Looks like you cleanly point out their violation of the AGPL. I wish I were a lawyer with nothing better to do; I'd definitely be suing the MinIO group, since there's no way they can cleanly remove the AGPL code outsiders contributed.
I'm not a contributor to Minio. This is its own separate thing.
I do have a separate AGPL project (see github) where I hold nearly all of the copyright, and at some point I looked into how one would enforce it in the US. It looked pretty bleak -- it's a civil suit where you have to show damages, etc. -- but IANAL.
I did not like the FUD they were spreading about AGPL at the time since it is a good license for end-user applications.
I don't think there would be an issue with removing AGPL contributed code. You can't force someone to distribute something they don't want to. IANAL, but I believe that what (all?) copyright in software is most concerned with is the active distribution of code -- not the removal of code.
That said, if there was contributed AGPL code, they couldn't change the license on that part of the code w/o a CLA. AGPL also doesn't necessarily mean you have to make the code publicly available, just available to those that you give the program to (I'm assuming AGPL is like the GPL in this regard).
So, what I'd be curious about is -- (1) is there any contributed AGPL code in the current version? (2) what license is granted to customers of the enterprise version?
Minio can completely use whatever license they want for their own code. But if there was contributed code w/o a CLA, then I'm not sure how a commercial/enterprise license would play with contributed AGPL code. It would be an interesting question to find out.
That's definitely not how it's written or interpreted. Microsoft had to release code because they touched GPL code some years back -- I think it was for Hyper-V? We're talking about a company with many lawyers at the ready not being able to skirt the GPL in any way, such as by undoing the code.
I don't see a contributor license agreement (CLA), so you may be right.
(I personally choose not to contribute to projects with CLAs, I don't want my contributions to become closed-source in the future.)
It's worse than I thought:
https://blog.min.io/weka-violates-minios-open-source-license...
They think they can revoke someone's AGPL license. That's not at all how that license works!
Interesting! I like the relative simplicity and durability guarantees. I can see using this for dev and proof of concept. Or in situations where HA/RAID are handled lower in the stack.
What is the performance like for reads, writes, and deletes?
And just to play devil's advocate: What would you say to someone who argues that you've essentially reimplemented a filesystem?
Good time to post a Show HN for your project then
It sucks that S3 somehow became the de facto object storage interface; the API is terrible IMO. Too many headers, too many unknowns with support. WebDAV isn't any better, but I feel like we missed an opportunity here for a standardized interface.
S3 isn't JSON
it's storing a [utf8-string => bytes] mapping with some very minimal metadata. But that can be whatever you want. JSON, CBOR, XML, actual document formats etc.
And its default encoding for listing, management operations and similar is XML....
> but I feel like we missed an opportunity here for a standardized interface.
except S3 _is_ the de-facto standard interface which most object storage systems speak
but I agree it's kinda a pain
and commonly implemented only partially (both feature-wise and partially wrong). E.g. S3 stores utf8 strings, not utf8 file paths (which is what e.g. minio stores); getting that wrong seems fine but can lead to a lot of problems (not just being incompatible with some applications but also having unexpected perf. characteristics for others), making it only partially S3 compatible. Similarly, whether implementations support features like bulk delete or the `If-Match`/`If-None-Match` headers can also make them S3 incompatible for some use cases.
So yeah, a new external standard which makes it clear what you should expect to be supported to be standard compatible would be nice.
?
It's like GET <namespace>/object, PUT <namespace>/object. To me it's the most obvious mapping of HTTP to immutable object key-value storage you could imagine.
It is bad that the control plane responses can be malformed XML (e.g. keys are not escaped right if you put XML control characters in object paths), but that can be forgiven as an oversight.
It's not perfect, but I don't think it's a strange API at all.
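For illustration, a sketch of that basic request shape (bucket and key names made up; real requests to AWS additionally need SigV4 signing headers, which is where much of the complexity actually lives):

    PUT /my-bucket/path/to/object HTTP/1.1
    Host: s3.example.com
    Content-Length: 11

    hello world

    GET /my-bucket/path/to/object HTTP/1.1
    Host: s3.example.com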
Everything uses poorly documented, sometimes inconsistent HTTP headers that read like afterthoughts/tech debt. An S3-standard implementation has to have Amazon branding all over it (x-amz), which is gross.
It was better. When it first came out, it was a pretty simple API, at least simpler than alternatives (IIRC, I could just be thinking with nostalgia).
I think it's only gotten as complicated as it has as new features have been organically added. I'm sure there are good use cases for everything, but it does beg the question -- is a better API possible for object storage? What's the minimal API required? GET/POST/DELETE?
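As a thought experiment, the minimal API really can be just three verbs over a key space. A toy in-memory sketch in Go -- not any real product's code, using PUT rather than POST since object writes are idempotent, and ignoring auth, listing, and multipart entirely (which is exactly where real implementations grow):

    package main

    import (
        "io"
        "net/http"
        "sync"
    )

    // store is a toy in-memory "object store": key = URL path, value = bytes.
    type store struct {
        mu   sync.RWMutex
        data map[string][]byte
    }

    func (s *store) ServeHTTP(w http.ResponseWriter, r *http.Request) {
        switch r.Method {
        case http.MethodPut:
            // Read the body and overwrite whatever was at this key, like S3 PUT.
            b, err := io.ReadAll(r.Body)
            if err != nil {
                http.Error(w, err.Error(), http.StatusBadRequest)
                return
            }
            s.mu.Lock()
            s.data[r.URL.Path] = b
            s.mu.Unlock()
        case http.MethodGet:
            s.mu.RLock()
            b, ok := s.data[r.URL.Path]
            s.mu.RUnlock()
            if !ok {
                http.NotFound(w, r)
                return
            }
            w.Write(b)
        case http.MethodDelete:
            // Deleting a missing key succeeds, matching S3's idempotent DELETE.
            s.mu.Lock()
            delete(s.data, r.URL.Path)
            s.mu.Unlock()
        default:
            http.Error(w, "method not allowed", http.StatusMethodNotAllowed)
        }
    }

    func main() {
        http.ListenAndServe(":8080", &store{data: map[string][]byte{}})
    }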
Like everything, it started off simple, but with every feature slowly added over 19 years, Simple Storage it is not.
S3 has 3 independent auth mechanisms.
I thought the OpenStack Swift API was pretty clean, but I'm biased.
Time to fork and bring back removed features. :). An advantage of it being AGPL licensed.
I use Supabase Storage. It does S3-style signed download links (so I can switch to any S3 service if I like later).
Does anyone have any recommendations for a simple S3-wrapper to a standard dir? I've got a few apps/services that can send data to S3 (or S3 compatible services) that I want to point to a local server I have, but they don't support SFTP or any of the more "primitive" solutions. I did use a python local-s3 thing, but it was... not good.
s3proxy has a filesystem backend [0].
Possibly of interest: s3gw[1] is a modified version of ceph's radosgw that allows it to run standalone. It's geared towards kubernetes (notably part of Rancher's storage solution), but should work as a standalone container.
[0] https://github.com/gaul/s3proxy [1] https://github.com/s3gw-tech/s3gw
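If it helps, the filesystem backend is driven by a small properties file; something like the following, reconstructed from my memory of the s3proxy README, so treat the exact keys as unverified and check the repo at [0]:

    # s3proxy.conf -- expose a local directory over an S3-style endpoint
    s3proxy.endpoint=http://127.0.0.1:8080
    s3proxy.authorization=none
    jclouds.provider=filesystem
    jclouds.filesystem.basedir=/srv/objects

    # then run it:
    java -jar s3proxy --properties s3proxy.conf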
Versity Gateway looks like a reasonable option here. I haven't personally used it, but I know some folks who say it performs pretty great as a "ZFS-backed S3" alternative.
https://github.com/versity/versitygw
Unlike other options like Garage or Minio, it doesn't have any clustering, replication, erasure coding, ...
Your S3 objects are just files on disk, and Versity exposes it. I gather it exists to provide an S3 interface on top of their other project (ScoutFS), but it seems like it should work on any old filesystem.
Versity is really promising. I got a chance to meet with Ben recently at the Supercomputing conference in St. Louis and he was super chill about stuff. Big shout out to him.
He also mentioned that the minio-to-versity migration is a straightforward process. Apparently, you just read the data from minio's shadow filesystem and set it as an extended attribute on your file.
You could perhaps checkout https://garagehq.deuxfleurs.fr/
I've done some preliminary testing with garage and I was pleasantly surprised. It worked as expected and I didn't run into any gotchas.
Garage is really good for core S3; the only thing I ran into was that it didn't support object tagging. It could be considered a more esoteric corner of the S3 API, but minio does support it. Especially if you're just using it for a test API, object tagging is most likely an unneeded feature anyway.
It's a "Misc" endpoint in the Garage docs here: https://garagehq.deuxfleurs.fr/documentation/reference-manua...
Check out aistore, from NVIDIA: https://github.com/NVIDIA/aistore
It's not a fully featured S3-compatible service like MinIO, but we used it to great success as a local on-prem S3 read/write cache with AWS as the backing S3 store. This avoided expensive network egress charges, as we wanted to process data both in AWS and in a non-AWS GPU cluster (i.e. a neocloud).
rclone serve s3 could be an option.
I thought they were pivoting towards closing it and trying to monetize it?
That got backlash so now it’s just getting dropped entirely?
People get to do whatever they want, but it's a bit jarring to go from "this is worth something people will pay for" to maintenance mode in quick succession.
> I thought they were pivoting towards close it and trying to monetize this?
That's literally what the commit shows that they're doing?
> *This project is currently under maintenance and is not accepting new changes.*
> For enterprise support and actively maintained versions, please see MinIO SloppyAISlop (not actual name)
Their marketing had been shifting toward pushing an AI angle for some time now. For an established project or company, that's usually a sign that things aren't going well.
They cite a proprietary alternative they offer for enterprises. So yes they pivoted to a monetized offering and are just dropping the open source one.
So they’re pulling an OpenAI.
Start open source to get free advertising and community programmers, then dump it all for commercial licensing.
I think n8n is next because they finished the release candidate for version 2.0, but there are no changelogs.
I use this image on my VPS, it was the last update before they neutered the community version
quay.io/minio/minio:RELEASE.2025-04-22T22-12-26Z
Same here, since I'm the only one using my instance. But you should be aware that there is a CVE in that version that allows any account level to increase its own permissions to admin level, so it's inherently unsafe.
What a story. EOL the open source foundation of your commercial product, to which many people contributed, to turn it into a closed source "A-Ff*ing-I Store" .. seriously what the ...
please copy and paste outrage from previous discussions to not waste more time
https://news.ycombinator.com/item?id=45665452
Is this not the best thing that could happen? Like, now that it's in maintenance, it can be forked without any potential license change in the future, or any new features that are under that changed license... This allows anyone to continue working on this, right? Or did I miss something?
> ... it can be forked without any potential license change in the future ...
It is useful to remember that one may fork at the commit before a license change.
Pretty sure you can’t retroactively apply a restrictive license, so that was never a concern.
You can, sort of, sometimes. Copyleft is still based on copyright. So in theory you can do a new license as long as all the copyright holders agree to the change. Take open source/free/copyleft out of it:
You create a proprietary piece of software. You license it to Google and negotiate terms. You then negotiate different terms with Microsoft. Nothing so far prevents you from doing this. You can't yank the license from Google unless your contract allows that, but maybe it does. You can in theory then go and release it under a different license to the public. If that license is perpetual and non-revocable, then presumably I can use it after you decide to stop offering that license. But if the license is non-transferable, then I can't pass on your software to someone else, either by giving them a flash drive with it or by releasing it under a different license.
Several open source projects have been re-licensed. The main obstacle is that in a popular open source or copyleft project you have many contributors, each of whom holds the copyright to their patches. So you end up with a mess of trying to relicense only some parts of your codebase and replace others for the people resisting the change or those you can't reach. It's a messy process. For example, check out how the OpenStreetMap data got relicensed and what that took.
I think you are correct, but you probably misunderstood the parent.
My understanding of what they meant by "retroactively apply a restrictive license" is to apply a restrictive license to previous commits that were already distributed using a FOSS license (the FOSS part being implied by the new license being "restrictive" and because these discussions are usually around license changes for previously FOSS projects such as Terraform).
As allowing redistribution under at least the same license is usually a requirement for a license to be considered FOSS, you can't really change the license of an existing version as anyone who has acquired the version under the previous license can still redistribute it under the same terms.
Edit: s/commit/version/, added "under the same terms" at the end, added that the new license being "restrictive" contributes to the implication that the previous license was FOSS
big L for all the cloud providers that made the mistake of using it instead of forging their own path, they're kind of screwed now
Is this just the open source portion? Minio is now a fully paid product then?
"For enterprise support and actively maintained versions, please see MinIO AIStor."
Probably yes.
I've been using the minio-go client for S3-compatible storage abstraction in a project I'm working on. This change putting the minio project into maintenance mode means no new features or bug fixes, which is concerning for something meant to be a stable abstraction layer.
Need to start reconsidering the approach now and looking for alternatives.
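If it helps anyone in the same spot: one way to soften the blow of a client library going stale is to code the app against a tiny interface of your own and treat minio-go (or aws-sdk-go, or a local-disk fake for tests) as a swappable adapter behind it. A rough sketch, not any particular library's API:

    package storage

    import (
        "context"
        "io"
    )

    // ObjectStore is the minimal surface this app actually uses.
    // minio-go, aws-sdk-go-v2, or an in-memory fake can each satisfy it
    // with a small adapter, so swapping backends is a one-file change.
    type ObjectStore interface {
        Get(ctx context.Context, bucket, key string) (io.ReadCloser, error)
        Put(ctx context.Context, bucket, key string, body io.Reader, size int64) error
        Delete(ctx context.Context, bucket, key string) error
        List(ctx context.Context, bucket, prefix string) ([]string, error)
    }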
What's the simplest replacement for mocking S3 in CI? We don't care about performance or reliability... it's just gotta act like S3.
I've used localstack in the past which worked pretty well.
https://github.com/localstack/localstack
localstack, 100%
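For the CI case this usually boils down to one container plus an endpoint override; roughly the following (bucket name is made up, and localstack accepts any static credentials):

    docker run -d -p 4566:4566 localstack/localstack
    export AWS_ACCESS_KEY_ID=test AWS_SECRET_ACCESS_KEY=test AWS_DEFAULT_REGION=us-east-1
    aws --endpoint-url=http://localhost:4566 s3 mb s3://test-bucket
    aws --endpoint-url=http://localhost:4566 s3 cp ./fixture.bin s3://test-bucket/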
Any good alternatives?
I saw this referenced a few days ago. Haven't investigated it at all.
https://garagehq.deuxfleurs.fr/
Edit: jeez, three of us all at once...
If you just need a simple local s3 server (e.g. for developing and testing), I recommend rclone.
rclone serve s3 path/to/buckets --addr :9000 --auth-key <key-id>,<secret>
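Pointing a client at it is then just the usual endpoint override, e.g. with the AWS CLI using whatever you passed to --auth-key (rclone maps top-level directories under the served path to buckets, if I remember right):

    export AWS_ACCESS_KEY_ID=<key-id> AWS_SECRET_ACCESS_KEY=<secret>
    aws --endpoint-url http://localhost:9000 s3 ls
    aws --endpoint-url http://localhost:9000 s3 cp file.txt s3://<some-dir>/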
A lot of them, actually. Ceph I've personally used. But there's a ton, some open source, some paid. Backblaze has B2. Dell PowerScale. Cloudian has one. Nutanix has one.
Ceph is awesome for software defined storage where you have multiple storage nodes and multiple storage devices on each. It's way too heavy and resource intensive for a single machine with loopback devices.
I've been looking at microceph, but the requirement to run 3 OSDs on loopback files plus this comment from the docs gives me pause:
`Be wary that an OSD, whether based on a physical device or a file, is resource intensive.`
Can anyone quantify "resource intensive" here? Is it "takes an entire Raspberry Pi to run the minimum set" or is it "takes 4 cores per OSD"?
Edit: This is the specific doc page https://canonical-microceph.readthedocs-hosted.com/stable/ho...
Ceph has multiple daemons that would need to be running: monitor, manager, OSD (1 per storage device), and RADOS Gateway (RGW). If you only had a single storage device it would still be 4 daemons.
ceph depends a lot on your use case
minio was also suited to some smaller use cases (e.g. running a partially S3-compatible storage for integration tests). Ceph isn't really good for that.
But if you ran large minio clusters in production ceph might be a very good alternative.
This one is usually the most recommended: https://garagehq.deuxfleurs.fr/
Seaweed and garage (tried both, still using seaweed)
RustFS is good, but still pretty immature IMO
wasn't there a fork with the UI?
seaweedfs
Have heard good things about Garage (https://garagehq.deuxfleurs.fr/).
Am forced to use MinIO for certain products now but will eventually move to something better. Garage is high on my list of alternatives.
I’ve been relying on minio in the CI of ZeroFS [0], because it was easy to use as a single binary and supports preconditions.
I guess I’ll move to MicroCeph [1].
[0] https://github.com/Barre/ZeroFS
[1] https://canonical-microceph.readthedocs-hosted.com/stable/
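For reference, single-node MicroCeph with an S3 endpoint looks roughly like this -- going from my reading of the MicroCeph docs at [1], so verify the loop-device syntax against the current version:

    sudo snap install microceph
    sudo microceph cluster bootstrap
    sudo microceph disk add loop,4G,3   # three 4G file-backed OSDs; fine for CI
    sudo microceph enable rgw           # S3-compatible RADOS Gateway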
Any efforts to consolidate around a community fork yet?
https://aistore.nvidia.com
> For enterprise support and actively maintained versions, please see [MinIO AIStor]
Naming the product “AIStor” is one of the most blatant forced AI branding pivots I’ve seen.
Raising 100 mil at 1 B valuation and then trying for an exit is a bitch!
“The real hell of life is everyone has his reasons.” ― Jean Renoir
Like many smart people, they focus on telling people the "how" and assume visitors to their wall of "AI"/hype text already understand the use-case "why".
1. I like that it is written in Go
2. I saw nothing above what Apache Spark+Hadoop with _consistent_ object stores already offers on Amazon (S3), Google Cloud (GCS), and/or Microsoft (Azure Storage, ADLS Gen2)
Best of luck, maybe folks should look around for that https://donate.apache.org/ button before the tax year concludes =3
Hopefully no one is shocked or surprised.
I'm both shocked and not surprised. Lots of questions: Are they doing that bad from the outcry? Or are they just keeping a private version and going completely commercial only? If so, how do they bypass the AGPL in doing so, I assume they had contributions under the AGPL.
"For enterprise support and actively maintained versions, please see MinIO AIStor."
Commercial only; they will replace the AGPL contributions from external people. (Or at least they will say that.)
I don't understand. They've seen the contributions. How can they possibly do a clean-room implementation to avoid copyright infringement? (Let alone how tangled up in the history of the codebase they must be...)
It doesn't matter unless someone takes them to court over it.
I hope some contributors get together and sue. ;)
for those looking for a simple and reliable self-hosted S3 thing, check out Garage[0]. it's much simpler - no web ui, no fancy Reed-Solomon coding, no VC-backed AI company, just some french nerds making a very solid tool.
fwiw while they do produce Docker containers for it, it's also extremely simple to run without that - it's a single binary and running it with systemd is unsurprisingly simple[1].
0: https://garagehq.deuxfleurs.fr/
1: https://garagehq.deuxfleurs.fr/documentation/cookbook/system...
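A unit file along the lines of the cookbook's (binary path and config location are assumptions on my part; see [1] for the canonical version):

    [Unit]
    Description=Garage object storage
    After=network-online.target
    Wants=network-online.target

    [Service]
    ExecStart=/usr/local/bin/garage server
    # config is read from /etc/garage.toml by default
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target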
How do you sustain yourselves while developing this project?
What if the sponsorships run out?
How does this make sense? If they are no longer open source and are S3-in-the-cloud only, I'll just use S3.
Disgusting. Build a product, make it open-source to gain traction, and when you are done completely abandon it. Shame on me that I have put this ^%^$hit on a project and advocated it.
That can happen to any project, hence why Plan B should be implemented right alongside Plan A whenever humanly possible.
Oh, no! Anyway... Maybe it's for the best seeing as it's AGPL. I won't go within 39.5 feet of infected software like that, so no loss for me.
Downvoted because nobody knows how far a distance 39.5 feet is.
they do if they know the shoe size of the person who measured it