It’s now possible to access files in S3 (or any compatible service) with Dolphin and other @kde apps! We just released the first version of KIO S3, which bridges S3 with all KIO-powered apps.
The KDE project has announced KIO S3, which provides direct access to Amazon S3 and S3-compatible storage within KDE applications through the KIO framework.
https://linuxiac.com/kde-introduces-kio-s3-for-native-amazon-s3-storage-access/
Self-Host Weekly (6 March 2026)
#Microslop and #cheeseburgers, software updates and launches, a spotlight on #VersityGW -- an #S3 object storage gateway, and more in this week's #selfhosted recap!
https://selfh.st/weekly/2026-03-06
#selfhost #selfhosting #foss #opensource #homelab #smarthome #privacy #security #newsletter #sysadmin #devops #openclaw #development #app #apps #photos #ente
What if every phone could contribute to the #Fediverse, not just consume it?
Short-form video means heavy server-side transcoding for every clip. But your phone can handle it. Re-encode, resize, thumbnails, all done locally. Then push to your own #S3 or #Nextcloud.
No transcoding server. Just simple storage.
We can decentralize the compute, not just the network.
We're working on it
Building an S3-compatible server for local FS. Started with the duplicate photos problem. Realized the real value is making local files accessible via S3 API.
1st milestone: `shoebox ~/Photos` serves an S3 endpoint. `aws s3 ls` returns actual files. PutObject, GetObject, DeleteObject, ListObjectsV2 all working. SQLite metadata layer + filesystem ops with symlink safety.
Next: SigV4 auth, multipart uploads, scanner.
#selfhosted #selfhosting #homelab #rustlang #opensource #buildinpublic #S3
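The "symlink safety" piece above can be sketched in plain shell (a hypothetical check, not shoebox's actual code, which is Rust): resolve the requested object key to a real path and refuse anything that would escape the served root, whether via a symlink or via `..` in the key.

```shell
# Hypothetical sketch of a symlink-safety check (not shoebox's actual code).
# Usage: safe_path ROOT KEY -> prints the resolved path, or fails if the
# key resolves outside ROOT (e.g. through a symlink or "..").
safe_path() {
  root="$(realpath "$1")"               # directory being served, e.g. ~/Photos
  resolved="$(realpath -m "$root/$2")"  # resolve the requested key (GNU -m: may not exist yet)
  case "$resolved" in
    "$root"/*) printf '%s\n' "$resolved" ;;
    *) printf 'denied: %s escapes %s\n' "$2" "$root" >&2; return 1 ;;
  esac
}
```

For example, `safe_path ~/Photos "2024/cat.jpg"` prints the full path, while `safe_path ~/Photos "../.ssh/id_rsa"` fails.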
Building an S3-compatible server for local filesystems. Point it at a directory, get an S3 endpoint.
Files stay where they are. Works with rclone, AWS CLI, any S3 SDK. When the object store knows every file's content hash, duplicates are just a query.
Started to deduplicate photos on my NAS. Realized the S3 API unlocks more: backup tools, sync workflows, dev tests.
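The "duplicates are just a query" idea can be sketched with standard tools (GNU coreutils assumed): hash every file once, then finding duplicates is just a group-by on the hash column.

```shell
# Sketch of hash-based duplicate detection (GNU coreutils assumed):
# hash every file once, then group lines that share the same hash.
dupes() {
  find "${1:-.}" -type f -exec sha256sum {} + \
    | sort \
    | uniq -w64 --all-repeated=separate  # 64 chars = the SHA-256 hex digest
}
```

Running `dupes ~/Photos` prints only the files whose content appears more than once, grouped by identical hash.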
I added an #S3-compatible storage backend to a #selfhosted #Grist [1] instance. This allows creating forms with "Attachments" in Grist (e.g. users can upload photos). I experimented with several servers and settled on Zenko Scality #CloudServer [2]:
- #MinIO [3] is somewhat deprecated and no longer really open source. Its future is uncertain.
- #GarageHQ [4] looks great, and I wish I could have used it, but it is not yet feature-complete with the S3 protocol; in particular, it is missing object versioning (I reported this [5]).
- #Zenko Scality works out of the box. It is a bit too "big" for my context (it is aimed at thousands of parallel users) and uses about 500 MB of memory, but it does the job for now.
The S3 storage protocol is needed in many data-infrastructure situations and makes it possible to decouple persistent data from ephemeral base images (I am considering adding this to my #Mastodon instance, too).
I posted my compose here [6].
[1]: https://github.com/gristlabs/grist-core
[2]: https://github.com/scality/cloudserver
[3]: https://www.min.io/
[4]: https://garagehq.deuxfleurs.fr/
[5]: https://git.deuxfleurs.fr/Deuxfleurs/garage/issues/166
[6]: https://github.com/scality/Zenko/discussions/1779#discussioncomment-15869532
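For orientation, a minimal sketch of what such a pairing can look like. The service names, ports, credentials, and environment variables here are my assumptions from the grist-core and CloudServer docs, not the tested setup; see [6] for the actual compose file.

```yaml
# Sketch only: names, credentials, and env vars are illustrative assumptions.
services:
  grist:
    image: gristlabs/grist
    environment:
      # Grist's S3-compatible storage driver (bucket needs versioning enabled,
      # which is why GarageHQ did not work for this setup).
      GRIST_DOCS_MINIO_ENDPOINT: cloudserver
      GRIST_DOCS_MINIO_PORT: "8000"
      GRIST_DOCS_MINIO_USE_SSL: "0"
      GRIST_DOCS_MINIO_BUCKET: grist-docs
      GRIST_DOCS_MINIO_ACCESS_KEY: accessKey1      # CloudServer's default dev key
      GRIST_DOCS_MINIO_SECRET_KEY: verySecretKey1
    ports:
      - "8484:8484"
  cloudserver:
    image: zenko/cloudserver
    environment:
      REMOTE_MANAGEMENT_DISABLE: "1"  # run standalone, without Zenko Orbit
```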
Plan to remove MinIO from nixpkgs: https://github.com/NixOS/nixpkgs/issues/490996
Now that MinIO has pulled the plug on the open-source project, what are folks moving to for object storage in their home labs?
I use an open-source tool called "rclone" to back up my data to the AWS S3 service; the data is then quickly migrated from the base S3 storage tier to another tier called "Glacier", which is less expensive.
The tradeoff for the savings is that files in the Glacier class are not immediately available; to get them back, I first need to request that they be restored in S3 so I can copy them. Typically you restore them for a limited number of days (enough time to grab a copy) before the object reverts to the Glacier class.
The other wrinkle is: The files are encrypted. Not just the files but the file names and the file paths (enclosing folders/directories).
Here is the tricky part: the backup software cannot request that a file stored in the Glacier tier be restored. I have to do that with the aws command line or the console. This is doubly tricky because I have to request the exact file by its encrypted filename and path... not the name I actually know the file by.
It turns out that rclone can tell me the encrypted filename and path if I ask it correctly, because of course they've dealt with this problem already. :)
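For anyone in the same spot, the two commands look roughly like this (remote, bucket, and file names are placeholders; `rclone cryptdecode --reverse` maps a plaintext name to its encrypted form):

```shell
# Map the name I know to the encrypted name actually stored in S3.
# "secret:" is the rclone crypt remote; the path is a placeholder.
rclone cryptdecode --reverse secret: photos/2024/cat.jpg

# Then ask S3 to restore that encrypted object from Glacier for a few days.
aws s3api restore-object \
  --bucket my-backup-bucket \
  --key "<encrypted/path/from/cryptdecode>" \
  --restore-request '{"Days":7,"GlacierJobParameters":{"Tier":"Standard"}}'
```

Once the restore completes (hours, for the Standard tier), the object can be copied out normally before it reverts to Glacier.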
I thought to myself "Here is a chance for ChatGPT to show its quality".
I'll skip to the chase:
ChatGPT gave me exactly the *opposite* instructions of what I asked for.
Instead of telling me how to get the encrypted filename and path from the unencrypted equivalent, it told me how to get the plaintext from the encrypted filename, which I didn't have. This was with the latest ChatGPT 4o.
I question the usefulness of this kind of tool (meaning ChatGPT) for anyone who isn't an expert. I've done this long enough that I know of other sources to look at (such as the manual pages) but if you aren't that savvy I'm not sure how you would find the right answer.
The ability to regurgitate unstructured data with LLMs is amazing - almost magical when I compare it to other efforts to do the same that I have been involved in previously.
But the ability to summarize and present the data in an accurate, useful form is nowhere near acceptable.
NICE!
metalhead.club SeaweedFS S3 storage is now 666 GB (and 3.2 million files) 🥳
Hm. It seems that #garage S3 storage only handles one region per node? I guess I will switch my install from a binary to a container-based one, so I can run two separate instances: one being a single-node setup for my homelab, and another with 3 nodes, allowing me to distribute it between my homelab and my VPSes (Virtual Private Servers) that are on the internet.
New connection profiles for Impossible Cloud, Europe's sovereign #s3 cloud storage are now available
With MinIO dying, we should move Exodus to Garage, or to plain on-disk storage…
Unfortunately, nobody has been available to work on this for over a year. If the topic interests you, feel free to check out the following issue:
https://github.com/Exodus-Privacy/exodus/issues/624
🎉Version 9.3 is now available https://cyberduck.io/changelog/ with improved support for connecting to #S3 with temporary credentials obtained via a token from an #OpenID Connect provider.
Use the #AWS #S3 (#Microsoft #Entra) connection profile to connect to S3 with temporary credentials from Microsoft Entra #OIDC #SSO https://docs.cyberduck.io/tutorials/s3_microsoft_entra_oidc/
Use the #AWS #S3 (STS Assume Role) connection profile to connect to S3 with temporary security credentials by assuming a role and optional MFA requirement. https://docs.cyberduck.io/tutorials/s3_iam_role_mfa/
Use the #AWS #S3 (MFA Session Token) connection profile to connect to a bucket in S3 with a policy requiring MFA. https://docs.cyberduck.io/tutorials/s3_iam_getsessiontoken_bucketpolicy_mfa/
Just released a tiny open-source utility! 🎉
🔗 s3-migrate → github.com/lmammino/s3-migrate
Need to copy an entire S3 bucket between accounts or migrate to an S3-compatible service (Cloudflare R2, anyone?!)—give it a try! 🚀
Feedback & contributions welcome! 💬 #AWS #S3 #opensource


