State of the Onion - Feb 2024
from fen
I was gonna title this state of the union, then state of the “fenion,” and so here I've landed on onion. Congrats, welcome to the onion. No, not that onion.
Superficially, zoner.work looks a lot like it did a month or two ago. The instance is a Misskey fork, it has S3 storage, it's roughly instance-shaped. That's about where the similarity stops, though. A lot has changed since even just the start of 2024.
Fired Fish
In late December, it became very clear that Firefish (formerly Calckey) support and development was going to be stagnant at best, and abandoned at worst. The bulk of the team had forked off to Catodon, heading in a direction very different from Firefish and the bulk of Misskey, and were still some time away from a public release. The project lead was offline, rightfully so, dealing with serious personal matters, some of which stemmed from his involvement in the Fediverse. In short, the future wasn't looking hopeful for the platform.
The question naturally arose – what next? It became pretty important to start scoping out migration routes. As it was, we were already having some serious issues with image processing and usercard generation that had broken in just the short time we'd been on Firefish, so staying wasn't viable. While there's a load of Misskey forks, and even some Firefish forks, only a few presented anything resembling a sensible path. Of the options available, I considered three:
- Iceshrimp – forked from Firefish but with some heavy backend modification, and a forward path focusing on the essentials over the usual Misskey fluff. In particular, the backend rewrites and database optimizations were really attractive; Misskey forks tend to carry a lot of database heft, and you can see it in firefish.social crumbling under its own weight. Migration here would be relatively trivial, but with further database changes building on what Firefish had already changed from vanilla Misskey, how far afield would this put the instance if Iceshrimp, in turn, became unmaintained?
- Sharkey – a soft fork of Misskey that builds in additional user and administration tools. At the time of evaluation, it was almost a half-step between Misskey and Firefish: it supported some of the nicer add-ins Firefish had handled in the database (e.g. longer IDs, more complex passkey hashing), and, unlike either, it added much finer control over moderation and administration of users, registrations, and so forth – control I'd been missing since leaving Akkoma in September. Structurally, this was closer to moving back “upstream” while keeping migration easy (the dev had documented a route to move from Firefish to Sharkey). Because it's a soft fork, it also leaves the option to pull in upstream Misskey changes manually should the Sharkey project itself become unmaintained.
- Vanilla Misskey – the full “just move all the way back upstream” option, to the core Misskey project. There are some odd choices in Misskey itself, but it's a project that's been maintained for several years now, and the dev has shown they're committed to supporting it. I like that kind of stability.
Iceshrimp ended up being ruled out, as it felt like moving further down the rabbit hole: more support risk, fewer exit routes should things go bad, and, I'll be forthright, a lack of trust in the development team.
Vanilla Misskey and Sharkey were both attractive. I like Sharkey's additional administrator controls (even if they've proven a little convoluted at times), the ability to require approval for account registrations, and the control added by the roles system. The easy initial migration was a bonus. I realize the whole “I could theoretically pull in upstream changes if Sharkey goes unmaintained” plan is a ton of handwaving that just kicks the issue down the road, but migrating directly from Firefish to Misskey wasn't going to be easy anyway, and would likely have meant squashing other issues as they arose through normal operation.
All this to say: zoner.work is now a Sharkey instance. It hasn't been completely smooth sailing, but the migration fixed a lot of the niggling instance issues that had been headscratchers we couldn't solve. Usercards generate, SVG instance icons generate, and I now have much finer control over the onboarding experience. It's been a net positive thus far.
Other Instance Updates
Server Box
A while back, I documented the migration of the VPS from DigitalOcean to Hetzner. zoner.work is still on a vCPU-based Hetzner box, but has moved from a shared vCPU to a dedicated one. Thus far, this has kept the server out of trouble during load spikes, and the additional RAM has been necessary as more services have been added to the same server.
Object Storage
Following multiple outages in the same week, S3-compatible storage has moved from DreamHost DreamObjects to DigitalOcean Spaces. This is a few dollars more expensive per month for the level of usage the instance needs, but it has been much more stable and reliable.
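Because Spaces speaks the S3 protocol, a client only needs the endpoint swapped; everything else works as it would against AWS. Here's a minimal sketch using boto3 – the region, bucket name, key, and credentials are illustrative, not the instance's real configuration:

```python
# Minimal sketch: uploading a media file to DigitalOcean Spaces through its
# S3-compatible API. All names below are placeholders for illustration.
import boto3

session = boto3.session.Session()
s3 = session.client(
    "s3",
    region_name="nyc3",                                  # hypothetical region
    endpoint_url="https://nyc3.digitaloceanspaces.com",  # Spaces S3 endpoint
    aws_access_key_id="SPACES_KEY",                      # from the DO API tokens page
    aws_secret_access_key="SPACES_SECRET",
)

# Upload an object exactly as you would to AWS S3; only the endpoint differs.
s3.upload_file(
    "media/example.webp",
    "zoner-media",                  # hypothetical bucket name
    "files/example.webp",
    ExtraArgs={"ACL": "public-read", "ContentType": "image/webp"},
)
```

That endpoint swap being the only real difference is what keeps moves between S3-compatible providers relatively painless.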
Backups
zoner.work has always had 7 days of daily snapshot backups via Hetzner's built-in backup feature, which has been super handy. As of this week, the instance also gets Sharkey postgres database backups four times a day, kept for 7 days in a Hetzner volume attached to the server, with offsite copies of the backups retained for 14 days. My hope is that, in the event of a failure, this will allow Sharkey to be moved or restored independently with a minimum of data loss.
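For the curious, a dump-and-rotate job of this shape amounts to something like the sketch below, run from cron every six hours. The paths, database name, and rclone remote are assumptions for illustration; the retention windows match what's described above:

```python
# Illustrative dump-and-rotate job: pg_dump snapshots four times a day,
# 7 days kept locally on an attached volume, offsite copies via rclone.
# Assumes the job runs as a user with postgres access; all names are placeholders.
import subprocess
import time
from pathlib import Path

DB_NAME = "sharkey"                        # hypothetical database name
LOCAL_DIR = Path("/mnt/backup-volume/pg")  # assumed Hetzner volume mount point
LOCAL_RETENTION_DAYS = 7

def dump_database() -> Path:
    """Write a compressed pg_dump snapshot named by timestamp."""
    stamp = time.strftime("%Y%m%d-%H%M")
    target = LOCAL_DIR / f"{DB_NAME}-{stamp}.dump"
    subprocess.run(
        ["pg_dump", "--format=custom", f"--file={target}", DB_NAME],
        check=True,
    )
    return target

def prune(directory: Path, days: int) -> None:
    """Delete local dumps older than the retention window."""
    cutoff = time.time() - days * 86400
    for dump in directory.glob(f"{DB_NAME}-*.dump"):
        if dump.stat().st_mtime < cutoff:
            dump.unlink()

if __name__ == "__main__":
    dump = dump_database()
    # Offsite copy; the remote name is an assumption. The 14-day offsite
    # retention would be pruned separately, e.g. with rclone's --min-age filter.
    subprocess.run(["rclone", "copy", str(dump), "offsite:zoner-pg-backups"], check=True)
    prune(LOCAL_DIR, LOCAL_RETENTION_DAYS)
```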
Status Monitoring
Status monitoring was previously driven by Upptime. While it was nice to have Upptime run out of GitHub Pages, completely remote from the cloud VPS, it was unfortunately unreliable and lacked the reporting options I really wanted (showing only response-time graphs is completely unintuitive). status.zoner.work is now driven by Uptime Kuma, with better, more frequent availability reporting, easier maintenance scheduling, and more connection options. The trade-off is that it lives on the same VPS as the majority of services, so a full and total outage might take a couple extra minutes to notice, but there's a fair number of failure states between “everything's fine” and “it's a complete disaster” that it's already helped catch. Status reporting has been configured for all zoner.work services and their components, not just Sharkey.
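One of those extra connection options is Kuma's “push” monitor type, where the monitored thing phones home and Kuma alerts when the heartbeats stop – handy for cron jobs like the backup script above, which an HTTP check can't see. A minimal sketch, with a placeholder push token:

```python
# Illustrative heartbeat for an Uptime Kuma push monitor: a job GETs the
# monitor's push URL on each successful run, and Kuma alerts when pings stop.
import urllib.request

# Push URL format comes from Kuma's monitor setup page; the token is hypothetical.
PUSH_URL = "https://status.zoner.work/api/push/abc123?status=up&msg=OK"

def send_heartbeat() -> None:
    with urllib.request.urlopen(PUSH_URL, timeout=10) as resp:
        resp.read()  # Kuma replies with a small JSON acknowledgement

if __name__ == "__main__":
    send_heartbeat()
```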
Email
All zoner.work services now use ProtonMail SMTP servers for sending email (previously, Mailjet was the sender, but there were reliability issues). zoner.work instances, where applicable, will require an email address at signup, both as a bot-limiting measure and so that account recovery emails can be sent for self-service password resets. In line with the zoner.work privacy policy, zoner.work will never share email addresses, and the only emails you'll ever receive are those that are part of user-initiated platform functionality, such as the aforementioned password recovery email or user-scheduled digest emails. zoner.work will never send you solicitations or advertisements of any kind.
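For a sense of scale, “email” here means plain transactional SMTP submission, nothing fancier – roughly the following, assuming Proton's documented SMTP submission host and placeholder addresses and credentials:

```python
# Sketch of a transactional mail (e.g. a password reset) submitted over SMTP.
# Host/port reflect Proton's documented SMTP submission service; the sender
# address and token are placeholders, not real configuration.
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "no-reply@zoner.work"   # hypothetical sender address
msg["To"] = "user@example.com"
msg["Subject"] = "zoner.work password reset"
msg.set_content("Use the link below to reset your password...")

with smtplib.SMTP("smtp.protonmail.ch", 587) as smtp:
    smtp.starttls()                   # Proton requires STARTTLS on port 587
    smtp.login("no-reply@zoner.work", "SMTP_TOKEN")  # an SMTP token, not a mailbox password
    smtp.send_message(msg)
```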
New & Upcoming Services
In addition to the Sharkey instance and this WriteFreely blog, here's what's new and planned for the future:
Matrix – Available Now
zoner.work has a Matrix server with aliasing so that it presents the zoner.work domain as its hostname. Registration is currently freely open, with a Variance web client (a fork of Cinny) available at m.zoner.work. Feel free to register for an account; at some point registrations will be closed to invite-only. As of now, the Matrix server is covered by Hetzner daily backups, but does not have dedicated postgres backups – those will be implemented soon.
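The aliasing is standard Matrix well-known delegation: zoner.work serves small JSON documents pointing clients and federating servers at the actual homeserver host. A quick way to sanity-check it (the endpoints are from the Matrix spec; the hostnames in the responses are whatever the server actually delegates to):

```python
# Fetch the well-known delegation documents that let the homeserver present
# itself as zoner.work while running on a different backend host.
import json
import urllib.request

def well_known(kind: str) -> dict:
    url = f"https://zoner.work/.well-known/matrix/{kind}"
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)

print(well_known("server"))  # e.g. {"m.server": "<backend-host>:443"}
print(well_known("client"))  # e.g. {"m.homeserver": {"base_url": "https://..."}}
```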
Nextcloud – Available Now
The Nextcloud instance has been relaunched under the nc.zoner.work subdomain. Registration is by invite only, but feel free to message me if you'd like to test things or set up a space. This is running on its own dedicated VPS due to the resource requirements, and in an effort to prevent implosion should things go wrong. 7 days of backups are being kept, and all files live in S3 storage to avoid having to add non-shrinkable volumes later. Nextcloud stores data in S3 as opaque, chunked objects, so files can't be publicly viewed no matter the settings on the bucket. Because the storage is functionally limitless but pay-as-you-go, I'm able to offer about 50 GB per user; if more were needed, I'd ask for some contribution to the hosting cost. That said, having ad-free, fee-free options for RSS reading, CalDAV, CardDAV, document editing, and general cloud storage, free of the corporate offerings, has been very, very nice.
Invidious – Coming Soon
Despite all my de-googling, YouTube has still been a necessity for me in some way, especially for hosting stream and personal-best VODs from my speedrunning, among other video watching and sharing needs. I've been scoping out hosting an Invidious instance as part of the zoner network. This, as with the rest, is planned to be public but not completely open registration.
Owncast – Coming Soon
I plan on re-launching the Owncast instance that was formerly at oc.zoner.gay under the zoner.work domain. The initial state will be single-user, but I plan on making the platform available to those who need or want it. I'm still working out the structure on this.
Peertube – On the Horizon
I'm still struggling with video content hosting. On the previous PeerTube instance I tested, static storage made management a challenge given the storage needs of video content, particularly since my need is for longer-length content. I do want to eventually move away from YouTube entirely, but thus far I've found the PeerTube instance I'm currently on (and contributing to) inadequate for my needs. I'm again looking into bringing up a PeerTube instance, with S3 storage and better hardware as the solution, and, if my needs are met, making it available via invite.
Landing Page
At some point I'll have a proper zoner.work landing page listing the available services. Right now the closest thing is status.zoner.work, which links the appropriate web services in the status rows. Eventually that'll be dressed up, because there's also additional documentation – about pages, privacy policies, and terms of service – that I want to expand to cover the entire service network. As of now, all of that is just hosted in pages on the zoner.work Sharkey instance.

That wraps up the current state. For anyone using or interested in using any of the zoner.work services, please contact me on the fediverse at @fen@zoner.work, by email at fencore@zoner.work, or on Matrix at @fen:zoner.work to talk things through.
I send this to everyone who registers for any of my services, but I'll reiterate it here – my first priority is having services that are available, responsive, and work the way they're intended to. If there's ever any issue, please let me know; nothing is too small. I can't fix the things I don't know about, and while the high-level monitoring is good, it doesn't see functionality-related problems.
For those of you who are zoner.work users, I continue to value the trust you place in me as an administrator of the platform on which you base your online presence. I treat this responsibility with care, and it's important that I do all I can to provide a stable, sustainable, and safe space.