Garbage Collector

Reader

Read the latest posts from Garbage Collector.

from fen

I was gonna title this state of the union, then state of the “fenion” and so here I've landed on onion. Congrats, welcome to the onion. No, not that onion.

Superficially, zoner.work looks a lot like it did a month or two ago. The instance is a misskey fork, it has S3 storage, it's roughly instance-shaped. That's about where it stops, though. A lot has changed since even just the start of 2024.

Fired Fish

In late December, it became very clear that Firefish (formerly Calckey) support and development was going to be at best stagnant and at worst abandoned. The bulk of the team had forked off to Catodon, were going in a direction very different from Firefish and the rest of Misskey, and were still some time away from a public release. The project lead was offline, rightfully so, dealing with serious personal matters, some of which stemmed from his involvement in the Fediverse. In short, the future wasn't looking hopeful for the platform.

The question naturally arose – what next? It became pretty important to start scoping out migration routes. As it was, we were already having serious issues with image processing and usercard generation, which had broken in places in just the short time we'd been on Firefish, so staying wasn't viable. While there's a load of Misskey forks, and even some Firefish forks, only a small few presented anything resembling a sensible path. Of the options available, I considered three:

  • Iceshrimp – forked from Firefish but with some heavy backend modification, and a forward path focusing on the essentials over the usual Misskey fluff. In particular, the backend rewrites and database optimizations were really attractive; Misskey forks tend to carry a lot of database heft, and you can see it in firefish.social crumbling under its own weight. Migration here would be relatively trivial, but with further database changes building on what Firefish had already changed from vanilla Misskey, how far afield would that put the instance if Iceshrimp, in turn, became unmaintained?
  • Sharkey – A soft fork of Misskey that builds in additional user and administration tools. At the time of evaluation, it was almost like a half-step between Misskey and Firefish: it supported some of the nicer add-ins Firefish had handled in the database (e.g. longer IDs, more complex passkey hashing), and, unlike either, it added much finer control over moderation and administration tools for users, registrations, and so forth that I'd been missing since leaving Akkoma in September. Structurally, this was closer to moving back “upstream” while keeping migrations easy (the dev had documented a route to move from Firefish to Sharkey). Because it's a soft fork, this also leaves the option to pull in upstream Misskey changes manually should the actual Sharkey project become unmaintained.
  • Vanilla Misskey – This one is the full “just move all the way back upstream” to the core Misskey project. There's some odd choices in Misskey itself, but it's a project that's been maintained for several years now, and the dev has shown they're committed to support. I like that kind of stability.

Iceshrimp ended up being ruled out; it felt like moving further down the rabbit hole, increasing risk in terms of support and losing exit routes should things go bad, and, I'll be forthright, I didn't fully trust the development team.

Vanilla Misskey and Sharkey were both attractive. I like Sharkey's additional administrator control features (even if they've proven to be a little convoluted at times), the ability to require approval for account registrations, and the control added by the roles system. The easy initial migration was a bonus. I realize the whole “I could theoretically pull in upstream changes if Sharkey goes unmaintained” thing is a ton of handwaving and absolutely just kicks the issue down the road, but migrating directly from Firefish to Misskey wasn't going to be easy either, and there was the potential to have to squash other issues as they arose through normal operation.

All this to say: zoner.work is now a Sharkey instance. It hasn't been completely smooth sailing, but the migration rectified a lot of the niggling instance issues that had been headscratchers we couldn't fix. Usercards generate, SVG instance icons generate, and I now have much finer control over the onboarding experience. It's been a net positive thus far.

Other Instance Updates

Server Box

I'd documented a while back the migration from DigitalOcean to Hetzner as the VPS provider. zoner.work is still on a vCPU-based Hetzner box, but has moved from a shared vCPU to a dedicated one. Thus far, this has kept the server out of trouble during load spikes, and the additional RAM has been necessary as more services have been added to this same server.

Object Storage

Following multiple outages in the same week, S3-compatible storage has moved from DreamHost DreamObjects to DigitalOcean Spaces. This is a few dollars more expensive per month for the level of usage the instance needs, but has been much more stable and reliable.

Backups

zoner.work has always had 7 days of daily snapshot backups via Hetzner's built-in backup feature, which has been super handy. As of this week, the instance also has 7 days of 4x-per-day Sharkey postgres database backups stored in a Hetzner volume attached to the server, with offsite copies of the backups kept for 14 days. My hope is that, in the event of a failure, this will allow Sharkey to be moved or restored independently with a minimum of data loss.
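
For the curious, a job like that is pretty simple under the hood. Here's a rough sketch of the kind of script cron might run every six hours; the database name and user (“sharkey”), the /mnt/backup mount point, and the rclone remote are stand-ins for illustration, not the actual setup:

```
#!/usr/bin/env bash
# Sketch of a 4x-daily postgres backup job run from cron.
# Assumed: "sharkey" db/user, Hetzner volume mounted at /mnt/backup,
# and an rclone remote named "offsite" for the 14-day offsite copies.
set -euo pipefail

STAMP=$(date +%F-%H%M)
pg_dump -U sharkey -Fc sharkey > "/mnt/backup/sharkey-${STAMP}.dump"

# keep 7 days locally...
find /mnt/backup -name 'sharkey-*.dump' -mtime +7 -delete

# ...and mirror to the offsite target, which keeps 14 days on its side
rclone copy /mnt/backup offsite:sharkey-db-backups
```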

Status Monitoring

Status monitoring was previously driven by Upptime. While it was nice to have upptime run out of github pages so as to be completely remote from the cloud VPS, it was unfortunately unreliable and lacked the reporting options I truly wanted (showing only response time graphs is completely unintuitive). status.zoner.work is now driven by Uptime Kuma, with better, more frequent availability reporting, easier maintenance scheduling, and more connection options. The trade-off is that it does live on the same VPS as the majority of services so a full and total outage might take a couple extra minutes to notice, but there's a fair number of failure states between “everything's fine” and “it's a complete disaster” that it's already been able to help catch. Status reporting has been configured for all zoner.work services and their components, not just Sharkey.
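
For reference, Uptime Kuma itself is about as easy to stand up as self-hosted tools get; its documented Docker quickstart looks roughly like this (container name, volume, and port are the README defaults, and the web UI then gets reverse-proxied to status.zoner.work):

```
# Uptime Kuma quickstart, more or less straight from its README
docker run -d --restart=always \
  -p 3001:3001 \
  -v uptime-kuma:/app/data \
  --name uptime-kuma \
  louislam/uptime-kuma:1
```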

Email

All zoner.work services now use ProtonMail SMTP servers for sending email (previously Mailjet was the sender, but there were reliability issues). zoner.work instances, where applicable, will require an email address at signup, both as a bot-limiting measure and so account recovery emails can be sent for self-service password resets. In line with the zoner.work privacy policy, zoner.work will never share email addresses, and the only emails you'll ever receive are those that are part of user-initiated platform functionality, such as the aforementioned password recovery email or user-scheduled digest emails. zoner.work will never send you solicitations or advertisements of any kind via email.

New & Upcoming Services

In addition to the Sharkey instance and this writefreely one, here's what's new and planned for the future:

Matrix – Available Now

zoner.work has a Matrix server with aliasing so that it displays the zoner.work domain as its hostname. Registration is currently freely open, with a Variance web client (a fork of Cinny) available at m.zoner.work. Feel free to register for an account; at some point registrations will be closed to invite-only. As of now, the Matrix server is covered by Hetzner daily backups, but does not have dedicated postgres backups, which will be implemented soon.
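
For anyone wondering how the aliasing works: the usual approach is to serve Matrix's well-known delegation files from the apex domain, so account IDs read @user:zoner.work while the homeserver itself answers on a subdomain. A rough sketch (the matrix.zoner.work hostname and web root below are hypothetical, not necessarily how this server is actually laid out):

```
# Delegation files served at https://zoner.work/.well-known/matrix/
mkdir -p /var/www/zoner.work/.well-known/matrix

cat > /var/www/zoner.work/.well-known/matrix/server <<'EOF'
{ "m.server": "matrix.zoner.work:443" }
EOF

cat > /var/www/zoner.work/.well-known/matrix/client <<'EOF'
{ "m.homeserver": { "base_url": "https://matrix.zoner.work" } }
EOF
```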

Nextcloud – Available Now

The Nextcloud instance has been relaunched under the nc.zoner.work subdomain. Registration is by invite only, but feel free to message me if you'd like to test things or set up a space. This is running on its own dedicated VPS due to the resource requirements and in an effort to prevent implosion should things go wrong. 7 days of backups are being kept, and all files are stored in S3 in order to avoid having to add non-reducible volumes later. Nextcloud encrypts data into blocks when uploading to S3, so files can't be publicly viewed no matter the settings on the bucket. Because the storage is functionally limitless but pay-as-you-go, I'm able to offer about 50 GB per user; if more were needed, I'd ask for some contribution to the hosting cost. That said, having an ad-free, fee-free RSS reader, CalDAV, CardDAV, document editing, and general cloud storage space free of the corporate options has been very, very nice.
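
For those curious what the S3 setup looks like on the Nextcloud side, primary object storage is configured in config.php; the block below follows the shape from Nextcloud's admin documentation, with the bucket, endpoint, region, and credentials all placeholders rather than the real values:

```
'objectstore' => [
  'class' => '\\OC\\Files\\ObjectStore\\S3',
  'arguments' => [
    'bucket'   => 'nextcloud-data',               // placeholder bucket name
    'hostname' => 'nyc3.digitaloceanspaces.com',  // placeholder Spaces endpoint
    'region'   => 'nyc3',
    'key'      => 'ACCESS_KEY',
    'secret'   => 'SECRET_KEY',
    'use_ssl'  => true,
  ],
],
```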

Invidious – Coming Soon

Despite all my de-googling, YouTube has still been a necessity for me in some way, especially for hosting stream and personal-best VODs from my speedrunning, among other video watching and sharing needs. I've been scoping out hosting an Invidious instance as part of the zoner network. This, as with the rest, is planned to be public but not completely open registration.

Owncast – Coming Soon

I plan on re-launching the Owncast instance that formerly was at oc.zoner.gay under the zoner.work domain. Initial state will be single-user but I plan on making the platform available to those who need/want it. Still working the structure out on this.

Peertube – On the horizon

I'm still struggling with video content hosting. On the previous Peertube instance I tested, using static storage made management a challenge due to the storage needs of the video content, particularly since my content tends to be longer-length. I do want to eventually move away from YouTube entirely, but thus far I've found the Peertube instance I'm currently on (and contributing to) inadequate for my needs. I'm again looking into bringing up a Peertube instance, this time with S3 storage and better hardware, and, if my needs are met, making it available via invite.

Landing Page

At some point I'll have a proper zoner.work landing page listing the available services. Right now the closest thing to this is status.zoner.work, which has the appropriate web services linked in the status rows. Eventually that'll be dressed up, because there's also additional documentation, like about pages, privacy policies, and terms of service, that I want to expand to cover the entire service network. As of now, all of that is just hosted in pages on the zoner.work Sharkey instance.

That wraps up the current state. For anyone using or interested in using any of the zoner.work services, please contact me on the fediverse at @fen@zoner.work, by email at fencore@zoner.work, or on Matrix at @fen:zoner.work to talk things through.

I send this to everyone who ever registers for any of my services, but I'll reiterate it here – my first priority is having services that are available, responsive, and work in the way they're intended. If there's ever any issue, please let me know, nothing is too small. I can't fix the things I don't know about, and while the high-level monitoring is good, it doesn't see functionality-related problems.

For those of you who are zoner.work users, I continue to value the trust you place in me as an administrator of the platform on which you base your online presence. I treat this responsibility with care, and it's important to me that I'm doing all I can to provide a stable, sustainable, and safe space.

 

from fen

I went on a bit of an adventure over the past few weeks in a move from Windows to Linux full time. Over a few months of dual-booting I'd taken care of the big, critical hurdles, and finally decided to pull the trigger. One that was still on the list was getting Portal 2 and its speedrunning tools up and moving, and I jumped in knowing more or less that there are people who run on Linux, but not necessarily how they do it. Through some trial and error, I got things working for my setup and hope to capture here some of what I learned.

The major hurdles were getting Portal 2 to display correctly and learning how to configure Adrift – I've detailed things below but the major bullet points are:

  • Download and install Steam and Portal 2
  • Download and install the Linux plugin for SourceAutoRecord (SAR) and optionally also install srconfigs
  • Configure Portal 2 to launch in the correct resolution and position using launch arguments
  • Download and unpack Adrift Speedrun Timer and the sar_split autosplitter
  • Place the adrift executable, splitter, splits, and config files in the same folder
  • Run adrift and Portal 2

Steam and Portal 2

For Steam, use your preferred method of install for your distro. Many distros have current versions of Steam in their package repos, but a .deb/.rpm package straight from Valve or an install from a Flatpak/Snap store should be just fine. Install Portal 2, but don't sweat turning Proton on – we need to run Portal 2 as its native Linux version for compatibility with the current version of Adrift.

Install the Linux release of SAR by placing sar.so in your Portal 2 install folder (usually .steam/steam/steamapps/common/Portal 2/). Optionally, consider also installing srconfigs in addition to SAR, which provides enhanced functionality and control. Both are cross-platform.
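
In practice that's a single copy into the game directory; something like this, assuming the default Steam library location:

```
# Drop the SAR plugin into Portal 2's install folder
cp ~/Downloads/sar.so ~/.steam/steam/steamapps/common/"Portal 2"/
```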

Finally, launch Portal 2. If you're on a single-monitor setup, or you have a multi-monitor setup where your leftmost monitor is your primary, odds are Portal 2 will have launched at the right resolution and position, and you're good to move on to setting up Adrift. If you're like me and have an unconventional monitor setup, or even just a multi-monitor setup where you don't want the game on the far-left monitor, there will likely be a little extra to do. This will take some adjusting depending on your specific needs.

I found that if I launched Portal 2 with no parameters, it would try to display at the full resolution of the “rectangle” that encloses my monitors' combined display area, and it would lock to my center monitor with scaling that wouldn't let me interact with anything but the top-left of the window via mouse, with no way to confirm settings changes via keyboard. That ruled out configuring things through the in-game menus.

In Steam, you can add launch parameters by right-clicking on the game in your games list and selecting “Properties”. The key options that got me 90% of the way there were:

  • -sw (-startwindowed): forces Portal 2 to start windowed (as opposed to the default fullscreen) so that its position can be altered with the following options
  • -w (-width): sets the game's horizontal resolution
  • -h (-height): sets the game's vertical resolution
  • -x: sets the horizontal position of the window, larger numbers position further to the right
  • -y: sets the vertical position of the window, larger numbers position the window lower

For a full list of launch options, see the Valve Developer Community wiki.

For my particular use case, -sw -w 1920 -h 1080 -x 1280 was the right combination to position the Portal 2 window on my center monitor at the right resolution.

Past this, you may need to take care of a desktop panel, if applicable, as any existing panels will likely try to draw over top of the Portal 2 game window. In KDE, I set an Application Window Rule to force Portal 2 to be fullscreen from the DE (as opposed to the application). This locks Portal 2 to the monitor and allows it to draw on top of the KDE taskbar panel. Alternatively, one could set the panel to hide automatically or move it to another location. Continue to work the settings until you're satisfied with where Portal 2 launches and displays.

Setting Up Adrift

For Adrift, pre-compiled binaries are available on the releases page. v0.1.1 was the most current as of this writing. If you go this route, grab both the “adrift” binary download and the “splitters.tar.gz” archive.

If you'd prefer, you can compile from source – adrift requires a special build of vtk that also has to be built and installed from source, which itself requires libcairo2-dev and libgtk2.0-dev on Debian-based systems (or the equivalent for your system). More detail on building is on each project's page.

As a high-level overview, Adrift is a bare-bones timer built around autosplitters for program control – it can't be controlled manually – which means it's really only suitable for Source engine games such as Portal 2. Being that minimal, there's a little setup Adrift needs before it can be used.

By default, Adrift will look for three files – splits, splitter, and config – in the same directory as the adrift executable, but a directory holding a set of files for a particular game can be specified at launch by passing the directory as an argument (e.g. ./adrift p2_amc), which is useful if you run multiple games or categories. For example, if you run both Portal 2 singleplayer and co-op, you might have directories portal2_nosla and portal2_amc that each have their own splits and config for Singleplayer No SLA and Co-Op All Main Courses specifically, which also keeps gold splits and personal bests separated. For the rough purposes of this guide, we'll assume a flat structure with all files in the same directory as the Adrift executable – you can always restructure for your particular needs later.
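
To make that concrete, a multi-category setup might be laid out something like this (directory names here are just examples):

```
# Example layout: adrift at the top level, one directory per category,
# each holding its own splits, splitter, and config.
#
#   adrift/
#   ├── adrift            # the timer executable
#   ├── portal2_nosla/    # files for Singleplayer No SLA
#   └── portal2_amc/      # files for Co-op All Main Courses
#
cd ~/adrift
./adrift portal2_nosla    # run with that category's files
```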

  1. Make a splits file – a splits file is required to be present; otherwise, Adrift will not start. Each line should have a split name. Typically, one split for each chapter or course will suffice with a default SAR configuration. This file should be named “splits”.
  2. Extract the splitter file – the splitters.tar.gz archive has a few contents, but what we need is the sar_split file. Place this in the same folder as the Adrift executable and rename it to “splitter”.
  3. Create a “config” file – The Adrift github page has details on what options Adrift supports, but at minimum you'll likely want to change the category name to match what you're running. By default, Adrift shows Portal 2 No SLA, with white text on a transparent window.
  4. Make Adrift executable – Change adrift's permissions to allow it to be executed, either from your file manager (Right-click –> Properties) or by running chmod +x /path/to/adrift/executable. From here, Adrift can be run either by double-clicking the executable in your file manager, or by navigating in your terminal to the folder where you have adrift stored and executing it with ./adrift. (A command-line sketch of all four steps follows below.)
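
Put together, the four steps look roughly like this on the command line; paths, split names, and the archive's internal layout are assumptions that may differ for your download:

```
# Assumes the adrift binary and splitters.tar.gz are in ~/Downloads
# and everything will live flat in ~/adrift, as described above.
mkdir -p ~/adrift && cd ~/adrift
cp ~/Downloads/adrift .

# 1. splits: one split name per line
printf 'Chapter 1\nChapter 2\nChapter 3\n' > splits

# 2. splitter: pull sar_split out of the archive and rename it
tar -xzf ~/Downloads/splitters.tar.gz
mv sar_split splitter     # adjust if the archive extracts into a subdirectory

# 3. config: fill in options (category name, etc.) per the Adrift GitHub page
touch config

# 4. make the timer executable and run it
chmod +x ./adrift
./adrift
```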

Adrift should launch using your config files, and if you launched from a terminal you should also see status messages as Adrift searches for the Portal 2 application process. If Portal 2 hasn't yet been launched, do so and verify that Adrift finds the process via the terminal messages. Adrift gives confirmation both when it finds the portal2 process and when it makes a connection with SAR, which should help indicate where to start troubleshooting if there are issues.

Start Running

That's the critical minimum to get up and running. From here, refer to the SAR and Adrift documentation for additional configuration and customization to meet your needs. Speedrun timing should start when you launch either a Single Player or Co-Operative Game, and using the do_reset command during or following a run should reset both the timer and your game for a fresh run.

Be sure also to visit the Portal 2 Speedrunning Discord server for discussion, help, and support. Much of the information here is an amalgamation of knowledge from P2SR, and the members' efforts in gathering resources and creating tools are what make this possible in the first place.

 

from fen

Domains are expensive.

I didn't really put much thought into overall, long-term stability when I started zoner.gay, mostly with regard to cost. Frankly, I'm still surprised that the whole “hosting your own copy of a whole-ass microblogging site” thing has generally gone pretty well. The host migration from Digital Ocean to Hetzner that I talked about in my last post cut my actual hosting cost roughly in half. The other component is domain registrations. zoner.gay was cheap to pick up, but renewal is going to run $50 when it next comes up. That's not for another 8 months, but that's not never.

The good news is I have a domain I've maintained for years that only runs $10/year – zoner.work. I've basically had zoner.work tied to a hosting plan with DreamHost for years, running a managed WordPress site. I don't much like WordPress, so I never really used it as much more than a landing page and the one time I made a JSON adapter for Super Mario 64 Bingo. In hindsight, I probably should've held on to that.

Tied to this, I'd been very curious to try one of the *key forks, so I spun up a test Firefish instance and spent a couple days exploring and configuring before falling in love with how polished the frontend felt and finally pulling the trigger and moving over myself. It's not been perfect, but it's a great UI experience, at least compared to *oma. At time of writing, I'm about two weeks in, and while I have a couple things on the wishlist for admin controls, and there was a weird import bug that led to me opening my first issue on a repo ever, I think this was the right choice.

All of this has been more or less in service of a larger goal – I'd like to open up my instance, if not to the public, then at least to friends. And even for my own sake, making things sustainable, stable, and affordable are solid long-term goals. I posted up a soft opening and we'll see if we get any takers – moving instances is a big deal and I'm not exactly someone people immediately jump instances for, but maybe next time they need a backup or are looking to move, they'll remember me.

So, Firefish is the target for the new instance under the new domain. I absolutely still want to have writefreely for these longer-form posts. Owncast is a pretty easy move since there's only four followers. Nextcloud will be an absolute nightmare though, and I'm not looking forward to it. I'm sure I'll have a long post to write for when that ordeal comes.

The sorta cool thing about having the new instance up now is that, looking at a longer-term option, I've also set up S3 compatible storage for the instance. Obviously that adds a little bit of cost in dedicated storage in the short term (looking at about $4ish per month? we'll see when this month closes for billing), but I think long-term it'll save a lot of headache as the data store gets larger over time. We're already at 13 GB between the two of us, and it'll only get bigger from there.

I do want to take a moment, though, and despite joking about it earlier in the post, truly put to record a thanks to @andy@zoner.work for trusting me enough to make a jump with what is basically his entire social media presence. I didn't ask him to, but he wanted to check it out and then dove in the deep end to do it. It's humbling, it's warming, it's scary, it's emboldening, to have someone willing to completely put their presence in your hands. I treasure that. I can't fully put together a string of words that captures what I think I feel because I'm pretty sure that combination doesn't exist. In lieu of that, I will say this: thank you.

Part of me wants to say 'I hope that to be the first of many' but I don't really think that's what I feel. I do hope at some point it's more than just the two of us on zoner.work, but I don't want that to be because of any individual. I hope to be part of a community of people who share, respect, and dignify each other in a way that I felt when I joined tech.lgbt back in November. I hesitate to say it feels like a calling, but I do think as far as contributions I can make, that being a part of building a safe space within the fediverse is something that I would be suited to, and something that I want to participate in.

Time will tell.

 

from fen

It only took, what, three weeks? It's finally done though. As happy as I was with Digital Ocean (or, thought I was), the cost was just getting to be more than I could afford long-term for what I was getting. All of the active zoner.gay network has been moved to Hetzner, which includes this weblog, the zoner.gay Akkoma instance itself, plus Owncast and a shiny new Nextcloud instance. More on that in a bit.

This whole thing is just one big exercise in learning Docker really – and that it's a little bit more of a hassle than I expected.

I'm running this Writefreely instance directly on the host. That makes for a really easy move the way it's built – just copy the data folder, port over the systemd service, and rerun certbot. Cake.
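
For anyone doing the same move, the whole thing amounts to a handful of commands. A sketch, with the install path, service name, and domain all assumed rather than copied from this setup:

```
# Copy the WriteFreely install (binary, config, data) to the new host,
# carry the systemd unit over, then re-issue certificates there.
rsync -a /srv/writefreely/ newhost:/srv/writefreely/
scp /etc/systemd/system/writefreely.service newhost:/etc/systemd/system/
ssh newhost 'systemctl daemon-reload && systemctl enable --now writefreely'
ssh newhost 'certbot --nginx -d blog.example.tld'
```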

Owncast is in a docker container but also ends up being an easy move – the docker volume has all the persistent data exposed to the host, so move that and run the single docker run command to get things going again, not a problem.
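
That hop looks about like this; the image tag, ports, and data path follow Owncast's own Docker quickstart, so treat it as a sketch rather than the exact command used here:

```
# Move the persistent data directory, then start the container on the new host
rsync -a /opt/owncast/data/ newhost:/opt/owncast/data/
docker run -d --name owncast \
  -v /opt/owncast/data:/app/data \
  -p 8080:8080 -p 1935:1935 \
  gabekangas/owncast:latest
```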

Nextcloud got stood up in place – I'd set it up from scratch on the host server, so aside from monitoring resource usage, it's a pretty self-contained setup. I do not look forward to the day when it ends up needing to be moved, though, because I opted for the AIO master container installation, which was super easy, but I have next to no idea how it works or if/where it's storing persistent data. That's on the investigative to-do list.

Aside from that worry, though, Nextcloud is a pretty neat piece of kit. It's nice to have control of my own cloud storage, notes, tasks, calendar, etc, which is what I had intended to use it for when I set it up. With a shift away from Google on my Pixel 7's fresh install of grapheneOS too, it became the perfect place to also sync my phone contacts outside of my Google account. De-googling as much as possible is really nice.

The feature I've actually come to use the most though, and unexpectedly so, is the News applet's synced RSS feeds. I have both my phone and tablet with a Nextcloud-compatible RSS reader app, and have been slowly building a library of RSS feeds. whatyearisit.jpg

Now, back to Akkoma. Let me preface this next bit – this migration took so long not because Akkoma is difficult (quite the opposite, actually), but because I am woefully inexperienced and past me wrote a check that she didn't know how to cash, which suddenly became present-me's problem.

I had expected, naively, that I might just be able to export or commit the docker containers, copy them over to the new host machine, and then bring them online. Maybe this is doable. Beats the fuck out of me, though, because I spent about two nights trying to get docker to load and connect the frontend and backend containers (yes, I know about the docker network to support this). I think my issue is that I expected docker to behave similarly to a traditional VM container, and it doesn't really.

I also had a sort of nebulous understanding of file handling between host and container which I don't think truly clicked until last night as I was making a second attempt at migration, which may have also been a part of the problem. In short, last week all I had to show for my effort was a broken frontend and no evidence that it was connecting to the postgresql container where my DB was supposed to be.

Over the weekend, I also noticed that I was one release version out of date. That was an issue I wanted to tackle before attempting to migrate again, so I pulled and started running the necessary mix tasks, which, for some reason, absolutely failed. Compilations did not work, db migration did not work, and the containers would start, but, well, “broken” is the mild way to put it. I should've grabbed a screenshot.

Remember that part where I said I thought I was happy with Digital Ocean? There was this one nagging feeling I'd had for a while which I'd had the foresight to compensate for – that was backups. DO allows for instant snapshots anytime, but backups are completely automated and only happen once a week, usually overnight Thursday into Friday Eastern US time. [Contrast this with Hetzner, which takes backups daily for a rolling 7 days, and both backups (temp) and snapshots (long term) are able to be triggered manually, I love it].

Guess who forgot to take a backup before she started the update process.

Guess who got a really, really unwelcome lesson in hubris.

I was very, very lucky that I was doing this on a Friday night (who doesn't hang out at home rolling The Old Republic and update their web services on a Friday night? What are you, a bunch of shut-ins?) and so the last backup had occurred less than 24 hours before. I could live with a day of lost posts, after all the instance is just me. I restored the backup, but there was still the lingering question of why the update failed in the first place. It was giving me errors and throwing warnings that I needed to verify that there was a compiler installed.

After some poking, I stumbled on the fact that git was apparently pointed to the dev branch. That shouldn't be right. I pointed it back to the stable branch with a reset and re-pulled the new version. Git listed the new files being pulled in. This time the update worked fine, migrations and all. I don't know what exactly was different, because I have a feeling there was some other me-specific weirdness happening there, but I didn't care to spend time digging into it. If it happens again, maybe I'll put effort into tracing the issue. Update complete; now back to the migration situation.
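
The actual fix was tiny; roughly this, from the Akkoma checkout (the path is an example, branch names are Akkoma's):

```
cd /opt/akkoma                    # wherever the clone lives
git checkout stable               # point back at the stable branch
git reset --hard origin/stable    # the reset mentioned above
git pull
```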

It'd been about a week since I last tried the docker image method. Thinking about it, migrating Akkoma is really all about the static folder and the postgres db. Those just lay on top of more or less the same base installation. I'd already gone through that once moving from the OTP install to docker to begin with, so why not just, y'know, do it again?

I wish this had occurred to me sooner. Exporting the static content folders and dumping the db was far, far easier. Migration was complete by about 2:30 AM Monday morning ET. We're now fully on Hetzner, and as I write this, I've destroyed the last droplet I was maintaining. Goodbye Digital Ocean, I'll think of you fondly as the first place to have supported my first venture into this... digital... ocean...
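
For the record, the second attempt boiled down to something like this; the database name and folder paths are examples from a standard docker-based setup, not necessarily exact:

```
# On the old host: dump the database and ship the persistent folders
pg_dump -Fc akkoma > akkoma.pgdump
rsync -a akkoma.pgdump uploads instance/static newhost:/opt/akkoma-migration/

# On the new host: restore into a fresh install's empty database,
# then drop the folders back into place and bring the containers up
pg_restore -v -d akkoma /opt/akkoma-migration/akkoma.pgdump
```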

But wait! Fen! Didn't you have a Peertube instance too? What happened to that?

Great question, person I completely made up for the loose purpose of a segue. I debated a bit on moving Peertube over – I really, really love the idea of Peertube as a distributed and federated platform for sharing videos. It's YouTube with the power of BitTorrent! How cool is that!? Like lemmy and kbin before it, though, the issue really came down to scope and usefulness. From a resource standpoint, Peertube is heavy, especially when it swallows the rest of the server trying to transcode an upload that's 1-2 hours in length. All I want out of it is a place to stash past stream broadcasts and speedrun PB vods; it's just more than is needed, and even if I thought it might be useful to open up to friends, video storage isn't exactly a trivial matter.

As an alternative, I'm seriously looking into what I can do with Nextcloud to just make the videos available in a fileshare-type deal and stream them in-browser. If that works, I might just do that as the easy answer. If it doesn't look like that's going to be viable in a satisfying way, then I might just sign up for a peertube instance elsewhere and chip in a few bucks to whomever's ko-fi or whatever. That maps the same as kbin, where I'm currently on kbin.social instead of continuing to try to host my own. Some things just don't really make sense as a single-user instance.

Speaking of which, the RSS reader via Nextcloud has almost entirely killed my usage of my kbin.social account. I discussed my Reddit and Lemmy/kbin habits in my previous post, but the short version is I was a passive consumer of content for news and my hobbies and interests. Now I've got the news covered, and the interests never really made a 1:1 jump to the Lemmy/kbin fedispace, so I'd already come off that some. I get more of that through Akkoma than I'd really expected anyway.

As is, I think I have a relatively stable configuration of instances that I'm pretty happy with. Next projects are building on what's there – I'd like to get a volume going and give some extra dedicated storage to both Akkoma and Nextcloud, maybe even a proper CDN. Akkoma has some clerical and administrative paperwork I want to take care of with the intent of maybe someday opening it up to friends, but I need to put some polish into it before doing that because it is a very me space.

Whatever you do, pay no mind to the shadowy figure in the background that is a nagging desire to want to set up a Firefish instance just because. You don't need it Fen, you're happy, you're stable, it offers you no benefit. Don't do it.

(i'm probably going to do it eventually, because I'm weak >_<)

 

from fen

I said I'd talk about Lemmy/kbin and this is that post.

Back in June of this year, just before the Reddit blackouts and just after the news had come out about the API pricing changes, I made the decision to spin up my own Lemmy instance, because this was the tipping point where a declining Reddit was no longer fit for me. I genuinely did get a lot of news via Reddit, and it was my go-to just-absently-browsing-the-internet platform of choice, which meant finding a replacement was something I was interested in doing. Most of my browsing was on mobile via Boost for Reddit, which of course was one of the third-party apps impacted by, and shutting down because of, the astronomical API rate hike. I figured, why wait until I'm actually locked out after June 30 when the change would go into effect – let's get this show on the road.

By this point there are a number of reddit-like content aggregator platforms – Tildes, Raddle, technically HackerNews fits (blergh), components of Hive, and of course Lemmy and kbin, just to name a few. Tildes was out since I wasn't planning on jumping out of one situation where I was at the mercy of the administration to another (and it was invite-only). Raddle has the ability to be self-hosted and has a number of instances, but they're more like private forums and aren't interlinked. The last two were the ones that interested me most because of their ActivityPub compatibility and ability to federate with other instances. This makes a ton of sense for an instance with a small userbase (ie me, maybe a couple friends) since we won't be running a whole community or hardly even our own sub. At least at its start, it would just be a portal, albeit one I could control.

I loved the ideas behind kbin and the direction it was going, but unfortunately it was still pretty young and relatively feature-incomplete – it had been started in mid-2021, had more or less a single dev working on it, and I don't think it had a properly functioning instance until either late 2022 or early 2023. The project was very clear about being essentially in an alpha state. Even a couple months later at the time of this writing, it has really poor tools for administration, but that's the nature of software in alpha.

Lemmy was a little older, dating back to early 2019, and had been functional for a couple years by this point. Even before the big push the API changes were getting ready to make, I'd seen it mentioned previously as a Reddit alternative, though never paid it much mind. While I didn't necessarily care for the politics of Lemmy's version of Website Boy, having my own self-hosted instance meant I could still control my interface, and it was more complete on the features I wanted. Namely control of registrations and easy deployment and updating.

Mind you, back at this point in June, I'd only been self-hosting my Akkoma instance for about 5 weeks. Tiny baby fen just now set afloat on her own raft and it kinda actually working out was great. It was a little unexpected, actually. I had genuinely planned that I'd probably migrate back to my @fencore@tech.lgbt account after my 30 day migration cooldown was up from the first move, but I was in love then and still am now.

So anyway, deployment and getting federation and all that jazz up and running was relatively simple using the Ansible playbook. I did put the Lemmy instance on its own droplet so as to not risk impacting my Akkoma instance (I hadn't yet moved Akkoma to its own docker container). Everything more or less worked out of the box, so top marks for all that.

A couple weeks afterward a new major release was made available, and likewise, updating was fairly simple. Perfect.

“Fen,” you say, “sounds like everything's great, but you killed your instance. What's the deal?”

Great question, fake person I made up. The problem was me! And not in the usual “I identify as a menace” kind of problem I like to be on most days; in this case it really came down to a fundamental difference in how I used a platform like a link aggregator compared to what I do on Akkoma/microblogging.

I absolutely hesitate to call anything I do “creation” of any kind. Akkoma fits my needs in being able to be what some people might call “funny” while making connections to people. In fact, the focus is the people at either end, and fostering conversations. I have personal things I post about, but people wouldn't consider following me for whatever project I'm into, because then they'd basically die of relevant-content hunger when I never post about it again. I, naturally, was not conducive to the algorithm on Twitter, but I've absolutely found my niche on the fediverse via algorithm-free microblogging and making legitimate connections with very good people.

This runs counter to how a link aggregation platform is meant to be used. The whole angle of link aggregation is “here's a thing, discuss it” in a kind of meta sort of fashion. The golden age of Reddit really thrived because of content then being created specifically for the platform, and not just linked or shared to it. The part of me that enjoys reading and learning absolutely craves that sort of informational uptake. Read a thing, and then hit the comments to see what sort of interesting stories and perspectives there are. Good stuff.

The issue is that I don't interact well in that kind of space. In my 12ish years on Reddit I'd made a few hundred comments and only... 3? 4? of my own posts. And over the past two months on Lemmy I think maybe 6-8 comments total? My point being that, in that space, I am absolutely a consumer, not a contributor. Though this consumption did help in a lot of ways – realizing I am trans and being able to read the stories of other trans people via r/transgender and laugh along with people like me on r/traaaaaaa and r/egg_irl really helped to make me feel a little more like I belonged. That kind of stuff was what was really tough to walk away from. Even still, though, I have a hard time with the “hey, look at me, I made this thing or have this idea to share” one-shot deal that creators on that kind of platform take to. There's probably some things there I need to talk to a therapist about.

The whole point of this self-hosting journey has been to take control, specifically of:

  1. My data
  2. My interface with the internet and the space I occupy in it.

As it turns out, while I learned a lot in the process of creating my Lemmy instance, it functionally served neither of those two end goals.

What data was there to control, truly? I hardly commented or posted. I didn't moderate a community and I had no interest in doing so.

What benefit did I get in controlling my interface? I could just as easily block from my individual account as I could my instance, and I wasn't really aiming to build a community of people that I could extend that courtesy to. I could just as easily subscribe and self-moderate from any other instance.

And, on top of that, there were some flaws in the software that are just a factor of its development state. A content aggregator actually really needs a good algorithm to handle surfacing and decay of new posts to keep the main feed moving, especially a feed federated across countless instances. Lemmy's is not in a good state, leaving posts that are weeks or months old near the top of the feed. Kbin's is marginally better, but not enough to make a difference. Because of this, I found myself over time wanting to access it less and less and when I would I wasn't enjoying the time I was spending there.

Ergo, Shork Online is dead. I cut my server for the instance, I made a backup which I'll probably can in a week or two, and moved my subscriptions to kbin.social and will probably use that as home base for the few times I feel like I want to check up on what's happening on the threadiverse (it's all Linus Tech Tips drama this week, unsurprisingly, and this parenthetical will absolutely date this post terribly. By the way, hate the term threadiverse).

The platforms still have a lot of maturing to do and I think will eventually be in a really good space, but even then I don't foresee myself spinning up another instance. This experiment taught me a lot about what has value for me personally on this project, and I feel it was very much worthwhile to gain that knowledge, even if the instance itself didn't last long.

And thanks to @andyy@tech.lgbt for putting up with the worst administrator (me) who still never had email notifications running the entire time the instance existed. Glad you never forgot your password.

 