Blob/Cat

Sorry for the downtime, everyone. Looks like the migrations are going smoothly. NB should start up once everything is done.

@Nobody @fluffy @matrix @mewmew @p @sjw @Kommando Alright, NB should be back up. The whole process took about 30 minutes. Not bad.
[attachment: image.png]

@sjw @Kommando @Nobody @fluffy @matrix @mewmew @sjw

> 30 minutes

> Not bad

Not good at all.

@p @sjw @Kommando @Nobody @fluffy @matrix @mewmew Sorry, I don't have a NASA computer running my DB

@sjw @Kommando @Nobody @fluffy @matrix @mewmew @sjw This can be done in such a way that it doesn't hose your DB for 30 minutes. I will make a ticket later.
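(p doesn't spell out the approach here, but the usual trick for not hosing the DB during this kind of migration is to build replacement indexes CONCURRENTLY. A rough sketch; the index and table names below are made up, not Pleroma's actual schema:)

-- Build the new index without taking a lock that blocks reads/writes,
-- then drop the old one. Hypothetical names for illustration only.
CREATE INDEX CONCURRENTLY activities_inserted_at_idx
    ON activities (inserted_at);
DROP INDEX CONCURRENTLY IF EXISTS activities_inserted_at_old_idx;

The trade-off is that CONCURRENTLY builds take longer overall and can't run inside a transaction, but the instance stays usable while they run.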

@p @Kommando @Nobody @fluffy @matrix @mewmew @sjw I reverted migrations from a different branch, got back on develop, and updated to current. I was about 3 months behind so a fuck ton of migrations was expected. Plus, I had to rebuild a few indexes.

@Kommando @Nobody @fluffy @matrix @mewmew @p @sjw @march @redneonglow @redneonglow Sorry for the downtime again. Shouldn't be too much longer now.
[attachment: image.png]

@fluffy @Kommando @Nobody @march @matrix @mewmew @p @redneonglow @redneonglow @sjw Thanks. @mewmew said I needed a blobcat pfp so I made this.

@Kommando @Nobody @fluffy @march @matrix @mewmew @p @redneonglow @redneonglow @sjw Jesus Christ! How long is this going to take?

@Nobody Yeah, DB migration (going back to rum indexes). It's taking way longer than expected. blobcatsadreachrev
[attachment: phone_in_bed.png]

@Nobody Welp, that was a 2.5 hour migration and now everything's fucked. I hope that's just a backlog of federation and it'll even out soon.

@sjw @Nobody what's fucked?

if the instance is running like shit, run a vacuum analyze

@mewmew @sjw @Nobody what is vacuum analyze?

@a1batross @Nobody @sjw you run it in postgres and it frees up a bit of space, and also increases the query planner's IQ by 10000% because it goes from being really slow to really fast
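(For anyone following along, that's literally one statement in psql:)

-- Reclaim dead row versions and refresh the planner's statistics.
VACUUM (ANALYZE, VERBOSE);
-- If you only care about the planner, plain ANALYZE is much cheaper:
ANALYZE;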

@mewmew @sjw @Nobody It was offline for a few hours doing a migration to rum indexes. Now it's just catching up on what it missed (I think)
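(Rough idea of what a RUM index migration involves, assuming the rum extension is installed; the table and expression below are illustrative, not Pleroma's actual schema:)

CREATE EXTENSION IF NOT EXISTS rum;
-- A plain (non-CONCURRENT) build like this locks the table for the whole
-- build, which is roughly why the instance was offline during the migration.
CREATE INDEX objects_fts_rum_idx
    ON objects
    USING rum (to_tsvector('english', data->>'content') rum_tsvector_ops);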

@mewmew @Nobody @sjw Yeah, it's starting to level out.

@mewmew @a1batross @Nobody I've got good autovacuum and autoanalyse settings, but I've made a ton of changes and the query planner is probably all kinds of fucked up, so I'm running vacuum(analyse); now. I'll try turning tags back on and see how it goes.
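(If you want to check whether autovacuum/autoanalyse have actually been keeping up, the stats collector records the last run per table:)

-- Most recently analysed tables first; NULLs mean "never".
SELECT relname, last_autovacuum, last_autoanalyze, last_analyze
FROM pg_stat_user_tables
ORDER BY last_autoanalyze DESC NULLS LAST;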

@mewmew @Nobody @a1batross Oh yeah, it's removing a fuck tonne of row versions.
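(The same view also shows how many dead row versions are still waiting to be removed, which is a decent proxy for how much work the vacuum has left:)

SELECT relname, n_live_tup, n_dead_tup
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC
LIMIT 10;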

@sjw @Kommando @Nobody @fluffy @march @matrix @mewmew @redneonglow @redneonglow @sjw

I just use iotop and this:

SELECT current_timestamp - query_start AS runtime, *
FROM pg_stat_activity
WHERE datname = 'YOUR_DB_NAME_HERE'
  AND state <> 'idle'
ORDER BY query_start ASC;

That gives you an ordered list of what queries are running. It's no progress bar, but you can more or less see how expensive something is.
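(On Postgres 9.6 and later there is also something closer to an actual progress bar for vacuums, the pg_stat_progress_vacuum view:)

-- Shows which table each running vacuum is on and how far through the heap it is.
SELECT p.pid, p.phase, p.heap_blks_scanned, p.heap_blks_total, a.query
FROM pg_stat_progress_vacuum p
JOIN pg_stat_activity a USING (pid);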

@p @Kommando @Nobody @fluffy @march @matrix @mewmew @redneonglow @redneonglow @sjw Yeah, postgres seems to have shat itself while I was asleep. Funkwhale and Plume DBs seem to be working fine but Misskey and Pleroma DBs are fugged. Doesn't look like any data was lost and I can probably fix it but it'll take a little while. ETA 2 hours.

@sjw @Kommando @Nobody @fluffy @march @matrix @mewmew @redneonglow @redneonglow @sjw Shat itself? What happened?

There's one Pleroma index that takes, like, 8 hours to build on FSE. Watch the active queries and the disk I/O; if it's shitting itself you'll know.
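(On Postgres 12 or newer, long index builds like that one also report real progress, so you don't have to infer it from I/O alone:)

-- One row per CREATE INDEX currently running.
SELECT pid, phase, blocks_done, blocks_total, tuples_done, tuples_total
FROM pg_stat_progress_create_index;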

@p @Kommando @Nobody @fluffy @march @matrix @mewmew @redneonglow @redneonglow @sjw Maybe some driver crashed or stuttered or something. Not entirely sure. youmushrug
Those two databases are angry in the logs, though. I think I've fixed it. Now we just play the waiting game.
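(Not a substitute for reading the logs, but a quick per-database sanity check is to compare rollbacks and deadlocks across the cluster:)

SELECT datname, xact_commit, xact_rollback, deadlocks
FROM pg_stat_database
WHERE datname IS NOT NULL
ORDER BY deadlocks DESC;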