an appeal to the fediverse regarding anti-abuse 

Dear fediverse:

Fascism joining the fediverse is extremely bad, and we have to do something about it. But please, please, please: give me two weeks before you roll out any new solutions. Some of the solutions being proposed look like they will make the situation better, but will actually make it much worse.

I am dropping nearly everything to write a demo and spec explaining how to do things right. Please give me two weeks. I've been preparing for this.

As a hint as to why the current solutions aren't going to work, I'll point you to what happened when Mastodon rolled out direct messages over OStatus: they *weren't really* private messages. An admirable attempt, but it needed a different approach.

I believe this could be like that, but 10x worse. I've been studying what will happen under different approaches and trying hard to figure out how to map a solution onto what we have.

@cwebber what are their solutions that you find bad?

@wilkie That's probably a really hard question for @cwebber to answer, since it'd involve talking about his plan that, as he said, isn't ready.

So I'll try to answer: one issue is that they would like to cryptographically ensure that a communication is proper, which means that the more your server is targeted by harassment, the more costly it will be to keep operating. Another is that it doesn't allow the same autonomy of moderation as exists currently.

@emsenn @wilkie @cwebber Please elaborate on the autonomy of moderation point. What do you mean by that?

@Gargron If I understand it right - which I very well might not:

Current discussions of OCAP provide the tools to instance moderators, but don't provide similar tooling for users.

Right now, as I understand it, users can take most of the moderation actions moderators can, relative to their own profile: they can autonomously moderate their profile even if their instance doesn't do moderation.

(Again, I could be wrong in understanding how things are or could be.)
@wilkie @cwebber

@emsenn @Gargron ah, you are right about the current implementation in Mastodon. That can be fixed though (but it would be more expensive)

@emsenn @Gargron say hello to github.com/tootsuite/mastodon/

Also, your claim that "the more targeted your server is by harassment, the more costly it will be to continue operating" is only partially true: cryptography, as well as the most expensive db queries, is avoided when the instance is known to be blocked instance-wide (though not for the user-defined blocks honored by this PR)
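(To make the point above concrete, here's a minimal sketch of that ordering. All names are illustrative, not Mastodon's actual code: the cheap instance-wide block lookup runs before any signature verification, so a request from a blocked domain is rejected at almost no cost.)

```python
# Hypothetical sketch of cheap-check-first request handling.
# Assumption: instance-wide domain blocks can be checked with a cheap
# set lookup, while signature verification (key fetch + RSA check) is
# the expensive step.

DOMAIN_BLOCKS = {"spam.example"}  # instance-wide blocked domains


def handle_signed_fetch(origin_domain, verify_signature):
    """Return an HTTP status code for an incoming authenticated fetch."""
    if origin_domain in DOMAIN_BLOCKS:
        return 403  # rejected before any crypto or db work happens
    if not verify_signature():
        return 401  # expensive path: only reached for unblocked domains
    return 200


# A blocked domain never reaches the signature step:
status = handle_signed_fetch("spam.example", lambda: True)
```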

@Thib Gonna untag Gargron since I'm just asking questions: my main concern is short-lived instances spun up to DDoS, more than a known server trying to spam me. There are tools at nearly every part of the network stack for dealing with the latter.

@emsenn that concern is already very much present without authenticated fetches, unfortunately

@Thib Well, the concern of DDoSing at a network level is, sure. But the encryption stuff adds other resources to the pile of what can be made scarce.

@emsenn that can be done by pushing things to the inbox, which will trigger exactly the same kind of workload as fetching a toot with a signed request.

And I'm not sure something OCAP-based will help much, since you'd still have at least the endpoint(s) dedicated to requesting caps vulnerable to the same kind of attacks
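(A tiny sketch of the equivalence Thib is describing, with illustrative names only: a POST to the inbox and a signed GET for a toot both funnel through the same HTTP-signature verification step, so authenticated fetches add no new class of workload that inbox delivery didn't already impose.)

```python
# Hypothetical sketch: both delivery paths share the same expensive
# verification step. "request" is a plain dict standing in for an
# HTTP request; the real cost would be fetching the signer's public
# key and checking the signature over the request headers.

def verify_http_signature(request):
    # Placeholder for the expensive key-fetch + signature check.
    return request.get("signature_valid", False)


def receive_inbox_post(request):
    # Federated delivery: someone pushes an activity to our inbox.
    return 202 if verify_http_signature(request) else 401


def serve_signed_fetch(request):
    # Authenticated fetch: someone requests a toot with a signed GET.
    return 200 if verify_http_signature(request) else 401
```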

@emsenn that being said, I'm still curious what cwebber comes up with

@Thib Thanks for that first sentence - it's a crucial part in what I was missing to understand.
