[email protected] is live! If you missed the previous discussion, it’s a community with a robot moderator that bans you if the community doesn’t like your comments, even if you’re not “breaking the rules.” The hope is to have a politics community without the arguing. [email protected] has an in-depth explanation of how it works.

I was trying to keep the algorithm a secret, to make it harder to game the system, but the admins convinced me that basically nobody would participate if they could be banned by a secret system they couldn’t know anything about. I’ve posted the code as open source. It works like PageRank: it aggregates votes, assigns each user a trust score based on who the community already trusts, and bans users whose trust falls too low.
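
To make the mechanism concrete, here’s a minimal Python sketch of that kind of vote-weighted trust propagation. To be clear, this is not the actual posted code: the vote format, the seed set of trusted users, the damping factor, and the ban threshold are all illustrative assumptions, loosely following personalized PageRank.

```python
# Minimal sketch of PageRank-style trust propagation over a vote graph.
# Not the real bot code: the vote format, seed users, damping factor,
# and ban threshold below are all illustrative assumptions.

from collections import defaultdict

def compute_trust(votes, seed_users, damping=0.85, iterations=50):
    """votes: list of (voter, author, value) tuples, where value is
    +1 for an upvote and -1 for a downvote.
    Returns a dict mapping each user to a trust score."""
    # Group each voter's votes so their outgoing trust can be split
    # evenly across everyone they voted on.
    outgoing = defaultdict(list)
    users = set()
    for voter, author, value in votes:
        outgoing[voter].append((author, value))
        users.update((voter, author))

    # Start from a seed of trusted users, as in personalized PageRank.
    trust = {u: (1.0 if u in seed_users else 0.0) for u in users}

    for _ in range(iterations):
        new_trust = {u: (1 - damping) * (1.0 if u in seed_users else 0.0)
                     for u in users}
        for voter, targets in outgoing.items():
            weight = trust[voter] / len(targets)
            for author, value in targets:
                # An upvote from a trusted user raises the author's
                # trust; a downvote lowers it.
                new_trust[author] += damping * weight * value
        trust = new_trust
    return trust

def banned_users(trust, threshold=-0.01):
    # Ban anyone whose aggregated trust falls below the threshold.
    return [u for u, score in trust.items() if score < threshold]

if __name__ == "__main__":
    # Hypothetical example: alice is seeded as trusted, so her upvotes
    # lift bob and carol; carol's downvote then drags dave negative.
    votes = [("alice", "bob", +1), ("alice", "carol", +1),
             ("carol", "dave", -1)]
    trust = compute_trust(votes, seed_users={"alice"})
    print(banned_users(trust))  # ['dave']
```

The key property mirrors the description above: votes count for more when they come from users the community already trusts, and whoever ends up below the threshold gets banned.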

I’ve also retuned and rebalanced the algorithm. It now bans only a tiny number of users (108 in total right now), though that still includes a lot of obnoxious accounts. There are now no slrpnk users banned. It’s mostly lemmy.world people, a few from lemmy.ml or lemm.ee, and a scattering from other places.

Check it out! Let me know what you think.

  • LibertyLizard@slrpnk.net · 5 months ago

    Very interesting idea. Glad you decided to make it transparent since I don’t think it will work otherwise.

    I’m not sure it will work as intended, though. In my opinion the state of political discourse on Lemmy is pretty bad right now, but that may reflect the broader state of politics in society more than our particular platform. If we want to create a truly positive space for political discussion, it might require more intervention than banning a small fraction of users. To use myself as an example, I try to be pleasant and constructive, but I know I don’t always succeed. An analysis based on the content of comments could also be interesting to try. Or a kind of intermediate user status, where comments require mod approval. That could be overly dependent on subjective mod opinions, though.

    Still, I think even if it doesn’t work, this is the type of experiment we need to elevate online discourse beyond the muck we see today.

    • auk@slrpnk.net (OP) · 5 months ago

      I agree. As soon as I started talking to people about it, it was blatantly obvious that no one would trust it if I kept how it worked a secret. I would have loved to inhabit the future where everyone assumed it was an LLM and spent time trying to trick the nonexistent AI, but it’s not to be.

      I agree with you about the bad state of the political discourse here. That’s why I want to do this. It looks really painful to take part in, and I thought for a long time about what might make it better. This may or may not work, but it’s what I could come up with.

      I do think there’s a significant advantage to the bot being totally outside of human judgement: because nothing is personal, it can be a lot more aggressive with moderation than a human could be. The solution I want to try for the muck you’re talking about is setting a high bar, but it’s absurd to have a human go through comments sorting them into “high enough quality” and “not positive enough, engage better,” because that will always be based on personal emotion and judgement. If it’s a bot, the bot can be a demanding jerk, and that’s okay.

      I think a lot of the intervention you’re talking about can come from good transparency: giving people guidance and insight into how the bot works. The bans aren’t permanent. If someone wants to engage in a bot-protected community, they can, as long as they’re amenable to changing the way they post so that the bot likes them again. That also means being honest with people about what the bot gets wrong when it inevitably does, of course.

      • LibertyLizard@slrpnk.net · 5 months ago

        I agree with your points and general philosophy, but I guess the flaw I was trying to address is that good users can post bad content and vice versa. So moderation strategies that can make decisions based on individual comments might be better than just banning individuals whose contributions we dislike on average.

        This would require a totally different approach, and I don’t think your tool necessarily needs to solve every problem, but it’s worth pondering.

        • auk@slrpnk.net (OP) · 5 months ago

          I think I see it the opposite way. There’s a population that posts normal stuff and sometimes crosses a line and posts inflammatory stuff. And there’s a population that has no interest in being normal or civil in their conversation, which the moderators can sometimes keep in line to some degree, or which gets removed if they can’t.

          The theory behind this moderation is that it’s better to leave the first population alone but remove the whole second population outright, while still giving them the option of coming back if they want to change the way they interact on a longer-term timescale. My guess is that’s better than keeping them in line by removing comments every now and then and not intervening unless they cross certain lines, since that approach lets them keep making postings the community doesn’t want while skirting the edge of what the moderators consider an acceptable level of offensiveness.

          Whether that theory is accurate remains to be seen, of course.