Full stack developer and privacy advocate. I like to keep the mentality: if you can program in one language well, then you can program in any language!

  • 2 Posts
  • 88 Comments
Joined 1 year ago
Cake day: June 15th, 2023

  • Imagine living in China,
    where the government can request data from any company in the country.

    Imagine that China set up an AI/LLM, fed all private chat data into it,
    and automatically flagged opposition to the government regime.

    Imagine a white van appearing in front of your house, and you disappearing into a concentration camp, because you got flagged after expressing your opposition to the government to your mate in a private chat.

    All collected data can be abused like that,
    or by other means (e.g. a country at war gets hacked, leaking critical private information on political/defensive decisions).

    To me the question is not if data collected on you will be abused, but rather when it will be abused.

    Just having it stored somewhere imposes risks.



  • OP, I agree with you; it’s a great idea imo.
    I’ve been a moderator before on a Discord server with 1000+ members, for one of my FOSS projects,
    and maintenance against scam / spam bots grew so bad
    that I needed a team of moderators, an auto-moderation bot, and an additional moderation bot I wrote myself!

    Here is the source for that bot; it might serve as inspiration, or be directly usable by other users:
    https://github.com/Rikj000/Discord-Auto-Ban
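    The core of most anti-spam auto-moderation is a rate heuristic. A minimal sketch of one such rule, a sliding-window message-rate check, is below; the class name, thresholds, and API are my own illustration, not taken from the linked Discord-Auto-Ban bot:

    ```python
    import time
    from collections import deque

    class SpamGuard:
        """Flag a user who sends more than `max_msgs` messages
        within a sliding `window` of seconds. (Illustrative sketch;
        names and thresholds are assumptions, not the linked bot's.)"""

        def __init__(self, max_msgs: int = 5, window: float = 10.0):
            self.max_msgs = max_msgs
            self.window = window
            self.history: dict[str, deque] = {}

        def should_flag(self, user_id: str, now: float | None = None) -> bool:
            now = time.monotonic() if now is None else now
            q = self.history.setdefault(user_id, deque())
            q.append(now)
            # Drop timestamps that have fallen out of the sliding window.
            while q and now - q[0] > self.window:
                q.popleft()
            return len(q) > self.max_msgs
    ```

    A real bot would call something like `should_flag()` from its message-event handler and then mute, kick, or ban; the point of the sketch is that the flagging logic itself stays invisible to users who never trip it.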

    I think it is only a matter of time before spam / scam bots catch up to Lemmy,
    so it’s good to be ahead of the curve with auto-moderation.

    However, I also partially agree with @dohpaz42: auto-moderation on Reddit is very, uhm, present.
    Imo auto-moderation should not really be visible to non-offenders.