• 0 Posts
  • 397 Comments
Joined 1 year ago
Cake day: June 15th, 2023

  • And you can’t tell when something is active/focused or not because every goddamn app and web site wants to use its own “design language”. Wish I had a dollar for every time I saw two options, one light-gray and one dark-gray, with no way to know whether dark or light was supposed to mean “active”.

    I miss old-school Mac OS, when consistency was king. But even Mac OS abandoned consistency about 25 years ago. I’d say the introduction of “brushed metal” was the beginning of the end, and IIRC that was the late ’90s. I am old and grumpy.

  • We find that the MTEs are biased, significantly favoring White-associated names in 85.1% of cases and female-associated names in only 11.1% of cases.

    If you’re planning to use LLMs for anything along these lines, you should filter out irrelevant details like names before any evaluation step (rough sketch at the end of this comment). Honestly, humans should do the same, but it’s impractical. This is, ironically, something LLMs are very well suited for.

    Of course, that doesn’t mean off-the-shelf tools are actually doing that, and there are other potential issues as well, such as biases around cities, schools, or any non-personal info on a resume that might correlate with race/gender/etc.

    I think there’s great potential for LLMs to reduce bias compared to humans, but half-assed implementations are currently the norm, so be careful.

  • Yeah, they were able (and thus legally required) to hand over the user’s recovery email address, which is what got them caught. You don’t need to enter a recovery email address, and you can of course choose to use an equally-secure service for recovery.

    One big technical issue to note is that Proton doesn’t use end-to-end encryption for email headers, which include recipients and subject lines, among other things. So that metadata is potentially exposed to law enforcement as well. I believe Tuta does encrypt headers.

  • However, it is still comparatively easy for a determined individual to remove a watermark and make AI-generated text look as if it was written by a person.

    And that’s assuming people are using a model specifically designed with watermarking in the first place. In practice, this will only affect the absolute dumbest adversaries. It won’t apply at all to open-source or custom-built tools. Any additional step in a workflow is going to wash this right out either way (toy demo at the end of this comment).

    My fear is that regulators will try to ban open models because they can’t possibly control them. That wouldn’t actually work, of course, but it might sound good enough for an election campaign, and I’m sure Microsoft and Google would dump a pile of cash on their doorstep for it.