
Earlier this year, WIRED asked AI detection startup Pangram Labs to analyze Medium. It took a sampling of 274,466 recent posts over a six week period and estimated that over 47 percent were likely AI-generated. “This is a couple orders of magnitude more than what I see on the rest of the internet,” says Pangram CEO Max Spero. (The company’s analysis of one day of global news sites this summer found 7 percent as likely AI-generated.)

      • GenderNeutralBro@lemmy.sdf.org · 14 days ago

        After all these years, I’m still a little confused about what Forbes is. It used to be a legitimate, even respected magazine. Now it’s a blog site full of self-important randos who escaped from their cages on LinkedIn.

        There’s some sort of approval process, but it seems like its primary purpose is to inflate egos.

        • alyaza [they/she]@beehaw.org (OP) · 14 days ago

          As of 2019 the company published 100 articles each day produced by 3,000 outside contributors who were paid little or nothing.[52] This business model, in place since 2010,[53] “changed their reputation from being a respectable business publication to a content farm”, according to Damon Kiesow, the Knight Chair in digital editing and producing at the University of Missouri School of Journalism.[52] Similarly, Harvard University’s Nieman Lab deemed Forbes “a platform for scams, grift, and bad journalism” as of 2022.[49]

          they realized that they could just become an SEO farm/content mill and churn out absurd numbers of articles while paying people table scraps or nothing at all, and they’ve never changed

  • derbis@beehaw.org · 15 days ago

    How well does the “AI detection startup’s” product work? This is a big unsolved problem but I’d be hecka skeptical.

  • IninewCrow@lemmy.ca · 15 days ago

    It’s not so much that it’s AI generated … it’s also AI influenced.

    I know so many professional office workers who used to write some of the most boring, sometimes stupid emails because they didn’t know how to write, couldn’t get their message across, or constantly miscommunicated because they worded things wrong … now, all of a sudden, they’ve become professional writers and all their emails look like auto-generated messages.

    I’m guessing that many writers also take the AI shortcut. They get a bunch of content generated by an AI, then just rewrite it for themselves. Some content I see is lazily edited and some is heavily edited. But I get the feeling that just about everyone is using it, because it’s an easy way to get a bunch of work done without having to think too much.

    • Randomgal@lemmy.ca · edited · 14 days ago

      At work? Yeah, I’m gonna use AI to write that email. I didn’t think or do anything more than the minimum required before, and I’m not starting now. AI just makes it so that the same garbage I would send before now smells nice.

      If you like writing as an art, why would you have the machine do that for you? If you like thinking, you can do the thinking and let the machine do the typing for you.

      All of these are different uses.

    • Pete Hahnloser@beehaw.org · 15 days ago

      The implication that rewriting GPT output makes one a professional writer … not sure we’re on the same page there. If you know how to use it for those results, great!

  • haroldstork@lemm.ee · 15 days ago

    Omg, the number of times I’ve clicked on a Medium article in the last month and immediately knew it was AI is so frustrating!!! They aren’t even helpful articles, because you can tell there is no real understanding.

    • Beej Jorgensen@lemmy.sdf.org · 14 days ago

      I think the difference is scale. Before, it was x% of humanity producing shitty opinions, where x < 100. Now it’s x% of humanity+AI, where x is, say, 100,000% of humanity. I don’t think we’re currently equipped to separate the wheat from that much chaff.

  • Omega_Jimes@lemmy.ca · 15 days ago

    The best part about this is that new models will be trained on the garbage from old models, and eventually LLMs will just collapse into garbage factories. We’ll need filter mechanisms, just like in a Neal Stephenson book.

      • Omega_Jimes@lemmy.ca · 14 days ago

        I’m in university and I’m hearing this more and more. I keep trying to guide folks away from it, but I also understand the appeal because an LLM can analyze the code in seconds and there’s no judgements made.

        It’s not a good tool to rely on, but the further I progress, the more people I hear relying on it.

    • RickRussell_CA@beehaw.org · 15 days ago

      Perhaps, but I don’t read anything on Substack unless I’m subscribed. Reputation is the entire point on Substack; without it, the content will get no traffic.

  • Storksforlegs@beehaw.org · 14 days ago

    The first person who develops a browser that effectively filters out AI results is going to do very well.

  • Scrubbles@poptalk.scrubbles.tech · 12 days ago

    I just had one of these! Literally every image was AI-generated and everything read like it was from OpenAI. It was a Google search for something like “kubernetes custom deployment rules” and the result was something like “kubelat.medium.com”. They just take the most-asked questions and generate entire articles about them.

    I just went to the source and asked ChatGPT directly. I got a better answer anyway.