• 0 Posts
  • 39 Comments
Joined 1 year ago
Cake day: August 11th, 2023

  • Yes, there are people with well-argued reasons against loosening immigration laws or giving more rights to illegal immigrants. But it is a fact that a large number of immigrants oppose these measures simply because others shouldn’t get what they got without going through the same hard work. I’ve seen very angry groups of immigrants opposing driver’s licenses for illegal immigrants for exactly that reason. It wasn’t about the best of the best or extraordinary ability.

    Same for student loan forgiveness. There are a lot of people who oppose it just because “it isn’t fair” that they had to pay theirs and others wouldn’t.

    I think that logic is flawed and petty.

    I personally don’t have much stake in the matter. I’m a legal immigrant, doing one of those jobs that required proof that I’m highly qualified. And I got to be highly qualified for free, because I didn’t study in the US.





  • Hallucinations are an issue for generative AI. This is a classification problem, not gen AI; this type of AI use predates gen AI by many years. What you describe is called a false positive, not a hallucination.

    For this type of problem you use AI to narrow a set down to a more manageable size. For example, you have tens of thousands of images and the AI identifies a few dozen that are likely what you’re looking for. Humans would have taken forever to review all of those images manually. Instead, humans verify just the reduced set and confirm the findings through further investigation (see the rough sketch below).
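
    A minimal sketch of that triage step in Python. The classifier `model`, the `load_features` helper, the glob pattern, and the 0.9 threshold are all hypothetical stand-ins for illustration, not anything from the case being discussed:

    ```python
    # Rough sketch: use a classifier to shrink a huge image set to a short
    # candidate list that humans then verify. Assumes a pretrained binary
    # classifier `model` with a scikit-learn-style predict_proba() and a
    # `load_features` helper that turns an image file into a feature vector.
    from pathlib import Path

    REVIEW_THRESHOLD = 0.9  # only confident hits get sent to human reviewers

    def triage(image_dir, model, load_features):
        """Narrow tens of thousands of images down to a short list for humans."""
        candidates = []
        for path in Path(image_dir).glob("*.jpg"):
            features = load_features(path)                 # e.g. an embedding
            score = model.predict_proba([features])[0][1]  # P(image is a match)
            if score >= REVIEW_THRESHOLD:
                candidates.append((path, score))           # may include false positives
        # Humans review only this reduced set and confirm findings through
        # further investigation; false positives get weeded out here.
        return sorted(candidates, key=lambda item: item[1], reverse=True)
    ```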



  • Make a large enough model, and it will seem like an intelligent being.

    That was already true in previous paradigms. A non-fuzzy, non-neural-network algorithm that is large and complex enough will also seem like an intelligent being. But “large enough” is beyond our resources, and the processing time for each response would be too long.

    And then you get into the Chinese room problem: is there a difference between “seems intelligent” and “is intelligent”?

    But the main difference between an actual intelligence and various algorithms, LLMs included, is that an intelligence works on its own: it’s always thinking, it doesn’t only react to external prompts. You ask a question and you get an answer, but the question stays at the back of its mind, and it might come back to you 10 minutes later and say, “You know, I’ve given it some more thought, and I think it’s actually like this.”