• 0 Posts
  • 32 Comments
Joined 1 year ago
Cake day: July 11th, 2023

  • Clearly, climate change is on the rise. It is absolutely a fair price to pay for all the advancement. For almost all of history, we were dying before 30; we are now close to triple that thanks to technology, and all signs point to lifespans continuing to increase. As for pandemics, even if we outrageously compare raw death counts, not percentages as we should, many pandemics of the distant past were far, far worse than the only one almost all of us have seen in our lifetimes.

    https://www.visualcapitalist.com/history-of-pandemics-deadliest/

    I know you have it in your head that technology is evil or whatever, but that falls flat when you actually look up the numbers you are so afraid of. You are like a teenager complaining about how awful their parents are when literally everything you love and enjoy has come directly from them. I would not be surprised in the least if you are a literal teenager, given how little you seem to know of history.


  • It is honestly amazing how little you know of history. Back in WW2 we literally had over 100 weeks that each saw more casualties than the entire Russian and Israeli wars combined. And somehow you think wars are worse now. It’s honestly mind-boggling.

    Also, it is absolutely undeniable that murder is way down. You haven’t looked, and you can’t imagine how far off you are. The same goes for pandemics; it’s not even comparable. You are arguing that dinosaurs are smaller than most mice, and it would be hilarious if it weren’t so sad that you are so sure of yourself while being so completely wrong about something you refuse to research. It is so easy to look it up and find out you are wrong.


  • I recommend reading “The Better Angels of Our Nature” by Steven Pinker. People love to complain about how much worse quality of life has gotten, but when actually pressed for specific ways it has gotten worse, they are almost always arguing from complete ignorance of history. Lifespan is much longer, healthspan is much longer, rapes are way down, murders are way down, torture is way down, dying in childbirth is way down, incest is way down, pedophilia is way down, starvation is way down, dying from wild animals is way down, wars are way down. For lots of the world, today’s problems are things like people bickering over who gets to be next to whom when they pee and who called a “he” a “she”.

    This idea that the world is way worse than it used to be is absurd, and it just shines a massive light on how popular it has gotten to be a selfish brat completely oblivious of where we came from.




  • Yesterday’s AI is today’s normal technology; this is just what keeps happening. Some people just keep forgetting how rapidly things are changing.

    You’ll join this “cult” once the masses do, just like you have all along. Some of us are just out here a little bit in the future. You will become one of us when you think it has become cool, and then you will self-righteously act like you were one of us all along. That’s just what weak-minded followers do: they try to seem like they knew all along where the world was headed, without ever trying to look ahead, while ridiculing anyone who does.


  • Every word in every language changes over time. The term “AI” changing is completely normal; it’s not some mark against it.

    Current LLMs are phenomenally beneficial for some things. Millions of developers have had their entire careers transformed. Teachers can grade work in 10% of the time. Everyone from children to college students to anyone interested in learning has an infinitely patient tutor on demand, 24 hours a day. The fact that you are completely clueless about what is going on doesn’t by any stretch of the imagination mean it isn’t happening. It just means that you feel you are “beyond learning”, and that you either don’t have people in your life who are still interested in personal growth or are too shallow to have conversations with anyone who is.

    This is just the beginning. The longer you cling to denial of progress, the further behind you will fall. You are insisting that no mode of transportation other than horses even exists while people are routinely flying around the world. It most likely won’t be long until your mindset is widely regarded as a mental disorder.



  • I use AI when I use search engines, and that makes the search engines better. I also use AI when I get Spotify suggestions, and when I use autocorrect. I use AI without even realizing I’m using it, the AI improves from that usage, and I and many other people get an improved quality of life from it. That’s why nearly everyone uses it just like I do.

    So, @givesomefucks , do you also regularly use AI that improves from your usage? Or are you a hypocrite who thinks there is something morally bad about the specific AIs you don’t like while doing exactly what you claim to be against with other AIs? Where are your moral lines drawn?




  • I think there may be some confusion about how much energy it takes to respond to a single query or generate boilerplate code. I can run Llama 3 on my computer, and it can do those things no problem. My computer would use about 6 kWh if I ran it for 24 hours; a person, by comparison, takes about half of that. If my computer spends 4 hours answering queries and writing code, that takes 1 kWh, and that would be a whole lot of code and answers. The whole “powering a small town” thing is a one-time cost when the model is trained, so to determine whether it is worth it, that cost needs to be spread over everyone who ends up using the resulting model. The math for that is a bit trickier.
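
    To put rough numbers on it, here is that back-of-the-envelope math in Python. The ~0.25 kW draw follows from the 6 kWh / 24 h figure above; the training-energy and user-count numbers are made-up placeholders, only there to show how the amortization would work:

    ```python
    # Back-of-the-envelope energy math for local inference vs. training cost.
    # The ~0.25 kW draw comes from the 6 kWh / 24 h estimate above; the
    # training energy and user count are hypothetical placeholders.

    LOCAL_DRAW_KW = 6 / 24          # ~0.25 kW while the machine runs
    HOURS_ANSWERING = 4
    inference_kwh = LOCAL_DRAW_KW * HOURS_ANSWERING
    print(f"{HOURS_ANSWERING} h of local inference: {inference_kwh:.2f} kWh")  # 1.00 kWh

    # One-time training cost, amortized across everyone who uses the model.
    TRAINING_KWH = 10_000_000       # hypothetical "small town" scale figure
    NUM_USERS = 50_000_000          # hypothetical user base
    print(f"Amortized training cost per user: {TRAINING_KWH / NUM_USERS:.2f} kWh")
    ```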

    When compared to the amount of energy it would take to raise a group of people who can answer questions and write code, I’m very certain the AI model route takes considerably less. Hopefully, we don’t start making our decisions about which one to produce based on energy efficiency. We might, though; if the people who choose the fate of the masses see us like livestock, we may end up having our numbers reduced in the name of efficiency. When cars were invented, horses didn’t all end up living in paradise. There were just a whole lot fewer of them around.


  • This is an issue with many humans I’ve hired, though. Maybe they try to cut corners and do a shitty job, but I occasionally check; if they are bad at their job, I warn them, correct them, and maybe eventually fire them. For lots of stuff, AI can be interacted with in a very similar way.

    This is so similar to many people’s complaints about self-driving cars. Sure, there will still be accidents; they are not perfect, but neither are human drivers. If we hold AI to some standard way beyond people, then no, it’s not there yet. But if it just needs to be better than people, then it is there for many applications, and more importantly, it is rapidly improving. Even if it were only as good as people at something, it would still be way cheaper and faster. For some things, it’s worth it even when it isn’t as good as people yet.

    I have very few issues with hallucinations anymore. When I use an LLM for anything involving facts, I always tell it to give sources for everything, and I can have another agent independently verify the sources before I see them. Often I provide the specific books or papers I want it to source from. Even if I am going to check all the sources myself afterwards, it is still way more efficient than doing the whole thing myself. The thing is, with the setups I use, I literally never have it make up sources anymore. I remember that kind of thing happening back in the days when AI didn’t have internet access and there really weren’t agents yet. I realize some people are still back there, but in the future (that many of us are in) it’s basically solved. There are still logic mistakes and such, so it can’t be 100% depended on, but if you have a team of agents going back and forth to find an answer, then pass it to another team of agents that independently verifies the answer, cycling back whenever a flaw is found, many issues just go away. Maybe some mistakes make it through this whole process, but the same thing happens with people sometimes.
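
    That answer/verify loop is simpler than it sounds. Here is a minimal sketch of the shape of it; ask_team() and verify_team() are hypothetical stand-ins for whatever agent framework you use (CrewAI, AutoGen, raw API calls), not a real library API:

    ```python
    # Hypothetical sketch of the generate-then-verify agent loop described above.
    # ask_team() and verify_team() are placeholders, not a real library API.

    def ask_team(question: str, feedback: str = "") -> str:
        """Placeholder: a team of agents drafts (or revises) an answer."""
        raise NotImplementedError("wire this up to your agent framework")

    def verify_team(question: str, answer: str) -> tuple[bool, str]:
        """Placeholder: an independent team checks the sources and logic."""
        raise NotImplementedError("wire this up to your agent framework")

    def answer_with_verification(question: str, max_rounds: int = 5) -> str:
        answer = ask_team(question)
        for _ in range(max_rounds):
            ok, critique = verify_team(question, answer)
            if ok:
                return answer                               # verified, done
            answer = ask_team(question, feedback=critique)  # cycle back on the flaw
        return answer                                       # best effort after max_rounds
    ```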

    I don’t have the link on hand, but there have been studies showing that GPT-3.5 working in agentic cycles performs as well as or better than GPT-4 out of the box. The article I saw that in was basically saying that some people are already using what GPT-5 will most likely be, just by running teams of agents with the latest models.



  • I think that, without anything akin to extrapolation, we just have to wait and see what the future holds. In my view, most people are almost certainly going to be hit upside the head in the not-too-distant future. Many people haven’t even considered what a world might be like where pretty much all the jobs people are doing now are easily automated. It is almost like, instead of considering this, they are just clinging to the idea that the 100-meter wave hanging above us couldn’t possibly crash down.


  • I think having it give direct quotes and specific sources would help your experience quite a bit. I absolutely agree that if you just use the simplest forms of current LLMs and the “hello world” agent setups, there are hallucination issues and such, but lots of this is no longer an issue once you get deeper into it. It’s just a matter of time until the tools most people can easily use have this stuff baked in; none of it is impossible. I mean, I pretty much always have my agents tell me exactly where they get all their information from. The exception is when I have them write code, because there the proof is in the results.
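
    Even a plain API call can push in that direction. Here is a sketch using the OpenAI Python client; the model name and the prompt wording are arbitrary choices of mine, so swap in whatever you actually use:

    ```python
    # Sketch: demanding direct quotes and citations via the system prompt.
    # Model choice and wording are illustrative, not a recommendation.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": (
                "For every factual claim, include a direct quote from a source "
                "and a full citation (title, author, URL or DOI). If you cannot "
                "find a source, say so explicitly instead of guessing."
            )},
            {"role": "user", "content": "What frequency range do barn owl chicks vocalize in?"},
        ],
    )
    print(resp.choices[0].message.content)
    ```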


  • Most positive use cases are agent-based, and the average user doesn’t have access to good agent-based systems yet because they require a bit of willingness to do some “coding”. That will soon no longer be the case, though. I can give my crew of AI agents a mission, for example, “find all the papers on baby owl vocalizations and make 10 different charts of the frequency range relative to their average size after each of their first 10 weeks of life”, come back an hour later, and have something that would have been 100 hours of work for a grad student just last year. Right now I have to wait an hour or so; soon it will be instant.
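
    For anyone curious, this is roughly the shape of such a mission in CrewAI. Exact constructor arguments vary between CrewAI versions, and the roles, goals, and task wording here are invented for illustration:

    ```python
    # Rough sketch of a CrewAI-style "mission". Constructor arguments vary
    # by CrewAI version; roles/goals/task text are invented for illustration.
    from crewai import Agent, Task, Crew

    researcher = Agent(
        role="Literature researcher",
        goal="Find all the papers on baby owl vocalizations",
        backstory="Meticulous about sourcing and citations.",
    )
    analyst = Agent(
        role="Data analyst",
        goal="Chart frequency ranges against average size by week of life",
        backstory="Turns messy findings into clear charts.",
    )

    mission = Task(
        description=(
            "Find all the papers on baby owl vocalizations and make 10 charts "
            "of frequency range relative to average size for weeks 1-10 of life."
        ),
        expected_output="10 charts plus the list of papers used",
        agent=analyst,
    )

    crew = Crew(agents=[researcher, analyst], tasks=[mission])
    result = crew.kickoff()  # runs autonomously; can take a while
    print(result)
    ```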

    The real usefulness of these agents today is enormous, it is just outside of the view of many average people because their normal lives don’t require this kind of power.


  • Yeah, it’s a trajectory thing. Most people see the one-shot responses of something like ChatGPT’s current web interface on OpenAI’s website and think that’s where we are at. It isn’t, though; the cutting edge of what is currently openly available is things like CrewAI or AutoGen running agents powered by the likes of Claude Opus, Llama 3, or the latest GPT-4 update.

    When you use agents, you don’t have to baby every response; the agents can run code, test code, check the latest information on the internet, and more. That way you can give a complex instruction, let it run, and come back to a finished product.
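
    In AutoGen, for instance, that write-run-test loop looks roughly like this; the model and execution config are illustrative and differ by setup:

    ```python
    # Sketch of an AutoGen pair: one agent writes code, the other executes
    # and tests it, feeding errors back until the task is done. Model and
    # execution config here are illustrative.
    from autogen import AssistantAgent, UserProxyAgent

    assistant = AssistantAgent(
        "assistant",
        llm_config={"config_list": [{"model": "gpt-4"}]},
    )
    executor = UserProxyAgent(
        "executor",
        human_input_mode="NEVER",  # fully autonomous, no babying of responses
        code_execution_config={"work_dir": "run", "use_docker": False},
    )

    executor.initiate_chat(
        assistant,
        message="Fetch the latest CPI figures and save a summary chart to cpi.png",
    )
    ```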

    I say it is a trajectory thing because when you compare what was cutting-edge just one year ago, basically one-shot GPT-3.5, to an agent network running today’s latest models, the difference is stark; and when you go a couple of years before that, to GPT-2, it is way beyond stark. When you go a step further and realise that lots of custom hardware is being built (basically LLM ASICs, which traditionally mean a ~10,000x speedup over general-purpose GPUs), you can see that instant agent-based responses will soon be the norm.

    All this compounds when you consider that we have not hit a plateau: better datasets and more compute are still producing better models. Not to mention that other architectures, like the state-space model Mamba, are making remarkable achievements with very little compute so far. We have no idea how powerful things like Mamba would be if they were given the datasets and training that the current popular models are being given.


  • Yeah, I absolutely agree. About a month ago, I would have said that Suno was clearly leading in AI music generation, but since then, Udio has definitely taken the lead. I can’t imagine where things will be by the end of the year, let alone the end of the decade. This is why it’s so crazy to me when people look at generative AI and act like it’s no big deal, just a passing fad or whatever. They have no idea that there is a tsunami crashing down on us all, and they always seem to be the ones who bill themselves as the weather experts who have it all figured out. Nobody knows the implications of this, but it definitely isn’t an inconsequential tech.


  • “A solution in search of a problem” is a phrase used way too much, and almost always in the wrong way. Even the article says it has been solving problems for over a year; it just complains that it isn’t solving the biggest problems possible yet. It is remarkable how hard it is for people to extrapolate from a trajectory. If the author of this article had been alive in the early 90s, they would have been talking about how pointless computers are and how they were just “a solution in search of a problem”.