One use of LLMs that I haven’t seen mentioned before is as a sounding board for your own ideas. By discussing your concept with an LLM, you can pick up fresh perspectives from its responses.
In this context, the LLM’s actual comprehension is irrelevant. The purpose lies in its ability to spark new thought processes by prompting you with unexpected framings or questions.
Definitely recommend trying this trick next time you’re writing something.
It’s great as a mental prosthetic. When I’m tackling a complex new topic, say a new cloud platform I’m learning, I can test my understanding of the implications of a change to the console settings. I tell it what I think and ask it to check my understanding. It really speeds up my learning, but I don’t rely on it exclusively. I will write my own dang emails, thank you.
Yeah, I mainly like it as a rubber ducking tool. And specifically in contexts where I already understand the topic well and just want prompts to stimulate more ideas on the subject.
That’s exactly right; you have to know enough about the subject to smell a bullshit answer.
My issue is that they train their models on your data, so that fresh new idea becomes its fresh new idea.
You can run models locally nowadays on fairly modest hardware. I use GPT4all and it works great: https://github.com/nomic-ai/gpt4all
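For anyone curious, here’s a minimal sketch of what that looks like with the gpt4all Python bindings (pip install gpt4all). The model filename and prompt are just placeholders; swap in whichever model you’ve actually downloaded:

```python
from gpt4all import GPT4All

# Example model filename only -- use any GGUF model you have locally.
# On first run, GPT4All will download the model if it isn't found.
model = GPT4All("Meta-Llama-3-8B-Instruct.Q4_0.gguf")

# Use a chat session so the model keeps context across turns,
# which is what you want for sounding-board style back-and-forth.
with model.chat_session():
    reply = model.generate(
        "Here is my half-baked idea: a tool that summarizes my meeting notes "
        "into action items. Poke holes in it and ask me clarifying questions.",
        max_tokens=512,
    )
    print(reply)
```

Everything stays on your machine, so the training-on-your-data concern goes away.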
Fair enough
LLMs are great for drafting emails and summaries. Not so sure about a fresh perspective, but it seems like that could happen in some situations.
I’ve found them helpful in that regard. When I want to express some concept, I can feed my half-baked idea to a chatbot and have it expand on it, which helps me flesh the idea out more.