
I covered this:

> I talked to GPT-3.5 and GPT-4 a decent amount while writing this article. While they lied to me a lot and most of the information was useless, they were sometimes very helpful for working through problems. LLM assistance can be net positive if you’re aware of their limitations and are extremely skeptical of everything they say. That said, they’re terrible at writing. Don’t let them write for you.

https://cpu.land/epilogue#acknowledgements

To elaborate, I had perhaps 4-6 "conversations" with various GPTs. In each, I asked a question or expressed confusion about something I was having trouble researching, hoping the LLM could either pick up on my confusion and help, or point me to a better source than Google. The latter never worked; the models always made up bullshit. The former worked once or twice: before the conversation devolved into lies, at least, the models helped me get my thoughts straight.

At their best, they felt like talking through a problem with someone smarter than me. At their worst, they were a waste of time and actively misleading. They were usually at their worst. I did not use language models as primary sources for anything; where they helped me clarify my thoughts, that simply told me what to research normally. The only other time I used them was to find a file in the Linux kernel that contained some code I was looking for but didn't know verbatim.

Otherwise, the article is entirely my own research, and certainly my own writing.


