Aside from being tremendously boring, representing generated ideas as my own in a context like this blog would be borderline fraudulent. I struggle to see the practical difference from plagiarism.
Legally, it seems the powers that be have decided there is absolutely nothing noteworthy about prompting an LLM for some text and then publishing it as though it were written by the person who generated it.
Socially, we should reject that. Presenting LLM-generated text as human-created is a lie. "I wrote this" means "these are my thoughts." If you generated the text, they are not your thoughts, and it is a lie to pass off a next-token prediction as your own thinking, even if it is a very good, thorough, and useful next-token prediction!
It should be seen as socially shameful to replace your own thought with the output of an LLM, especially when that output is used to bamboozle an audience into believing it was genuine, novel thought from a fellow meatbag. It cannot be proven whether a given text is generated, but if you catch your friend, your spouse, or your mom using ChatGPT to write a text message, a note in a birthday card, or a blog post, scold them! That is an immoral act at least on par with not returning your grocery cart to its corral.
Passing off generated fiction, personal correspondence, advice, or even opinions as genuine writing commits the same lie that LLM psychosis sufferers have fallen victim to:
the reader is misled into believing they are reading genuine insights from the experience of another living, breathing, conscious being, when in fact they are reading a randomized approximation of a million Reddit threads.
Would you do that to someone you care about? Replace your carefully chosen words to them with a poor approximation of your feelings laundered through a filter of Redditors?