I read a post this morning. The subject was real and the writer had clarity and strong convictions. But I could not unsee something.
What I read was a cadence. Balanced paragraphs. A quote placed exactly where a writer reaches for authority when they think they have not earned it yet. The sentences worked; the argument did not.
I recognized the cadence because I have been learning to spot it. The models produce a particular kind of language now, and it is spreading. Writers with real things to say are drafting through the tools, and the tools produce polished prose. The polish convinces those writers that the draft says what they wanted to say, and that the work is done.
It isn’t. The prose is fluent. But something that is not the prose remains unwritten. I’ve been thinking about the difference between the two.
When the writing problem is small, there is nothing for the tools to substitute for. A memo, a summary, a routine explanation. The output is fine and useful.
When the writing problem is large, the tools fail and pretend otherwise. The writer has a conviction. The machine drafts around it. The sentences get the shape right. The texture of the writer’s experience does not arrive. Lived experience is not in the training data.
I am not sure what the readers feel. It depends on who reads it.
Writers publish these pieces, and that’s fine. I don’t debate the tool. I don’t argue about authenticity. Most people cannot yet read what is missing, and the ones who can may not know how to name it.
What I think is important to realise is this: if you have seen real things, the machine cannot carry them for you. It can produce competent prose around them. It cannot produce prose that carries them.
Maybe the test is this: after you finish a draft, do you know something now that you did not know before? Did something surface that wasn’t already there? If yes, the piece is yours. If no, the draft is still ahead of you.
The writers with something to say should write. They can use whatever helps. But they should also notice, at the end, whether the thing got said or whether the prose only pretends it did.
This may not sound significant, I know, but it is what makes the difference. The models handle expression at the level of vocabulary: what they can say is dictated by the vocabulary, and by the cultural and technical aesthetics, of the language they were trained on. When the training distribution does not support a language, the models produce broken prose. That is why they succeed in English and fail in Tamil. But that is a different problem. The concern here is that the models can compensate for a lack of vocabulary or coherence, but they cannot close the gap between what is felt and what is said. That gap is what warrants the writing in the first place.
So the acid test is this: has the gap disappeared by the end?


