I have a colleague, a careful and intelligent person in a role that is not engineering, who spent two months earlier this year building a system that should have been designed by someone with formal training in data architecture. He used the tools well, by the standards by which use of the tools is currently measured. He produced a great deal of code, a great deal of documentation, a great deal of what looked, to anyone who did not know what to look for, like progress. He could not, when asked, explain how any of it actually worked. The work was wrong from the first day. The schemas, and more importantly the objectives, were wrong in a way that would have been obvious to anyone with two years in the field. Several of us did know. When objections were raised, some reaching as high as a V.P., he pushed back. The room had been arranged in such a way that pointing this out was not received as a contribution; his managers were too invested in the appearance of momentum to want the appearance disturbed. The work will continue, in all probability, until it is shown to a stakeholder and they decide not to invest.
This is the part of the phenomenon I find hardest to write about. The tool did not make him a worse colleague. It made him able to impersonate, for months, a discipline he had never trained in, and the impersonation was good enough that the institutional incentives all bent toward letting him continue. Perhaps it is a failure of management, but I find management so eager to embrace AI that they are willing to accept the risk.
— No One’s Happy, in Appearing Productive in The Workplace