> Institute X – Strategic Transformation & Executive Coaching

AI Will Probably be a Productivity Bust

Before we get to the bold productivity prediction, know where I’m coming from…

I am of two minds on Artificial Intelligence: the large language models of ChatGPT and so forth. Like some of those eminent in the field, I think AIs may represent a mortal threat to humanity. Even if the computers do not take over and keep humans as pets, it is an absolute certainty they will NOT be an unalloyed good. AI is already being weaponized for fairly pedestrian fraud and trolling… and everything that comes therefrom. It will not get better.

I remember the hullabaloo about pocket calculators in math class and the coming decline of the Western world. As I watch a clerk lost without the cash register to make change, or stymied by why I would pay a $3.79 charge with a $5 bill and 5¢ (even after having it explained), I know it turned out to be true, if perhaps not as imagined.
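The clerk's puzzle is plain arithmetic. A minimal sketch, using only the amounts from the anecdote, of why the extra nickel helps:

```python
# Why hand over $5.05 for a $3.79 charge instead of the $5 bill alone.
charge = 3.79
tendered = 5.00 + 0.05              # a $5 bill plus a nickel

change_with_nickel = round(tendered - charge, 2)
print(change_with_nickel)           # 1.26 -- three coins: $1 + 25c + 1c

change_bill_only = round(5.00 - charge, 2)
print(change_bill_only)             # 1.21 -- four coins: $1 + 10c + 10c + 1c
```

The nickel turns an awkward $1.21 in change into a tidy $1.26, which is the whole trick the clerk could not follow.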

That’s basic numeracy. With so many forces already working against literacy, the last thing we need is AI doing the reading and the writing too. And yet, I want to join the so-called “techno optimists” in believing that AI will be a salvation… somehow.

As a one-time executive, I get it all the more. People are troublesome. They cost a lot, can be unreliable, make mistakes (probably because of the innumeracy and illiteracy…), and can be both belligerent about not “evolving” and slow to execute. Expressions of unproductivity. We want to do more with less, and it’s hard work to get people to simply do more. Especially if (because of the innumeracy and illiteracy?) they are not adept at it.

AI will probably be a productivity bust

The easy conclusion today (probably due to a lack of historical understanding) is that AI will be the productivity solution. What goes unsaid is that, to be so, AI must be an alternative to unproductive humans. This replacement theory is straightforward: if labour productivity is dismal despite the various (digital) technologies at hand, but would increase with AI, and AI does the human’s work, why bother with the redundancy? Still, AI as productivity panacea is the current favourite theory of senior leaders everywhere, including leaders who should know better.

We must restrict and maintain a definition of productivity, or risk colloquially slipping between total production and efficiency of production. Let’s ensure we always mean the latter. That immediately removes “longer hours,” or simply more work, from the equation, focusing squarely on rate of output. To be fair, “quality” must be held steady.
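Held to that definition, productivity is a rate: output per unit of input, with quality constant. A minimal sketch with hypothetical figures, showing why longer hours raise total production but not productivity:

```python
def labour_productivity(units_output: float, labour_hours: float) -> float:
    """Efficiency of production: output per labour hour (quality held steady)."""
    return units_output / labour_hours

# Hypothetical figures for illustration only.
baseline = labour_productivity(units_output=400, labour_hours=40)  # 10.0 units/hr
overtime = labour_productivity(units_output=500, labour_hours=50)  # 10.0 units/hr
improved = labour_productivity(units_output=480, labour_hours=40)  # 12.0 units/hr

assert overtime == baseline   # more total production, same efficiency
assert improved > baseline    # a genuine productivity gain
```

Only the third case counts as a productivity improvement; the second is just more work.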

While space inhibits proper exploration, we need to bear in mind the specific output for which productivity is a concern, being precise about its contribution to the production chain. Too loose a view will be meaningless.

There are a number of problems with this AI productivity theory. Perhaps they will be sorted out eventually. In the meantime, let’s consider a few of the more obvious ones.

“AI-washing” every problem, including a promise and perception of productivity increase, does not itself make AI applicable, let alone necessary to realize productivity gain from (using) technology tools.

The vast majority of realizable, genuine productivity gain in operation/production is already accessible with non-AI technology. Digital transformation v1.0’s purpose was to digitize information and operations. That means all the data, and the choices of what and how to operate, are rendered and kept as 1s and 0s, which a computer can accept, analyze, and act upon with equally binary instructions. All that’s needed are explicit, unambiguous instructions for processes and procedures (aka code).

That code exists, or can exist, without AI. Sophisticated and even complex algorithms are well proven. The granddaddy of this straightforward productivity improvement by technology is mechanization/robotics. Social media algorithms are a more current proof point. These algorithms are well proven on both the shop floor and in cyberspace.
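The point about explicit, unambiguous instructions can be made concrete. A minimal, hypothetical rules-based routine of the digital-transformation v1.0 kind: no model, no learning, just deterministic code acting on digitized data (the rule names and thresholds here are invented for illustration):

```python
# Hypothetical order-screening rule set: deterministic, auditable, AI-free.
def screen_order(order: dict) -> str:
    """Route a digitized order using explicit, unambiguous rules."""
    if order["amount"] <= 0:
        return "reject"                  # invalid data
    if order["amount"] > 10_000:
        return "hold_for_review"         # human judgment still applies
    if order["customer_status"] == "delinquent":
        return "hold_for_review"
    return "auto_approve"                # the productivity gain: no touch needed

orders = [
    {"amount": 250, "customer_status": "good"},
    {"amount": 25_000, "customer_status": "good"},
    {"amount": 250, "customer_status": "delinquent"},
]
print([screen_order(o) for o in orders])
# ['auto_approve', 'hold_for_review', 'hold_for_review']
```

Everything here was achievable the day the data was digitized; none of it waits on AI.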

We don’t hear much about capital productivity except implicitly or in a CFO’s financial calculations (e.g., IRR, capital cost, etc.), but labour productivity and capital productivity need to be properly tuned to each other. Unless it replaces the organic intelligence of the executive function, AI is unlikely to improve either one, or the synergy between them, for real productivity.

If only for the sake of argument, let’s agree that AI cannot be trusted to be left to its own devices; that it is as unpredictable as a precocious, charming tween. As such, regardless of “intellectual” prowess, it’s probably inadvisable to simply let it run amok without supervision.

Accepting this (obvious) condition, at least for now, leads to consequent outcomes and conditions. First, in the office: expected labour change. To some extent, people may no longer do the apparently difficult work of research (i.e., Googling) and results synthesis to formulate assessments or analyses. Having been excluded from this tedium, whether they have any place rendering judgment on the output, or developing conclusions and resultant actions/strategies from it, is an open question. Assuming they do, the focus shifts to higher value-add work.

There is also an unquestionable lesser conclusion: humans will be required to tell AI what to do. This is “prompting,” which is not much different from a search query except that the prompter can specify the type, nature, and perspective of the AI output. For example: not merely search all x, but report on it as a y in a PowerPoint recommendation. The difference is judgment, which we’ll address below.

At the very least, AI must be initiated via prompt. To the extent that each prompt is discrete, that means an ongoing need for a person or people. Of course, for good results these people need to make good prompts.

Setting aside the possibility of an antisemitic response, AIs still hallucinate. They make things up, presuming (to the extent that is the appropriate word for what/why they do what they do) their output is probabilistically consistent with the information ingested. If we assume the organization does not want to propagate and act on fictions, the human needed to review all output must also “know” more. They must be aware of the range of relevant responses and possess sufficient critical-thinking capability to challenge and overrule the AI.

At least for the foreseeable future, directing and maximizing AI utility—for meaningful productivity gain for instance—will require human handlers able to outthink the machine.

This is much less about the AI itself: those directing and outthinking the AI must be operationally central people, not the IT department. Because AI is output from technologies, the IT department will expect control and authority. That, however, would be akin to making the factory maintenance department responsible for the configuration and use of the manufacturing production lines. To some over-confident IT executives and consultants (not to mention the tech bros), this will be a hard pill to swallow.

“Artificial” is the modifier for the value-add quality “intelligence.” Not intelligence in the sense of information gathered, either (that would be Google). Intelligence in the sense of consideration and judgment. Without that, as fake as it may be, it’s hard to rationalize the hullabaloo, especially the fevered hopes for a productivity breakthrough.

Consideration and judgment are the stock in trade of executive management. With that in mind, let’s reconsider how office-types use AI for productivity in their (limited) part of the production chain. A long time ago I learned that management functionality comprises activities referred to by the acronym POSDICR: Planning, Organizing, Staffing, Directing, Informing, Controlling, Reporting. Either there is some art to these functions, or to the process as a whole, or the entire management edifice was on shaky ground before AI. Certainly, it is apparent how an algorithm could take on each function individually.

Employees using AI as a supercharged search engine will not increase productivity materially. Those gains ought to have come from access to search over the past two and a half decades. Among my clients, excess information is usually not helpful. Rarely has additional, esoteric information produced breakthrough results. More typically, it slows things down unnecessarily because increased information load tends to confuse, leading to “paralysis by analysis.” Resulting decision deferral and delay is, by definition, counter-productive for management/office work.

The critical second order impact of employees reading even less as a result of AI summaries will only inflate illiteracy and accelerate the decline of real organizational intelligence. Every summation (AI-generated or not) necessarily omits important or relevant points. More than enough of that happens organically, never mind artificially. As Ezra Klein said in his last Vox podcast:

Klein implies hubris. How often does that end well?

Ultimately, to have AI replace labour’s synthesized knowledge and judgment is effectively to replace labour. Maybe that’s the point. Of course, AI’s judgment has proven dubious. (Though in fairness, so has labour’s, at all levels.) To have AI replace executive judgment, or even to shape it—which is what AI-driven strategies, summations, and so forth amount to—should bring into bold relief the challenge AI poses to executive purpose, raised earlier. It suggests the executive role, that of judgment, can be outsourced to AI. Is that really where we want to go?

I would not presume to know exactly what executives, from Canada’s prime minister to the leader of a Fortune 500 to a small businesswoman, intend to gain from AI in each of their individual cases. That is, how they see AI being used and increasing productivity in their circumstance. If, however, they do not have a specific intent beyond “AI will increase productivity,” they’re already in trouble.

Whatever it may be, the AI solution will not fully replace people (at least not in the short run). It will, however, require that the people and their functional roles change. Probably significantly. This is where we must start.

Because they will coexist, tuning the new relationship of labour to (AI) capital is critical. To ignore it, or to leave it to be sorted out through undefined “HR Action” or “change management” later, merely sows failure. For clarity: if the organization’s labour, specifically management, is not reoriented to use AI properly to produce value, let alone prepared to elicit and reap enhanced productivity from it, the entire endeavour will fail. This is a certainty as far as productivity enhancement goes, and probably well beyond.

Moreover, specific focus on productivity gain does not require the judgment capacity of AI. Beyond mechanization, technology is already well-entrenched in the automation capacity of decision-making algorithms and rules-based processing by and with digital assets emplaced during the transformation. The judgment can still come from the humans paid for that capability. None of this need go away.

One final question that presumes wholesale replacement of labour is not and cannot be the end game:

Institute X consults on transformation and provides leadership coaching. One online presence is The Change Playbook, which has abundant pragmatic guidance for making change happen. Subscribe to be notified of new, fresh content and contact us so we can help you in your specific circumstances.

