I’d like to make a prediction. I’ve been noticing a second-order effect of AI that I haven’t seen discussed enough:

Offices will get quieter.

As LLMs get embedded into day-to-day work, we’ll use them to pre-validate our thoughts and manufacture confidence before we speak.

Why risk a half-baked idea when you can run it through an LLM first? Why expose uncertainty when the model can give you (a) language, (b) structure, and (c) plausible edge cases on demand?

The baseline quality of “something worth saying” goes up. And when the baseline goes up, casual, exploratory talk gets taxed.

LLMs increase the social cost of being wrong in public.

The Topology Shift

Previously, a conversation was basically a mesh network: people talking directly to people.

A → B → A → B

Lots of partial thoughts, quick corrections, and detours.

Now it’s shifting toward a hub-and-spoke model, with the AI as the hub:

A → LLM → A → B → LLM → B

If that’s true, then a lot of thinking and retrieval starts happening in private context windows before it ever reaches another human.

My guess is that the total volume of spoken words inside corporate offices declines non-trivially.

Osmotic Learning

The most immediate impact might be on osmotic learning.

Historically, junior engineers and analysts learned by proximity: overhearing design debates, watching seniors reason through problems out loud. Now imagine a senior engineer silently prompting Claude to “architect a solution for X,” then pasting a beautifully structured design doc into Slack. The junior sees the polished artifact, but none of the messy reasoning that produced it. We risk short-circuiting a model of learning that has powered professional growth for centuries.

But there’s a counterweight: juniors also now have an infinitely patient, infinitely knowledgeable tutor. Maybe the apprenticeship model moves from passive absorption to active interrogation.

More screen time and less ambient learning, I guess.

Homogenization

This has been discussed elsewhere (and better), but it’s worth raising here:

If we all consult the same RLHF’d models, does strategic thinking regress to the mean? If Company A and Company B both ask their AI strategy advisor “how should we enter this market?”, they’ll likely get remarkably similar advice.

I’ll reserve judgment here, though: as context windows grow and models learn our specific business logic, this ‘averaging out’ may disappear.

The Loss of Collision

Innovation often comes from friction, and AI smooths that friction out, handing you a polite, well-structured answer instead.

This is debatable, but my instinct is that it leaves less room for the “wait, but why?” moments. We’ve generated enormous economic value by digitizing workflows and gradually removing human interaction across the chain. AI does that more aggressively, empowering the average person to do more end-to-end work with fewer touchpoints.

But the social story might be fewer and fewer human conversations. On the other hand, we love our meetings. The middle ground seems to be calendars full of meetings but an office that stays relatively silent between them.

I suspect Google drove a weaker version of this shift in the 2000s. “Oh, you want to learn about TCP_NODELAY? Nagle is the expert; go talk to him” became “Google it.”

Now it’s becoming “Gemini it.”