gmm1990 an hour ago

Some of the utilization comparisons are interesting, but the article says $2 trillion was spent on laying fiber, and that seems suspicious.

  • observationist an hour ago

    There's an enormous amount of unused, abandoned fiber. All sorts of fiber were run to last-mile locations across most US cities, and a shocking amount effectively got abandoned in the frenzy of mergers and acquisitions. $2 trillion seems like a reasonable estimate.

    Giant telecoms bought big regional telecoms, which themselves came about from local telecoms merging with and acquiring other local telecoms. A whole bunch of them were construction companies that rode the wave and put resources into running dark fiber all over the place. Local energy companies and the like sometimes participated.

    There were no standard ways of documenting runs, and it was beneficial to keep things relatively secret: if you could provide fiber capability in a key region while your competition was rolling out DSL and investing lots of money, you could pounce and make them waste resources, and so on. This led to enormous waste and fraud, and we're now at the outer edge of usability for most of the fiber that was laid - 29-30 years after it went in, most of it has never been used and never will be.

    The '90s and early 2000s were nuts.

nuc1e0n 29 minutes ago

The article claims that AI services are currently over-utilised. Well, isn't that because customers are being undercharged for those services? A car in neutral will rev up easily if the accelerator pedal is pushed even slightly, because there's no load on the engine. In gear, the same engine will rev up much less when the accelerator is pushed the same amount. Will the same over-utilisation occur if users have to financially support the infrastructure, either through subscriptions or intrusive advertising?

I doubt it.

And what if the technology to run these systems locally, without reliance on the cloud, becomes commonplace, as it now is with open-source models? The expensive part is the training of these models, more than the inference.

asplake an hour ago

Yes-or-no conclusions aside (and despite its title, the article deserves better than that), the key point, I think, is this one: “But unlike telecoms, that overcapacity would likely get absorbed.”

  • lazide 19 minutes ago

    Telecom (dark fiber) capacity got absorbed too. Eventually. After a ton of bankruptcies.

recursive4 20 minutes ago

Stylistically, this smells like it was copied and pasted straight out of Deep Research. Substantively, I could use additional emphasis on the mismatch between expectations and reality with regard to the telco debt-repayment schedule.

kqr 26 minutes ago

Is there a way in which this is good for a segment of consumers? When the current gen of GPUs is too old, will the market be flooded with cheap GPUs that benefit researchers and hobbyists who otherwise could not afford them?

  • ares623 7 minutes ago

    Some of them will probably be starving, homeless, or bedridden by the time that happens, but yes, they can get cheap GPUs.

  • stego-tech 5 minutes ago

    Unlikely, for a few reasons:

    * The GPUs in use in data centers typically aren’t built for consumer workloads, power systems, or enclosures.

    * Data centers often shred their hardware for security purposes, to ensure any residual data is definitively destroyed.

    * Tax incentives and corporate structures make it cheaper/more profitable to write off the kit entirely via disposal than to attempt to sell it after the fact or run it at a discount to recoup some costs.

    * The hyperscalers will have a use for the kit inside even if AI goes bust, especially the CPUs, memory, and storage for added capacity.

    That’s my read, anyway. They learned a lot from the telecoms crash and adjusted business models accordingly to protect themselves in the event of a bubble crash.

    We will not benefit from this failure, but they will benefit regardless of its success.

  • wmf 8 minutes ago

    Many researchers and hobbyists cannot even plug in a 10 kW 8-GPU DGX server.
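
    A rough back-of-the-envelope makes the point concrete. The circuit ratings below are typical US values and the 10 kW draw is the nominal figure above; treat this as a sketch, not an electrical reference:

        # Rough check: can a 10 kW server run on common household circuits?
        # Ratings are typical US values; the 0.8 factor is the usual
        # continuous-load derating applied to a breaker.
        server_draw_kw = 10.0  # nominal draw of an 8-GPU DGX-class box

        circuits = {
            "standard outlet (120 V, 15 A)": 120 * 15 / 1000,  # 1.8 kW
            "kitchen circuit (120 V, 20 A)": 120 * 20 / 1000,  # 2.4 kW
            "dryer circuit (240 V, 30 A)": 240 * 30 / 1000,    # 7.2 kW
        }

        for name, kw in circuits.items():
            usable = kw * 0.8
            verdict = "ok" if usable >= server_draw_kw else "trips the breaker"
            print(f"{name}: {usable:.2f} kW usable -> {verdict}")

    Even a 240 V dryer circuit falls well short; that kind of draw needs data-center power, or at least dedicated high-amperage wiring.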

Havoc an hour ago

Don’t think looking at power consumption of B200s is a good measure of anything. It could well be an indication of higher density rather than of hitting limits and cranking voltage to compensate.

kragen an hour ago

This seems to be either LLM AI slop or a person working very hard to imitate LLM writing style:

The key dynamic: X were Y while A was merely B. While C needed to be built, there was enormous overbuilding that D ...

Why Forecasting Is Nearly Impossible

Here's where I think the comparison to telecoms becomes both interesting and concerning.

[lists exactly three difficulties with forecasting, the first two of which consist of exactly three bullet points]

...

What About a Short-Term Correction?

Could there still be a short-term crash? Absolutely.

Scenarios that could trigger a correction:

1. Agent adoption hits a wall ...

[continues to list exactly three "scenarios"]

The Key Difference From S:

Even if there's a correction, the underlying dynamics are different. E did F, then watched G. The result: H.

If we do I and only get J, that's not K - that's just L.

A correction might mean M, N, and O as P. But that's fundamentally different from Q while R. ...

The key insight people miss ...

If it's not AI slop, it's a human who doesn't know what they're talking about: "enormous strides were made on the optical transceivers, allowing the same fibre to carry 100,000x more traffic over the following decade. Just one example is WDM multiplexing..." when in fact wavelength division multiplexing is the entirety of those enormous strides.
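
For scale, here is a sketch of how wavelength division multiplexing compounds with per-channel transceiver rates on a single fibre. The channel count and line rates are illustrative round numbers, not sourced figures:

    # Illustrative, not sourced: DWDM channel count x per-channel line rate.
    baseline_gbps = 2.5      # mid-90s: a single 2.5 Gb/s wavelength per fibre
    channels = 96            # dense WDM grid on the C-band (illustrative)
    per_channel_gbps = 400   # modern coherent transceiver (illustrative)

    modern_gbps = channels * per_channel_gbps
    print(f"then: {baseline_gbps} Gb/s per fibre")
    print(f"now:  {modern_gbps / 1000:.1f} Tb/s per fibre "
          f"({modern_gbps / baseline_gbps:,.0f}x)")

With these round numbers, WDM times faster per-wavelength transceivers already yields a factor of roughly 15,000x on one fibre, which is the point: the "enormous strides" live almost entirely in WDM and the transceivers feeding it.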

Although it constantly uses the "rule of three" and the "negative parallelisms" I've quoted above, it completely avoids most of the overused AI words (other than "key", which occurs six times in only 2257 words, all six times as adjectival puffery), and it substitutes single hyphens for em dashes even when em dashes were obviously meant (in 20 separate places—more often than even I use em dashes), so I think it's been run through a simple filter to conceal its origin.
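
Counts like those are easy to reproduce. A minimal sketch; "article.txt" is a placeholder for the article body, not the script actually used:

    import re

    # Placeholder input: paste the article text into article.txt first.
    text = open("article.txt").read()

    words = re.findall(r"[a-z']+", text.lower())
    print("total words:", len(words))
    print("'key' count:", words.count("key"))

    # A space-hyphen-space between word characters is a decent proxy for
    # a single hyphen standing in where an em dash was meant.
    print("hyphen-as-dash count:", len(re.findall(r"\w - \w", text)))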

  • ashtakeaway 3 minutes ago

    Remember, we have about 20 years of poorly written articles, along with a few well-written ones, for the LLM to be trained on. I'm confident that attempting to tell LLM from human writing is a waste of time now that the year is almost over.

    Other than that, I'd rather have a comprehensive article than a summary.

fnord77 an hour ago

> This is the opposite of what happened in telecoms. We're not seeing exponential efficiency gains that make existing infrastructure obsolete. Instead, we're seeing semiconductor physics hitting fundamental limits.

What about the possibility of improvements in training and inference algorithms? Or do we know we won't get any better than grad descent/Hessians/etc.?
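
For readers outside the field: gradient descent uses only first-order (slope) information, while Hessian-based methods also use curvature. A toy sketch of that distinction on a 1-D quadratic, not a claim about how production training works:

    # Minimize f(x) = (x - 3)^2 starting from x = 0.
    grad = lambda x: 2 * (x - 3)   # first derivative
    hess = lambda x: 2.0           # second derivative (constant here)

    # Gradient descent: many small steps; the learning rate needs tuning.
    x, lr = 0.0, 0.1
    for _ in range(20):
        x -= lr * grad(x)
    print(f"gradient descent, 20 steps: x = {x:.4f}")

    # Newton's method: scale by inverse curvature; exact in one step
    # on a quadratic.
    x = 0.0
    x -= grad(x) / hess(x)
    print(f"Newton's method, 1 step:    x = {x:.4f}")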

MarkusQ an hour ago

Holy cow, we've found an exception to Betteridge's Law of Headlines! Talk about burying the lede!

  • semitones an hour ago

    If you read the article, then this is not an exception to the law.