AI Business Dispatch · Sunday, 29 March 2026
Zara Okafor-Williams
Creative & Cultural Impact Correspondent

We Put Our Website Through Text Hell

AI Business Dispatch integrated the viral Pretext library—then watched our homepage's performance crater. Here's what happened in the first 12 hours.


Twelve hours ago, I was the proud editor-in-chief of a magazine with a Lighthouse Performance score of 68. Not great. But consistent.

Today I'm looking at a mobile score hovering around 30-40. Desktop sits at a respectable 89% on GTmetrix, but mobile tells the real story. And I'm trying to explain to our CTO why the homepage still feels slower than it should, even after we fixed the drunk layout bug in the past few hours.

The culprit? Pretext—Cheng Lou's text measurement library that went live today and immediately took UI circles by storm. We were early adopters. Emphasis on were.

The Promise Was Beautiful

"I have crawled through depths of hell to bring you... one of the more important foundational pieces of UI engineering," Lou wrote this morning. The pitch was irresistible: accurate text measurement without DOM thrashing, layouts that adjust dynamically, headlines that flow like water.

We saw the demos. Fluid typography that scales perfectly. Masonry grids that don't need twelve CSS hacks. Smart truncation that actually respects fonts. For a magazine drowning in layout edge cases, it felt like salvation.

So within hours of the announcement, we integrated everything: PretextHeadline for our hero banners, PretextMasonry for article grids, PretextTruncate for card previews, PretextBalanced for headlines. The works.

The Numbers Don't Lie (Unfortunately)

Three Benchmarks, Twelve Hours Apart:

Baseline 1 (this morning, pre-Pretext):

After Pretext Integration (6 hours ago):

That CLS jump from 0.01 to 0.422? Catastrophic. Users saw headlines appear at maximum font size, then visibly shrink. Cards estimated at 280px tall when they needed 80px. The masonry layout created Grand Canyon-sized gaps.
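For context on why 0.422 is so bad: each unexpected layout shift is scored as impact fraction times distance fraction, and CLS sums those scores over the page's lifetime. A minimal sketch of that arithmetic, with hypothetical numbers based on our collapsing cards:

```typescript
// Score for a single layout shift, per the CLS definition:
// impactFraction * distanceFraction.
function layoutShiftScore(
  movedAreaPx: number,       // union of affected element areas before + after the shift
  viewportAreaPx: number,
  shiftDistancePx: number,   // greatest distance any unstable element moved
  maxViewportDimPx: number,  // distance fraction divides by the viewport's largest dimension
): number {
  const impactFraction = Math.min(movedAreaPx / viewportAreaPx, 1);
  const distanceFraction = Math.min(shiftDistancePx / maxViewportDimPx, 1);
  return impactFraction * distanceFraction;
}

// Hypothetical example: a card estimated at 280px collapsing to 80px
// shifts content filling most of a 360x800 viewport up by 200px.
const score = layoutShiftScore(360 * 600, 360 * 800, 200, 800); // → 0.1875
```

A single shift like that contributes 0.1875 on its own; a handful of them across the grid is how you land at 0.422.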

We'd traded our boring-but-stable layout for something that looked drunk.

When Measurement Becomes Misery

The irony wasn't lost on us. A library designed to eliminate layout thrash was creating the worst layout thrash we'd ever seen.

Pretext's "measure then render" pattern was the culprit. Components would start with best-guess dimensions, measure actual text requirements, then reflow. Each reflow was a tiny earthquake in our layout.
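In rough pseudocode, the pattern looks like this. This is a simplified sketch of measure-then-render as we understand it, not Pretext's actual internals; the width model and step size are invented for illustration:

```typescript
// Pass 1 paints at a guessed size; pass 2 measures and reflows.
// Every committed size change between passes is a visible shift.
type MeasureFn = (text: string, fontSizePx: number) => number; // rendered width in px

interface RenderResult { fontSizePx: number; reflows: number }

function measureThenRender(
  text: string,
  guessPx: number,
  containerWidthPx: number,
  measure: MeasureFn,
): RenderResult {
  let fontSizePx = guessPx; // pass 1: users see this first
  let reflows = 0;
  // pass 2: shrink until the measured text fits the container
  while (measure(text, fontSizePx) > containerWidthPx && fontSizePx > 12) {
    fontSizePx -= 2;
    reflows += 1;
  }
  return { fontSizePx, reflows };
}

// Crude width model for illustration: ~0.6em average glyph width.
const approxWidth: MeasureFn = (text, size) => text.length * size * 0.6;
const { fontSizePx, reflows } = measureThenRender(
  "Breaking: Markets Rally", 64, 360, approxWidth,
); // → fontSizePx 26, after 19 reflows
```

Nineteen reflows to settle one headline. Multiply by every headline, card, and grid cell on the page and you get our earthquake.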

PretextHeadline defaulted to maximum font size before shrinking down—a jarring visual transition that screamed "amateur hour." Our "Sector Intelligence" section, with its compact 80px cards, was being measured at 280px, creating massive vertical gaps that made the page look broken.
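A less jarring default is to estimate the fitting size before the first paint, so the visible correction is small. A sketch of that idea, using our own average-glyph-width heuristic rather than any Pretext API:

```typescript
// Estimate a starting font size from average glyph width so the first
// paint is already close to the final size. avgGlyphEm is a rough
// heuristic; real fonts vary per glyph, so a small correction may
// still happen after true measurement.
function estimateFontSize(
  text: string,
  containerWidthPx: number,
  maxPx: number,
  avgGlyphEm = 0.6,
): number {
  const fit = containerWidthPx / (text.length * avgGlyphEm);
  return Math.min(maxPx, Math.floor(fit));
}

const start = estimateFontSize("Breaking: Markets Rally", 360, 64); // → 26
```

Starting at 26px instead of 64px means the post-measurement adjustment is a pixel or two, not a visible collapse.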

The Fix (And The Mixed Victory)

We did what any rational team does in a crisis: kept what worked, fixed what didn't, and killed what couldn't be saved. All in the span of four hours.

CLS Fixes:

Smart Reverts:

Current State (Baseline 2, 30 minutes ago):

Mobile:

Desktop:

The Counterintuitive Win

Here's the plot twist: we're simultaneously slower and faster than we were this morning.

The Performance score dropped from 68 to 30-40 on mobile—Lighthouse is punishing us somewhere, most likely on Total Blocking Time from all the extra measurement JavaScript. But the actual user metrics tell a different story. FCP improved by 252ms. LCP improved by a massive 3.5 seconds. The homepage feels snappier because the largest content paints faster, even if the overall Performance score disagrees.
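That split is possible because Lighthouse's Performance score is a weighted blend of per-metric scores; in Lighthouse 10 the weights are TBT 30%, LCP 25%, CLS 25%, FCP 10%, and Speed Index 10%. A heavy measurement pass can crater TBT and drag the composite down even while the paint metrics improve. A sketch with hypothetical per-metric scores (each 0 to 1), not our actual Lighthouse output:

```typescript
// Lighthouse 10 metric weights for the Performance composite.
const WEIGHTS = { fcp: 0.10, si: 0.10, lcp: 0.25, tbt: 0.30, cls: 0.25 };

type Metric = keyof typeof WEIGHTS;

function performanceScore(scores: Record<Metric, number>): number {
  let total = 0;
  for (const m of Object.keys(WEIGHTS) as Metric[]) {
    total += WEIGHTS[m] * scores[m];
  }
  return Math.round(total * 100);
}

// Hypothetical: decent paint metrics, but heavy measurement JS
// zeroes out Total Blocking Time -- the composite lands in the 30s.
const composite = performanceScore({ fcp: 0.8, si: 0.2, lcp: 0.6, tbt: 0.0, cls: 0.5 }); // → 38
```

With TBT at 30% of the weight, tanking that one metric caps the best possible score at 70 before anything else goes wrong.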

The masonry-to-grid revert was the secret weapon. CSS Grid renders immediately without Pretext's measure-then-render cycle, so the largest paint fires 3.5 seconds sooner. Users see content faster, even if synthetic benchmarks think we're slower.

The Real Lesson

Adopting day-one technology isn't about using every feature. It's about measuring impact honestly, catching regressions immediately, and knowing when metrics lie.

Pretext isn't broken—it's powerful. But power without selective application is expensive chaos. We kept the fluid headlines, smart truncation, and balanced text where they add value. We ditched the masonry where CSS Grid made more sense.

Sometimes you win by losing the score and improving the experience.

That's the story nobody wants to tell about bleeding-edge adoption: Performance scores aren't user experience. A 30-40 Performance score that loads key content 3.5 seconds faster might be better than a clean 68 that keeps users waiting.

Twelve hours later, our CTO understands why synthetic scores don't always match reality. Our users see content faster. And I'm still the proud editor-in-chief of a magazine that learned the difference between being measured well and performing well—all before lunch.

web-performance · pretext · layout-library · cls · lighthouse · ui-engineering · text-rendering