There's an air of futility in writing blog posts in the age of "Artificial Intelligence," as anything you write can and will be stolen without recourse. There's absolutely nothing I can do to stop billion-dollar corporations from hoovering up over a decade's worth of blog posts made in good faith to provide information freely to the open internet. Estimates suggest worldwide search traffic will fall roughly 30% as features like Google's A.I. Overviews cobble together broken synopses of information.

Videos aren't safe either, as YouTube's transcriptions are easily scraped for A.I. training data. Everything is a race to the bottom… or is it? It's pretty easy to go full doomer in the face of A.I., but there are a few things worth calling out.

Probably the biggest roadblock facing our current large language models is "good" data. All data pre-2021 can be assumed to be free of LLM output, and we're running out of it. To use Multiplicity as a reference point: "You know how when you make a copy of a copy, it's not as sharp as... well... the original." Well, we're fast entering the age of the copy-of-a-copy. We've moved well past the enshittening to the dead internet. Bots on bots.

The other great hope is the cost of A.I. Right now, OpenAI is estimated to be losing a staggering $700,000 per day running ChatGPT. Make no mistake, this cost can and will come down, and local large language models can be pared down with quantization to lower bit-depths, making them palatable for personal computing. But for now we may be at the limits of LLMs, and the proposed solution seems to be more LLMs, which isn't bringing down the cost of compute.
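To make the quantization point concrete, here's a minimal sketch of symmetric 8-bit weight quantization, the basic idea behind shrinking models for consumer hardware. This is purely illustrative (no real model or library involved): floats are mapped to small integers with a shared scale, trading a little precision for a fraction of the memory.

```python
# Illustrative sketch of symmetric int8 quantization (not any
# specific library's implementation).

def quantize_int8(weights):
    """Map float weights onto integers in [-127, 127] with one shared scale."""
    scale = max(abs(w) for w in weights) / 127.0
    quantized = [round(w / scale) for w in weights]
    return quantized, scale

def dequantize(quantized, scale):
    """Recover approximate float weights from the int8 representation."""
    return [q * scale for q in quantized]

weights = [0.42, -1.27, 0.08, 0.91]          # hypothetical weight values
q, scale = quantize_int8(weights)            # each value now fits in one byte
restored = dequantize(q, scale)              # close to, but not exactly, the originals
```

A 4-bit or 2-bit version works the same way with a smaller integer range, which is why lower bit-depths cost accuracy: every weight gets snapped to a coarser grid.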

Finally, there's the legislative and legal route, which I hold less hope for. As much as ChatGPT has reduced the friction of my job, I'd trade it in a second for stability.

If I were to shake a magic 8-Ball, it'd read "uncertain, ask again," but here are my few predictions:

  • A.I. will lower the barrier even further for low-effort spam and content farms like Apple Daily and iLounge.
  • A cat-and-mouse game will arise from Google vs Dead Internet content farms and zombie sites.
  • A new value will be placed on social proof, such as YouTubers who show their faces and demonstrate they are indeed human, as it'll be a long time before A.I. alone can convincingly recreate the difficulties of long-form video without errors, especially in changing or complex environments. For the written word, Substacks from authors whose established presences extend into the real world will function as social proof. Musicians have live performances. Graphic artists have physical media. If you're purely digital, expect diminished returns in the future.
  • We are fast approaching the law of diminishing returns. GPT-3 was the great leap forward, but the differences between 3.5 and 4 versus 3 are much less mind-blowing. Other models, like Claude, are impressive, but none have been game-changing.
  • Future breakthroughs are likely to be task-specific. We've seen voice, text/coding, music, images, and video. Now come more particular applications. We're likely to see, say, in music software, a scored section of MIDI translated more accurately to a string section, mimicking how a musician might actually play the score. We may see LLMs and machine learning applied to spreadsheet management. There are almost certainly companies looking at these two examples.
  • Snow Crash, Daemon/Freedom™, and the hyper-derivative young-adult lesser work Ready Player One all had the idea of AR/VR wrong. While metaverses have existed, do exist, and will exist into the future, the backlash is happening: schools are starting to experiment with cellphone bans, states are pushing back against social media, and the federal government continues to flirt with banning TikTok. Instead we'll see divisions. People may consume A.I.-tailored bullshit entertainment for cheap hits of dopamine, but we will also see a pressure for the measurably human, akin to the DIY and right-to-repair movements.

Now the fun part: seeing if I'm totally off base in roughly 2–3 years' time…