Traditional computing could be boiled down to operations over numbers. Every app, microprocessor, website, and shitpost is ultimately reducible to binary digits, the star-stuff of digital information. Given the same inputs, every output is entirely predictable. *slaps roof of traditional computing* Yessir, reliable foundations.
But AI’ll soon soften your cough. It’s no longer just operations over numbers; it’s operations over information. AI systems essentially eyeball some data, perform the robot version of a Rorschach test, and then just freestyle it live. AI makes gloopy, fuzzy, inconsistent decisions.
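To make the contrast concrete, here’s a toy sketch in plain Python. The `toy_language_model` function is a made-up stand-in, not any real API: the point is just that the traditional function gives the same answer every single time, while the model-ish one samples from a probability distribution and can answer differently on every call.

```python
import random

def add(a, b):
    # Traditional computing: same inputs in, same output out. Always.
    return a + b

def toy_language_model(prompt):
    # Stand-in for an LLM: it samples the "next word" from a weighted
    # distribution, so repeated calls with the same prompt can disagree.
    next_words = ["sunny", "rainy", "gloopy"]
    weights = [0.5, 0.3, 0.2]
    return random.choices(next_words, weights=weights)[0]

assert add(2, 2) == add(2, 2)  # reliably, boringly 4
print([toy_language_model("The weather is") for _ in range(5)])
# e.g. ['sunny', 'gloopy', 'sunny', 'sunny', 'rainy'] -- fuzzy by design
```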
Neither is without its charms. In fact, each can be both astonishingly capable and remarkably dumb in its own special way.
What you really want is an elegant combination, the best of both worlds. So now an intriguing design puzzle is emerging: how do you take these two very different approaches to conceiving of what a computer even is, and alloy them into something new?
Maybe you try to tame the LLM: add guardrails and safety nets and suggestions and whatever else is needed to bring the beast to heel long enough to get some work done with the damn thing. Or maybe you try to extend the existing paradigm by stuffing little AI trinkets into existing products. Lots of people are now excited about Agents, the goal-oriented, System 2-reasoning, action-taking, and generally better-looking descendant of crummy old chatbots.
So computers are about to get weird! There’s a ton to figure out. No doubt it will all make sense in retrospect. But it can also be hard, after a big shift like this, to viscerally remember what the before state was actually like. Did the resort match the promise of the brochure? What didn’t you know back then that seems obvious now?
For now we’re still in the middle, with the before and after states each only an arm’s reach away. So, setting aside the bigger societal questions, here are some high-level product-ey questions that I don’t currently have confident answers to, but I’m interested to see how they play out: