Disclaimer: This post is an experiment. I “collaborated” with ChatGPT to compose it. I’ll write up more fully what that collaboration looked like, but in short: it started with a very rough draft, went through several rounds of back-and-forth, and finally came back into Word for some final cleanup before posting.
Here we go.
I wanted to share a multi-part perspective on where I see generative AI today.
Part 1 looks at the broader arc of the technology and why it feels so significant to me.
Part 2 is some of my general observations and concerns about this AI revolution.
Part 3 is something more personal, an experience I had today that genuinely shifted my sense of what is now or soon to be possible.
Finally: My conclusion.
And post-finally: my non-AI-assisted thoughts.
I know I complain regularly about AI slop, and I really do believe that a lot of low-effort, low-quality AI content is being barfed out and posted online.
But that does not mean I am an AI hater or that I fail to see the technology for what it is. In fact, I see it as one of the largest technological shifts I have ever witnessed in my lifetime. For context, I was born before the internet existed, studied electronics before the first cell phone was used, learned science and technology before the personal computer was a mainstream idea, and built internet applications before the World Wide Web had taken shape. I also expect that in the not-too-distant future, hopefully within my lifetime, we will achieve a new source of clean energy through sustained and practical fusion power.
There have been many world-changing advancements that I have watched unfold, and in some cases participated in. Yet it is becoming clear that what is coming out of the field of computer science, generally labeled as AI, is bending the arc of human progress more sharply and more quickly than anything I have seen before.
At the same time, I see several short- and medium-term risks that worry me. These are not abstract sci-fi fears; they are immediate and very human.
First, the resource pressure. AI training and inference require enormous compute, compute requires dense data centers, dense data centers require massive amounts of power, and power requires cooling. That cooling almost always relies on water. Power and water are not unlimited, and they are the same resources people need for their homes and taps. This becomes a zero-sum game in the short term. What gets allocated to AI does not go to someone’s air conditioner or water supply.
Second, the money flow is getting strange. There is a staggering amount of capital being poured into AI and the infrastructure around it, and much of it feels like pure float. Companies are building data centers using money promised by big players like OpenAI, Meta, Microsoft, and Anthropic. Those players are getting huge investments from funds that are betting largely on chip makers. Most of those chips are coming from NVIDIA, which is happy to lend customers the money to buy the very chips NVIDIA is selling or to lease them through the same lenders. There is an odd, circular quality to the financing, and it is not always clear what cup the money is under in this shell game.
Third, and this is the one that truly worries me, is misplaced trust. I am not concerned about a superintelligence deciding to recreate Terminator. Not yet. I am concerned about people putting too much faith in AI to replace human judgment long before it actually can. Laying off recruiters and replacing them with AI agents will result in bad hires. Replacing support staff with AI will create frustrated customers. Replacing skilled creators with prompt engineers will lead to garbage. Letting AI make business decisions will lead to expensive mistakes. Allowing AI to influence life-or-death decisions in military contexts is simply reckless. None of these failures would be the fault of the AI. They would be the fault of people asking technology to do what it is not capable of doing.
Today was the first time I watched an AI-first film that felt genuinely good. Not interesting as a novelty, not impressive “for AI,” but actually good, and surprisingly emotional.
The filmmaker, a coworker of mine with experience in filmmaking, video editing, and of course storytelling, started with a simple premise. From that premise, an LLM generated a full scene list, which they reviewed, trimmed, and refined.
Next, they provided examples of the cinematic style and framing they wanted. Using those examples, the AI created the text-to-video prompts for each shot. The initial outputs were then adjusted, sometimes regenerated, based on their feedback to the LLM.
Once the footage was ready, the process shifted into a familiar workflow. The clips were brought into a traditional NLE (a non-linear video editor), where they were trimmed, arranged, adjusted for pacing, and color graded.
The result was surprisingly strong, and honestly, it moved me in a way I did not expect.
So, what is my point here? I have seen technology move from clunky, weird, and barely interesting to anyone outside the small group working on it, to something woven into everyday life. Sometimes I joke about the early days of the internet, when services like Veronica, Archie, NNTP, Finger, and even the legendary <blink> tag felt cutting edge.
What we are calling AI today is following that same pattern. It will keep getting better, even if there are moments when it stumbles or slides backward. As you criticize it, and there are plenty of reasons to do so, try to keep the longer history of technological progress in mind.
AI will eliminate jobs, and at the same time create a future none of us can fully imagine. Some of that future will be good, some of it will be uncomfortable, and some of it will require real work from all of us to navigate wisely.
Use the technology responsibly, stay curious, and most importantly, take care of each other as we move into whatever comes next.
As much as I tried, that is not my voice. On my first read-through, it said everything I wanted to say, but it did not sound like me. All the content, none of the person.
What my coworker filmmaker accomplished gave me hope that there are some really interesting and promising AI advances in the future, but right now they are the exception, not the rule.
But, like I said, or maybe ChatGPT said it — it will only get better.
BTW: I purposely put that em dash there.