AI II: Electric Boogaloo
15 months ago, I outlined some of my thoughts on AI as there was an ongoing freak-out over how superintelligent general intelligence was going to kill us all imminently and usher in a new world dominated by Skynet and Terminators. I also wrote up how/why the Materials Science + AI output is crap back in January, but things have accelerated (Pace and Stupidity, as they say) in the last 4 months. We thought we were getting The Matrix and got a Simone Giertz joint instead.
“A step backwards is still a step, and my fitness tracker still counted it towards my daily goal.” Simone Giertz, 2023
Along the way OpenAI tried to fire their CEO, Microsoft said “hey, waitaminute…”, and by Monday the OpenAI board was out. Google and Microsoft announced a bunch of non-discoveries of new materials with the help of AI. Google got worried by Microsoft’s publicity around OpenAI/ChatGPT/Copilot and rushed the lobotomization of their primary products (search and ads) onto an unsuspecting world. Basically, if you bet on stupid you did very well over the last year. Doing this stuff in the real world isn’t some computer science homework assignment where you can check it in if it compiles (hint: in many of these cases the compile-time check was probably the only thing that worked).
I’ll score my 10 beliefs from February ’23, make any updates that seem prudent, and replace any that have already occurred. In summary, after 15 months, 1 prediction has come to pass, 8 remain on track, and 1 was busted. I think that one is really a dumb thing, but it is a thing that happened: while we aren’t getting the positive tool for representation, we are getting all the variants of tools for gratification (I should have seen that as more likely).
#1 — Superintelligent AI risk. We’re only 6% through the period of performance and we’re not all dead yet. That’s turkey logic right there, thinking the farmer is a great person for always bringing food, right up until Thanksgiving. If we’ve seen anything at this point, it’s that the hype cycles are running way too fast on these things to enact a far-sighted human eradication project. On Track.
#2 — Pace of Adoption. We’re 3% into this period of performance and this’ll be hard to evaluate in real time. On the one hand, Nvidia’s stock is way up (along with Microsoft’s CO₂ emissions), and bad photos with too many teeth/fingers/arms show up all over the place. On the other hand, that’s all crap that makes your product/business/you look like you don’t care about your brand or quality, so it’ll pass (in my estimation). Folks who do the demos for these generative AI tools are keeping 1 of every 300 frames they generate; as I’ve stated earlier, these things flood the zone with bullshit that the expert then has to wade through to make sense of. On Track?
#3 — Nations. Useless. Scuttlebutt suggests Israel is letting an AI pick targets, so humanity is collectively way out over our skis here. Traditionally, when a “human approval” system is deployed, the computer learns right quick who is making the veto decision and attempts to remove that veto from the situation. Doesn’t stop the electeds from mandating deployment, though. On Track.
#4 — AI Substitute for low-value human work. This one is more complicated than it seems. On the one hand, clearly yes: folks are applying AI to many problems where a seat-warmer is required. On the other hand, the Just-Walk-Out AI store checkout technology turned out to be a crew of low-paid workers ringing up the charges from a remote call center. The AI companies are so busy innovating and chasing growth that they often forget to build the AI pieces they’ve been selling. Mechanical Turks, all the way down? On Track.
#5 — AI for necessary and tedious tasks. This is the perennial hope for productive use of AI, nominally the motivation for self-driving, and always just a few bugs from ready for mass adoption. Meeting summaries are here, and small trained models that report out over a vast corpus of technical documents are a thing that is done today, but not yet as a curated product or service. You have to parse the files and spend the tokens, so we’re close but not over the hump yet (not yet a mass market product). Nearly There.
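To put a shape on “parse the files and spend the tokens,” here’s a minimal sketch of the chunk-and-summarize loop such a tool has to run over a document corpus. This isn’t any vendor’s product: the llm_summarize stub, the chunk size, and the folder name are all assumptions standing in for a real model call.

```python
# Minimal chunk-and-summarize sketch over a folder of text files.
# llm_summarize() is a placeholder for a real model call; every chunk it
# sees is tokens you pay for, which is the "spend the tokens" part.
from pathlib import Path

CHUNK_CHARS = 4000  # rough stand-in for a model's context window


def llm_summarize(text: str) -> str:
    """Placeholder: a real implementation would call a hosted or local LLM."""
    return text[:200]  # stub so the sketch runs end to end


def summarize_corpus(folder: str) -> str:
    chunk_summaries = []
    for path in sorted(Path(folder).glob("*.txt")):
        text = path.read_text(errors="ignore")
        # Parse the files: split each document into model-sized chunks.
        for start in range(0, len(text), CHUNK_CHARS):
            chunk_summaries.append(llm_summarize(text[start:start + CHUNK_CHARS]))
    # Reduce step: summarize the summaries into one report.
    return llm_summarize("\n".join(chunk_summaries))


if __name__ == "__main__":
    print(summarize_corpus("./technical_docs"))  # hypothetical folder name
```

The cost structure is the point: every document gets tokenized and paid for on the map pass, and the summaries get paid for again on the reduce pass, which is why this is close but not yet a cheap, curated mass-market product.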
#6 — AI in dangerous environments. Mining, yes. Construction, yes. Agriculture, yes. Trucking, yes? Forestry, maybe? Progress in many areas, though one can question the degree of autonomy vs. repeating a macro here. On Track.
#7 — AI Managed Grid in Korea, China, and Europe. The SAP page for smart grids was launched in July 2022 and mentions that technology, including AI, makes the difference vs. regular old grids. SAP is a build-it-to-spec player for very large customers, so that boilerplate is less a product than a statement of directional capability. In the US we’re still in the “writing papers about how this can help” phase of technology development and adoption, not building it yet. Over at the Korea Institute of Energy Technology they’ve got a plan to modernize. On Track.
#8 — AI+CRISPR. Thankfully we’ve not had this moment yet, 6% into the period of performance. We have seen rival approaches to inorganic synthesis with the expected sloppy fundamental science work, so I think it is only a matter of time until someone does this and it doesn’t work the way anyone expects. On Track.
#9 — Gendered AI. I was thinking about providing skills and perspective to domains where existing candidates were scarce or missing, but someone went and did it for dating. Bumble has announced an AI agent that dates other users’ AI agents, and if things go well they’ll promote a match between the humans. So we’re not getting a curated social feed from our kids for our parents, but the matchmaking intermediaries that create kids and parents. Ahead of Schedule! I’m going to call this one wrong, because those crazy kids actually tried to do it.
#9–2 — I’ll replace #9 (as missed) with something safe: AI will consume 100x more electricity than Direct Air Capture in 5 years. This might be a super low estimate (I haven’t looked anything up on it), but we’ve already got a noticeable signal coming into Microsoft’s annual emissions reporting, and despite the grand claims of the DAC crowd, in this next vital ramp-up period the energy expense is all going to be one-way traffic. I could double up and call out the water consumption of AI as 100x the water consumption of DAC; why not? AI is going to grow and remain a large draw even as new chips, algorithms, and small models come to market: Jevons Paradox, after all.
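If you want to sanity-check that 100x shape yourself, the arithmetic is just a ratio. The inputs below are placeholder assumptions, not anyone’s reported figures; swap in real numbers from the emissions reports and DAC project disclosures as they land.

```python
# Back-of-envelope check on the "AI electricity = 100x DAC electricity" claim.
# Every input is an assumed placeholder; replace with reported figures.
ai_twh_per_year = 100.0        # assumed annual AI electricity demand, TWh
dac_tonnes_per_year = 500_000  # assumed annual CO2 captured via DAC, tonnes
dac_kwh_per_tonne = 2_000      # assumed electricity per tonne captured, kWh

dac_twh_per_year = dac_tonnes_per_year * dac_kwh_per_tonne / 1e9  # kWh -> TWh
ratio = ai_twh_per_year / dac_twh_per_year
print(f"AI / DAC electricity ratio: {ratio:,.0f}x")  # ~100x with these inputs
```

The exact ratio swings with whatever you plug in; the point is that the claim is easy to track as real reporting shows up.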
#10 — Information Overload. This was already manifesting in degraded search results, but we caught a glimpse of something much more in the botched rollout of Google’s AI-assisted search. We could laugh when it told you to eat one or two small pebbles each day (halite and ice qualify, by the way) and that Kennedy was assassinated to keep him from finishing that 7th degree from UW-Madison. But then it started telling you to make chlorine gas to clean your washing machine and propagating long-discredited racial pain tolerance beliefs. It presumes, in the basic and classic demonstration of gender bias, that doctors are male and nurses are female, and responds accordingly. These systems work in health care and insurance, making health, welfare, and benefit decisions on behalf of folks who may not be well served by these biases. This seminar from Dr. Damien Patrick Williams has the story; we really shouldn’t have used the Enron emails to train these LLMs. But beyond the bullshit factory, Google showed something frightening in how they integrated the tool.
#10–2 — I’m going to replace my #10 (as met) with something new and dangerous: Google will attempt to summarize source material to keep advertising impressions from bleeding through to the source content. The UW-Madison example was particularly telling: the AI summary listed 8 of 13 rows and paraphrased (incorrectly) the text of the website it pulled the information from. This could be done to assert “fair use” within the search results page and prevent the user from clicking through to the source page. That keeps the attention cursor on Google.com and, coincidentally, keeps all the revenue with Google (the end of referrals and display ads on your own sites). We will see another attempt, inside 5 years, to close off the web. Mike Masnick, over at Techdirt, has a writeup.