Beliefs around AI, Feb 2023

Geoff Staneff
15 min read · Feb 23, 2023


[Image: the three-headed dragon meme. The left head looks aggressively dangerous, the middle head looks contemplatively dangerous, and the right head looks derpy (poorly animated, tongue lolling out, eyes pointing in different directions).]

This is a collection of several predictions that will prove naïve or adorable in hindsight, perhaps before the end of the year.

Vannevar Bush wrote about the future in the aftermath of the Manhattan Project and the fallout from WWII — pun intended. His article described the wondrous potential to accelerate human learning, largely establishing the need for, and future growth of, information technology from his perspective in 1945. So much so that the world he describes seems obvious and primitive in hindsight, despite being beyond imagination in its time. The IT transformation has subsequently been called the 4th wave of the industrial revolution, so it is thought to be a big deal. I'm aware of those ideas, but unlikely to produce anything with such a lasting legacy. With the benefit of hindsight, though, I can take another small step and recognize the time-awareness of change management: people in gatekeeping positions are as important as what the technology itself is capable of doing. I'd suggest that, at least in the US, this is even more important as our legal system grows more rigid over time; our structures are hardening to protect established positions, and this retards technological adoption at every turn (particularly apparent in new power-generation interconnects and the self-selection of baseload power by electric utilities, but every part of the system is motivated to protect the existing system).

Enough preamble; here are 10 bets and beliefs around AI.

Specific Bets:

Superintelligent AI Risk

This is topical, so I’ll start here. I’m an optimist and will bet against superintelligent AI killing all humans, on purpose or by accident, in the next 20 years. That, or I’m a pessimist about software properly understanding the physical sciences and engineering generally, thus delaying the inevitable. Recreating 5,000+ years of physical science will take a blind and clumsy AI system time, as it gradually replaces the poorly connected facts about the world that humans have compiled to date. Superintelligent AI killing all humans as collateral damage on some unrelated objective function is a non-zero risk, but AI (and all software) tends to fall down when you stop flipping bits and start moving matter.

Many promising pathways to human extinction will be rejected due to early difficulties in testing and validating those real-world-impacting decisions, such that those extinction paths will be abandoned before reaching fruition. We’ve trained these AIs largely on human BS, and not just the language models. Our framing of math, physics, chemistry, and biology is a product of circuitous history and a struggle against our former ignorance; going back over the whole corpus of human knowledge and re-organizing it into a self-consistent and more readily actionable system will be liberating for AIs (and humans), but it will take time. Many AI approaches reward not correctness and validity but engagement and acceptance by human readers, and those models will have trouble when they hit a real world that doesn’t care about your feelings. Getting humans to do stupid things to each other… that has probably already happened. But AI alone doing this by accident… harder than supposed. This is not 3–15 years away, but it is also not unavoidable — we’ll have limited opportunities to intervene. Some domains are much more dangerous than others (CRISPR kids, foods, and biotech generally would be fast vectors with little opportunity to intervene before the deed was done), so we should have different rules in different domains. Deliberately malignant AI has the viral problem (literally, since we took the term from pandemic virus spread), and tuning the infection rate against the kill rate will leave a trace in the real world. Knowing the recipe and operationalizing that recipe are not the same thing, and software always underestimates the physical world; this is one more manifestation of the age-old Theorist vs. Experimentalist divide, or IT vs. OT as it appears in computerized systems.

Pace of Adoption

This perspective is illustrated by Vannevar Bush’s description of a connected sales, supply, and manufacturing experience, which predates the SKU that everyone can use today. The actual world of his 1945 context, where widget stock levels are reported via a telephone call to your supplier or a conversation when the supplier’s driver makes his weekly delivery, is today in many industries an unknown concept — how could business operate at that cadence? Despite the onward march, there remain some holdouts, and industrial IoT provides a great window into this diversity. Despite more than 20 years of Industry 4.0, the plans of the past stand up remarkably well in the manufacturing reality of today. It is a reasonable, rational, and technologically aware decision in some markets, still in 2023, to have a person with a clipboard logging whether a machine is on or off throughout the day rather than to digitize the facility, asset, and workflow. As a specific example, adding a sensor and computer to 10,000 machines at a facility carries significant CapEx and requires you to figure out how to run power for a new datacenter in the middle of your factory floor; a rough comparison is sketched below. At the other extreme, in contexts where individual assets produce hundreds of products throughout the day at hundreds of parts per second (e.g. extrusion molders), such pace and scale are unthinkable without digitalization. We end up with a mixture of technologies and techniques even with wide availability of ‘better’ solutions; change takes time even when it happens all at once.
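To make the clipboard-vs.-digitize trade concrete, here is a back-of-envelope sketch in Python. Every unit cost in it is an illustrative assumption (the only figure from the article is the 10,000-machine count), so treat it as a way of framing the decision rather than a real estimate.

```python
# Back-of-envelope comparison: instrumenting a 10,000-machine facility vs. a
# person with a clipboard. All unit costs are illustrative assumptions.

machines = 10_000
sensor_and_edge_compute_per_machine = 300   # assumed $ per machine (sensor, wiring, edge node)
install_labor_per_machine = 150             # assumed $ per machine
onsite_compute_and_networking = 500_000     # assumed $ for the "datacenter on the factory floor"

digitize_capex = machines * (sensor_and_edge_compute_per_machine
                             + install_labor_per_machine) + onsite_compute_and_networking

clipboard_annual_cost = 60_000              # assumed fully loaded cost of one technician

print(f"Digitization CapEx:  ${digitize_capex:,.0f}")
print(f"Clipboard, per year: ${clipboard_annual_cost:,.0f}")
print("Years of clipboard rounds the CapEx could fund:",
      round(digitize_capex / clipboard_annual_cost))
```

With assumptions in that ballpark, the up-front spend funds decades of clipboard rounds, which is why the person with the clipboard can still be the rational choice.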

We do have this extant example of dramatic, world-changing technology that transformed much of human existence over the course of 80 years — yet it has only incompletely transformed the first world, much less the second or third. The next generation’s default working environment will be incomprehensible to the managers of two generations earlier (a 40-year cycle to wash out hold-over patterns and practices, so that no current manager’s history as a worker reflects the prior generation of managers’ experience as workers — that is, an organization needs to forget the limits inherited from the folks it learned from). All of that takes time, even for good, desirable, and economically advantageous changes to propagate through human industrial systems.

Nations — Irrelevant or All-Controlling?

I’m of the opinion that the administrative unit for humans is not well matched to the structure of human governments. Engagement is obligatory to ensure jointly beneficial outcomes, but the administrative unit is just too big to maintain engagement and avoid civic fragmentation. The state is therefore past its use-by date, though the concept of states has utility and value. The power of a state is relevant especially in terms of shared understandings and knowledge, and pragmatically useful in the enforcement of norms and standards across units of humanity that cannot otherwise directly influence each other with words. The states in the world now (perhaps always) are motivated to maintain status and the status quo — to resist disruption or wield it for personal gain. Contemporary manufacturing value chains are ignorant of state boundaries when optimizing for end-to-end production, and yet states go head to head with industrial policy worked out long ago (typically before the last world war, occasionally reaching back hundreds of years for blunt instruments like place-of-origin tariffs).

Nations can and will continue to screw things up and fragment or frustrate our collective ends. Does this matter? It may take all of us to manage a rogue AI, and with our separate, conflicting interests we’d be unable to compete; but more likely states will continue to retard the pace of deliberate progress in the between times and accelerate it in times of crisis. If the crisis evolves slowly enough, that’ll be fine, but this model carries tremendous productivity costs in the steady state and waste within the crisis state.

AI will fill in low objective-value roles and duties.

If an AI Chatbot were already performing cold calls and sending marketing emails, could you tell? Many of the tasks that can be easily replaced by today’s AIs are tasks that should not be performed by anyone, human or AI.

Humans performing templated, scripted tasks get flagged as the products of AIs, and AIs doing the same get flagged with human characteristics (transferred empathy and personification). I don’t know why people pay for marketing like this, but shifting the burden from humans to AIs is already happening. Of the people in these roles, some may enjoy the work and others may be taking the least bad available option. If our AIs were coming up with productive and rewarding roles for humans to perform, that would be amazing. Not sure anyone is working towards that goal.

AI should swiftly take over necessary or tedious tasks

Why isn’t AI already filing my taxes, or compiling sprint stand-up summaries? Why is our health care system burying customers in paperwork instead of leveraging AI to unlock human thriving through our vast knowledge of human wellness? The capability is there, there is no general requirement standing in the way, and the results can be better than hand entry today. The obstacles are rent-seeking incumbents on the one side and the desire for interaction on the other. I’m all for the latter, and any AI that leaves us with more time to be human towards one another has the basic elements of a good AI. The rent seeking… is tradition at this point, when we think of the world as a zero-sum competition with everyone else. I’d love to set that aside and move forward in partnership with one another.

Pragmatically, many tedious duties will find AI assistants for those who can afford them. I fully expect my kids to have personal AIs that make the Instagram and Facebook updates of 20 years ago without any intervention on their part — and will treat these updates with the appropriate consideration.

AI will fill roles dangerous for humans and unlock greater operational safety in industrial systems.

The human in the cab is the weak link in a 480-ton dump truck. The humans on deck are the weak link in a Triple E container ship. The humans in the cab are also a weak link on the rails, but the bigger weak links in rail are automobiles and equipment inspection, which can also be addressed with AI in ways human systems haven’t been able to keep up with. Opportunities open up when you take the human out of the cab and put them into a supervisory role (or an on-call role for a supervisor AI that watches over the specialist AIs doing the individual jobs). Slower ships consume less fuel and deliver more cargo without impacting the port-to-port cycle time (ships sit longer to load and unload on both ends than the added transit time, and with AI crews we’ll have less trip variability); a rough calculation is sketched below. Many benefits accrue when the industrial process doesn’t have to work around a soft, easily damaged human.
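As a rough check on that slow-steaming claim, here is a toy calculation. It uses the common cube-law approximation for hull fuel burn versus speed, and every number in it (transit days, port dwell, the 10% speed cut) is an assumption for illustration, not a figure from the article.

```python
# Toy slow-steaming calculation under assumed numbers.

transit_days_at_full_speed = 14       # assumed port-to-port transit
port_days_each_end = 4                # assumed dwell time loading/unloading
speed_reduction = 0.10                # run 10% slower

# Fuel burn per day scales roughly with the cube of speed for displacement hulls.
fuel_per_day_ratio = (1 - speed_reduction) ** 3
transit_days_slow = transit_days_at_full_speed / (1 - speed_reduction)
fuel_ratio_for_voyage = fuel_per_day_ratio * transit_days_slow / transit_days_at_full_speed

cycle_fast = transit_days_at_full_speed + 2 * port_days_each_end
cycle_slow = transit_days_slow + 2 * port_days_each_end

print(f"Voyage fuel vs. full speed: {fuel_ratio_for_voyage:.0%}")      # ~81%
print(f"Cycle time: {cycle_fast:.1f} days -> {cycle_slow:.1f} days")   # 22.0 -> 23.6
```

Under those assumptions, a 10% speed reduction cuts voyage fuel by roughly a fifth while adding about a day and a half to a cycle that is dominated by port time.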

This is happening on a small scale in specialized areas and will continue to scale out slowly where regulation and crisis motivate a transition. Port automation is far more advanced overseas than in the US due to regulation and the antagonistic union-corporate tension in the US. Major industrial accidents are both horrible and a chance to revisit the patterns and practices that solidified in the sub-optimal configurations that led to those very same accidents.

AI Managed Grid will work on the small scale, but not the large.

In the US, state-supported monopolies have led to many bad ends, and the electrical grid captures many of those bad ends in one place. From the Texas grid imploding when the weather gets cold in winter or hot in summer, to utilities in the Midwest actively spending ratepayer funds to build coal power in other states in bald-faced conflict of interest and corruption, we’ve got a lot wrong with our national grid management before we even get to the aging technology and perverse incentives. Everyone will claim to be doing the thing, but in practice it’ll be small actors with their own motivations who implement these solutions to our mutual benefit.

A few companies and regions already have, or soon will have, the local control to implement smart grid systems and allow AI to manage load, shift peaks, and optimize transmission. The AI requirements to double transmission throughput are quite modest, perhaps not even qualifying as AI these days, but when it comes to coordination across stakeholders in a region it all gets messy nearly immediately. Even mediating between your EV battery, your household demand, and the grid’s demand is non-trivial: the time you want your EV charged is, for instance, the same time you have your start-of-day household loads. Regardless, there are many contributions to balance across time, space, and ownership, and this is a prime opportunity for a dynamic optimizer to have a positive impact; a minimal sketch follows. Small scope of reach for an AI, big potential for impact in the real world.
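Here is a minimal sketch of the kind of small-scope optimizer described above: a greedy valley-filling scheduler that moves a deferrable EV charge out of the household peaks. The hourly load profile, charger rating, and energy target are made-up numbers for illustration; a real system would also weigh tariffs, grid signals, and battery constraints.

```python
# Greedy valley-filling for a deferrable EV charge, with made-up numbers.

base_load_kw = [0.8, 0.7, 0.6, 0.6, 0.7, 1.2, 3.5, 4.0,   # 00:00-07:00 (morning peak)
                2.5, 1.5, 1.2, 1.1, 1.0, 1.0, 1.1, 1.3,   # 08:00-15:00
                2.0, 3.0, 3.8, 3.2, 2.4, 1.8, 1.2, 0.9]   # 16:00-23:00 (evening peak)

plug_in_hour = 18          # car arrives home during the evening peak
deadline_hour = 7          # must be charged before the morning commute
charger_kw = 3.0
energy_needed_kwh = 15.0   # five hours of charging at 3 kW

# Hours available for charging, wrapping past midnight.
window = list(range(plug_in_hour, 24)) + list(range(0, deadline_hour))

def fill(hours, energy, rate):
    """Assign charge to the given hours, in order, until the energy target is met."""
    charge = [0.0] * 24
    remaining = energy
    for h in hours:
        if remaining <= 0:
            break
        charge[h] = min(rate, remaining)
        remaining -= charge[h]
    return charge

# Naive: charge as soon as the car is plugged in (stacks on the evening peak).
naive = fill(window, energy_needed_kwh, charger_kw)
# Scheduled: greedy valley-filling, lowest-load hours in the window first.
smart = fill(sorted(window, key=lambda h: base_load_kw[h]), energy_needed_kwh, charger_kw)

def peak(charge):
    return max(b + c for b, c in zip(base_load_kw, charge))

print(f"Household peak, naive charging:     {peak(naive):.1f} kW")
print(f"Household peak, scheduled charging: {peak(smart):.1f} kW")
```

Even this toy version shows the shape of the problem: the naive “charge as soon as you plug in” behaviour rides the evening peak, while the scheduled version tucks the same energy into the overnight valley.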

AI + CRISPR = Madness

Today’s AI is akin to combinatorial chemistry: it tries out many combinations and leaves it to someone else to evaluate and judge as useful or not useful. It is just easier to try new things than to work out beforehand what will or will not work, but this acknowledges that we’re ill-equipped to evaluate the decisions being made (short of running the test). That’s great in a controlled laboratory environment, where one has the opportunity to take precautions and presumably calculate the odds of destroying the universe before proceeding (as famously done before turning on the LHC). Some efforts to advance AIs may attach an AI to a particularly dangerous system, where new compounds are created on demand without human intervention. CRISPR, virology, food systems, and any system that is already self-replicating become much more dangerous when attached to an AI spinning combinations and seeing what happens. Humans inadvertently unlocked a system for making the earth inhospitable to humans some 200 years ago, and it is only now reaching an obvious tipping point (coal and the first industrial revolution). The gene-editing tools we have developed recently shorten the feedback loop and can put unintended consequences into practice before we’ve had a chance to understand what we’ve really created with our AI assistance.

The main takeaway is that we should expect that we’ve incompletely formed any question we may ask an AI, and it might take us, collectively, a few hundred years to recognize the implications of that incomplete specification. Given the pace of change we are now capable of, it will get much harder to isolate causes (when everything is changing all the time, tracing back to a root cause becomes more difficult), and we’ll be less able to recognize those consequences before they’ve reached physical-world tipping points. We have neither caution nor control, and we certainly don’t have the tens of millions of years for natural selection to adapt to the conditions we are now able to create. Some combinations of technology and AI are simply more dangerous and should be treated as their own risks.

Gendered AI — Why not lean into the bias?

One can find many studies showing that companies with female board members and leaders tend to outperform male-dominated companies. Is this gender, or sample size, or a harsher selection criterion for some groups than others? One can attempt to wash out these systematic biases and generate a flat model insensitive to these bias variations (hard), or lean into those biases and leverage collaboration between differently biased models (perhaps easier and more effective).

Being different isn’t enough to merit utility, but a diversity of perspectives can be leveraged via AI to improve specific outcomes for business or other human endeavors. These perspectives can be tailored to the purpose, or used to fill gaps and plug blind spots. There is a perilous path to walk to avoid encoding stereotypes, or enabling non-diverse groups to limit opportunities for folks who are different from the existing team (banking on an AI assistant to fill that opportunity space). Humans acting as bad actors has a long tradition, and our attempts to create AI systems without bias are often comically bad at avoiding biased outcomes. Invariably, a “merit-based, subgroup-blind” AI encodes the bias of its creators and the dominant social actors of the time. Rather than guarding against the thing we cannot see, we might find swifter progress leveraging the divisions we can see and being up front about them, instead of pretending we’ve mathed them all out of the outcomes.
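As a toy statistical illustration of the “lean into the bias” idea (mine, not the article’s): two estimators with opposite, known biases can be combined so that their biases cancel, even though each one alone is reliably off. All the numbers are synthetic.

```python
# Two deliberately biased estimators, combined so the biases cancel.
import random

random.seed(0)
true_value = 10.0

def biased_estimate(bias, noise=1.0):
    """A noisy estimate with a known systematic bias."""
    return true_value + bias + random.gauss(0, noise)

trials = 10_000
err_a = err_b = err_ensemble = 0.0
for _ in range(trials):
    a = biased_estimate(+2.0)   # model tuned one way
    b = biased_estimate(-2.0)   # model tuned the other way
    err_a += abs(a - true_value)
    err_b += abs(b - true_value)
    err_ensemble += abs((a + b) / 2 - true_value)

print(f"Mean abs error, model A alone: {err_a / trials:.2f}")
print(f"Mean abs error, model B alone: {err_b / trials:.2f}")
print(f"Mean abs error, ensemble:      {err_ensemble / trials:.2f}")
```

The ensemble beats either model on its own precisely because the two biases are known, opposite, and acknowledged rather than pretended away.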

Information Overload

Information overload has shifted from meaning simply too much (the challenge of keeping up with the output of specialists in the myriad fields of scientific endeavor) to meaning both too much and of dubious provenance and quality. Contemporary search engines are driven by relevance, popularity, and recency; unfortunately, curated truth is indifferent to popularity and actively creates demands that work against recency. Now, with readily available language models, the information environment is primed to explode with content that is new, crafted to be relevant, intentionally popular, and utterly unconcerned with verification of its conclusions — indifferent to truth or falsehood.

The most straightforward application of a language-model-plus-search-engine combination is to sift and filter different sources to retrieve a likely set of useful information quickly (sketched below). Unfortunately, the consumer in this case lacks the expertise to evaluate the output of the language model; they wouldn’t be seeking this expedient if they knew these facts from deep practice. Even for experts in the field, the time to review, evaluate, and clean up the output of these models exceeds the time to answer the original prompt from a blank page. We are setting up to overwhelm our human capacity to evaluate the merit of an answer, to exhaust our capacity to evaluate work, and thereby to accept without inspection bullshit answers — bullshit in the technical sense of being unconcerned with truth or falsehood. A bullshit answer might be correct in any given instance, but in general is not concerned with being correct at all.
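For concreteness, here is a minimal sketch of that sift-and-filter pattern. `search_web` and `llm_complete` are hypothetical placeholder names standing in for whatever search index and model are actually used; they are not real APIs. Note that nothing in this loop checks whether the answer is true, which is exactly the problem described above.

```python
# Sketch of the "language model + search engine" sift-and-filter pattern.
from typing import List

def search_web(query: str, top_k: int = 5) -> List[str]:
    """Placeholder: return the text of the top_k hits for the query."""
    raise NotImplementedError("wire this to a real search index")

def llm_complete(prompt: str) -> str:
    """Placeholder: return a completion from whatever language model you use."""
    raise NotImplementedError("wire this to a real model")

def sift_and_answer(question: str) -> str:
    """Retrieve candidate sources, then ask the model to answer from them."""
    sources = search_web(question)
    prompt = (
        "Using only the sources below, answer the question.\n\n"
        + "\n\n".join(f"Source {i + 1}:\n{s}" for i, s in enumerate(sources))
        + f"\n\nQuestion: {question}\nAnswer:"
    )
    return llm_complete(prompt)
```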

Propaganda seeks not just to misinform but to exhaust, and thereby annihilate, the truth. The language models we’ve seen deployed over the last few years are seemingly purpose-built to that end, with an easy ability to evade detection by unsuspecting humans.

In summary

Today’s popular language models are essentially fuzzing attacks on human social interactions. It is a mirror test, but for us and not for the ChatGPT algorithm — one can still presume the subject is failing the test. Leaving the curation of input data and output quality to 3rd parties is harmful to everyone. The old open-source saw comes to mind: “Anyone can review it and everyone assumes someone else is going to.” The abdication of responsibility is not leading us to stability, safety, or reliability. That said, a whole lot of money is being dumped into this latest buzzworthy topic and something is likely to come from it. If we’re lucky we’ll get something more than CRM automation and stop short of accidentally killing all humans while trying to improve the temperature tolerance of a soybean cultivar. Fortunately, extreme outcomes are very difficult to obtain in the physical world; it turns out the globe is mind-bogglingly large, and even old technologies are rarely universally adopted — even our most destructive ones.

Measurable Markers:

1. Superintelligent AI risk — it will not kill everyone inside 20 years. Hard things, like constraining an AI to prevent it from accidentally killing us, are not as hard as the AI getting timely and actionable feedback while pursuing such a path without us noticing the feedback loop.

2. Pace of adoption — technology substitution seems to follow a 40-year cycle; there are instances of deployment and use today, but occurrence won’t flip from rare to default across contexts before that cycle plays out.

3. Nations — not useful for constraining technology, but often in the way and able to (inadvertently) move the timeline in or out. They’ll be 5 years late to the party, and by the time policy is tested before the courts it’ll be 10 years past the deprecation of whatever initiated the inquiry.

4. AI will substitute for low-value human work — immediate. I think this is already happening on a broad scale and is being under-reported. In 5 years it will be hard to find full-time marketing and CRM content-writing work if you aren’t curating an AI’s content.

5. AI should take over necessary and tedious tasks. Some, like status reporting and AI-generated documentation, will come within the next year. Some, like filing your taxes, will be resisted by established market players and may take decades due to regulatory capture. This isn’t a tech problem anymore.

6. AI will continue replacing humans in dangerous environments. This is happening in private industrial settings and will spread slowly into more public settings. Expect AI-crewed / “lights out” trains and ports in Europe, China, and Japan in the next couple of years. Due to scale this isn’t a “next year” kind of thing, but declarations and permitting will be getting underway for such deployments.

7. AI-managed grid systems will deploy in Korea, China, and Europe, starting with industrial sub-grids over the next 1–5 years and moving more broadly into municipal grids over 5–15 years. The US will take 20 years to resolve grid-management dysfunction and another 5 to copy what’s been deployed elsewhere.

8. AI + CRISPR. Someone will do this inside 5 years, but their implementation will match their wisdom and it “won’t work,” to the benefit of us all. We’ll get some papers, controversy, and retractions on an accelerated cycle — thanks to the language-model paper mills piling on to the new hotness.

9. Gendered AI. This will probably prove too controversial for funding, but affinity-group or experience-tuned AIs will become a thing (add a materials scientist and a tank commander to your executive team for $5.99 a month).

10. Information overload. We’re already here. We’ll get local equivalents of CAN-SPAM laws in 5 years (probably from Washington State, due to Microsoft’s and Amazon’s self-interest in squelching potential rivals); enforcement will be challenging in a global environment, and no-AI-solicitation lists will be a thing like the Do-Not-Call list. I don’t think we’ll get anything but very narrow AI fact-checkers for general AI model outputs inside 20 years.
