With the AI revolution seemingly in full swing, there is a lot of talk about AI generating code, or AI generating PowerPoint. But even now I can tell when I am reading something generated by AI (even if I can still sense the original writer), and I can tell when an image is generated by AI. I wonder: is this what we need AI to do, or do we need to rethink tools and communication wholesale?
AI Coding is the wrong prompt
I am seeing, and thinking, a lot about AI tooling around code: code generation, productivity analysis, agile optimization. Software built by machine will not be an exercise in optimizing existing workflows but in creating new workflows altogether. Instead of asking AI to create code, we should ask AI to create software. I don't care how: maybe use binary, maybe use slime molds; we don't have to be prescriptive. I find this analogous to the moment when we realized we could build faster webpages by working around the DOM, with frameworks like React or languages like Elm or WebAssembly. There is a framework I am trying to remember that did away with the DOM entirely, but I can't recall it.
I'm thinking about what post-code-driven software development will look like.
Rethinking the OS, the container, the packaging system (Nix)
Much of our computer stack, as users, reflects the systems humans have built for themselves. Operating systems interoperate with low-level input/output devices, such as video or keyboards. When I prompt GPT with questions about changing how software is developed or used, there is always a response about multimodal generation (sound, video, 3D, etc.). But truly multimodal would be generating physical objects: 3D printing, plotter/brush paintings, or chemical synthesis and CRISPR-style gene editing.
In Sanskrit there is the idea that if you say a word properly, whatever the word represents will appear. A friend asked: could AI do that?
I did ask ChatGPT: what would a package manager, or a container, designed from the ground up by AI look like?
And the responses made me consider that perhaps these tools will be unnecessary with LLMs: code that auto-generates its own package and code dependencies, device drivers, and so on.
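To make that concrete, here is a minimal sketch in Python of what "the LLM is the package manager" might look like: scan a source tree for imports and ask a model to emit a pinned dependency manifest directly, no resolver required. The `llm()` function is a placeholder standing in for any model API, and the prompt is hypothetical; this is a sketch of the idea, not a real tool.

```python
import ast
from pathlib import Path

def llm(prompt: str) -> str:
    """Placeholder for a real model call (any chat-completion API).
    Stubbed so the sketch runs standalone."""
    return "requests==2.32.3\nnumpy==2.1.0"

def imports_in(tree_root: str) -> set[str]:
    """Collect top-level module names imported anywhere under tree_root."""
    names: set[str] = set()
    for path in Path(tree_root).rglob("*.py"):
        module = ast.parse(path.read_text())
        for node in ast.walk(module):
            if isinstance(node, ast.Import):
                names.update(alias.name.split(".")[0] for alias in node.names)
            elif isinstance(node, ast.ImportFrom) and node.module:
                names.add(node.module.split(".")[0])
    return names

def generate_manifest(tree_root: str) -> str:
    """Ask the model to act as the 'package manager': map imports to
    pinned, mutually compatible distributions."""
    prompt = (
        "These modules are imported by a Python project:\n"
        f"{sorted(imports_in(tree_root))}\n"
        "Emit a pinned requirements.txt with compatible versions."
    )
    return llm(prompt)

print(generate_manifest("."))
```

In practice you would still verify the model's output against a real package index, but the point stands: the resolver collapses into a prompt.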
I'm thinking about how post-stack technology will be built.
LLM Principles – Beyond LLM + RAG + Agent (LRA) Framework
Right now I am trying to understand, as an engineer and a software leader, how we should think about LLMs and AI. The faster, better LLM development will be done by the mathematicians, hardware engineers, and materials scientists, perhaps in collaboration with LLMs themselves. But what else needs to happen to make these developments truly revolutionary? Right now we all seem to be talking about agents, software that acts on our behalf, and RAG, retrieval (from vector or graph databases) that augments the core LLM engines with domain expertise.
I used to think that AI is how we back into an autonomous and decentralized web. We may have the core LLM + RAG + Agent (LRA) framework, but is this the end? Obviously not.
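For reference, the whole LRA pattern fits in a few lines. A toy sketch follows, with a stubbed `llm()` standing in for a real model API and a naive keyword lookup standing in for a real retrieval store; every name here is illustrative, not a real library.

```python
# Toy LLM + RAG + Agent (LRA) loop: retrieve context, let the model
# pick an action, execute it, feed the result back in.

DOCS = {
    "nix": "Nix builds packages in isolation from pinned inputs.",
    "rag": "RAG injects retrieved documents into the model's prompt.",
}

TOOLS = {
    "search_docs": lambda q: DOCS.get(q, "no match"),
}

def llm(prompt: str) -> str:
    """Placeholder model call; stubbed to pick the search tool once,
    then answer, so the loop runs standalone."""
    return "ACTION search_docs nix" if "ACTION" not in prompt else "DONE: Nix pins inputs."

def retrieve(query: str) -> str:
    """The R in LRA: keyword lookup standing in for a vector/graph store."""
    return " ".join(text for key, text in DOCS.items() if key in query.lower())

def agent(question: str, max_steps: int = 3) -> str:
    """The A in LRA: a loop where the model either calls a tool or finishes."""
    prompt = f"Context: {retrieve(question)}\nQuestion: {question}"
    for _ in range(max_steps):
        reply = llm(prompt)
        if reply.startswith("DONE:"):
            return reply.removeprefix("DONE:").strip()
        _, tool, arg = reply.split(maxsplit=2)
        prompt += f"\nACTION {tool} {arg} -> {TOOLS[tool](arg)}"
    return "gave up"

print(agent("How does nix pin dependencies?"))
```

Stripped down like this, the framework is just a prompt, a lookup, and a loop, which is exactly why it feels like a starting point rather than an end.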
What I am thinking about is this: what are the principles of LLMs, and what are the principles of a personal LLM versus a collective LLM?
End Note
While I was asking ChatGPT questions related to these thoughts, I kept getting responses about intent and about goals. This made me think about the language of Operations Research and Systems Design pioneered by Norbert Wiener and Stafford Beer.
The difficulty in these systems is how to appropriately model both the current state and the future state. What level of resolution/detail do we need to generate the future state from the current state? Do we have enough vision to envision a future state in detail, or is it hazy and vague?
A future powered by AI that closes the gap between the current and the future state will have to answer these questions first.
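To make the cybernetic framing concrete, here is a toy feedback loop in the spirit of Wiener's work: measure the gap between current and target state, apply a correction, repeat. It is a sketch of the idea only, not a claim about how Wiener or Beer would model it, and the gain and tolerance values are arbitrary.

```python
# Feedback loop closing the gap between a current state and a future
# (target) state: measure the error, apply a proportional correction,
# re-measure, repeat.

def close_gap(current: float, target: float, gain: float = 0.5,
              tolerance: float = 1e-3, max_steps: int = 100) -> float:
    for step in range(max_steps):
        error = target - current          # how far away the future state is
        if abs(error) < tolerance:        # close enough: the gap is closed
            print(f"converged in {step} steps")
            break
        current += gain * error           # act on the system, then loop
    return current

print(close_gap(current=0.0, target=10.0))
```

The resolution/detail question above shows up here as the fidelity of `current` and `target`: the loop can only work as well as our measurement of both states.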