AGI is already here.
I didn’t meet it in a model card or a benchmark chart. I met it in a browser tab, watching Deep Research work.
I had given it a question that shouldn’t have a clean answer: something sprawling and vague, the sort of task you hand to a junior analyst and expect back in a week. I watched the trace in the corner of the screen. It opened a tab. It read. It discarded. It followed a lead, hit a dead end, backed up, and tried again. It critiqued its own logic. It rewrote its own thoughts. Within ten minutes, I received a report that was indistinguishable from the work of a domain expert.
It wasn’t just generating tokens. It was working.
Most people imagine AGI as a monolith: a single, towering mind, a God in a box. GPT-6. Opus 7. A brain with a version number. But that is not what it feels like up close. The thing that feels like AGI is not the static weights of a model. It is the machinery wrapped around it. It is the industrial system that can reliably teach a machine to improve at the messy, undefined work of being human.
In the literature, we call this RL post-training: reinforcement learning layered on top of a pretrained model. In practice, it is the assembly line of intuition.
Before Deep Research, we could only align models on things that were crystalline. Code. Math. Go. These worlds are brittle and binary. You run the test. It passes or it fails. The reward signal is a clean spike of voltage. The model climbs the gradient.
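To make that contrast concrete, here is a minimal sketch in Python of the clean spike: a binary reward paid out by a test harness. Every name here is illustrative, not any lab’s actual infrastructure, and a real pipeline would sandbox the execution.

```python
import os
import subprocess
import tempfile

def verifiable_reward(candidate_code: str, test_code: str) -> float:
    """Binary reward for a verifiable domain like code: the candidate
    either passes its unit tests (1.0) or it does not (0.0)."""
    with tempfile.TemporaryDirectory() as tmp:
        path = os.path.join(tmp, "attempt.py")
        with open(path, "w") as f:
            # Tests are plain asserts appended to the candidate, so any
            # failure surfaces as a non-zero exit code.
            f.write(candidate_code + "\n\n" + test_code)
        try:
            result = subprocess.run(
                ["python", path], capture_output=True, timeout=10
            )
            passed = result.returncode == 0
        except subprocess.TimeoutExpired:
            passed = False
    return 1.0 if passed else 0.0
```

No taste required, no judgment encoded: `verifiable_reward(model_output, "assert add(2, 2) == 4")` is either 1.0 or 0.0, and the gradient does the rest.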
But knowledge work is not crystalline. It is muddy.
There is no unit test for “explain the ethics of drone warfare” or “synthesize the history of Western philosophy.” For years, this mud kept the models out. You cannot optimize what you cannot score.
Deep Research stepped into the mud and carved a path.
You ask it to evaluate the failure modes of a new reactor design, and it doesn’t regurgitate a training set. It goes out and reasons. It combs through noise, weighs contradictions, and synthesizes truth. OpenAI didn’t just train a model to predict the next word; they trained a system to execute the process of research. They taught it how to explore, when to doubt, and when to stop. It feels strangely alive because it is mimicking the loop of human thought.
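Strip away the mystique and the trace I watched is a control loop. A sketch of its shape, where `llm`, `search`, and `read` are stand-ins I am inventing for illustration, not anyone’s real API:

```python
def research(question: str, llm, search, read, max_steps: int = 50) -> str:
    """Toy agentic research loop: explore, read, critique, stop."""
    notes: list[str] = []
    for _ in range(max_steps):
        # The model picks its own next move from everything gathered so far.
        action = llm(
            f"Question: {question}\nNotes so far: {notes}\n"
            "Reply with one of: search <query> | read <url> | critique | finish"
        )
        if action.startswith("search "):
            notes.append(search(action[len("search "):]))  # follow a lead
        elif action.startswith("read "):
            notes.append(read(action[len("read "):]))      # go deeper
        elif action == "critique":
            # Attack its own reasoning; dead ends get written off here.
            notes.append(llm(f"List the flaws in these notes: {notes}"))
        else:  # "finish", or anything unparseable: stop and write up
            break
    return llm(f"Write an expert report on: {question}\nNotes: {notes}")
```

The loop is trivial. What is not trivial is training a model to make those choices well, and that requires a way to score the final report.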
Underneath this emergent reasoning lies something almost insultingly simple.
Rubrics.
On the surface, rubrics look like bureaucracy. A checklist. Did it cite sources? Is the tone objective? Did it address the counter-argument? Check. Check. Check.
But stack a million of these checks together, and they undergo a phase change.
They stop being a checklist. They become a constitution.
They become the bridge between biological judgment and silicon execution. They are the map of what we believe “good” means. When we write a rubric, we are not just grading a test. We are encoding values. We are telling the model: This is what thoroughness looks like. This is what nuance feels like. This is what we value.
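Mechanically, that encoding is almost embarrassingly small. A toy sketch, assuming a `judge` callable that scores one criterion at a time in [0, 1]; in practice the judge is usually another model prompted with the criterion’s definition, and none of these names come from a published pipeline:

```python
def rubric_reward(report: str, rubric: dict[str, float], judge) -> float:
    """Score a report against a weighted rubric.

    `rubric` maps each criterion to its weight; `judge(report, criterion)`
    returns a score in [0, 1]. The weighted mean becomes the reward.
    """
    total_weight = sum(rubric.values())
    return sum(
        weight * judge(report, criterion)
        for criterion, weight in rubric.items()
    ) / total_weight

# The checklist from above, now legible to a loss function.
rubric = {
    "cites primary sources": 2.0,
    "maintains an objective tone": 1.0,
    "addresses the strongest counter-argument": 3.0,
}
```

The weighted mean is the number the gradient chases. Change a weight, and you change what the model becomes.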
The model does not just learn the answer. It learns the shape of the game we have designed for it.
We are drifting into a world where a small set of institutions will master this quiet, potent craft. They are taking the messy, ill-defined intuitions of the human experience and making them legible to a loss function. They are deciding what counts as truth. They are deciding what counts as quality. They are pouring the judgment of the world into checklists, and those checklists are becoming the gradient that the future follows.
The obvious question used to be: Who has the biggest model? The real question is: Who writes the rubrics?
Because once you can do this, once you can turn meaning into a trainable process, you don’t just build a chatbot. You build the lens through which the world is interpreted. You steer the systems that decide who gets the loan and who gets the job. You decide which scientific hunches get the grant and which news stories define the era. You dictate the invisible laws that will govern the next century of human thought.
AGI did not arrive as a press release. It arrived as a skill: the ability to turn any vague human desire into a rigorous training loop.
History is written by the victors, those who conquered the territory. The future belongs to the cartographers, those who draw the map.
It belongs to those who can design the rubrics, harvest the judgments, and run the training that aligns silicon with the soul. They won’t just own a model. They will own the definition of Truth.
AGI isn’t measured by its intelligence. It’s measured by the rules of our world that it learns to live by.