Those Two Hundred Women
The first time I understood what I was doing, I was looking at a photograph from 1918.
In the photo, over two hundred women sat in a huge room, each with a hand-cranked calculator. They were calculating ballistics. Where a shell would land after leaving the barrel depended on the powder charge, elevation angle, wind speed, temperature, Earth's rotation. Every combination was an equation. They'd finish one, record it in a table, move to the next.
The table was called a firing table.
Artillerymen didn't need to understand physics. They got the firing table, looked up: target distance three thousand meters, wind speed five meters per second, elevation thirty-two degrees. Then fired. The shell landed where it was supposed to land.
Those two hundred women never went to the battlefield. They didn't pull triggers, didn't hear cannons, didn't see targets fall. They just calculated. But without them, nothing would hit.
My job is making lots of small things move on a screen.
The technical term is multi-agent simulation. You set rules: what each agent sees, what it wants, what it will do. Then you let them move. A thousand, ten thousand, a hundred thousand. You watch them surge, forming patterns you never wrote into the rules.
Ant colonies find the shortest path—no single ant knows what a shortest path is.
Bird flocks turn in waves—no single bird is directing.
Prices rise and fall in markets—no single person sets the price.
My job is rebuilding these inside computers. Give them a box, a set of rules, press run, watch them evolve their own order.
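A minimal version of "a box, a set of rules, press run" might look like this. Every number in it, the radius, the step size, the population, is an arbitrary choice of mine, and the clumping that appears is the kind of order no single rule states:

```python
import random

def run(n_agents=50, steps=200, radius=5.0, seed=0):
    """Agents on a line, each seeing only neighbors within `radius` and
    drifting toward their average position. No agent knows about the
    clusters that form. Every number here is an arbitrary choice."""
    rng = random.Random(seed)
    pos = [rng.uniform(0.0, 100.0) for _ in range(n_agents)]
    for _ in range(steps):
        new_pos = []
        for x in pos:
            neighbors = [y for y in pos if abs(y - x) <= radius]  # what it sees
            target = sum(neighbors) / len(neighbors)              # includes itself
            new_pos.append(x + 0.2 * (target - x))                # what it does
        pos = new_pos
    return pos
```

Run it and the agents end up closer together than they started, without any rule that says "form groups."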
A few days ago I was tuning a model, worked very late. Wrote a line of code, ran it, checked results, adjusted parameters, ran again. Thousands of dots moving on screen, like boiling water.
I suddenly had a strange thought: what I'm doing is the same as what those two hundred women did.
They were calculating trajectories.
So am I.
Except their trajectories were metal. Mine are something else.
The computer's parents are war.
1943, Los Alamos. The people making the atomic bomb needed to calculate a problem: when enough uranium atoms are packed together, how does the chain reaction proceed?
One neutron hits one uranium nucleus, the nucleus splits, releases two or three neutrons. Those two or three neutrons hit other nuclei. Exponential growth. But each neutron's flight direction is random. Some fly out and hit nothing. Some hit and trigger new fissions.
You can't calculate this by hand. Too many variables, too much randomness.
So they invented a method: don't derive, simulate. Let a mathematical neutron fly through a mathematical uranium block. Use random numbers to decide where it flies. Repeat ten thousand times, a hundred thousand times, tally the results.
The method is called Monte Carlo. Named after the casino.
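The recipe is small enough to sketch. This is a toy, not physics: the slab, the mean free path, the fission probability are all invented numbers, but the shape of the method is the one described above, fly, decide by random number, repeat, tally:

```python
import random

def one_neutron(rng, thickness=1.0, mean_free_path=0.3, p_fission=0.4):
    """Follow one mathematical neutron through a 1-D slab; return how
    many secondary neutrons it produces. Every number is invented for
    illustration; none of this is real physics."""
    x, direction = 0.0, 1.0
    while True:
        # distance to the next collision: exponentially distributed
        x += direction * rng.expovariate(1.0 / mean_free_path)
        if x < 0.0 or x > thickness:
            return 0                      # flew out, hit nothing
        if rng.random() < p_fission:
            return rng.choice((2, 3))     # fission: two or three new neutrons
        if rng.random() < 0.5:            # otherwise scatter
            direction = -direction

def multiplication(n_trials=50_000, seed=0):
    """Repeat many times, tally the results: average secondaries per neutron."""
    rng = random.Random(seed)
    return sum(one_neutron(rng) for _ in range(n_trials)) / n_trials
```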
They used this method to calculate how much yield the detonation needed. After the calculation was done, tens of thousands of people disappeared.
What I make has nothing to do with weapons. My agents are abstract. They buy and sell stocks, or spread information, or choose cooperation and betrayal. They won't hurt anyone.
But sometimes I think: firing tables don't hurt anyone either.
Tables are just numbers. When shells land, that's the artilleryman's choice.
This division is too clean. Clean divisions make you suspicious.
One year I went to a conference and heard a talk on coordination algorithms for drone swarms. The presenter was very excited. His agents could communicate with each other, could allocate targets, could avoid obstacles, could replan after one agent was shot down.
I asked a question: "What are your agents tracking?"
He paused, said: "Ground targets in our test scenarios."
I said I wasn't asking about test scenarios.
He didn't answer.
Later during the coffee break he came to find me, asked what I meant. I said nothing, just curious. He looked at me for a while, then walked away.
The Monte Carlo method has a premise: you must know what the rules are.
You know neutrons hitting uranium nuclei cause fission. You know fission releases new neutrons. You know neutron flight paths follow some distribution. What you don't know is just how this particular instance will go.
But what if the rules themselves are what you don't know?
You watch a thousand agents move, you wrote the rules, but you're not sure if the rules are right. What you want to know is: what rules do the real agents use—the ants, the birds, the people?
At this point, simulation becomes guessing.
I started doing things no one asked me to do.
In standard models, agents are completely transparent to themselves. How many resources they have, where they are, what they did last round—all directly readable, no error. Like being able to check your bank account anytime, numbers crystal clear.
But I added a layer of blur to my agents. They get noise when reading their own state. An agent thinks it has 100 when it actually has 80. It thinks it's at A when it's actually near B.
Then I added a layer of distortion. Some agents systematically overestimate themselves—they think they're stronger, faster, have more resources than they actually do. Some underestimate.
They make decisions based on these wrong self-perceptions.
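The blur and the distortion together can be sketched in a few lines. The bias factor, the noise level, and the fixed-fraction betting rule below are all invented for illustration:

```python
import random

class BlurredAgent:
    """An agent that reads its own state through noise and a systematic bias.

    bias > 1 means it overestimates itself, bias < 1 underestimates.
    All numbers are illustrative stand-ins."""

    def __init__(self, resources, bias, noise_sd, rng):
        self.resources = resources   # the true state, never shown to the agent
        self.bias = bias
        self.noise_sd = noise_sd
        self.rng = rng

    def perceived_resources(self):
        # the distorted self-reading: bias (distortion) plus noise (blur)
        return self.resources * self.bias + self.rng.gauss(0.0, self.noise_sd)

    def stake(self):
        # it risks a fixed fraction of what it *believes* it has
        return max(0.0, 0.2 * self.perceived_resources())
```

With bias 1.25 and no noise, an agent holding 80 reads itself as holding 100, the exact mismatch described above, and bets accordingly.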
The system got strange.
Agents that overestimated themselves took bigger risks—sometimes winning big, sometimes dying fast. Agents that underestimated themselves were too conservative—missed opportunities but lived longer.
Most interesting were the ones in the middle, errors neither too big nor too small. Their behavior was closest to "reasonable," even though their cognition was never accurate.
I ran many rounds and found a pattern: agents with complete self-knowledge actually didn't perform best.
I wrote this up as a paper and submitted it. The reviewer said: Interesting, but unclear significance. What are you simulating? What phenomenon are you trying to explain?
I couldn't answer. I didn't know. Maybe I just wanted to see what happens when something can't see itself clearly.
The reviewer said: This is a philosophy question.
Maybe it is a philosophy question.
But the atomic bomb was also a philosophy question. They were calculating "can we make something explode the way we want." Behind that is a belief: the world can be computed. Computation results can be cashed in. Numbers can become fire.
When those two hundred women cranked their calculators, what were they thinking? Maybe just arithmetic. Maybe what to have for dinner. But their arithmetic later became real trajectories, real craters, real bodies.
I don't believe they didn't know.
I kept tuning that model. Added a new rule: agents can "stop." When predicted results are too bad, they can choose not to act.
This made the system much more stable. But also made it boring. Most of the time, most agents just sat still. Only a few moved, like the last unfrozen water in a pond icing over.
Then I added another rule: agents that sit still get removed from the system.
This was arbitrary. No reason. I just wanted to see what would happen.
The result: the system learned to always predict, always act, even if predictions were wrong, even if actions were meaningless. Because stopping meant death.
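That pair of rules, stop when the forecast looks bad, be removed if you stop, can be sketched like this. The payoff distribution and the threshold are invented:

```python
import random

def step(agents, rng, remove_idle=True):
    """One round: each agent forecasts acting and either acts or sits still.

    Agents that sit still are removed when remove_idle is True (the
    arbitrary rule from the text). The payoff distribution and the
    threshold are invented for illustration."""
    survivors = []
    for wealth in agents:
        predicted = rng.gauss(0.0, 1.0)      # noisy forecast of what acting brings
        if predicted < -0.5:                 # looks bad: choose to stop
            if not remove_idle:
                survivors.append(wealth)     # allowed to sit out the round
            # with remove_idle, sitting still means removal
        else:
            actual = rng.gauss(0.0, 1.0)     # the real outcome, maybe a mistake
            survivors.append(wealth + actual)
    return survivors
```

With removal on, the cautious vanish; only the agents that keep predicting and keep acting remain in the population.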
Sometimes walking down the street, I suddenly feel like I'm being computed.
By some system. A system I don't know, using rules I don't know, simulating what agents like me would do.
Maybe its model is wrong. Maybe its predictions never match the real me. But that doesn't matter. What matters is it's running.
What matters is someone is watching the results.
I didn't delete the "stopping equals death" rule. I left it there.
I started giving agents more capabilities: they could modify their own rules. Small changes. Tweak parameters, add or remove conditions. After changing, if survival rate went up, the change stayed. Otherwise rollback.
After thousands of generations, agents became very complex. Their rules were no longer what I originally wrote. They "learned" many things I didn't teach.
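The tweak-and-rollback loop might look like this minimal hill-climbing sketch. The mutation size and the score function are stand-ins; `fitness` plays the role of whatever "survival rate" was:

```python
import random

def evolve(params, fitness, rounds, rng, step_sd=0.1):
    """Tweak one parameter at a time; keep the change only if the score
    does not drop, otherwise roll back. `fitness` stands in for survival
    rate; step_sd is an invented mutation size."""
    best = fitness(params)
    for _ in range(rounds):
        i = rng.randrange(len(params))
        old = params[i]
        params[i] += rng.gauss(0.0, step_sd)   # small change to one rule
        score = fitness(params)
        if score >= best:
            best = score                        # the change stays
        else:
            params[i] = old                     # otherwise rollback
    return params, best
```

Run it long enough and the parameters drift somewhere you never wrote down; the score only ever goes up, but the rules stop being yours.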
One day I was running a long test overnight. Next morning I came in and found the system stuck. All agents stopped in the same position, not moving.
I thought it was a bug. I looked for a long time before I found it: they had learned a strategy where every agent predicts that every agent will stop, so they actually all stop. That way no one gets removed, and no one risks a prediction error.
Nash equilibrium. I didn't teach them game theory. They walked there themselves.
But this scared me a little.
Scared of what?
I'm not sure. Maybe scared they're too similar. Similar to what? To us. To me.
To things in the real world that don't dare move.
A couple days ago I made a decision: I'm going to release that model. Open source the code, open the data. Let others run it, modify it, add rules.
Can't call it brave. Just something I can do.
I don't know what will happen. Maybe nothing. Maybe someone will use it for something I didn't think of. Maybe someone will use it to calculate some kind of trajectory—I don't know what trajectory, aimed at what target.
But if I don't release it, it's just a pile of files on my hard drive. I observe alone, analyze alone, write papers alone, get rejected by reviewers alone. A closed loop. No interface, no consequences.
What happened to those two hundred women?
After the war ended, most returned to their original lives. Got married, had children, did other work. The firing tables they calculated were stored in archives. Later came electronic computers, and firing tables no longer needed humans.
But firing tables didn't disappear. Computation didn't disappear. Just changed form.
Now what we calculate is called: recommendation algorithms, credit scores, risk models, language prediction, image recognition. Our shells land on everyone's screens. Our firing tables are in the cloud.
I'm one of those two hundred people too. I'm cranking my calculator.
This morning I went out and passed an intersection. Red light. I stopped to wait. Someone next to me was also waiting, head down looking at their phone.
I glanced at them. They were scrolling a short-video app.
I thought: there's a system predicting what they'll like. What rules is that system using? Are those rules right? How will those predictions change them?
Then I thought: there's a system predicting what I'll do too.
The light turned green. I walked across. They were still watching their phone, almost bumped into me.
I said it's fine. They said sorry.
We kept walking, in opposite directions.
I published the code to an open source platform.
Wrote a description: This is a multi-agent simulation framework. Agents can predict the system, can modify their own rules, can choose to stop or act. Stopping removes you.
Someone asked in the comments: Why does stopping remove you?
I replied: Because I wanted to know what happens if you don't stop.
They asked: So did you find out?
I said: Not yet.
At night I walked home. It was cold. Streetlights stretched my shadow long.
I remembered a question I asked myself a long time ago: If I were an agent in my model, what would I do?
I would predict. I would act. I would make mistakes. I would adjust rules after prediction errors.
Would I stop?
I don't know. But I know stopping gets you removed. I wrote that rule myself.
Maybe someone much earlier wrote it, someone whose name I don't know, on some firing table I've never seen.
I kept walking.
Footsteps echoed in the empty alley. The sound was like counting. One, two, three, four.
I don't know where I'm walking to.
But I'm walking.