Systems Thinking: Field Guide
A practical reference for seeing the system you're actually in
How to Use This Guide
This is organized by moments you'll actually face, not by theory. When a problem keeps coming back, when someone's about to get blamed for a system failure, or when you're about to optimize one number and forget the others, find the relevant section and read it before you act.
Each section has a few high-signal principles, a short explanation with a specific example, and a source if you want to dig in. Most of the examples come from my own teaching and work life, because systems thinking only becomes real when it's attached to something you've actually lived through.
When You Might Open This Guide
- A problem you thought you fixed has come back in a new shape, and you're about to apply the same fix harder.
- Your group wants to blame one teammate for a missed deadline, and you're not sure that's fair.
- You're about to optimize one number for your project (downloads, accuracy, revenue) and ignore everything else.
- You're designing an incentive for users, a feature flag, a grading policy, and you can't predict how people will actually respond.
- Your team's workflow feels fragile but nothing's obviously broken and you can't explain why you're nervous.
- You want to change something big (a team, a process, a company) and you don't know where to push.
When a Problem Won't Stay Fixed
Stop fixing the symptom. Diagnose the structure. If you clear a bug, resolve a merge conflict, or calm a tense meeting and the same thing shows up two weeks later in a different costume, the thing you fixed was not the thing that was wrong. Systems generate their own problems. The dynamic that produced the first version of the problem is still running, so it will produce a second version, and a third. You have to go upstream from the symptom to the structure.
Last spring my Adobe and BYU workloads looked fine for months. Nothing about me changed the week it all started to fall apart. Demand spiked, travel ate my presence, and the single shared resource (me) got exposed. Working harder wouldn't have fixed it, because the problem was never effort. The problem was that both systems pulled on the same stocks with no backup.
Source: Jason Johnson, Lecture 3 speaker notes; Systems Thinking in Action (ChatGPT conversation, Jan 2026)
A system is more than the parts. A system is the parts, the connections between the parts, and what the system is trying to do. A UML diagram shows you the parts and arrows. Systems thinking asks what happens to that diagram once real people, real incentives, and real time get involved. Two teams with the same org chart can behave completely differently depending on who talks to whom, what gets measured, and what the unstated goal actually is.
In the fish game we played in class, the instructions were identical for every table. The behavior that emerged was not. That was the system at work, not the rules.
Source: Lecture 3 slide 9 ("What is a System?") and notes; slide 12 fish game
Your reasoning can feel logical even when it's wrong. When key parts of a system are missing from your mental model, you can draw perfectly valid conclusions from an incomplete picture and still be completely wrong. The failure feels like "I thought about this carefully." That's why the right move, when something keeps breaking, is not to think harder. It's to ask what you're not seeing yet.
Source: Lecture 3 slide 9 notes; Lecture 1 callback to mental models
When Someone Is About to Get Blamed
Don't blame people. Blame the design. Most failures don't happen because people are stupid or unethical. Almost always, it's because they don't understand the systems they're inside of, or because the system itself is rewarding behavior no one would admit to wanting. If you punish the person and leave the structure alone, the next person in that seat will do the same thing.
Thomas More said it plainly: if you let your people be poorly educated and their habits corrupted from infancy, and then punish them for the crimes those conditions produced, you're first making thieves and then punishing them. That line is on slide 15 of the lecture for a reason. Every time a group project goes sideways and one person gets scapegoated, remember it.
Source: Lecture 3 slide 15 ("Don't Blame People"); Thomas More, quoted in the deck
Good people inside bad systems cause real damage without knowing it. Ben Goldacre's whole case in Bad Pharma is that the people running drug trials are not villains. They're scientists inside a set of publication incentives, trial registration rules, and regulator relationships that quietly push the whole field toward overstating benefits. Nobody planned it. The system did. The same thing happens inside companies and classrooms that are "full of good people" but still keep producing bad outcomes.
Source: Goldacre, Bad Pharma (Audible notes: "This book is not about bad people. In fact good people who are put in perversely designed systems can cause a lot of damage even without knowing it.")
Behavior you don't like is usually a signal, not a character flaw. When you see someone gaming a rule, dragging their feet, or cutting a corner, your first question should not be "what's wrong with them?" It should be "what is the system rewarding right now that makes this the sensible move?" Once you see that, you can change the reward instead of arguing with the person.
Source: Lecture 3 slide 15 notes ("rational people, acting reasonably, can still create bad outcomes when the system is poorly designed")
When You're About to Optimize One Number
People don't optimize the whole system. They optimize the part they can see. This is bounded rationality. It isn't stupidity, and it isn't laziness. It's what you get when a real human, with limited time and limited view, zooms in on the slice of the problem they happen to have a handle on, and then treats that slice as the whole thing. It usually works at first, which is what makes it dangerous. Confidence keeps climbing while reality quietly drifts.
If your group is chasing one metric for the semester project, ask yourself what stock that metric is connected to, and what else is draining because you're focused there.
Source: Lecture 3 slide 14 ("System Traps: Bounded Rationality") and notes
When the metric becomes the goal, the goal quietly dies. This is rule beating. People optimize against the rules of a system, not the purpose of the system. The rules technically get satisfied, but the outcome drifts away from what anyone actually wanted. Scores go up, learning goes down. Sales numbers hit target, customer trust erodes. The system is "working as designed," and that's the problem.
Publication bias in medical research is rule beating at planetary scale. Studies that should have been published never were, because the incentives around publishing rewarded positive results. Over decades, that quietly skewed the entire evidence base doctors use. Nobody lied. The rules got beaten.
Source: Lecture 3 slide 14 (Rule Beating) and notes; Goldacre, Bad Pharma (Audible notes on publication bias)
Slack is not waste. Slack is what keeps you from finding out your design is broken. A system that looks fine might just have enough buffer to hide the structural problem. The day the buffer runs out is the day everyone acts surprised. If you're feeling nervous about a workflow that "technically works," don't wait for the shock to prove you right. Go looking for the single points of failure now.
Source: Jason Johnson, Systems Thinking in Action (ChatGPT, Jan 2026): "This system worked not because it was perfectly designed, but because it had slack. Slack hides design flaws."
When You Want to Change a System
Push at the right depth, not the most visible spot. Most people try to change systems at the surface, because the surface is the part they can see. Changing a parameter or a threshold rarely moves much. Changing a feedback loop or an information flow moves more. Changing the goal, the rules, or the underlying paradigm is where the real leverage lives, and it's also the scariest place to push.
When my schedule imploded, the tempting moves were all surface. Work 70 hours. Cancel lectures. Record everything and hope nobody notices. None of those fix the structure. The move that actually worked was a paradigm change: I stopped being the single person delivering value and started designing a system that could deliver value without me in the room. That's how Casey got pulled in as a guest instructor.
Source: Lecture 3 slide 16 (Leverage Points); Jason Johnson, Systems Thinking in Action (ChatGPT, Jan 2026)
Information flow is one of the best leverage points you have. Who knows what, and when, often matters more than who has authority. A team that surfaces problems early behaves completely differently from a team that hides them until Friday. If you can't change the rules or the goal of a system, see if you can change what gets seen, by whom, and how fast. That alone will often fix things that looked structural.
Source: Lecture 3 slide 16 and slide 19 ("Information flow is leverage"); speaker notes
You're not increasing effort. You're removing yourself as the single point of failure. This is the difference between a buffer fix and a structural redesign. A buffer fix hides the problem for a while. A structural redesign changes what the system is capable of surviving. When you look at any intervention you're considering, ask: "If I stepped away tomorrow, would this still hold?" If the answer is no, you built a buffer, not a system.
Source: Jason Johnson, Systems Thinking in Action (ChatGPT, Jan 2026)
Don't just admire the system. Classify the move. A good habit, borrowed from the lecture notes, is to name the leverage type out loud when you're picking a solution. "I'm changing a parameter." "I'm changing a rule." "I'm changing the paradigm." This forces you to notice when you're doing surface work and calling it strategy.
Source: Lecture 3 slide 16 notes; Systems Thinking in Action (ChatGPT, Jan 2026)
When You Need to Just See the System at All
If you can't see the system, you can't see the space of possible actions. This is the line from slide 8, and it's the whole reason systems thinking is worth practicing. Before you can choose a smart move, you have to be able to look at the situation and see more than the people involved and the task in front of you. You have to see stocks, flows, feedback loops, and the goal the system is actually pursuing (which is often not the goal it claims to be pursuing).
In the fish game, almost nobody questioned the rules themselves. Everyone played hard inside the rules and most tables ran the lake dry. That's what it feels like to be stuck inside a system you can't see.
Source: Lecture 3 slide 8; speaker notes
Sit with an LLM and model the system out loud. This is the homework prompt from the lecture. Take a system you're inside of (your project, your team, your schedule) and have an LLM help you draw it as a diagram in markdown or mermaid. Name the stocks. Name the flows. Name the feedback loops you can see, and the ones you suspect. Then ask it to help you find the leverage points. You'll learn more about the system in 20 minutes of this than in a week of staring at it in your head.
Source: Lecture 3 slide 21 ("AI Conversation This Week")
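If you want a starting point before you open the chat, here is a minimal mermaid sketch of the kind of diagram that prompt is asking for, built around the overloaded-schedule example from earlier in this guide. The specific node names (the two demand flows, the recovery flow, the lecture-quality link) are illustrative, not from the lecture; swap in the stocks and flows of your own system.

```mermaid
flowchart LR
    %% Stocks: things that accumulate and drain over time
    Energy["Stock: instructor time and energy"]
    Trust["Stock: student trust"]

    %% Flows: what fills or drains each stock
    Adobe["Flow: Adobe demand"] -->|drains| Energy
    BYU["Flow: BYU teaching load"] -->|drains| Energy
    Recovery["Flow: recovery and slack"] -->|refills| Energy

    %% Feedback loop: less energy, weaker lectures, eroding trust,
    %% more questions and rework, which drains energy further
    Energy -->|shapes| Quality["Lecture quality"]
    Quality -->|builds or erodes| Trust
    Trust -->|questions and rework| BYU
```

Once the diagram exists, the leverage questions get concrete: which arrow would you cut, which would you add, and which stock has no refill flow at all.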
Stay long enough to notice the second-order effects. Most failures come from acting on the first-order read and shipping before you see what else moved. Before you lock in a decision, ask: "If this works, what else changes? If this fails, what else breaks?" You don't need a formal model. You just need to refuse to stop at the first consequence.
Source: Lecture 3 slide 19 ("Judgement and framing remain scarce"); Varol, Think Like a Rocket Scientist (Audible notes: "The problem was not the error. It was in the systems engineering and the failure in the checks and processes the engineers needed to do.")
The Anatomy of a System (Quick Reference)
| Piece | What it is | What to look for |
|---|---|---|
| Elements | The visible parts (people, tools, teams, resources) | Who and what is in this system |
| Connections | How the elements interact | Information flow, money flow, trust, authority |
| Purpose | What the system is actually trying to do | The real goal, not the stated goal |
| Stocks | The memory of the system (things that accumulate) | Time, energy, trust, cash, attention, reputation |
| Flows | What moves into and out of a stock over time | Hiring rate, burn rate, enrollment, velocity |
| Feedback loops | How the state of a stock changes its own flows | Reinforcing loops (runaway), balancing loops (stabilizing) |
| Leverage points | Where a small push changes a lot | Parameters < Flows < Feedback < Info < Rules < Goals < Paradigm |
The further right you go along the chain in that last row, the more powerful (and scarier) the intervention. Most people stop at parameters because parameters are what they can see.
Source: Lecture 3 slides 10, 11, 16
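The feedback-loop row is the one people most often skim past, so here is a minimal mermaid sketch of the two loop types, with made-up labels chosen only for illustration: a reinforcing loop that feeds itself, and a balancing loop where the level of the stock pushes back on the flow that changes it.

```mermaid
flowchart LR
    subgraph R["Reinforcing loop: runaway"]
        Rep["Stock: reputation"] -->|more referrals| NewWork["Flow: new projects"]
        NewWork -->|delivered work grows it| Rep
    end

    subgraph B["Balancing loop: stabilizing"]
        Bugs["Stock: open bugs"] -->|pressure to fix| FixRate["Flow: fixes per week"]
        FixRate -->|closes bugs| Bugs
    end
```

When you read your own diagram, the question the table is pointing at is simple: for each loop you can see, is it reinforcing or balancing, and which stock is it acting on?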
Go Deeper
These are the books that sit behind this guide. Read any one of them and you'll start seeing systems in places you used to see individual people.
| Book | Why It Matters |
|---|---|
| Thinking in Systems by Donella Meadows | The canonical book on this topic. Short, clear, and the source of the stocks/flows/leverage vocabulary the lecture uses. A good first stop if the ideas here clicked for you. |
| Bad Pharma by Ben Goldacre | A brutal case study in how good people inside a badly designed system produce bad outcomes at scale. The clearest "blame the structure, not the person" example you'll read. |
| The Coddling of the American Mind by Lukianoff and Haidt | An entire argument that second-order effects on a generation of students came from parenting and campus incentives nobody designed on purpose. Systems thinking applied to culture. |
| Atomic Habits by James Clear | "You do not rise to the level of your goals. You fall to the level of your systems." The best popular statement of why personal change is really systems design. |
| Think Like a Rocket Scientist by Ozan Varol | Varol's chapter on the Mars Climate Orbiter is a master class in how a single unit conversion was never really the problem. The systems engineering was. |
| Factfulness by Hans Rosling | A full book on how our mental models of the world drift out of sync with reality. You can't fix a system you can't see clearly. |