while(motivation <= 0)

Rebuilding a 1991 Windows Strategy Game from Assembly — Second Conflict in Python

Second Conflict is a 1991 turn-based space strategy game for Windows 3.x, written in Borland C++ by Jerry W. Galloway. Up to ten factions compete to conquer a galaxy of 26 star systems, dispatching fleets, managing planetary production, and grinding through attritional ship-to-ship combat. The executable — SCW.EXE — is a 16-bit Windows NE binary that has never had its source released. This post documents how we reverse-engineered the game's file formats, mechanics, and UI from scratch, then rebuilt the whole thing in Python and pygame.

6,034 lines of Python · 538 decompiled functions · 26 star systems · 15+ dialog screens

1. Starting Point — Why Ghidra?

The project began with a single goal: faithfully preserve the game's mechanics, not just make something inspired by it. That means reading the actual binary. Ghidra, the NSA's open-source reverse-engineering framework, handles 16-bit NE executables well enough to produce readable pseudo-C for most functions. We exported the full decompilation — 25,826 lines across 538 functions — into decomp_scw.txt and used it as the authoritative reference throughout.

The core workflow was: find a dialog or behavior in the running game, locate the corresponding Windows message handler in the decompilation (Ghidra labels them FUN_XXXX_YYYY), read the pseudo-C, and translate that logic into Python. Where the decompiler output was ambiguous we went back to the raw hex.

The original's dialog box identifiers (e.g. COMBATPAUSEDLG, REINFVIEWDLG) were recoverable from the NE resource table strings, which gave us reliable anchor points to search the decompilation.

2. Decoding the Save-File Format

Before writing any game logic we had to be able to read and write the original .SCN/.SAV save files. The loader function (FUN_1070_013f) reads ten sequential sections:

Section             Size (bytes)       Contents
Header              18                 Star count, sim-steps per turn, version 0x0300
Stars               99 × 26 = 2,574    Star records with TLV garrison entries
Fleets in transit   21 × 400 = 8,400   Fleet records; 0xFF = free slot
Players             variable           9-byte name + 27 × uint16 attributes
Event log           variable           Past-turn event strings

The remaining sections hold miscellaneous state.

Several false starts happened here. The player record layout was initially read backwards — attributes came before the name in our first pass, not after. Star 0 turned out to use different field offsets than stars 1–25 (its coordinates are at bytes 9–10 rather than 1–2). These were caught by diffing known scenario files against parsed output until every field matched.
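The star-0 quirk is easy to express in code. A minimal sketch of the offset handling (the helper name and the idea of a per-record accessor are ours, not the project's):

```python
def star_coords(record: bytes, index: int) -> tuple[int, int]:
    """Read a star's map coordinates from its raw record.

    Star 0 keeps its coordinates at bytes 9-10; stars 1-25 keep
    them at bytes 1-2, as described above.
    """
    off = 9 if index == 0 else 1
    return record[off], record[off + 1]
```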

TLV garrison encoding

Each star's garrison (the ships defending it) is stored as a sequence of 7-byte Type-Length-Value records. Each entry encodes a faction ID, ship type, and count. The parser walks these until it hits a terminator byte, building a list of GarrisonEntry objects that the engine then queries by ship type.

# From scenario_parser.py — reading one garrison entry
ship_type  = data[off]
faction_id = data[off + 1]
ship_count = struct.unpack_from('<H', data, off + 2)[0]  # little-endian uint16
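The surrounding walk can be sketched as follows. This is illustrative, not the project's exact code: the 0xFF terminator byte, the uint16 count width, and the unused trailing bytes in each 7-byte entry are assumptions, as is the `GarrisonEntry` field order.

```python
import struct
from dataclasses import dataclass

@dataclass
class GarrisonEntry:
    ship_type: int
    faction_id: int
    count: int

def parse_garrison(data: bytes, off: int):
    """Walk 7-byte TLV garrison entries until a terminator byte.

    Assumptions: 0xFF terminates the list, the count is a little-endian
    uint16 at offset +2, and bytes +4..+6 of each entry are unmodelled.
    """
    entries = []
    while data[off] != 0xFF:
        ship_type = data[off]
        faction_id = data[off + 1]
        count, = struct.unpack_from('<H', data, off + 2)
        entries.append(GarrisonEntry(ship_type, faction_id, count))
        off += 7                      # fixed 7-byte entry stride
    return entries, off + 1           # position past the terminator
```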

3. Extracting Original Bitmaps from the NE Executable

The game's artwork lives inside SCW.EXE and SCWTIT.DLL as NE resources. The complication: Windows NE DIB resources are stored without the 14-byte BITMAPFILEHEADER that modern tools expect. We wrote a parser in assets.py that walks the NE resource table directly:

  1. Read the NE header offset from the MZ stub at byte 0x3c.
  2. Follow the resource table pointer inside the NE header.
  3. Find entries of type 0x8002 (RT_BITMAP).
  4. Apply the alignment shift from the resource table header to get the true file offset and length.
  5. Prepend a synthesised BITMAPFILEHEADER (calculating the pixel-data offset as 14 + hdr_size + palette_entries × 4).
  6. Load the result via pygame.image.load(io.BytesIO(header + dib_data)).
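Step 5 — synthesising the missing BITMAPFILEHEADER — can be sketched independently of the resource walk. The field offsets below are standard BITMAPINFOHEADER layout; the helper name is ours:

```python
import struct

def synthesize_bmp(dib: bytes) -> bytes:
    """Prepend a BITMAPFILEHEADER to a headerless NE DIB resource.

    BITMAPINFOHEADER layout: header size at +0 (uint32), bit depth
    at +14 (uint16), palette entry count at +32 (uint32, 0 meaning
    "full palette for this bit depth").
    """
    hdr_size, = struct.unpack_from('<I', dib, 0)
    bit_count, = struct.unpack_from('<H', dib, 14)
    clr_used, = struct.unpack_from('<I', dib, 32)
    palette = clr_used or (2 ** bit_count if bit_count <= 8 else 0)
    pixel_off = 14 + hdr_size + palette * 4
    # 'BM' magic, total file size, two reserved words, pixel-data offset
    file_header = struct.pack('<2sIHHI', b'BM', 14 + len(dib), 0, 0, pixel_off)
    return file_header + dib
```

The result can be fed straight to `pygame.image.load(io.BytesIO(...))` as in step 6.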

Star sprites are stored as white-on-black 15×15 bitmaps, which made tinting trivial: pygame.BLEND_RGB_MULT multiplies each pixel by the player's faction colour, turning white into any desired hue. The title screen art (288×360) is pulled from SCWTIT.DLL and shown in the About dialog when the DLL is present; the dialog degrades gracefully to text-only otherwise.

# assets.py — tinting a white sprite to player colour
surf = base_sprite.copy()
tint = pygame.Surface(surf.get_size())
tint.fill(player_colour)
surf.blit(tint, (0, 0), special_flags=pygame.BLEND_RGB_MULT)
return surf

4. Game Engine

The engine lives under second_conflict/engine/ and is deliberately stateless — every function takes the GameState dataclass and mutates it in place, matching the original's single shared-memory model.

engine/
  turn_runner.py    — orchestrates one full turn
  combat.py         — warship attrition & combat records
  production.py     — per-planet ship production
  fleet_transit.py  — dispatch & advance fleets
  revolt.py         — loyalty decay & planet revolts
  events.py         — human-readable event log
  distance.py       — star-to-star travel time

Combat

The original game resolves combat as multiple rounds of attrition between the attacking warships (any fleet arriving at an enemy star) and the defending warships (always the star's current owner's garrison). Each round a random fraction of each side is destroyed — the exact formula derived from the decompiled _attrition function.

Combat produces a CombatRecord dataclass — attacker/defender factions, initial and final ship counts, a list of per-round (atk_hit, def_hit) tuples, and the winning faction. turn_runner.py returns these records alongside the event log so the UI can animate them.
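A schematic of the round loop, for flavour only — the per-round destruction fraction here is a `uniform(0.1, 0.3)` stand-in, not the real formula from the decompiled `_attrition` function, and the function name and return shape are ours:

```python
import random

def simulate_combat(attackers: int, defenders: int, seed: int = 0):
    """Run attrition rounds until one side is wiped out.

    Returns the per-round (atk_hit, def_hit) list plus survivors,
    mirroring the shape of the CombatRecord described above.
    """
    rng = random.Random(seed)
    rounds = []
    while attackers > 0 and defenders > 0:
        # Placeholder attrition: each side loses 10-30% per round.
        atk_hit = min(attackers, max(1, int(attackers * rng.uniform(0.1, 0.3))))
        def_hit = min(defenders, max(1, int(defenders * rng.uniform(0.1, 0.3))))
        attackers -= atk_hit
        defenders -= def_hit
        rounds.append((atk_hit, def_hit))
    winner = 'defender' if attackers <= 0 else 'attacker'
    return rounds, attackers, defenders, winner
```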

Bug we fixed: Early on, the code picked the faction with the most warships as the defender, which flipped attacker and defender when the star owner had fewer ships than the attacker. The fix was simple: the defender is always star.owner_faction_id, mirroring what the original does.

Ship types

The original game has seven ship types. One caused confusion during RE: planet type 'S' in the scenario file was initially labelled "Scout" in our model. Cross-referencing the production dialog switch-case (offset +0x55 in the star record) with the scout-launch code revealed that offset stores StealthShip counts — so planet type S produces StealthShips, not scouts. Probe ships fill the scout role.

ID  Name         Planet type
1   WarShip      W
2   StealthShip  S
3   Transport    T
4   Missile      M
5   Scout        C
6   Troopship
7   Probe        P

5. The UI — Translating Windows Dialogs to pygame

The original game is a classic Windows 3.x dialog-heavy application. Every interaction — viewing your planets, dispatching a fleet, reading combat results — happens in a modal dialog box. We translated each WNDPROC into a Python class inheriting from BaseDialog, which handles the common pattern of: draw a bordered panel, render text rows, handle mouse hover/click on buttons, close with a return value.

Original ID     Python class        Purpose
ADMVIEWDLG      AdminViewDialog     All owned planets with ship counts
SCOUTVIEWDLG    ScoutViewDialog     Intelligence on enemy/neutral systems
REINFVIEWDLG    ReinfViewDialog     Incoming friendly fleets
REVOLTVIEWDLG   RevoltViewDialog    Planets at revolt risk
COMBATPAUSEDLG  CombatPauseDialog   Continue / Skip All between rounds
COMBATWNDPROC   CombatAnimation     Animated per-round battle replay
FLEETVIEWDLG    FleetViewDialog     All fleets in transit
PRODLIMITDLG    ProdLimitDialog     Set production per planet type
UNRESTVIEWDLG   UnrestDialog        Loyalty across all factions

Combat animation

CombatAnimation is the most complex dialog. It replays a full CombatRecord visually: ship dots (using extracted sprites, tinted to each faction's colour) are scattered across a split battle area, and each combat round plays out as a phase sequence:

def _build_phases(self):
    phases = [('scatter', 600)]          # ships fly to positions
    for r in range(len(self.record.rounds)):
        phases += [
            (f'r{r}_red',    500),       # casualties highlighted red
            (f'r{r}_yellow', 350),       # dying ships turn yellow
            (f'r{r}_clear',  300),       # dead ships removed
        ]
    phases.append(('result', 0))         # outcome — wait for click
    return phases

Dots are drawn as alive (tinted sprite), dying (yellow rect), or simply absent. The state machine advances automatically on a timer, pausing at 'result' until the player clicks.
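The timer-driven advance can be sketched as a tiny state machine. This is a simplified stand-in for the dialog's internals (class and attribute names are ours), showing only the advance/hold behaviour described above:

```python
class PhaseTimer:
    """Advance through (name, duration_ms) phases on a clock,
    holding at the 'result' phase until dismissed."""

    def __init__(self, phases):
        self.phases = phases
        self.idx = 0
        self.elapsed = 0

    @property
    def current(self) -> str:
        return self.phases[self.idx][0]

    def update(self, dt_ms: int) -> None:
        name, duration = self.phases[self.idx]
        if name == 'result':
            return                    # wait for the player's click
        self.elapsed += dt_ms
        if self.elapsed >= duration:
            self.elapsed = 0
            self.idx = min(self.idx + 1, len(self.phases) - 1)
```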

6. AI Players

Two AI layers exist. The Empire AI controls the neutral Empire faction — a standing enemy that pressures all players throughout the game. The Player AI handles CPU-controlled player factions in single-player games, making fleet dispatch and production decisions each turn based on heuristics derived from the original's behaviour.

7. Project Structure

second_conflict/
  model/      — GameState, Star, Fleet, Player dataclasses
  engine/     — pure game logic (no pygame)
  io/         — scenario_parser: read/write .SCN files
  ui/
    dialogs/          — 15+ modal dialog classes
    map_view.py       — interactive star map
    side_panel.py     — right-hand fleet/turn panel
    sys_info_panel.py — selected star details
  ai/         — empire_ai.py, player_ai.py
  assets.py   — NE resource parser, sprite cache
main.py       — entry point, menu bar, event loop

The model and engine layers have no pygame dependency at all, which kept testing straightforward and would allow a headless server mode.

8. Lessons Learned

  • Trust the binary, not assumptions. Several fields were initially wrong because we assumed typical game layouts. The decompilation always won arguments.
  • NE resources are not PE resources. The 16-bit Windows NE format predates the PE format and has a completely different resource table structure. DIB bitmaps stored inside it lack the file header that modern tools expect — synthesising it from the DIB's own info header is the only way to load them.
  • White sprites are a tinting gift. If the original artist drew ship and star sprites in white-on-black, BLEND_RGB_MULT gives you faction colouring for free. No palette hacks required.
  • Stateless engine functions pay off. Keeping all game logic as pure functions over a serialisable state dataclass made save/load trivial and prevented entire classes of bugs where UI and model drifted out of sync.
  • Name things from the source. Using the original dialog IDs (ADMVIEWDLG, REINFVIEWDLG, etc.) as class-level docstring references meant that whenever something looked wrong, there was an unambiguous pointer back to the relevant decompiled function.

What's Next

The remaining work is mostly filling in edges: fog-of-war is not yet implemented (currently all stars are visible to all players), the diplomacy system is stubbed out, and a few of the original's more obscure mechanics — missile fleet speed bonuses, troopship boarding combat — are approximated rather than exact. The save-file round-trip is complete, which means existing original scenario files load and play correctly.

Note on legality: Second Conflict is abandonware — the original publisher is long gone and the game is freely distributed on abandonware sites. The Python reimplementation does not distribute any original game assets; if SCW.EXE is present on the user's machine the engine will extract and use the original sprites, otherwise it falls back to procedural graphics.

Building a Bracket Buster

A probabilistic March Madness simulator — from idea to stat-backed analysis

March 16, 2026  ·  Python · Sports Analytics · March Madness

Every March, 68 college basketball teams tip off in one of sports' most chaotic single-elimination tournaments. Brackets get busted in the first round. Cinderellas run all the way to the Final Four. A #16 seed had never beaten a #1 seed... well, until UMBC showed everyone that it can happen.

The question that kicked off this project was simple: can I build a simulator that captures both the chalk (favorites winning) and the chaos (upsets) in a tunable, principled way? The answer turned out to be yes — and the math to do it is beautifully simple.

The Core Idea: A Single Chaos Knob

The first design decision was the most important one. How do you model the probability that Team A beats Team B? In a real game, dozens of factors matter — injury reports, pace of play, three-point shooting variance. But for bracket prediction, you only reliably know one thing ahead of tip-off: seed numbers.

The model needed to do two things at once:

  1. Respect seeds — a #1 seed should beat a #16 seed most of the time.
  2. Allow for upsets — because this is March Madness, not a scripted event.

The solution is a single function:

def win_probability(seed1: int, seed2: int, chaos: float) -> float:
    base_prob = seed2 / (seed1 + seed2)
    return base_prob * (1.0 - chaos) + 0.5 * chaos

Let's unpack this. The base_prob line is the key insight:

  • #1 vs #16: 16 / (1 + 16) = 94.1% chance the #1 seed wins.
  • #8 vs #9: 9 / (8 + 9) = 52.9% — effectively a coin flip, which matches reality.
  • #5 vs #12: 12 / (5 + 12) = 70.6% — favors the #5 but not overwhelmingly.

The chaos parameter then blends this seed-based probability with a pure 50/50:

Chaos value  Behavior
0.0          Pure seed logic — lower seeds win proportionally more often
0.5          Half seed-weighted, half random — bracket-busting territory
1.0          Pure coin flip — seeds mean nothing

This linear interpolation is elegant because the entire "personality" of a bracket — conservative vs. chaotic — lives in one number between 0 and 1. You can run the same simulator ten times at chaos=0.0 and get roughly similar chalk brackets, or crank it to chaos=0.8 and watch #15 seeds reach the Final Four.
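Repeating the `win_probability` function from above, the blend is easy to verify by hand at the extremes and the midpoint:

```python
def win_probability(seed1: int, seed2: int, chaos: float) -> float:
    base_prob = seed2 / (seed1 + seed2)
    return base_prob * (1.0 - chaos) + 0.5 * chaos

# #1 vs #16 at increasing chaos — every matchup is dragged toward 50/50:
p_chalk = win_probability(1, 16, 0.0)   # ≈ 0.941, pure seed logic
p_mid   = win_probability(1, 16, 0.5)   # ≈ 0.721, halfway to a coin flip
p_coin  = win_probability(1, 16, 1.0)   # exactly 0.5
```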

Getting Live Bracket Data

A simulator is only as good as its input data. The NCAA doesn't offer an open public API, but the community-maintained henrygd/ncaa-api proxy does the heavy lifting. The bracket data comes back as a flat list of game slots with bracketPositionId values — integers that encode round and region.

Position IDs aren't arbitrary. 100–199 are First Four games, 200–299 are the Round of 64, 300–399 are the Round of 32, and so on. Processing games in sorted order guarantees that every predecessor game is resolved before the game that depends on its winner.
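The range scheme maps cleanly to integer division. Only the first three ranges are stated above; the continuation through the Championship is our assumption, and the helper name is invented for illustration:

```python
def round_of(position_id: int) -> str:
    """Map a bracketPositionId to its tournament round by hundreds-range.

    100-199 First Four, 200-299 Round of 64, 300-399 Round of 32 (per
    the post); later ranges are an assumed continuation of the pattern.
    """
    rounds = {
        1: 'First Four',
        2: 'Round of 64',
        3: 'Round of 32',
        4: 'Sweet 16',        # assumed
        5: 'Elite Eight',     # assumed
        6: 'Final Four',      # assumed
        7: 'Championship',    # assumed
    }
    return rounds.get(position_id // 100, 'Unknown')
```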

To avoid hammering the API during iterative development, the fetcher writes a local .cache_basketball-men_2026.json file on the first call and reads from it on every subsequent run. A --no-cache flag bypasses this for live data.

Simulating the Bracket

With live data in hand, the simulation engine processes the bracket as a directed acyclic graph (DAG): each game slot knows which two upstream slots feed winners into it. The engine walks the sorted position IDs and resolves each game by sampling from the probability distribution:

def pick_winner(game: Game, chaos: float) -> Team:
    t1, t2 = game.team1, game.team2
    p = win_probability(t1.seed, t2.seed, chaos)
    return t1 if random.random() < p else t2

The result propagates forward — the winner of game 201 might be team1 in game 301. Sixty-three games later, you have a complete predicted bracket, expressed as a markdown table with seeds, team names, and a trophy emoji next to the champion.

Batch Simulation and Summary Stats

Running one simulation is interesting. Running a hundred is revealing. The --simulations N flag runs the full tournament N times and aggregates results into a champion frequency table:

$ python bracket_buster.py --simulations 100 --chaos 0.4

Champion Summary (100 simulations):
  Kansas         ████████████████  18 wins (18.0%)
  Duke           ██████████        12 wins (12.0%)
  Houston        ████████          10 wins (10.0%)
  ...

This surfaces something that a single bracket can't: which teams are consistently in the conversation at a given chaos level, vs. which are one-hit wonders that only win when the randomness breaks their way.

Evaluation: Working Backward from the Winner

The latest and most analytically interesting feature is --evaluate. Instead of predicting forward, it looks at a set of already-generated bracket files and asks: given the seed matchups that actually occurred, how likely was each bracket outcome?

Each bracket is scored by multiplying the probabilities of every game result across all 63 games. Because these are small numbers multiplied together sixty-three times, the math is done in log space to avoid floating-point underflow:

log_prob = sum(log10(p) for p in game_probabilities)
# A "perfect chalk" bracket would score around log_prob = -8
# A massive upset bracket might score -25 or lower

The evaluator then ranks brackets from most to least probable, surfaces the upsets that drove the biggest probability penalties, and reports how often each team won the championship across the evaluated set. It's a way to pressure-test your bracket instincts: was your Final Four actually defensible by the numbers, or were you just wishful thinking?
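The ranking step can be sketched in a few lines. The `brackets` structure here (bracket name mapped to its 63 per-game outcome probabilities) is a toy stand-in for the real bracket files:

```python
from math import log10

def rank_brackets(brackets: dict[str, list[float]]) -> list[tuple[str, float]]:
    """Rank brackets from most to least probable by summed log10 probability.

    Summing logs avoids the floating-point underflow you'd get from
    multiplying 63 small probabilities directly.
    """
    scored = {name: sum(log10(p) for p in probs)
              for name, probs in brackets.items()}
    return sorted(scored.items(), key=lambda kv: kv[1], reverse=True)
```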

Architecture: Keeping It Small

The entire application is 677 lines of Python in a single file. That's a deliberate choice. This isn't a product — it's a focused analytical tool. Breaking it into packages would add overhead without adding clarity. The structure within the file is logical:

  1. Dataclasses — Team, Game, BracketResults
  2. API layer — fetch, cache, parse
  3. Simulation engine — probability model, game resolution, DAG traversal
  4. Output formatting — markdown tables by region and round
  5. Evaluation — log-probability scoring, upset detection, ranking
  6. CLI — argparse wiring everything together

One external dependency: requests. Everything else is standard library.

What I'd Build Next

A few natural extensions worth exploring:

  • Historical calibration — fit the chaos parameter to real historical upset rates by seed matchup, rather than choosing it intuitively.
  • Head-to-head records — incorporate adjusted efficiency metrics (like KenPom) as an additional signal beyond seed.
  • Interactive web UI — let users drag the chaos slider and watch the bracket update in real time.
  • Pool scoring — score a simulated bracket against standard ESPN/Yahoo scoring rules to optimize for expected pool points, not just accuracy.

March Madness is fun precisely because it's unpredictable. A good simulator doesn't try to eliminate that uncertainty — it quantifies it. The chaos parameter isn't a hack; it's an honest acknowledgment that seed numbers explain a lot, but not everything. Sometimes the #12 seed just hits.

The code is on GitHub. Fill out your bracket responsibly.

Back to basics

This weekend, I started off strong by going to the range with an old friend and then catching up over lunch. On the tech side of things, I broke down my serverless setup, as it’s more expensive than a Wonderbox hosting containers. With that said, I dusted off my nginx load balancer container, killed my ELB and elastic IPs, and retooled DNS back to its humble roots. The following is Claude’s summary of the new features we implemented and the issues we troubleshot.

Dev work done this weekend

This weekend was a productive infrastructure and tooling sprint on the blog2/vacuumflask project. Here's a rundown of what got built and fixed.

Media Expiration System

Added a full lifecycle management system for media files. You can now set an expiration date on any file in the media library. After that date, a cleanup job removes it from S3 automatically. The expiration is stored in SQLite, visible in the media library UI with an edit button on each card, and also available at upload time. When a file is deleted manually, its expiration record is cleaned up too.

Headless API Authentication

Built a /api/login endpoint that issues a short-lived Bearer token using VACUUMAPIKEYSALT + TOTP — no plaintext password required. This lets automated scripts authenticate without storing credentials, using AWS Secrets Manager for the secret values and pyotp for one-time passwords.

Lambda Cleanup Cron

Created cron/cleanup_expired.py — a Lambda-compatible handler that logs in via the headless API and calls /admin/cleanup_expired. It logs structured output to CloudWatch under the cron log group via watchtower, with each run getting its own stream. Also ships as a local cron.sh that can be called from the system crontab, scheduled for 12:01 AM daily.

MCP Server

Made the blog discoverable to AI assistants via the Model Context Protocol. A FastMCP server exposes blog posts, tags, and search as resources and tools. It runs as a Docker container with SSE transport proxied through nginx at /mcp/, and is advertised to clients via /.well-known/mcp.json and a tag in the page header.

Blog Styling Modernization

Rewrote style.css with CSS variables, a sticky header, card-style post layout, and a modernized tag cloud panel. Fixed a specificity bug where tags were rendering white-on-white, and another where tag size weighting was overridden by the admin nav styles — so the tag cloud now correctly reflects content volume.

Nginx + Certbot Infrastructure

Rebuilt the load balancer container to manage its own TLS certificates. On startup it generates self-signed certs so nginx can start, then immediately replaces them with Let's Encrypt certificates via the HTTP-01 webroot challenge. A cron job inside the container handles daily renewal at 3 AM and 3 PM. Certificates persist across restarts via Docker named volumes.

Redis Connection Fix

Tracked down a TimeoutError in the Redis client caused by a stale EC2 internal IP in the server's .env file. The Flask container runs with --network=host but Valkey runs in bridge mode, so Docker's loopback forwarding doesn't apply. The fix was to use Valkey's Docker bridge IP (172.17.0.2) directly.

My weekend with Claude
This weekend, after Anthropic told Lord Farquad to kick rocks, I decided to try out Claude Code Pro. Wow. I've been using a lot of Amazon Q integrated with VS Code, which was twice as productive as Copilot. Claude Code is 3x more productive than Amazon Q. It's interesting how much further ahead Anthropic's native tools are when, under the hood, Amazon and Microsoft are both using Claude's models. Using Claude Code was like having a recently certified junior engineer at your disposal. It can use the CLI to query logs and process the results, and it uses the CLI directly to implement fixes and diagnose errors. With Claude's help, I took an existing application and, over the weekend, rewrote it into a new, cheaper-to-maintain app.
[Image: my Claude usage after a full weekend]
Local AI disappointment and tight groupings

PyCharm now supports adding local models to its AI tooling. [Image: setting up local AI]

I gave it a try today with LM Studio and a 48GB DeepSeek model. It looked promising at first, but it never finished any of the prompts. [Image: the model running]

My laptop should have the resources to run a bigger model, but it was quite slow with this one. In other news, I rented a Walther PDP compact with a nice Holosun optic, which was fine. More notably, I managed to get the tightest grouping at five yards that I’ve ever shot with my Glock.

[Image: the best grouping I've shot in 700 rounds of 9mm with my Glock]
Finding the maximum bid for achieving a specific discount over retail

So this weekend, I set about solving a problem that was bothering me. When purchasing a pewpew at auction, what should the maximum bid be, assuming we want a specified percentage discount off the retail new price for a given item?

Building on an existing formula I had worked out to calculate the savings percent over retail, I started working backwards.

Assumptions

  • Auction items will require shipping
  • Auction items will include tax plus an auction fee
  • Auction items may or may not have shipping insurance
  • Auction items will have a credit card payment fee
  • Retail price will not include shipping
  • Average Sale price info available online won’t include shipping, insurance, or credit card fees.

Variables

[Image: a table of variables]

With the variables defined, we can work backwards and then solve for the bid. [Image: Excel sheet formulas]
Last but not least, while solving for B, I tried a number of AI assistants. The winner ended up being ChatGPT, which was able to isolate B on the left side of the equation.
[Image: max bid formula]
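Since the worked formula itself lives in an image, here is a hedged reconstruction under an assumed cost model — the fee structure below (auction fee and tax applied to the bid, shipping and insurance added flat, credit-card fee applied to the whole total) matches the assumptions listed above, but the exact spreadsheet may differ:

```python
def max_bid(retail: float, discount: float, tax: float, auction_fee: float,
            cc_fee: float, shipping: float, insurance: float = 0.0) -> float:
    """Solve for the bid B that hits a target discount off retail.

    Assumed cost model:
        total = (B * (1 + tax + auction_fee) + shipping + insurance) * (1 + cc_fee)
    Setting total = retail * (1 - discount) and isolating B gives:
        B = (retail * (1 - discount) / (1 + cc_fee) - shipping - insurance)
            / (1 + tax + auction_fee)
    """
    target = retail * (1 - discount)
    return (target / (1 + cc_fee) - shipping - insurance) / (1 + tax + auction_fee)
```

Plugging the result back into the cost model reproduces the target price exactly, which is a quick sanity check on the algebra.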
Reflections on 2025

As I reflect on the absolute chaos that is 2025, I’m a bit taken aback by how much has changed this year compared to previous years. I lost a boss I liked, a gentleman who was the best engineer on my team, whom I thought would outlast me. The world has also been more chaotic than average. On the brighter side, I’m grateful for the new friends I’ve made this year.

For my midlife crisis, I’ve taken up shooting sports. When I was a kid, I was always shooting my bb/pellet guns, bows, and arrows. Even poked a few holes with arrows in my parents' aluminum siding. For me, shooting at the range has turned the volume down on a lot of my older vices, such as gaming.

2026 will be the year of Kubernetes for me at work. Here is hoping 2026 > 2025.
