while(motivation <= 0)


Building a Bracket Buster

A probabilistic March Madness simulator — from idea to stat-backed analysis

March 16, 2026  ·  Python · Sports Analytics · March Madness

Every March, 68 college basketball teams tip off in one of sports' most chaotic single-elimination tournaments. Brackets get busted in the first round. Cinderellas run all the way to the Final Four. For decades, no #1 seed had ever lost to a #16 seed... until UMBC showed everyone that it can happen.

The question that kicked off this project was simple: can I build a simulator that captures both the chalk (favorites winning) and the chaos (upsets) in a tunable, principled way? The answer turned out to be yes — and the math to do it is beautifully simple.

The Core Idea: A Single Chaos Knob

The first design decision was the most important one. How do you model the probability that Team A beats Team B? In a real game, dozens of factors matter — injury reports, pace of play, three-point shooting variance. But for bracket prediction, you only reliably know one thing ahead of tip-off: seed numbers.

The model needed to do two things at once:

  1. Respect seeds — a #1 seed should beat a #16 seed most of the time.
  2. Allow for upsets — because this is March Madness, not a scripted event.

The solution is a single function:

def win_probability(seed1: int, seed2: int, chaos: float) -> float:
    base_prob = seed2 / (seed1 + seed2)
    return base_prob * (1.0 - chaos) + 0.5 * chaos

Let's unpack this. The base_prob line is the key insight:

  • #1 vs #16: 16 / (1 + 16) = 94.1% chance the #1 seed wins.
  • #8 vs #9: 9 / (8 + 9) = 52.9% — effectively a coin flip, which matches reality.
  • #5 vs #12: 12 / (5 + 12) = 70.6% — favors the #5 but not overwhelmingly.

The chaos parameter then blends this seed-based probability with a pure 50/50:

  Chaos Value   Behavior
  0.0           Pure seed logic — lower seeds win proportionally more often
  0.5           Half seed-weighted, half random — bracket-busting territory
  1.0           Pure coin flip — seeds mean nothing

This linear interpolation is elegant because the entire "personality" of a bracket — conservative vs. chaotic — lives in one number between 0 and 1. You can run the same simulator ten times at chaos=0.0 and get roughly similar chalk brackets, or crank it to chaos=0.8 and watch #15 seeds reach the Final Four.
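To make the blend concrete, here's a quick check of the #1-vs-#16 matchup at the three chaos settings from the table (restating win_probability so the snippet stands alone):

```python
def win_probability(seed1: int, seed2: int, chaos: float) -> float:
    # Probability that the seed1 team beats the seed2 team
    base_prob = seed2 / (seed1 + seed2)
    return base_prob * (1.0 - chaos) + 0.5 * chaos

for chaos in (0.0, 0.5, 1.0):
    p = win_probability(1, 16, chaos)
    print(f"chaos={chaos}: #1 beats #16 with p={p:.3f}")
# chaos=0.0: p=0.941  (pure seed logic)
# chaos=0.5: p=0.721  (halfway to a coin flip)
# chaos=1.0: p=0.500  (seeds mean nothing)
```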

Getting Live Bracket Data

A simulator is only as good as its input data. The NCAA doesn't offer an open public API, but the community-maintained henrygd/ncaa-api proxy does the heavy lifting. The bracket data comes back as a flat list of game slots with bracketPositionId values — integers that encode round and region.

Position IDs aren't arbitrary. 100–199 are First Four games, 200–299 are the Round of 64, 300–399 are the Round of 32, and so on. Processing games in sorted order guarantees that every predecessor game is resolved before the game that depends on its winner.
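A sketch of that ordering logic, using the round boundaries described above (the helper name is mine, not the project's):

```python
def round_of(position_id: int) -> int:
    # 100-199 -> First Four (round 0), 200-299 -> Round of 64 (round 1),
    # 300-399 -> Round of 32 (round 2), and so on.
    return position_id // 100 - 1

# Sorting by position ID yields rounds in dependency order:
ids = [305, 201, 104, 212]
print([round_of(pid) for pid in sorted(ids)])  # [0, 1, 1, 2]
```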

To avoid hammering the API during iterative development, the fetcher writes a local .cache_basketball-men_2026.json file on the first call and reads from it on every subsequent run. A --no-cache flag bypasses this for live data.
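A minimal version of that cache-first fetch might look like this (the cache filename comes from the post; fetch_live is an injected stand-in for the real API call, which keeps the sketch testable offline):

```python
import json
import os

CACHE_FILE = ".cache_basketball-men_2026.json"

def fetch_bracket(fetch_live, use_cache=True):
    """Return bracket data, preferring the local cache when present.

    fetch_live is any zero-argument callable returning the parsed
    API response; pass use_cache=False to mimic the --no-cache flag.
    """
    if use_cache and os.path.exists(CACHE_FILE):
        with open(CACHE_FILE) as f:
            return json.load(f)
    data = fetch_live()
    with open(CACHE_FILE, "w") as f:
        json.dump(data, f)
    return data
```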

Simulating the Bracket

With live data in hand, the simulation engine processes the bracket as a directed acyclic graph (DAG): each game slot knows which two upstream slots feed winners into it. The engine walks the sorted position IDs and resolves each game by sampling from the probability distribution:

def pick_winner(game: Game, chaos: float) -> Team:
    t1, t2 = game.team1, game.team2
    p = win_probability(t1.seed, t2.seed, chaos)
    return t1 if random.random() < p else t2

The result propagates forward — the winner of game 201 might be team1 in game 301. Sixty-three games later, you have a complete predicted bracket, expressed as a markdown table with seeds, team names, and a trophy emoji next to the champion.
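The propagation step can be sketched with a tiny three-game bracket. The slot wiring below is illustrative, not the app's exact data model: a slot holds either a seeded team or the ID of the upstream game that feeds it.

```python
import random
from dataclasses import dataclass

@dataclass
class Team:
    name: str
    seed: int

def win_probability(seed1, seed2, chaos):
    base = seed2 / (seed1 + seed2)
    return base * (1.0 - chaos) + 0.5 * chaos

# slot id -> two entries, each a Team or a source slot id whose winner feeds in
bracket = {
    201: (Team("Top Dog", 1), Team("Long Shot", 16)),
    202: (Team("Sleeper", 8), Team("Rival", 9)),
    301: (201, 202),  # fed by the winners of 201 and 202
}

winners = {}
for slot in sorted(bracket):  # ascending IDs resolve feeders first
    a, b = (winners[x] if isinstance(x, int) else x for x in bracket[slot])
    p = win_probability(a.seed, b.seed, chaos=0.0)
    winners[slot] = a if random.random() < p else b

print("Champion:", winners[301].name)
```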

Batch Simulation and Summary Stats

Running one simulation is interesting. Running a hundred is revealing. The --simulations N flag runs the full tournament N times and aggregates results into a champion frequency table:

$ python bracket_buster.py --simulations 100 --chaos 0.4

Champion Summary (100 simulations):
  Kansas         ████████████████  18 wins (18.0%)
  Duke           ██████████        12 wins (12.0%)
  Houston        ████████          10 wins (10.0%)
  ...

This surfaces something that a single bracket can't: which teams are consistently in the conversation at a given chaos level, vs. which are one-hit wonders that only win when the randomness breaks their way.
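The aggregation itself is a few lines with collections.Counter. In this sketch, simulate_champion is a stand-in for a full tournament run, and the teams are examples, not real bracket data:

```python
import random
from collections import Counter

def simulate_champion(teams, chaos=0.4):
    """Stand-in for one tournament: play teams off until one remains."""
    field = list(teams)
    while len(field) > 1:
        a, b = field.pop(), field.pop()
        # Same seed-vs-chaos blend as win_probability
        p = (b[1] / (a[1] + b[1])) * (1 - chaos) + 0.5 * chaos
        field.insert(0, a if random.random() < p else b)
    return field[0][0]

teams = [("Kansas", 1), ("Duke", 2), ("Houston", 1), ("Yale", 13)]
counts = Counter(simulate_champion(teams) for _ in range(100))
for name, wins in counts.most_common():
    print(f"{name:10s} {'█' * wins}  {wins} wins ({wins}%)")
```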

Evaluation: Working Backward from the Winner

The latest and most analytically interesting feature is --evaluate. Instead of predicting forward, it looks at a set of already-generated bracket files and asks: given the seed matchups that actually occurred, how likely was each bracket outcome?

Each bracket is scored by multiplying the probabilities of every game result across all 63 games. Because these are small numbers multiplied together sixty-three times, the math is done in log space to avoid floating-point underflow:

import math

log_prob = sum(math.log10(p) for p in game_probabilities)
# A "perfect chalk" bracket would score around log_prob = -8
# A massive upset bracket might score -25 or lower

The evaluator then ranks brackets from most to least probable, surfaces the upsets that drove the biggest probability penalties, and reports how often each team won the championship across the evaluated set. It's a way to pressure-test your bracket instincts: was your Final Four actually defensible by the numbers, or were you just wishful thinking?
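To see the scale difference, here's a toy comparison of a chalk-ish result set against an upset-heavy one (the per-game probabilities are illustrative, not real tournament data):

```python
import math

def bracket_log_prob(game_probabilities):
    # Summing log10 probabilities avoids underflow from
    # multiplying 63 tiny numbers together
    return sum(math.log10(p) for p in game_probabilities)

chalk  = [0.9] * 40 + [0.6] * 23   # favorites mostly win
upsets = [0.9] * 20 + [0.2] * 43   # lots of low-probability results

print(bracket_log_prob(chalk))   # about -6.9
print(bracket_log_prob(upsets))  # about -31 — vastly less likely
```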

Architecture: Keeping It Small

The entire application is 677 lines of Python in a single file. That's a deliberate choice. This isn't a product — it's a focused analytical tool. Breaking it into packages would add overhead without adding clarity. The structure within the file is logical:

  1. Dataclasses — Team, Game, BracketResults
  2. API layer — fetch, cache, parse
  3. Simulation engine — probability model, game resolution, DAG traversal
  4. Output formatting — markdown tables by region and round
  5. Evaluation — log-probability scoring, upset detection, ranking
  6. CLI — argparse wiring everything together

One external dependency: requests. Everything else is standard library.

What I'd Build Next

A few natural extensions worth exploring:

  • Historical calibration — fit the chaos parameter to real historical upset rates by seed matchup, rather than choosing it intuitively.
  • Head-to-head records — incorporate adjusted efficiency metrics (like KenPom) as an additional signal beyond seed.
  • Interactive web UI — let users drag the chaos slider and watch the bracket update in real time.
  • Pool scoring — score a simulated bracket against standard ESPN/Yahoo scoring rules to optimize for expected pool points, not just accuracy.

March Madness is fun precisely because it's unpredictable. A good simulator doesn't try to eliminate that uncertainty — it quantifies it. The chaos parameter isn't a hack; it's an honest acknowledgment that seed numbers explain a lot, but not everything. Sometimes the #12 seed just hits.

The code is on GitHub. Fill out your bracket responsibly.

Back to basics

This weekend, I started off strong by going to the range with an old friend and then catching up over lunch. On the tech side, I broke down my serverless setup, since it's more expensive than a Wonderbox hosting containers. With that said, I dusted off my nginx load balancer container, killed my ELB and elastic IPs, and retooled DNS back to its humble roots. The following is Claude's summary of the new features we implemented and the things we troubleshot.

Dev work done this weekend

This weekend was a productive infrastructure and tooling sprint on the blog2/vacuumflask project. Here's a rundown of what got built and fixed.

Media Expiration System

Added a full lifecycle management system for media files. You can now set an expiration date on any file in the media library. After that date, a cleanup job removes it from S3 automatically. The expiration is stored in SQLite, visible in the media library UI with an edit button on each card, and also available at upload time. When a file is deleted manually, its expiration record is cleaned up too.
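The storage side of a feature like this is small. Here's a hypothetical sketch of the SQLite piece — the table and column names are my guesses for illustration, not the project's actual schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE media_expiration (s3_key TEXT PRIMARY KEY, expires_at TEXT)")
conn.execute("INSERT INTO media_expiration VALUES (?, ?)",
             ("uploads/old.png", "2026-01-01T00:00:00+00:00"))
conn.execute("INSERT INTO media_expiration VALUES (?, ?)",
             ("uploads/new.png", "2099-01-01T00:00:00+00:00"))

# ISO-8601 strings with a fixed UTC offset compare correctly as text
now = "2026-06-01T00:00:00+00:00"
expired = [row[0] for row in conn.execute(
    "SELECT s3_key FROM media_expiration WHERE expires_at < ?", (now,))]
print(expired)  # the cleanup job would delete these from S3, then drop the rows
conn.execute("DELETE FROM media_expiration WHERE expires_at < ?", (now,))
```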

Headless API Authentication

Built a /api/login endpoint that issues a short-lived Bearer token using VACUUMAPIKEYSALT + TOTP — no plaintext password required. This lets automated scripts authenticate without storing credentials, using AWS Secrets Manager for the secret values and pyotp for one-time passwords.
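In spirit, the exchange looks something like the stdlib-only sketch below. The real endpoint uses pyotp and AWS Secrets Manager; every name and detail here is illustrative, not the actual implementation:

```python
import hashlib
import hmac
import secrets
import time

API_KEY_SALT = "example-salt"   # stands in for the VACUUMAPIKEYSALT secret
TOKENS = {}                     # token -> expiry timestamp

def issue_token(totp_code: str, expected_code: str, ttl: int = 900):
    """Issue a short-lived bearer token if the one-time code checks out."""
    # Constant-time comparison avoids leaking code digits via timing
    if not hmac.compare_digest(totp_code, expected_code):
        return None
    token = hashlib.sha256((API_KEY_SALT + secrets.token_hex(16)).encode()).hexdigest()
    TOKENS[token] = time.time() + ttl
    return token

def is_valid(token: str) -> bool:
    return TOKENS.get(token, 0) > time.time()
```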

Lambda Cleanup Cron

Created cron/cleanup_expired.py — a Lambda-compatible handler that logs in via the headless API and calls /admin/cleanup_expired. It logs structured output to CloudWatch under the cron log group via watchtower, with each run getting its own stream. Also ships as a local cron.sh that can be called from the system crontab, scheduled for 12:01 AM daily.

MCP Server

Made the blog discoverable to AI assistants via the Model Context Protocol. A FastMCP server exposes blog posts, tags, and search as resources and tools. It runs as a Docker container with SSE transport proxied through nginx at /mcp/, and is advertised to clients via /.well-known/mcp.json and a tag in the page header.

Blog Styling Modernization

Rewrote style.css with CSS variables, a sticky header, card-style post layout, and a modernized tag cloud panel. Fixed a specificity bug where tags were rendering white-on-white, and another where tag size weighting was overridden by the admin nav styles — so the tag cloud now correctly reflects content volume.

Nginx + Certbot Infrastructure

Rebuilt the load balancer container to manage its own TLS certificates. On startup it generates self-signed certs so nginx can start, then immediately replaces them with Let's Encrypt certificates via the HTTP-01 webroot challenge. A cron job inside the container handles daily renewal at 3 AM and 3 PM. Certificates persist across restarts via Docker named volumes.

Redis Connection Fix

Tracked down a TimeoutError in the Redis client caused by a stale EC2 internal IP in the server's .env file. The Flask container runs with --network=host but Valkey runs in bridge mode, so Docker's loopback forwarding doesn't apply. The fix was to use Valkey's Docker bridge IP (172.17.0.2) directly.

My weekend with Claude
This weekend, after Anthropic told Lord Farquaad to kick rocks, I decided to try out Claude Code Pro. Wow. I've been using a lot of Amazon Q integrated with VS Code, which was twice as productive as Copilot. Claude Code is 3x more productive than Amazon Q. It's interesting how much further ahead Anthropic's native tools are when, under the hood, Amazon and Microsoft are both using Claude's models. Using Claude Code was like having a recently certified junior engineer at my disposal. It could use the CLI to query logs and process the results, and it used the CLI directly to implement fixes and diagnose errors. With Claude's help, I took an existing application and, over the weekend, rewrote it into a new, cheaper-to-maintain app.
my claude usage after a full weekend
Local AI disappointment and tight groupings

PyCharm now supports adding local models to its AI tooling.
setup local ai
I gave it a try today with LM Studio and a 48GB DeepSeek model. It looked promising at first, but it never finished any of the prompts.
running model
My laptop should have the resources to run a bigger model, but it was quite slow with this one. In other news, I rented a Walther PDP Compact with a nice Holosun optic, which was fine, but I managed to get the tightest grouping at five yards that I've ever shot with my Glock.
best grouping I've shot in 700 rounds of 9mm with my Glock.
Finding the maximum bid for achieving a specific discount over retail

So this weekend, I set about solving a problem that was bothering me. When purchasing a pewpew at auction, what should the maximum bid be, assuming we want a specified percentage discount off the retail new price for a given item?

Building on an existing formula I had worked out to calculate the savings percent over retail, I started working backwards.

Assumptions

  • Auction items will require shipping
  • Auction items will include tax plus an auction fee
  • Auction items may or may not have shipping insurance
  • Auction items will have a credit card payment fee
  • Retail price will not include shipping
  • Average Sale price info available online won’t include shipping, insurance, or credit card fees.

Variables

a table of variables
With the variables defined, we can now work backwards and solve for the bid.
excel sheet formulas
Last but not least, while solving for B, I tried a number of AI assistants. The winner ended up being ChatGPT, which was able to isolate B on the left side of the equation.
max bid formula
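I can't reproduce the spreadsheet exactly here, but under the stated assumptions the algebra runs along these lines: if tax, the auction fee, and the credit-card fee all scale with the bid, the all-in cost is roughly B x (1 + tax + auction fee) x (1 + cc fee) + shipping + insurance, and we want that equal to retail x (1 - discount). Solving for B gives the sketch below — the variable names, default rates, and the cost model itself are my reconstruction for illustration, not the post's actual formula:

```python
def max_bid(retail, discount, tax=0.07, auction_fee=0.15,
            cc_fee=0.03, shipping=40.0, insurance=0.0):
    """Largest bid B whose all-in cost hits retail * (1 - discount).

    Assumed cost model: B * (1 + tax + auction_fee) * (1 + cc_fee)
                        + shipping + insurance
    """
    target = retail * (1 - discount)
    return (target - shipping - insurance) / ((1 + tax + auction_fee) * (1 + cc_fee))

# e.g. a $600 retail item at a 20% target discount
print(round(max_bid(retail=600, discount=0.20), 2))
```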
Reflections on 2025

As I reflect on the absolute chaos that is 2025, I'm a bit taken aback by how much has changed this year compared to previous years. I lost a boss I liked, a gentleman who was the best engineer on my team and who I thought would outlast me. The world has also been more chaotic than average. On the brighter side, I'm grateful for the new friends I've made this year.

For my midlife crisis, I’ve taken up shooting sports. When I was a kid, I was always shooting my bb/pellet guns, bows, and arrows. Even poked a few holes with arrows in my parents' aluminum siding. For me, shooting at the range has turned the volume down on a lot of my older vices, such as gaming.

2026 will be the year of Kubernetes for me at work. Here's hoping 2026 > 2025.

Hardware Upgrade and Proxmox 9.1
With the retirement of another data center stack, I had the opportunity to refresh a piece of iron. I chose to replace an R530 with eight 3.5-inch drives and the oldest iDRAC in my lab. Thankfully, this time the server arrived from FedEx with only slightly bent rails. I upgraded the server from Windows 2022 to 2025, played around with Windows for a few days, and then decided to try out Proxmox 9.1. Proxmox has come a long way since Proxmox 7. While trying to move virtual machines around after getting my NFS servers connected, I was seeing the strangest behavior. It turned out that I had plugged a Cat 5e cable into a ten-gigabit Ethernet port. Everything looked fine, but the connections were volatile. The next step in my virtualization adventure was to migrate and upgrade a licensed Windows 10 VM. Once my network situation was resolved, it was just a matter of time before I was able to offload the VM and its drive and get them onto the new host. After some Windows updates, I set about upgrading it to Windows 11. The process involved:
  • Adding a virtual TPM
  • Adding a virtual EFI Disk
  • Switching the BIOS to OVMF
  • Booting a Windows 11 USB drive and cleaning up some bad entries in the MBR
  • Repairing the MBR
  • Converting the disk to GPT using MBR2GPT
  • Running the Windows 11 PC Health Check and verifying that everything was in order
  • Upgrading to Windows 11
