AI Pitmaster: SDR + LLM + SMS + Meat + Fire

I’ve been smoking meat for years. Brisket, ribs, pork shoulder - the usual suspects. The process is simple but time-consuming: maintain a consistent pit temperature around 225°F for anywhere from 6 to 16 hours, monitor your meat temperature, and make adjustments when things inevitably go sideways. It’s meditation with delicious consequences.

The problem is that maintaining attention for that long is hard. Miss a pit temperature crash by 30 minutes and you’ve added hours to your cook. Let it spike and you’ve dried out your $100 brisket. With Thanksgiving next week, a lot of people will be attempting smoked turkey for the first time: a 4-6 hour cook that’s far more forgiving than brisket but still benefits from consistent monitoring. There are ways to hold your cook temperature with a PID controller, and I’ve tried or built most of them, but I find the actual process of tending a charcoal smoker to be kind of the point of the exercise. I just wanted an extra set of eyes to make sure nothing goes catastrophically wrong and, crucially, a good estimate of when it’ll be done.

So I built AI Pitmaster: a Python application that reads wireless thermometer data via RTL-SDR, talks to Claude for cooking advice, and sends SMS alerts when things need attention. Agentic barbecue!

The Hardware

The setup is straightforward: a TP12 wireless thermometer, a ~$30 RTL-SDR USB dongle, and a computer to run rtl_433 and the Python app.

The TP12 has two probes - one for pit temp, one for meat. It broadcasts every ~12 seconds on 433.92MHz. The SDR dongle picks this up, rtl_433 decodes it to JSON, and my Python code processes it.
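
rtl_433 reports temperatures in Celsius by default, so the Python side is mostly parse-and-convert. A minimal sketch of that step; the temperature field names are an assumption about what your decoder emits for the TP12, so check your own rtl_433 output:

import json

def fahrenheit(celsius):
    return celsius * 9 / 5 + 32

def parse_reading(line):
    """Turn one line of rtl_433 JSON output into pit/meat temps in °F."""
    data = json.loads(line)
    # field names are an assumption -- verify against your rtl_433 build
    return {
        'pit': fahrenheit(data['temperature_1_C']),
        'meat': fahrenheit(data['temperature_2_C']),
    }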

The Software

The application is built around a single class (ClaudeBBQConversation) that orchestrates everything:

  1. A background thread runs rtl_433 as a subprocess, parsing its JSON output and queueing temperature readings (see the sketch after this list)
  2. The main loop processes these readings, checks for critical conditions, and updates predictive models
  3. You can type natural language messages at any time - they get contextualized with current temps and sent to Claude
  4. SMS alerts fire when things go wrong (with cooldown logic so you don’t get spammed)
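
Item 1 is where most of the plumbing lives. A minimal sketch of that reader thread, assuming rtl_433 is on your PATH; the real code has more error handling than this:

import json
import queue
import subprocess
import threading

readings = queue.Queue()

def rtl433_reader():
    """Run rtl_433 as a subprocess and queue every JSON line it emits."""
    proc = subprocess.Popen(
        ['rtl_433', '-f', '433.92M', '-F', 'json'],
        stdout=subprocess.PIPE, text=True,
    )
    for line in proc.stdout:
        try:
            readings.put(json.loads(line))
        except json.JSONDecodeError:
            continue  # skip anything that isn't a JSON reading

threading.Thread(target=rtl433_reader, daemon=True).start()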

Here’s what the interaction looks like:

[10:23] pit:225°F meat:147°F outside:72°F | 8.2hrs

wrapped it in pink paper

🤖 good timing on the wrap. you're right at stall territory. should power through
in 2-3hrs now. maybe bump pit to 250 if you're in a hurry but 225 is fine

[10:24] pit:227°F meat:148°F outside:72°F | 8.2hrs

The interface is just an interactive CLI. No web UI, no mobile app. You’re either at the terminal or you’re getting texts.

The Math

The most interesting part of this project is the stall detection. If you’ve never smoked a large piece of meat, the stall is this phenomenon where your internal temperature just… stops rising. You’ll be cruising along at 147°F, 148°F, 149°F, and then suddenly you’re stuck at 150°F for 3-5 hours while moisture evaporates off the surface.

Most BBQ advice about the stall is to just ride it out for an indeterminate period of time or do a Texas crutch.

I found this paper by Troy Henderson at the University of Mobile that defines the stall mathematically. The criterion is:

The stall occurs when |α(t)| ≤ 0.03 h⁻¹ and meat temp is between 150-170°F

Where α is the relative rate of temperature change: α = f'(t)/f(t)

In practice, this means calculating the derivative using centered finite differences on the last few temperature readings:

def detect_stall_mathematical(self):
    if len(self.temp_history) < 10:
        return False

    recent = list(self.temp_history)[-10:]
    times_s = [(d['time'] - recent[0]['time']).total_seconds() for d in recent]
    temps_f = [d['meat'] for d in recent]

    # centered 3-point finite difference at the middle of the last 3 samples:
    # f'(t0) ≈ (f(t+1) - f(t-1)) / (t+1 - t-1)
    t1, tm1 = times_s[-1], times_s[-3]
    f1, f0, fm1 = temps_f[-1], temps_f[-2], temps_f[-3]

    span_hours = (t1 - tm1) / 3600.0
    if span_hours == 0:
        return False

    f_prime = (f1 - fm1) / span_hours  # °F h⁻¹
    alpha = f_prime / f0               # h⁻¹ (relative rate of change)

    return 150 <= f0 <= 170 and abs(alpha) <= 0.03

The Prediction Model

The other mathematically interesting piece is ETA prediction. Meat temperature follows a logistic curve during the pre-stall phase. If you fit a 5-parameter logistic model to the Stage I data (everything before 150°F), you can extrapolate when you’ll hit wrapping temperature and when you’ll be done.

The model is: T(t) = D + (K - D) / (1 + exp(-k(t - λ)))^γ

Where:

  - D is the lower asymptote (roughly the starting meat temperature)
  - K is the upper asymptote (the temperature the fitted curve levels off at)
  - k is the growth rate
  - λ shifts the curve in time (it locates the steep part of the climb)
  - γ controls the asymmetry of the curve

I use SciPy’s curve_fit to estimate these parameters from the last hour of sub-150°F data. It’s optional - if you don’t have SciPy installed, the app just won’t show ETA predictions.
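
A sketch of what that fit can look like, given parallel lists of elapsed hours and meat temps in °F; the function names and initial-guess values here are mine, not the app’s actual API:

import numpy as np
from scipy.optimize import curve_fit

def logistic5(t, D, K, k, lam, gamma):
    # D + (K - D) / (1 + exp(-k*(t - lam)))**gamma
    return D + (K - D) / (1.0 + np.exp(-k * (t - lam))) ** gamma

def predict_crossing(times_h, temps_f, target_f):
    """Fit the pre-stall (<150°F) readings, then walk the fitted curve
    forward until it crosses target_f. Returns hours from start, or None."""
    times_h, temps_f = np.asarray(times_h), np.asarray(temps_f)
    mask = temps_f < 150
    if mask.sum() < 6:
        return None
    p0 = [temps_f[0], 250.0, 0.5, float(times_h[mask][-1]), 1.0]  # rough guess
    params, _ = curve_fit(logistic5, times_h[mask], temps_f[mask], p0=p0, maxfev=10000)
    future = np.arange(times_h[-1], times_h[-1] + 24, 0.05)
    crossed = future[logistic5(future, *params) >= target_f]
    return float(crossed[0]) if crossed.size else None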

The predictions are surprisingly accurate. The logistic model fits the pre-stall data well, and because the app detects the stall mathematically, it knows when to stop trusting the Stage I model. Once you’re through the stall and back into Stage III, the temperature rise resumes its predictable curve. The ETA estimates I’m seeing are within around 10 minutes on 12+ hour cooks.

Context-Aware Alerting

Early versions of this would spam me with “pit temp declining, add fuel” alerts every few minutes. Not helpful when I literally just added fuel and the temperature is still recovering.

The solution was to make the alerting system context-aware. The app now tracks recent user messages and looks for fuel-related keywords:

def _should_alert_about_temp_decline(self, now, decline):
    # Don't alert if we just mentioned fuel (give time for recovery)
    if self.last_fuel_mention:
        minutes_since_fuel = (now - self.last_fuel_mention).total_seconds() / 60
        if minutes_since_fuel < 15:
            return False

    # Check recent user messages for fuel-related context
    fuel_keywords = ['fuel', 'coal', 'wood', 'charcoal', 'added']
    for action in list(self.recent_user_actions)[-5:]:
        time_diff = (now - action['time']).total_seconds() / 60
        if time_diff < 20:
            if any(keyword in action['message'].lower() for keyword in fuel_keywords):
                return decline >= 25  # Higher threshold if we just discussed fuel

    return decline >= 15

Session Persistence

Early versions had an issue: if the process crashed (rtl_433 is prone to the occasional USB hiccup) or you accidentally killed it, you’d lose all your conversation history and temperature data. So I added timestamped session files that auto-save every 60 seconds and after each user message:

.bbq_session_2025-11-20_093015.json
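
The save itself is nothing clever. Roughly this, with attribute names that are illustrative rather than copied from the real class:

import json
from datetime import datetime

def save_session(self):
    """Dump conversation and temperature history to a timestamped JSON file."""
    if not getattr(self, 'session_file', None):
        stamp = datetime.now().strftime('%Y-%m-%d_%H%M%S')
        self.session_file = f'.bbq_session_{stamp}.json'
    state = {
        'meat': self.meat_description,
        'started': self.start_time.isoformat(),
        'conversation': self.conversation_history,
        'temps': [{**r, 'time': r['time'].isoformat()} for r in self.temp_history],
    }
    with open(self.session_file, 'w') as f:
        json.dump(state, f, indent=2)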

On startup, the app looks for recent sessions (< 48 hours old) and offers to restore them. You see a summary:

📂 Found recent session:
   14 lb brisket
   Started: 2025-11-20 09:30
   Age: 6.2 hours
   Temp readings: 127

Restore this session? [Y/n]:

Sessions older than 48 hours are automatically archived to .bbq_archive/ so they don’t clutter your directory. This also solves another problem: collecting real-world cook data.

The stall detection math is experimental. It works on my Pit Barrel Cooker with briskets and pork shoulder so far, but I don’t know how it generalizes to offset smokers, pellet grills, or different meat cuts. To improve it, I need more data.

So when sessions get archived, the app shows instructions for sharing them:

📊 Share Your Cook Data

You have 1 archived session(s):
  1. brisket (14lb) - .bbq_session_2025-11-18_083015.json

To share your data:
  1. Run: python3 -c "from ai_pitmaster import generate_session_mailto; ..."
  2. Open the generated mailto: link
  3. Send the email

The generate_session_mailto() function creates a pre-populated email with the session data. If enough people share their cooks, I can validate the stall detection across different equipment and improve the ETA predictions, maybe find some interesting corner cases.
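
There’s not much to generate_session_mailto() beyond URL-encoding; something in this spirit, with the recipient address as an obvious placeholder:

from urllib.parse import quote

def generate_session_mailto(session_path, to_addr='you@example.com'):
    """Build a mailto: link whose body is the archived session JSON."""
    with open(session_path) as f:
        body = f.read()
    subject = quote(f'AI Pitmaster session data: {session_path}')
    return f'mailto:{to_addr}?subject={subject}&body={quote(body)}'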

Claude Integration

Why use Claude instead of running a local LLM? A few reasons:

  1. I’m already paying for Claude, and a 12-hour cook costs like $0.25 in API calls
  2. The quality of advice is genuinely good - it understands BBQ concepts, can reason about trade-offs, and remembers context
  3. I wanted to focus on the hardware integration and math, not on hosting models

The prompt includes a knowledge base (PITMASTER_WISDOM) with BBQ fundamentals, target temps for different meats, stall behavior, wrapping strategies, etc. Each user message gets contextualized with current temperatures and statistics before being sent to Claude.

I use Claude Sonnet 4.5 with temperature set to 0.2 for consistent, reliable advice rather than creative hallucinations. The last thing I need mid-cook is Claude suggesting I wrap my brisket in aluminum foil lined with cheese.
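
The Claude call itself is one Messages API request. A trimmed sketch using the anthropic SDK; PITMASTER_WISDOM stands in for the real knowledge base, and the model id is whatever current Sonnet release you point it at:

import anthropic

PITMASTER_WISDOM = 'You are an experienced BBQ pitmaster...'  # placeholder for the real knowledge base

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def ask_claude(user_message, pit_f, meat_f, elapsed_h):
    """Wrap the user's message with current cook context and ask for advice."""
    context = (
        f'Current pit temp: {pit_f}F, meat temp: {meat_f}F, '
        f'elapsed: {elapsed_h:.1f}h.\n\n{user_message}'
    )
    response = client.messages.create(
        model='claude-sonnet-4-5',   # substitute the current Sonnet model id
        max_tokens=500,
        temperature=0.2,             # low temperature: consistent, boring advice
        system=PITMASTER_WISDOM,
        messages=[{'role': 'user', 'content': context}],
    )
    return response.content[0].text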

SMS Alerts

SMS alerts use the TextBelt API, which is way easier to deal with than Twilio. Alerts fire when the pit temperature crashes or spikes, and for the other critical conditions the main loop checks for.

Each alert type has independent cooldown tracking (default: 15 minutes) to prevent blowing up your phone.
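
The cooldown bookkeeping plus the send is only a few lines; a sketch against TextBelt’s HTTP endpoint, with the phone number and API key as placeholders:

import time
import requests

last_sent = {}  # alert type -> unix timestamp of the last SMS for that type

def send_alert(alert_type, message, phone, key, cooldown_min=15):
    """Send an SMS via TextBelt unless this alert type fired recently."""
    now = time.time()
    if now - last_sent.get(alert_type, 0) < cooldown_min * 60:
        return False  # still in cooldown for this alert type
    resp = requests.post('https://textbelt.com/text', data={
        'phone': phone,
        'message': message,
        'key': key,
    })
    if resp.json().get('success'):
        last_sent[alert_type] = now
        return True
    return False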

Limitations and Future Work

This is very much a “works for me” project. Some limitations:

POSIX-only: I’ve only tested this on Linux; it probably works on macOS and might work on WSL.

Hardware dependency: You need an RTL-SDR dongle and a compatible thermometer. The dongle is $30 but it’s still a barrier to entry. No thermometer, no data.

Single thermometer: The code assumes one TP12 with two probes. If you’re running multiple thermometers or doing multiple cooks simultaneously, you’d need to add device differentiation logic. Your device ID will also probably differ from mine, but figuring it out from the rtl_433 output isn’t hard.

Ambient temperature: I extract this from nearby weather station broadcasts (which rtl_433 also picks up), but whether you get usable data depends on whether you or your neighbors have 433MHz weather stations. It’s useful when available but totally optional. You could also use a weather API if you wanted to.

Stall detection generalization: The mathematical stall detection works well on my equipment but hasn’t been validated across different smoker types and meat cuts. This is where shared session data will help - the more cooks I can analyze, the better this gets.

Would I Recommend Building This?

If you’re already into both smoking meat and tinkering with software/hardware, absolutely.

If you just want to smoke meat and have it turn out well, you probably don’t need this. Get a good thermometer with wireless alerts, learn the basics, and you’ll be fine. This project is overkill for most people.

But if you’re the kind of person who reads infrastructure-as-code blogs and also spends 16 hours making brisket, this scratches a very specific itch.

The code is on GitHub if you want to build your own or just read through it. It’s about 800 lines of Python (including session management and archiving).

And if you do build it and use it for a few cooks, consider sharing your archived sessions. The more data I have, the better the stall detection and ETA predictions become for everyone.

Now if you’ll excuse me, I have 7 hours left on a pork shoulder and Claude is telling me to consider bumping the pit temp.