<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">

  <title><![CDATA[BigfootInMouth]]></title>
  <link href="http://blog.tbcdevelopmentgroup.com//atom.xml" rel="self" />
  <link href="http://blog.tbcdevelopmentgroup.com/" />
  <updated>2026-03-26T12:24:48-04:00</updated>
  <id>http://blog.tbcdevelopmentgroup.com/</id>
  <author>
    <name><![CDATA[Rob Lauer]]></name>
    <email><![CDATA[rclauer@gmail.com]]></email>
  </author>
  <generator uri="https://github.com/jmacdotorg/plerd">Plerd</generator>


  <entry>
    <title type="html"><![CDATA[Idempotent AWS Resource Creation - with tools you have laying around the house]]></title>
    <link href="http://blog.tbcdevelopmentgroup.com/2026-03-20-post.html"/>
    <published>2026-03-20T00:00:00-04:00</published>
    <updated>2026-03-26T12:24:48-04:00</updated>
    <id>http://blog.tbcdevelopmentgroup.com/2026-03-20-post.html</id>
    <content type="html"><![CDATA[<p><em>Make, Bash, and a scripting language of your choice</em></p>

<hr />

<h2>Creating AWS Resources…let me count the ways</h2>

<p>You need to create an S3 bucket, an SQS queue, an IAM policy and a few
other AWS resources.  But how?…TIMTOWTDI - there is more than one way to do it.</p>

<p><strong>The Console</strong></p>

<ul>
<li>Pros: visual, immediate feedback, no tooling required, great for
exploration</li>
<li>Cons: not repeatable, not version controllable, opaque, clickops doesn’t
scale, “I swear I configured it the same way”</li>
</ul>


<p><strong>The AWS CLI</strong></p>

<ul>
<li>Pros: scriptable, composable, already installed, good for one-offs</li>
<li>Cons: not idempotent by default, no state management, error handling
is manual, scripts can grow into monsters</li>
</ul>


<p><strong>CloudFormation</strong></p>

<ul>
<li>Pros: native AWS, state managed by AWS, rollback support, drift
detection</li>
<li>Cons: YAML/JSON verbosity, slow feedback loop, stack update failures
are painful, error messages are famously cryptic, proprietary to
AWS, subject to change without notice</li>
</ul>


<p><strong>Terraform</strong></p>

<ul>
<li>Pros: multi-cloud, huge community, mature ecosystem, state
management, plan before apply</li>
<li>Cons: state file complexity, backend configuration, provider
versioning, HCL is yet another language to learn, overkill for small
projects, often requires tricks &amp; contortions</li>
</ul>


<p><strong>Pulumi</strong></p>

<ul>
<li>Pros: real programming languages, familiar abstractions, state
management</li>
<li>Cons: even more complex than Terraform, another runtime to install
and maintain</li>
</ul>


<p><strong>CDK</strong></p>

<ul>
<li>Pros: real programming languages, generates CloudFormation, good for
large organizations</li>
<li>Cons: CloudFormation underneath means CloudFormation problems, Node.js dependency</li>
</ul>


<p><strong>…and the rest of the crew…</strong></p>

<p>Ansible, AWS SAM, Serverless Framework - each with their own opinions,
dependencies, and learning curves.</p>

<p>Every option beyond the CLI adds a layer of abstraction, a new
language or DSL, a state management story, and a new thing to learn
and maintain. For large teams managing hundreds of resources across
multiple environments that overhead is justified. For a solo developer
or small team managing a focused set of resources it can feel like
overkill.</p>

<p>Even in large organizations, not every project belongs in the
corporate IaC tooling. Moreover, not every project gets enough of the
DevOps team’s attention to have its application infrastructure created
or supported.</p>

<p>What if you could get idempotent, repeatable, version-controlled
infrastructure management using tools you already have? No new
language, no state backend, no provider versioning. Just <code>make</code>,
<code>bash</code>, a scripting language you’re comfortable with, and your cloud
provider’s CLI.</p>

<p>And yes…my love affair with <code>make</code> is endless.</p>

<p>We’ll use AWS examples throughout, but the patterns apply equally to
Google Cloud (<code>gcloud</code>) and Microsoft Azure (<code>az</code>). The CLI tools
differ, the patterns don’t.</p>
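<p>For instance, here’s a minimal sketch of the same check-then-create
idea using <code>gcloud</code> - the topic name is hypothetical, and the same
shape works with <code>az ... show</code> / <code>az ... create</code>:</p>

<pre><code># describe fails when the topic doesn't exist, so create only then
gcloud pubsub topics describe my-topic --format='value(name)' 2&gt;/dev/null \
    || gcloud pubsub topics create my-topic
</code></pre>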

<hr />

<h2>A word about the AWS CLI <code>--query</code> option</h2>

<p>Before you reach for <code>jq</code>, <code>perl</code>, or <code>python</code> to parse CLI output,
it’s worth knowing that most cloud CLIs have built-in query
support. The AWS CLI’s <code>--query</code> flag implements <a href="https://jmespath.org/">JMESPath</a> - a query
language for JSON that handles the majority of filtering and
extraction tasks without any additional tools:</p>

<pre><code># get a specific field
aws lambda get-function \
    --function-name my-function \
    --query 'Configuration.FunctionArn' \
    --output text

# filter a list
aws sqs list-queues \
    --query 'QueueUrls[?contains(@, `my-queue`)]|[0]' \
    --output text
</code></pre>

<p><code>--query</code> is faster, requires no additional dependencies, and keeps
your pipeline simple. Reach for it first. When it falls short -
complex transformations, arithmetic, multi-value extraction - that’s
when a one-liner earns its place:</p>

<pre><code># perl
aws lambda get-function --function-name my-function | \
    perl -MJSON -n0 -e '$l=decode_json($_); print $l-&gt;{Configuration}{FunctionArn}'

# python
aws lambda get-function --function-name my-function | \
    python3 -c "import json,sys; d=json.load(sys.stdin); print(d['Configuration']['FunctionArn'])"
</code></pre>

<p>Both get the job done. Use whichever lives in your shed.</p>

<hr />

<h2>What is Idempotency?</h2>

<p>The word comes from mathematics - an operation is idempotent if
applying it multiple times produces the same result as applying it
once. Sort of like those ID10T errors…no matter how hard or how many
times that user clicks on that button they get the same result.</p>

<p>In the context of infrastructure management it means this:
running your resource creation script twice should have exactly the
same outcome as running it once. The first run creates the
resource. The second run detects it already exists and does nothing -
no errors, no duplicates, no side effects.</p>

<p>This sounds simple but it’s surprisingly easy to get wrong. A naive
script that just calls <code>aws lambda create-function</code> will fail on the
second run with a <code>ResourceConflictException</code>. A slightly better
script wraps that in error handling. A truly idempotent script never
attempts to create a resource it knows already exists.</p>
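<p>To make that concrete, here’s a minimal sketch of the difference -
the function name and elided arguments are placeholders:</p>

<pre><code># naive: fails on the second run with ResourceConflictException
aws lambda create-function --function-name my-function ...

# idempotent: query first, create only if the function is missing
aws lambda get-function --function-name my-function &gt;/dev/null 2&gt;&amp;1 \
    || aws lambda create-function --function-name my-function ...
</code></pre>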

<p>And it works in both directions. The idempotent bug - running a
failing process repeatedly and getting the same error every time - is
what happens when your failure path is idempotent too. Consistently
wrong, no matter how many times you try. The patterns we’ll show are
designed to ensure that success is idempotent while failure always
leaves the door open for the next attempt.</p>

<p>Cloud APIs fall into four distinct behavioral categories when it comes
to <em>idempotency</em>, and your tooling needs to handle each one differently:</p>

<p><strong>Case 1 - The API is idempotent and produces output</strong></p>

<p>Some APIs can be called repeatedly without error and return useful
output each time - whether the resource was just created or already
existed. <code>aws events put-rule</code> is a good example - it returns the rule
ARN in either case. The pattern: call the read API first, capture the
output, and call the write API only if the read returned nothing.</p>

<p><strong>Case 2 - The API is idempotent but produces no output</strong></p>

<p>Some write APIs succeed silently - they return nothing on
success. <code>aws s3api put-bucket-notification-configuration</code> is a good
example. It will happily overwrite an existing configuration without
complaint, but returns no output to confirm success. The pattern: call
the API, synthesize a value for your sentinel using <code>&amp;&amp; echo</code> to
capture something meaningful on success.</p>
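<p>As a quick illustration with another silent API (the bucket name is
hypothetical - <code>put-bucket-versioning</code> also returns nothing on success):</p>

<pre><code>aws s3api put-bucket-versioning \
    --bucket my-bucket \
    --versioning-configuration Status=Enabled \
    &amp;&amp; echo "versioning enabled on my-bucket" &gt; versioning-sentinel
</code></pre>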

<p><strong>Case 3 - The API is not idempotent</strong></p>

<p>Some APIs will fail with an error if you try to create a resource that
already exists. <code>aws lambda add-permission</code> returns
<code>ResourceConflictException</code> if the statement ID already exists. <code>aws
lambda create-function</code> returns <code>ResourceConflictException</code> if the
function already exists. These APIs give you no choice - you must
query first and only call the write API if the resource is missing.</p>

<p><strong>Case 4 - The API call fails</strong></p>

<p>Any of the above can fail - network errors, permission problems,
invalid parameters. When a call fails you must not leave behind a
sentinel file that signals success. A stale sentinel is worse than no
sentinel - it tells Make the resource exists when it doesn’t, and
subsequent runs silently skip the creation step. The patterns: <code>|| rm
-f $@</code> when writing directly, or <code>else rm -f $@</code> when capturing to a
variable first.</p>

<hr />

<h2>The Sentinel File</h2>

<p>Before we look at the four patterns in detail, we need to introduce a
concept that ties everything together: the <em>sentinel file</em>.</p>

<p>A sentinel file is simply a file whose existence signals that a task
has been completed successfully. It contains no magic - it might hold
the output of the API call that created the resource, or it might just
be an empty file created with <code>touch</code>. What matters is that it exists
when the task succeeded and doesn’t exist when it hasn’t.</p>
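<p>In its simplest form the sentinel is just a <code>touch</code>-ed file - a
minimal sketch, with a placeholder command:</p>

<pre><code># .provisioned is the sentinel: once it exists, make considers the
# work done and skips this recipe on every subsequent run
.provisioned:
    ./do-expensive-setup &amp;&amp; touch $@
</code></pre>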

<p><code>make</code> has used this pattern since the 1970s. When you declare a
target in a Makefile, <code>make</code> checks whether a file with that name
exists before deciding whether to run the recipe. If the file exists
and is newer than its dependencies, <code>make</code> skips the recipe
entirely. If the file doesn’t exist, <code>make</code> runs the recipe to create
it.</p>

<p>For infrastructure management this is exactly the behavior we want:</p>

<pre><code>my-resource:
    @value="$$(aws some-service describe-resource \
            --name $(RESOURCE_NAME) 2&gt;&amp;1)"; \
    if [[ -z "$$value" || "$$value" = "ResourceNotFound" ]]; then \
        value="$$(aws some-service create-resource \
            --name $(RESOURCE_NAME))"; \
    fi; \
    test -e $@ || echo "$$value" &gt; $@
</code></pre>

<p>The first time you run <code>make my-resource</code> the file doesn’t exist,
the recipe runs, the resource is created, and the API response
is written to the sentinel file <code>my-resource</code>. The second time you
run it, <code>make</code> sees the file exists and skips the recipe entirely -
zero API calls.</p>

<p>When an API call fails we want to be sure we do not create the
sentinel file. We’ll cover the failure case in more detail in Pattern
4 of the next section.</p>

<hr />

<h2>The Four Patterns</h2>

<p>Armed with the sentinel file concept and an understanding of the four
API behavioral categories, let’s look at concrete implementations of
each pattern.</p>

<hr />

<h3>Pattern 1 - Idempotent API with output</h3>

<p>The simplest case. Query the resource first - if it exists capture the
output and write the sentinel. If it doesn’t exist, create it, capture
the output, and write the sentinel. Either way you end up with a
sentinel containing meaningful content.</p>

<p>The SQS queue creation is a good example:</p>

<pre><code>sqs-queue:
    @queue="$$(aws sqs list-queues \
        --query 'QueueUrls[?contains(@, `$(QUEUE_NAME)`)]|[0]' \
        --output text --profile $(AWS_PROFILE) 2&gt;&amp;1)"; \
    if echo "$$queue" | grep -q 'error\|Error'; then \
        echo "ERROR: list-queues failed: $$queue" &gt;&amp;2; \
        exit 1; \
    elif [[ -z "$$queue" || "$$queue" = "None" ]]; then \
        queue="$(QUEUE_NAME)"; \
        aws sqs create-queue --queue-name $(QUEUE_NAME) \
            --profile $(AWS_PROFILE); \
    fi; \
    test -e $@ || echo "$$queue" &gt; $@
</code></pre>

<p>Notice <code>--query</code> doing the filtering work before the output reaches
the shell. No <code>jq</code>, no pipeline - the AWS CLI extracts exactly what we
need. The result is either a queue URL or empty. If empty we
create. Either way <code>$$queue</code> ends up with a value and the sentinel is
written exactly once.</p>

<p>The EventBridge rule follows the same pattern:</p>

<pre><code>lambda-eventbridge-rule:
    @rule="$$(aws events describe-rule \
            --name $(RULE_NAME) \
            --profile $(AWS_PROFILE) 2&gt;&amp;1)"; \
    if echo "$$rule" | grep -q 'ResourceNotFoundException'; then \
        rule="$$(aws events put-rule \
            --name $(RULE_NAME) \
            --schedule-expression "$(SCHEDULE_EXPRESSION)" \
            --state ENABLED \
            --profile $(AWS_PROFILE))"; \
    elif echo "$$rule" | grep -q 'error\|Error'; then \
        echo "ERROR: describe-rule failed: $$rule" &gt;&amp;2; \
        exit 1; \
    fi; \
    test -e $@ || echo "$$rule" &gt; $@
</code></pre>

<p>Same shape - query, create if missing, write sentinel once.</p>

<hr />

<h3>Pattern 2 - Idempotent API with no output</h3>

<p>Some APIs succeed silently. <code>aws s3api
put-bucket-notification-configuration</code> is the canonical example - it
happily overwrites an existing configuration and returns nothing. No
output means nothing to write to the sentinel.</p>

<p>The solution is to synthesize a value using <code>&amp;&amp;</code>:</p>

<pre><code>define notification_configuration =
use JSON;

my $lambda_function = $ENV{lambda_function};
my $function_arn = decode_json($lambda_function)-&gt;{Configuration}-&gt;{FunctionArn};

my $configuration = {
 LambdaFunctionConfigurations =&gt; [ {
   LambdaFunctionArn =&gt; $function_arn,
   Events =&gt; [ split ' ', $ENV{s3_event} ],
  }
 ]
};

print encode_json($configuration);
endef

export s_notification_configuration = $(value notification_configuration)

lambda-s3-trigger: lambda-s3-permission
        temp="$$(mktemp)"; trap 'rm -f "$$temp"' EXIT; \
        lambda_function="$$(cat lambda-function)"; \
        echo $$(s3_event="$(S3_EVENT)" lambda_function="$$lambda_function" \
          perl -e "$$s_notification_configuration") &gt; $$temp; \
        trigger="$$(aws s3api put-bucket-notification-configuration \
            --bucket $(BUCKET_NAME) \
            --notification-configuration file://$$temp \
            --profile $(AWS_PROFILE) &amp;&amp; cat $$temp)"; \
        test -e $@ || echo "$$trigger" &gt; $@
</code></pre>

<p>The <code>&amp;&amp; cat $$temp</code> is the key. If the API call succeeds the <code>&amp;&amp;</code>
fires and <code>$$trigger</code> gets the configuration JSON string - something meaningful to
write to the sentinel. If the API call fails the <code>&amp;&amp;</code> doesn’t fire, the
assignment returns non-zero, and the recipe aborts before the sentinel
line ever runs.</p>

<p>Using a
<a href="https://blog.tbcdevelopmentgroup.com/2025-11-13-post.html">scriptlet (<code>s_notification_configuration</code>)</a>
might seem like overkill, but it’s worth not having to fight shell
quoting issues!</p>

<p>Writing the JSON that many AWS API calls require to a temporary
file is usually better than passing it as a string on the command
line. Unless you wrap the JSON in quotes you’ll be fighting shell
quoting and interpolation issues…and of course you can write your
scriptlets in Perl or Python!</p>
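<p>A quick before-and-after sketch (the queue URL and attributes are
hypothetical; the AWS CLI’s <code>file://</code> prefix reads a parameter value
from a file):</p>

<pre><code># inline JSON: every quote is a negotiation with the shell
aws sqs set-queue-attributes --queue-url "$queue_url" \
    --attributes '{"VisibilityTimeout":"60"}'

# temp file: no quoting fights, and easy to generate from a scriptlet
temp="$(mktemp)"
echo '{"VisibilityTimeout":"60"}' &gt; "$temp"
aws sqs set-queue-attributes --queue-url "$queue_url" \
    --attributes "file://$temp"
</code></pre>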

<hr />

<h3>Pattern 3 - Non-idempotent API</h3>

<p>Some APIs are not idempotent - they fail with a
<code>ResourceConflictException</code> or similar if the resource already
exists. <code>aws lambda add-permission</code> and <code>aws lambda create-function</code>
are both in this category. There is no “create or update” variant -
you must check existence first and only call the write API if the
resource is missing.</p>

<p>The Lambda S3 permission target is a good example:</p>

<pre><code>lambda-s3-permission: lambda-function s3-bucket
        @permission="$$(aws lambda get-policy \
                --function-name $(FUNCTION_NAME) \
                --profile $(AWS_PROFILE) 2&gt;&amp;1)"; \
        if echo "$$permission" | grep -q 'ResourceNotFoundException' || \
           ! echo "$$permission" | grep -q s3.amazonaws.com; then \
            permission="$$(aws lambda add-permission \
                --function-name $(FUNCTION_NAME) \
                --statement-id s3-trigger-$(BUCKET_NAME) \
                --action lambda:InvokeFunction \
                --principal s3.amazonaws.com \
                --source-arn arn:aws:s3:::$(BUCKET_NAME) \
                --profile $(AWS_PROFILE))"; \
        elif echo "$$permission" | grep -q 'error\|Error'; then \
            echo "ERROR: get-policy failed: $$permission" &gt;&amp;2; \
            exit 1; \
        fi; \
        if [[ -n "$$permission" ]]; then \
            test -e $@ || echo "$$permission" &gt; $@; \
        else \
            rm -f $@; \
        fi
</code></pre>

<p>A few things worth noting here…</p>

<ul>
<li><code>get-policy</code> returns the full policy document which may contain
multiple statements - we check for the presence of
<code>s3.amazonaws.com</code> specifically using <code>! grep -q</code> rather than just
checking for an empty response. This handles the case where a policy
exists but doesn’t yet have the S3 permission we need.</li>
<li>The sentinel is only written if <code>$$permission</code> is non-empty after
the if block. This covers the case where <code>get-policy</code> returns
nothing and <code>add-permission</code> also fails - the sentinel stays absent
and the next <code>make</code> run will try again.</li>
<li>We capture stderr into our <code>bash</code> variable so we can distinguish
“the resource does not exist” from any other error. When other
failures are possible <code>2&gt;&amp;1</code> combined with specific error-string
matching gives you both idempotency and visibility. Swallowing errors
silently (<code>2&gt;/dev/null</code>) is how idempotent bugs are born.</li>
</ul>


<hr />

<h3>Pattern 4 - Failure handling</h3>

<p>This isn’t a separate pattern so much as a discipline that applies to
all three of the above. There are two mechanisms depending on how the
sentinel is written.</p>

<p><em>Case 1:</em> When the sentinel is written directly by the command:</p>

<pre><code>aws lambda create-function ... &gt; $@ || rm -f $@
</code></pre>

<p><code>|| rm -f $@</code> ensures that if the command fails the partial or empty
sentinel is immediately cleaned up. Without it <code>make</code> sees the file on
the next run and silently skips the recipe - an idempotent bug.</p>

<p><em>Case 2:</em> When the sentinel is written by capturing output to a variable first:</p>

<pre><code>if [[ -n "$$value" ]]; then \
    test -e $@ || echo "$$value" &gt; $@; \
else \
    rm -f $@; \
fi
</code></pre>

<p>The <code>else rm -f $@</code> serves the same purpose. If the variable is empty
- because the API call failed - the sentinel is removed. If the
sentinel doesn’t exist yet nothing is written. Either way the next
<code>make</code> run will try again.</p>

<p>In both cases the goal is the same: a sentinel file should only exist
when the underlying resource exists. A stale sentinel is worse than no
sentinel.</p>

<p>Depending on how your recipe is written you may not need to test
the variable that captures the output at all. In our Makefiles we set
<code>.SHELLFLAGS := -ec</code>, which makes the shell exit immediately - failing
the recipe - if any command fails. This means targets that only write
<code>$@</code> as their final step - like our <code>sqs-queue</code> target above
- don’t need explicit failure handling. <code>make</code> will die loudly and the
sentinel won’t be written. In that case you don’t even need to test
<code>$$value</code> and can simplify writing of the sentinel file like this:</p>

<pre><code>test -e $@ || echo "$$value" &gt; $@
</code></pre>
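<p>For reference, the preamble that enables this behavior is just two
lines at the top of the <code>Makefile</code>:</p>

<pre><code># bash gives us [[ ]], brace expansion, etc.; -e aborts the recipe
# on the first failing command, -c is required to pass the recipe
SHELL := /usr/bin/env bash
.SHELLFLAGS := -ec
</code></pre>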

<hr />

<h2>Conclusion</h2>

<p>Creating AWS resources can be done with several different tools…all
of them eventually call AWS APIs and process the return payloads.
Each of these tools has its place. Each adds something. Each also
carries its own complexity, dependencies, and learning curve.</p>

<p>For a small project or a focused set of resources - the kind a solo
developer or small team manages for a specific application - you don’t
need tools with a high cognitive or resource load. You can use the
tools you already have on your belt: <code>make</code>, <code>bash</code>, [<em>insert favorite
scripting language here</em>], and <code>aws</code>. And you can leverage those same tools
equally well with <code>gcloud</code> or <code>az</code>.</p>

<p>The four patterns we’ve covered handle every AWS API behavior you’ll
encounter:</p>

<ul>
<li>Query first, create only if missing, write a sentinel</li>
<li>Synthesize output when the API has none</li>
<li>Always check before calling a non-idempotent API</li>
<li>Clean up on failure with <code>|| rm -f $@</code></li>
</ul>


<p>These aren’t new tricks - they’re straightforward applications of
tools that have been around for decades. <code>make</code> has been managing
file-based dependencies since 1976. The sentinel file pattern predates
cloud computing entirely. We’re just applying them to a new problem.</p>

<p>One final thought. The idempotent bug - running a failing process
repeatedly and getting the same error every time - is the mirror image
of what we’ve built here. Our goal is idempotent success: run it once,
it works. Run it again, it still works. Run it a hundred times,
nothing changes. <code>|| rm -f $@</code> is what separates idempotent success
from idempotent failure - it ensures that a bad run always leaves the
door open for the next attempt rather than cementing the failure in
place with a stale sentinel.</p>

<p>Your shed is already well stocked. Sometimes the right tool for the
job is the one you’ve had hanging on the wall for thirty years.</p>

<hr />

<h2>Further Reading</h2>

<ul>
<li>“Advanced Bash-Scripting Guide” - https://tldp.org/LDP/abs/html/index.html</li>
<li>“GNU Make” - https://www.gnu.org/software/make/manual/html_node/index.html</li>
<li>Dave Oswald, “Perl One Liners for the Shell” (Perl conference presentation): https://www.slideshare.net/slideshow/perl-oneliners/77841913</li>
<li>Peteris Krumins, “Perl One-Liners” (No Starch Press): https://nostarch.com/perloneliners</li>
<li>Sundeep Agarwal, “Perl One-Liners Guide” (free online): https://learnbyexample.github.io/learn_perl_oneliners/</li>
<li>AWS CLI JMESPath query documentation: https://docs.aws.amazon.com/cli/latest/userguide/cli-usage-filter.html</li>
</ul>

]]></content>
  </entry>

  <entry>
    <title type="html"><![CDATA[Six Ways to Use AI Without Giving Up the Keys]]></title>
    <link href="http://blog.tbcdevelopmentgroup.com/2026-03-18-post.html"/>
    <published>2026-03-18T00:00:00-04:00</published>
    <updated>2026-03-20T07:33:06-04:00</updated>
    <id>http://blog.tbcdevelopmentgroup.com/2026-03-18-post.html</id>
    <content type="html"><![CDATA[<h2>Abstract</h2>

<p>Even if you’re skeptical about AI writing your code, you’re leaving
time on the table.</p>

<p>Many developers have been slow to adopt AI in their workflows, and
that’s understandable. As AI coding assistants become more capable the
anxiety is real - nobody wants to feel like they’re training their
replacement. But we’re not there yet. Skilled developers who
understand logic, mathematics, business needs and user experience will
be essential to guide application development for the foreseeable
future.</p>

<p>The smarter play is to let AI handle the parts of the job you never
liked anyway - the documentation, the release notes, the boilerplate
tests - while you stay focused on the work that actually requires your
experience and judgment. You don’t need to go all in on day one. Here
are six places to start.</p>

<hr />

<h2>1. Unit Test Writing</h2>

<p>Writing unit tests is one of those tasks most developers know they
should do more of and few enjoy doing. It’s methodical,
time-consuming, and the worst time to write them is when the code
reviewer asks if they pass.</p>

<p>TDD is a fine theory. In practice, writing tests before you’ve vetted
your design means rewriting your tests every time the design evolves -
which is often. Most experienced developers write tests after the
design has settled, and that’s a perfectly reasonable approach.</p>

<p>The important thing is that they get written at all. Even a test that
simply validates <code>use_ok(qw(Foo::Bar))</code> puts scaffolding in place that
can be expanded when new features are added or behavior changes. A
placeholder is infinitely more useful than nothing.</p>

<p>This is where AI earns its keep. Feed it a function or a module and it
will identify the code paths that need coverage - the happy path, the
edge cases, the boundary conditions, the error handling. It will
suggest appropriate test data sets including the inputs most likely to
expose bugs: empty strings, nulls, negative numbers, off-by-one values
- the things a tired developer skips.</p>

<p>You review it, adjust it, own it. AI did the mechanical work of
thinking through the permutations. You make sure it reflects how your
code is actually used in the real world.</p>

<hr />

<h2>2. Documentation</h2>

<blockquote><p>“Documentation is like sex: when it’s good, it’s very, very good;
and when it’s bad, it’s better than nothing.” - said someone
somewhere.</p></blockquote>

<p>Of course, there are developers who justify their disdain for writing
documentation with one of two arguments (or both):</p>

<ol>
<li>The code is the documentation</li>
<li>Documentation is wrong the moment it is written</li>
</ol>


<p>It is true that the single source of truth regarding what code
actually does is the code itself. What it is <em>supposed</em> to do is what
documentation should be all about. When they diverge it’s either a
defect in the software or a misunderstanding of the business
requirement captured in the documentation.</p>

<p>Code that changes rapidly is difficult to document, but the intent of
the code is not. Especially now with AI. It is trivial to ask AI to
review the current documentation and align it with the code, negating
point #2.</p>

<p>Feed AI a module and ask it to generate POD. It will describe what the
code does. Your job is to verify that what it does is what it <em>should</em>
do - which is a much faster review than writing from scratch.</p>

<hr />

<h2>3. Release Notes</h2>

<p>If you’ve read this far you may have noticed the irony - this post was
written by someone who just published a blog post about automating
release notes with AI. So consider this section field-tested.</p>

<p>Release notes sit at the intersection of everything developers
dislike: writing prose, summarizing work they’ve already mentally
moved on from, and doing it with enough clarity that non-developers
can understand what changed and why it matters. It’s the last thing
standing between you and shipping.</p>

<p>The problem with feeding a git log to AI is that git logs are written
for developers in the moment, not for readers after the fact. “Fix the
thing” and “WIP” are not useful release note fodder.</p>

<p>The better approach is to give AI real context - a unified diff, a
file manifest, and the actual source of the changed files. With those
three inputs AI can identify the primary themes of a release, group
related changes, and produce structured notes that actually reflect
the architecture rather than just the line changes.</p>

<p>A simple <code>make release-notes</code> target can generate all three assets
automatically from your last git tag. Upload them, prompt for your
preferred format, and you have a first draft in seconds rather than
thirty minutes. <a href="https://blog.tbcdevelopmentgroup.com/2026-03-17-post.html">Here’s how I built
it.</a></p>

<p>You still edit it. You add color, context, and the business rationale
that only you know. But the mechanical work of reading every diff and
turning it into coherent prose? Delegated.</p>

<hr />

<h2>4. Bug Triage</h2>

<p>Debugging can be the most frustrating and the most rewarding
experience for a developer. Most developers are predisposed to love a
puzzle and there is nothing more puzzling than a race condition or a
dangling pointer. Even though <a href="https://debuggingrules.com/">books and
posters</a> have been written about
debugging it is sometimes difficult to know exactly where to start.</p>

<p>Describe the symptoms, share the relevant code, toss your theory at
it. AI will validate or repudiate without ego - no colleague awkwardly
telling you you’re wrong. It will suggest where to look, what
telemetry to add, and before you know it you’re instrumenting the code
that should have been instrumented from the start.</p>

<p>AI may not find your bug, but it will be a fantastic bug buddy.</p>

<hr />

<h2>5. Code Review</h2>

<p>Since I’ve started using AI I’ve found that one of the most valuable
things I can do with it is to give it my first draft of a piece of
code. Anything more than a dozen or so lines is fair game.</p>

<p>Don’t waste your time polishing a piece of lava that just spewed from
your noggin. There’s probably some gold in there and there’s
definitely some ash. That’s ok. You created the framework for a
discussion on design and implementation. Before you know it you have
settled on a path.</p>

<p>AI’s strength is pattern recognition. It will recognize when your code
needs to adopt a different pattern or when you nailed it. Get
feedback. Push back. It’s not a one-way conversation. Question the
approach, flag the inconsistencies that don’t feel right - your input
into that review process is critical in evolving the molten rock into
a solid foundation.</p>

<hr />

<h2>6. Legacy Code Deciphering</h2>

<p>What defines “Legacy Code?” It’s a great question and hard to
answer. And not to get too racy again, but as it has been said of
pornography, I can’t exactly define it but I know it when I see it.</p>

<p>Fortunately (and yes, I do mean fortunately) I have been involved in
maintaining legacy code since the day I started working for a
family-run business in 1998. The code I maintained there was literally
born in the late ’70s and still, to this day, generates millions of
dollars. You will never learn more about coding than by maintaining
legacy code.</p>

<p>These are the major characteristics of legacy code from my experience
(in order of visibility):</p>

<ol>
<li>It generates so much money for a company they could not possibly
think of it being unavailable.</li>
<li>It is monolithic and may in fact consist of modules in multiple
languages.</li>
<li>It has grown organically over the decades.</li>
<li>It is more than 10 years old.</li>
<li>The business rules are not documented, opaque and can only be
discerned by a careful reading of the software. Product managers
and users <em>think</em> they know what the software does, but probably do
not have the entire picture.</li>
<li>It cannot easily be re-written (by humans) because of #5.</li>
<li>It contains as much dead code - code no longer serving any useful
purpose - as it does useful code.</li>
</ol>


<p>I once maintained a C program that searched an ISAM database of legal
judgments. The code had been ported from a proprietary in-memory
binary tree implementation and was likely older than most of the
developers reading this post. The business model was straightforward
and terrifying - miss a judgment and we indemnify the client. Every
change had to be essentially idempotent. You weren’t fixing code, you
were performing surgery on a patient who would sue you if the scar was
in the wrong place.</p>

<p>I was fortunate - there were no paydays for a
client on my watch. But I wish I’d had AI back then. Not to write the
code. To help me read it.</p>

<p>Now, where does AI come in? Points 5, 6, and definitely 7.</p>

<p>Throw a jabberwocky of a function at AI and ask it what it does. Not
what it <em>should</em> do - what it <em>actually</em> does. The variable names are
cryptic, the comments are either missing or lying, and the original
author left the company during the Clinton administration. AI doesn’t
care. It reads the code without preconception and gives you a plain
English explanation of the logic, the assumptions baked in, and the
side effects you never knew existed.</p>

<p>That explanation becomes your documentation. Those assumptions become
your unit tests. Those side effects become the bug reports you never
filed because you didn’t know they were bugs.</p>

<p>Dead code is where AI particularly shines. Show it a module and ask
what’s unreachable. Ask what’s duplicated. Ask what hasn’t been
touched in a decade but sits there quietly terrifying anyone who
considers deleting it. AI will give you a map of the minefield so you
can walk through it rather than around it forever.</p>

<p>Along the way AI will flag security vulnerabilities you never knew
were there - input validation gaps, unsafe string handling,
authentication assumptions that made sense in 1998 and are a liability
today. It will also suggest where instrumentation is missing, the
logging and telemetry that would have made every debugging session for
the last twenty years shorter. You can’t go back and add it to
history, but you can add it now before the next incident.</p>

<p>The irony of legacy code is that the skills required to understand it
- patience, pattern recognition, the ability to hold an entire system
in your head - are exactly the skills AI complements rather than
replaces. You still need to understand the business. AI just helps you
read the hieroglyphics.</p>

<hr />

<h2>Conclusion</h2>

<p>None of the six items on this list require you to hand over the
keys. You are still the architect, the decision maker, the person who
understands the business and the user. AI is the tireless assistant
who handles the parts of the job that drain your energy without
advancing your craft.</p>

<p>The developers who thrive in the next decade won’t be the ones who
resisted AI the longest. They’ll be the ones who figured out earliest
how to delegate the tedious, the mechanical, and the repetitive - and
spent the time they saved on the work that actually requires a human.</p>

<p>You don’t have to go all in. Start with a unit test. Paste some legacy
code and ask AI to explain it or document it. Think of AI as that
senior developer you go to with the tough problems - the one who has
seen everything, judges nothing, and is available at 3am when the
production system is on fire.</p>

<p>Only this one never sighs when you knock on the door.</p>

<hr />
]]></content>
  </entry>

  <entry>
    <title type="html"><![CDATA[Stop Writing Release Notes: Accelerate with AI]]></title>
    <link href="http://blog.tbcdevelopmentgroup.com/2026-03-17-post.html"/>
    <published>2026-03-17T14:51:59-04:00</published>
    <updated>2026-03-17T14:51:59-04:00</updated>
    <id>http://blog.tbcdevelopmentgroup.com/2026-03-17-post.html</id>
    <content type="html"><![CDATA[<h2>The Problem: Generating Release Notes is Boring</h2>

<p>You’ve just finished a marathon refactoring - perhaps splitting a
monolithic script into proper modules - and now you need to write the
release notes. You could feed an AI a messy git log, but if you want
high-fidelity summaries that actually understand your architecture,
you need to provide better context.</p>

<h2>The Solution: AI Loves Boring Tasks</h2>

<p>…and is pretty good at them too!</p>

<p>Instead of manually describing changes or hoping the AI can interpret my
ChangeLog, I’ve automated the production of three ephemeral “Sidecar”
assets. These are generated on the fly, uploaded to the LLM, and then
purged after analysis - no storage required.</p>

<h3>The Assets</h3>

<ul>
<li><strong>The Manifest (<code>.lst</code>)</strong>: A simple list of every file touched,
ensuring the AI knows the exact scope of the release.</li>
<li><strong>The Logic (<code>.diffs</code>)</strong>: A unified diff (using <code>git diff
--no-ext-diff</code>) that provides the “what” and “why” of every code
change.</li>
<li><strong>The Context (<code>.tar.gz</code>)</strong>: This is the “secret sauce.” It contains
the full source of the changed files, allowing the AI to see the
final implementation - not just the delta.</li>
</ul>


<hr />

<h3>The <code>Makefile</code> Implementation</h3>

<p>If you’ve read any of my <a href="https://blog.tbcdevelopmentgroup.com/2023-01-12-post.html">blog
posts</a> you
know I’m a huge <code>Makefile</code> fan. To automate this I’m naturally going
to add a recipe to my <code>Makefile</code> or <code>Makefile.am</code>.</p>

<p>First, we explicitly set the shell to <code>/usr/bin/env bash</code> to ensure features
like brace expansion work consistently across all dev environments.</p>

<pre><code># Ensure a portable bash environment for advanced shell features
SHELL := /usr/bin/env bash

.PHONY: release-notes clean-local

# Default to the version file, but allow command-line overrides
VERSION ?= $(shell cat VERSION)

release-notes:
    @curr_ver=$(VERSION); \
    last_tag=$$(git tag -l '[0-9]*.[0-9]*.[0-9]*' --sort=-v:refname | head -n 1); \
    diffs="release-$$curr_ver.diffs"; \
    diff_list="release-$$curr_ver.lst"; \
    diff_tarball="release-$$curr_ver.tar.gz"; \
    echo "Comparing $$last_tag to current $$curr_ver..."; \
    git diff --no-ext-diff "$$last_tag" "$$curr_ver" &gt; "$$diffs"; \
    git diff --name-only --diff-filter=AMR "$$last_tag" "$$curr_ver" &gt; "$$diff_list"; \
    tar -cf - -T "$$diff_list" --transform "s|^|release-$$curr_ver/|" | gzip &gt; "$$diff_tarball"; \
    ls -alrt release-$$curr_ver*

clean-local:
    @echo "Cleaning ephemeral release assets..."
    rm -f release-*.{tar.gz,lst,diffs}
</code></pre>

<h3>Breaking Down the Recipe</h3>

<ul>
<li><strong>The Shell Choice (<code>/usr/bin/env bash</code>)</strong>: We avoid hardcoding
paths to ensure the script finds the correct Bash path on macOS,
Linux, or inside a container.</li>
<li><strong>The Version Override (<code>VERSION ?=</code>)</strong>: This allows the
“pre-flight” trick: running <code>make release-notes VERSION=HEAD</code> to
iterate on notes before you’ve actually tagged the release (see the
usage sketch after this list).</li>
<li><strong>Smart Tag Discovery (<code>--sort=-v:refname</code>)</strong>: Using <code>v:refname</code>
forces Git to use semantic versioning logic (so <code>1.10.0</code> correctly
follows <code>1.2.0</code>), while the glob pattern filters out “noisy”
non-version tags.</li>
<li><strong>The Diff Filter (<code>--diff-filter=AMR</code>)</strong>: This ensures the tarball
only includes files that actually exist (Added, Modified, or
Renamed). If a release deleted a file, this filter prevents <code>tar</code>
from erroring out when it can’t find the missing file on disk.</li>
<li><strong>The Cleanup Crew (<code>clean-local</code>)</strong>: Removes the ephemeral
artifacts using <code>bash</code> brace expansion.</li>
</ul>
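<p>Putting it together, typical invocations look like this (assuming a
<code>VERSION</code> file and at least one semver tag exist):</p>

<pre><code># compare the last semver tag to the tagged release in VERSION
make release-notes

# pre-flight: iterate on notes before tagging, comparing against HEAD
make release-notes VERSION=HEAD

# purge the ephemeral assets when you're done
make clean-local
</code></pre>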


<h3>The AI Prompt</h3>

<p>Once your assets are generated, upload them to AI and use a prompt that
enforces your specific formatting standards.</p>

<p><strong>The Prompt:</strong></p>

<pre><code>I have uploaded a file manifest (`.lst`), a unified diff
(`.diffs`), and a tarball of the source code (`.tar.gz`). Please
analyze these to identify the primary themes of this
release. Generate a release entry using this protocol:

1. Separate versions with 71 dashes.
2. Include a high-level executive summary paragraph.
3. Indent bullet points exactly 21 spaces to maintain a vertical column.
4. Group changes under category labels like `Core Bedrock:` or `Miniserver:`.
5. Provide plain-text ASCII only inside a code block.
</code></pre>

<h3>The Result</h3>

<p>In a recent dry run for Bedrock 3.8.8, this exact workflow allowed the
AI to correctly identify that <code>bedrock-miniserver.pl.in</code> was
refactored into <code>Bedrock::MiniServer</code> and <code>Bedrock::MiniServer::CLI</code>,
while accurately noting that <code>Bedrock::Error</code> moved to the
<code>Bedrock-Core</code> distribution.</p>

<p>In actuality, version 3.8.8 focuses on the primary goals of MiniServer
maturation (more batteries included), Apache decoupling, and
distribution hygiene. Here’s AI’s version:</p>

<pre><code>-----------------------------------------------------------------------
Bedrock 3.8.8
-----------------------------------------------------------------------
This release focuses on the maturation of the MiniServer component, 
decoupling core Bedrock logic from Apache-specific dependencies, and 
improving distribution hygiene. Key architectural changes include 
refactoring the MiniServer into modular components and moving 
exception handling into the Bedrock-Core distribution.

2026-03-17 - 3.8.8 - MiniServer Maturation and Apache Decoupling
                     Miniserver:
                     - Refactored bedrock-miniserver.pl into modular 
                       Bedrock::MiniServer and Bedrock::MiniServer::CLI.
                     - Implemented zero-config scaffolding to 
                       automatically create application trees.
                     - Integrated full Bedrock configuration pipeline 
                       for parity with Apache environments.
                     - Updated bedrock_server_config to support both 
                       getter and setter operations.

                     Core:
                     - Moved Bedrock::Error and Bedrock::Exception to 
                       the Bedrock-Core distribution.
                     - Introduced Bedrock::FauxHandler as a production-
                       ready alias for test handlers.
                     - Added dist_dir() to BLM::Startup::Bedrock to 
                       expose distribution paths to templates.

                     Fixes:
                     - Demoted Apache-specific modules (mod_perl2, 
                       Apache2::Request) to optional recommendations.
                     - Improved Bedrock::Test::FauxHandler to handle 
                       caller-supplied loggers and safe destruction.
</code></pre>

<h2>Conclusion</h2>

<p>As I mentioned in a response to a <a href="https://medium.com/@rclauer/not-true-ai-programming-tools-are-accelerators-for-seasoned-professionals-that-understand-how-and-49c4090367e7">recent Medium
article</a>,
AI can be an accelerator for seasoned professionals. You’re not
cheating. You did the work. AI does the wordsmithing. You edit, add color, and
ship. What used to take 30 minutes now takes 3. <em>Now that’s working
smarter, not harder!</em></p>

<h2>Pro-Tip</h2>

<p>Add this to the top of your <code>Makefile</code></p>

<pre><code>SHELL := /usr/bin/env bash

# Default to the version file, but allow command-line overrides
VERSION ?= $(shell cat VERSION)
</code></pre>

<p>Copy this to a file named <code>release-notes.mk</code></p>

<pre><code>.PHONY: release-notes clean-local

release-notes:
    @curr_ver=$(VERSION); \
    last_tag=$$(git tag -l '[0-9]*.[0-9]*.[0-9]*' --sort=-v:refname | head -n 1); \
    diffs="release-$$curr_ver.diffs"; \
    diff_list="release-$$curr_ver.lst"; \
    diff_tarball="release-$$curr_ver.tar.gz"; \
    echo "Comparing $$last_tag to current $$curr_ver..."; \
    git diff --no-ext-diff "$$last_tag" "$$curr_ver" &gt; "$$diffs"; \
    git diff --name-only --diff-filter=AMR "$$last_tag" "$$curr_ver" &gt; "$$diff_list"; \
    tar -cf - -T "$$diff_list" --transform "s|^|release-$$curr_ver/|" | gzip &gt; "$$diff_tarball"; \
    ls -alrt release-$$curr_ver*

clean-local:
    @echo "Cleaning ephemeral release assets..."
    rm -f release-*.{tar.gz,lst,diffs}
</code></pre>

<p>Then add <code>release-notes.mk</code> to your <code>Makefile</code></p>

<pre><code>include release-notes.mk
</code></pre>


]]></content>
  </entry>

  <entry>
    <title type="html"><![CDATA[RIP nginx - Long Live Apache]]></title>
    <link href="http://blog.tbcdevelopmentgroup.com/2026-02-23-post.html"/>
    <published>2026-02-23T00:00:00-05:00</published>
    <updated>2026-03-17T11:01:05-04:00</updated>
    <id>http://blog.tbcdevelopmentgroup.com/2026-02-23-post.html</id>
    <content type="html"><![CDATA[<h1>RIP nginx - Long Live Apache</h1>

<p><strong>nginx is dead</strong>. Not metaphorically dead. Not “falling out of favor”
dead. <em>Actually, officially, put-a-date-on-it dead</em>.</p>

<p>In November 2025 the Kubernetes project announced the retirement of
Ingress NGINX - the controller running ingress for a significant
fraction of the world’s Kubernetes clusters. Best-effort maintenance
until March 2026. After that: no releases, no bugfixes, no security
patches. GitHub repositories go read-only. Tombstone in place.</p>

<p>And before the body was even cold, we learned why. IngressNightmare -
five CVEs disclosed in March 2025, headlined by CVE-2025-1974, rated
9.8 critical. Unauthenticated remote code execution. Complete cluster
takeover. No credentials required. Wiz Research found over 6,500
clusters with the vulnerable admission controller publicly exposed to
the internet, including Fortune 500 companies. 43% of cloud
environments vulnerable. The root cause wasn’t a bug that could be
patched cleanly - it was an architectural flaw baked into the design
from the beginning. And the project that ran ingress for millions of
production clusters was, in the end, sustained by one or two people
working in their spare time.</p>

<p>Meanwhile Apache has been quietly running the internet for 30 years,
governed by a foundation, maintained by a community, and looking
increasingly like the adult in the room.</p>

<p>Let’s talk about how we got here.</p>

<h2>Apache Was THE Web Server</h2>

<p>Before we talk about what went wrong, let’s remember what Apache
actually was. Not a web server. THE web server. At its peak Apache
served over 70% of all websites on the internet. It didn’t win that
position by accident - it won it by solving every problem the early
web threw at it. Virtual hosting. SSL. Authentication. Dynamic content
via CGI and then mod_perl. Rewrite rules. Per-directory
configuration. Access control. Compression. Caching. Proxying. One by
one, as the web evolved, Apache evolved with it, and the industry
built on top of it.</p>

<p>Apache wasn’t just infrastructure. It was the platform on which the
commercial internet was built. Every hosting provider ran it. Every
enterprise deployed it. Every web developer learned it. It was as
foundational as TCP/IP - so foundational that most people stopped
thinking about it, the way you stop thinking about running water.</p>

<p>Then nginx showed up with a compelling story at exactly the right
moment.</p>

<h2>The Narrative That Stuck</h2>

<p>The early 2000s brought a new class of problem - massively concurrent
web applications, long-polling, tens of thousands of simultaneous
connections. The C10K problem was real and Apache’s prefork MPM - one
process per connection - genuinely struggled under that specific load
profile. nginx’s event-driven architecture handled it elegantly. The
benchmarks were dramatic. The config was clean and minimal, a breath
of fresh air compared to Apache’s accumulated complexity. nginx felt
modern. Apache felt like your dad’s car.</p>

<p>The “Apache is legacy” narrative took hold and never let go - even
after the evidence for it evaporated.</p>

<p>Apache gained <code>mpm_event</code>, bringing the same non-blocking I/O and
async connection handling that nginx was celebrated for. The
performance gap on concurrent connections essentially closed. Then
CDNs solved the static file problem at the architectural level - your
static files live in S3 now, served from a Cloudflare edge node
milliseconds from your user, and your web server never sees them. The
two pillars of the nginx argument - concurrency and static file
performance - were addressed, one by Apache’s own evolution and one by
infrastructure that any serious deployment should be using regardless
of web server choice.</p>

<p>But nobody reruns the benchmarks. The “legacy” label outlived the
evidence by a decade. A generation of engineers learned nginx first,
taught it to the next generation, and the assumption calcified into
received wisdom. Blog posts from 2012 are still being cited as
architectural guidance in 2025.</p>

<h2>What Apache Does That nginx Can’t</h2>

<p>Strip away the benchmark mythology and look at what these servers
actually do when you need them to do something hard.</p>

<p>Apache’s input filter chain lets you intercept the raw request byte
stream mid-flight - before the body is fully received - and do
something meaningful with it. I’m currently building a multi-server
file upload handler with real-time Redis progress tracking, proper
session authentication, and CSRF protection implemented directly in
the filter chain. Zero JavaScript upload libraries. Zero npm
dependencies. Zero supply chain attack surface. The client sends
bytes. Apache intercepts them. Redis tracks them. Done. nginx needs a
paid commercial module to get close. Or you write C. Or you route
around it to application code and wonder why you needed nginx in the
first place.</p>

<p>Apache’s phase handlers let you hook into the exact right moment of
the request lifecycle - post-read, header parsing, access control,
authentication, response - each phase a precise intervention
point. <code>mod_perl</code> embeds a full Perl runtime in the server with
persistent state, shared memory, and pre-forked workers inheriting
connection pools and compiled code across requests. <code>mod_security</code>
gives you WAF capabilities your “modern” stack is paying a vendor
for. <code>mod_cache</code> is a complete RFC-compliant caching layer that nginx
reserves for paying customers.</p>

<p>And LDAP - one of the oldest enterprise authentication requirements
there is. With <code>mod_authnz_ldap</code> it’s a few lines of config:</p>

<pre><code class="apache">AuthType Basic
AuthName "Corporate Login"
AuthBasicProvider ldap
AuthLDAPURL ldap://ldap.company.com/dc=company,dc=com
AuthLDAPBindDN "cn=apache,dc=company,dc=com"
AuthLDAPBindPassword secret
Require ldap-group cn=developers,ou=groups,dc=company,dc=com
</code></pre>

<p>Connection pooling, SSL/TLS to the directory, group membership checks,
credential caching - all native, all in config, no code required. With
nginx you’re reaching for a community module with an inconsistent
maintenance history, writing Lua, or standing up a separate auth
service and proxying to it with <code>auth_request</code> - which is just
<code>mod_authnz_ldap</code> reimplemented badly across two processes with an
HTTP round trip in the middle.</p>

<h2>Apache Includes Everything You’re Now Paying For</h2>

<p>Look at Apache’s feature set and you’re reading the history of web
infrastructure, one solved problem at a time. SSL termination? Apache
had it before cloud load balancers existed to take it off your
plate. Caching? <code>mod_cache</code> predates Redis by years. Load balancing?
<code>mod_proxy_balancer</code> was doing weighted round-robin and health checks
before ELB was a product. Compression, rate limiting, IP-based access
control, bot detection via <code>mod_security</code> - Apache had answers to all
of it before the industry decided each problem deserved its own
dedicated service, its own operations overhead, and its own vendor
relationship.</p>

<p>Apache didn’t accumulate features because it was undisciplined. It
accumulated features because the web kept throwing problems at it and
it kept solving them. The fact that your load balancer now handles SSL
termination doesn’t mean Apache was wrong to support it - it means
Apache was right early enough that the rest of the industry eventually
built dedicated infrastructure around the same idea.</p>

<p>Now look at your AWS bill. CloudFront for CDN. ALB for load balancing
and SSL termination. WAF for request filtering. ElastiCache for
caching. Cognito for authentication. API Gateway for routing. Each one
a line item. Each one a managed service wrapping functionality that
Apache has shipped for free since before most of your team was writing
code.</p>

<p>Amazon Web Services is, in a very real sense, Apache’s feature set
repackaged as paid managed infrastructure. They looked at what the web
needed, looked at what Apache had already solved, and built a business
around operating those solutions at scale so you didn’t have
to. That’s a legitimate value proposition - operations is hard and
sometimes paying AWS is absolutely the right answer. But if you’re
running a handful of servers and paying for half a dozen AWS services
to handle concerns that Apache handles natively, maybe set the Wayback
Machine to 2005, spin up Apache, and keep the credit card in your
pocket.</p>

<p>Grandpa wasn’t just ahead of his time. Grandpa was so far ahead that
Amazon built a cloud business catching up to him.</p>

<h2>So Why Did You Choose nginx?</h2>

<p>Be honest. The real reason is that you learned it first, or your last
job used it, or a blog post from 2012 told you it was the modern
choice. Maybe someone at a conference said Apache was legacy and you
nodded along because everyone else was nodding. That’s how technology
adoption works - narrative momentum, not engineering analysis.</p>

<p>But those nginx blinders have a cost. And the Kubernetes ecosystem
just paid it in full.</p>

<h2>The Cost of the nginx Blinders</h2>

<p>The nginx Ingress Controller became the Kubernetes default early in
the ecosystem’s adoption curve and the pattern stuck. Millions of
production clusters. The de-facto standard. Fortune 500 companies. The
Swiss Army knife of Kubernetes networking - and that flexibility was
precisely its undoing.</p>

<p>The “snippets” feature that made it popular - letting users inject raw
nginx config via annotations - turned out to be an unsanitizable
attack surface baked into the design. CVE-2025-1974 exploited this to
achieve unauthenticated RCE via the admission controller, giving
attackers access to all secrets across all namespaces. Complete
cluster takeover from anything on the pod network. In many common
configurations the pod network is accessible to every workload in your
cloud VPC. The blast radius was the entire cluster.</p>

<p>The architectural flaw couldn’t be fixed without gutting the feature
that made the project worth using. So it was retired instead.</p>

<p>Here is the part nobody is saying out loud: <strong>Apache could have been
your Kubernetes ingress controller all along.</strong></p>

<p>The Apache Ingress Controller exists. It supports path and host-based
routing, TLS termination, WebSocket proxying, header manipulation,
rate limiting, mTLS - everything Ingress NGINX offered, built on a
foundation with 30 years of security hardening and a governance model
that doesn’t depend on one person’s spare time. It doesn’t have an
unsanitizable annotation system because Apache’s configuration model
was designed with proper boundaries from the beginning. The full
Apache module ecosystem - <code>mod_security</code>, <code>mod_authnz_ldap</code>, the
filter chain, all of it - available to every ingress request.</p>

<p>The Kubernetes community never seriously considered it. nginx had the
mindshare, nginx got the default recommendation, nginx became the
assumed answer before the question was even finished. Apache was
dismissed as grandpa’s web server by engineers who had never actually
used it for anything hard - and so the ecosystem bet its ingress layer
on a project sustained by volunteers and crossed its fingers.</p>

<p>The nginx blinders cost the industry IngressNightmare, 6,500 exposed
clusters, and a forced migration that will consume engineering hours
across thousands of organizations in 2026. Not because Apache wasn’t
available. Because nobody looked.</p>

<p>nginx is survived by its commercial fork nginx Plus, approximately
6,500 vulnerable Kubernetes clusters, and a generation of engineers
who will spend Q1 2026 migrating to Gateway API - a migration they
could have avoided entirely.</p>

<h2>Who’s Keeping The Lights On</h2>

<p>Here’s the conversation that should happen in every architecture
review but almost never does: who maintains this and what happens when
something goes wrong?</p>

<p>For Apache the answer has been the same for over 30 years. The Apache
Software Foundation - vendor-neutral, foundation-governed, genuinely
open source. Security vulnerabilities found, disclosed responsibly,
patched. A stable API that doesn’t break your modules between
versions. Predictable release cycles. Institutional stability that has
outlasted every company that ever tried to compete with it.</p>

<p>nginx’s history is considerably more complicated. Written by Igor
Sysoev while employed at Rambler, ownership murky for years, acquired
by F5 in 2019. Now a critical piece of infrastructure owned by a
networking hardware vendor whose primary business interests may or may
not align with the open source project. nginx Plus - the version with
the features that actually compete with Apache on a level playing
field - is commercial. OpenResty, the variant most people reach for
when they need real programmability, is a separate project with its
own maintenance trajectory.</p>

<p>The Ingress NGINX project had millions of users and a maintainership
you could count on one hand. That’s not a criticism of the maintainers
- it’s an indictment of an ecosystem that adopted a critical
infrastructure component without asking who was keeping the lights on.</p>

<p>Three decades of adversarial testing by the entire internet is a
security posture no startup’s stack can match. The Apache Software
Foundation will still be maintaining Apache httpd when the company
that owns your current stack has pivoted twice and been acqui-hired
into oblivion.</p>

<h2>Long Live Apache</h2>

<p>The engineers who dismissed Apache as legacy were looking at a 2003
benchmark and calling it a verdict. They missed the server that
anticipated every problem modern infrastructure is still solving, that
powered the internet before AWS existed to charge you for the
privilege, and that was sitting right there in the Kubernetes
ecosystem waiting to be evaluated while the community was busy betting
critical infrastructure on a volunteer project with an architectural
time bomb in its most popular feature.</p>

<p>Grandpa didn’t just know what he was doing. Grandpa was building the
platform you’re still trying to reinvent - badly, in JavaScript, with
a vulnerability disclosure coming next Tuesday and a maintainer
burnout announcement the Tuesday after that.</p>

<p>The server is fine. It was always fine. Touch grass, update your
mental model, and maybe read the Apache docs before your next
architecture meeting.</p>

<p><em>RIP nginx Ingress Controller. 2015-2026. Maintained by one guy in his
spare time. Missed by the 43% of cloud environments that probably
should have asked more questions.</em></p>

<h3>Sources</h3>

<ul>
<li>IngressNightmare - CVE details and exposure statistics
Wiz Research, March 24, 2025
https://www.wiz.io/blog/ingress-nginx-kubernetes-vulnerabilities</li>
<li>Ingress NGINX Retirement Announcement
Kubernetes SIG Network and Security Response Committee, November 11, 2025
https://kubernetes.io/blog/2025/11/11/ingress-nginx-retirement/</li>
<li>Ingress NGINX CVE-2025-1974 - Official Kubernetes Advisory
Kubernetes, March 24, 2025
https://kubernetes.io/blog/2025/03/24/ingress-nginx-cve-2025-1974/</li>
<li>Transitioning Away from Ingress NGINX - Maintainership and architectural analysis
Google Open Source Blog, February 2026
https://opensource.googleblog.com/2026/02/the-end-of-an-era-transitioning-away-from-ingress-nginx.html</li>
<li>F5 Acquisition of nginx
F5 Press Release, March 2019
https://www.f5.com/company/news/press-releases/f5-acquires-nginx-to-bridge-netops-and-devops</li>
</ul>


<p><em>Disclaimer: This article was written with AI assistance during a long
discussion on the features and history of Apache and nginx, drawing on
my experience maintaining and using Apache over the last 20+
years. The opinions, technical observations, and arguments are
entirely my own. I am in no way affiliated with the ASF, nor do I have
any financial interest in promoting Apache. I have been using and
benefiting from Apache since 1998 and continue to discover features
and capabilities that surprise me even to this day.</em></p>
]]></content>
  </entry>

  <entry>
    <title type="html"><![CDATA[Go Ahead ‘make’ My Day (Part III)]]></title>
    <link href="http://blog.tbcdevelopmentgroup.com/2025-11-14-post.html"/>
    <published>2025-11-14T00:00:00-05:00</published>
    <updated>2026-03-17T11:23:42-04:00</updated>
    <id>http://blog.tbcdevelopmentgroup.com/2025-11-14-post.html</id>
    <content type="html"><![CDATA[<h1>Go Ahead ‘make’ My Day (Part III)</h1>

<p>This is the last in a 3-part series on <em>Scriptlets</em>. You can catch up
by reading our introduction and dissection of <em>Scriptlets</em>.</p>

<ul>
<li><a href="https://blog.tbcdevelopmentgroup.com/2023-01-12-post.html">Part I - <em>An introduction to scriptlets</em></a></li>
<li><a href="https://blog.tbcdevelopmentgroup.com/2025-11-13-post.html">Part II - <em>The anatomy of a scriptlet</em></a></li>
</ul>


<p>In this final part, we talk about restraint - the discipline that
keeps a clever trick from turning into a maintenance hazard.</p>

<h2>That uneasy feeling…</h2>

<p>So you are starting to write a few scriptlets and it seems pretty
cool. But something doesn’t feel quite right…</p>

<p>You’re editing a <code>Makefile</code> and suddenly you feel anxious.  Ah, you
expected syntax highlighting, linting, proper indentation, and maybe
that warm blanket of static analysis. So when we drop a 20-line
chunk of Perl or Python into our <code>Makefile</code>, our inner OCD alarms go
off. No highlighting. No linting. Just raw text.</p>

<p>The discomfort isn’t a flaw - it’s feedback. It tells you when you’ve
added too much salt to the soup.</p>

<h2>A scriptlet is not a script!</h2>

<p>A <em>scriptlet</em> is a small, focused snippet of code embedded inside a
<code>Makefile</code> that performs one job quickly and
deterministically. The “-let” suffix matters. It’s not a standalone
program. It’s a helper function, a convenience, a single brushstroke
that belongs in the same canvas as the build logic it supports.</p>

<p>If you ever feel the urge to bite your nails, pick at your skin, or
start counting the spaces in your indentation - stop. You’ve crossed the
line. What you’ve written is no longer a scriptlet; it’s a
script. Give it a real file, a shebang, and a test harness. Keep the
build clean.</p>

<h2>Why we use them</h2>

<p>Scriptlets shine where proximity and simplicity matter more than reuse
(not that we can’t throw it in a separate file and <code>include</code> it in our
<code>Makefile</code>).</p>

<ul>
<li><strong>Cleanliness:</strong> prevents a recipe from looking like a shell script.</li>
<li><strong>Locality:</strong> live where they’re used. No path lookups, no installs.</li>
<li><strong>Determinism:</strong>  transform well-defined input into output. Nothing more.</li>
<li><strong>Portability (of the idea):</strong> every CI/CD system that can run
<code>make</code> can run a one-liner.</li>
</ul>


<p>A Makefile that can generate its own dependency file, extract version
numbers, or rewrite a <code>cpanfile</code> doesn’t need a constellation of helper
scripts. It just needs a few lines of inline glue.</p>

<h2>Why they’re sometimes painful</h2>

<p>We lose the comforts that make us feel like professional developers:</p>

<ul>
<li>No syntax highlighting.</li>
<li>No linting or type hints.</li>
<li>No indentation guides.</li>
<li>No “Format on Save.”</li>
</ul>


<p>The trick is to accept that pain as a necessary check on the limits of
the <em>scriptlet</em>. If you’re constantly wishing for linting and editor
help, it’s your subconscious telling you: <em>this doesn’t belong inline
anymore</em>. You’ve outgrown the <code>-let</code>.</p>

<h2>When to promote your <em>scriptlet</em> to a <em>script</em>…</h2>

<p>Promote a scriptlet to a full-blown script when:</p>

<ul>
<li>It exceeds 30-50 lines.</li>
<li>It gains conditionals or error handling.</li>
<li>You need to test it independently.</li>
<li>It uses more than 1 or 2 non-core features.</li>
<li>It’s used by more than one target or project.</li>
<li>You’re debugging quoting more than logic.</li>
<li>You’re spending more time fixing indentation than working on the build.</li>
</ul>


<p>At that point, you’re writing software, not glue. Give it a name, a
shebang, and a home in your <code>tools/</code> directory.</p>

<h2>When to keep it inside your <code>Makefile</code>…</h2>

<p>Keep it inline when:</p>

<ul>
<li>It’s short, pure, and single-use.</li>
<li>It depends primarily on the environment already assumed by your build
(Perl, Python, awk, etc.).</li>
<li>It’s faster to read than to reference.</li>
</ul>


<p>A good scriptlet reads like a make recipe: <em>do this transformation
right here, right now.</em></p>

<pre><code>define create_cpanfile =
    while (&lt;STDIN&gt;) {
        s/[#].*//; s/^\s+|\s+$//g; next if $_ eq q{};
        my ($mod,$v) = split /\s+/, $_, 2;
        print qq{requires "$mod", "$v";\n};
    }
endef

export s_create_cpanfile = $(value create_cpanfile)
</code></pre>

<p>That’s a perfect scriptlet: small, readable, deterministic, and local.</p>
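
<p>For completeness, here’s the kind of recipe that would run it - a
sketch, assuming a <code>requires</code> file that lists one module and version
per line:</p>

<pre><code>cpanfile: requires
    perl -e "$$s_create_cpanfile" &lt; $&lt; &gt; $@ || rm $@
</code></pre>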

<blockquote><p><em><strong>Rule of Thumb:</strong> If it fits on one screen, keep it inline. If it
scrolls, promote it.</em></p></blockquote>

<h2>Tools for the OCD developer</h2>

<p>If you must relieve the OCD symptoms without promoting your
<em>scriptlet</em> to a <em>script</em>…</p>

<ul>
<li>Add a <code>lint-scriptlets</code> target (sketched below):
<code>perl -c -e "$$s_create_cpanfile"</code> checks the syntax without running it.</li>
<li>Some editors (Emacs <code>mmm-mode</code>, Vim <code>polyglot</code>) can treat marked
sections as sub-languages to enable localized language specific
editing features.</li>
<li>Use <code>include</code> to include a scriptlet into your <code>Makefile</code></li>
</ul>
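
<p>A minimal sketch of that lint target, assuming the
<code>create_cpanfile</code> scriptlet defined above:</p>

<pre><code># fails loudly if the embedded Perl no longer compiles
lint-scriptlets:
    @perl -c -e "$$s_create_cpanfile"
</code></pre>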


<p>…however, try to resist the urge to over-optimize the
tooling. Feeling the uneasiness grow helps identify the boundary
between <em>scriptlets</em> and <em>scripts</em>.</p>

<h2>You’ve been warned!</h2>

<p>Because scriptlets are powerful, flexible, and fast, it’s easy to
reach for them too often or make them the focus of your project.  They
start as a cure for friction - a way to express a small transformation
inline - but left unchecked, they can sometimes grow arms and
legs. Before long, your <code>Makefile</code> turns into a Frankenstein monster.</p>

<p>The great philosopher Basho (or at least I think it was him) once said:</p>

<blockquote><p><em>A single aspirin tablet eases pain. A whole bottle sends you to the
hospital.</em></p></blockquote>

<p>Thanks for reading.</p>

<h2>Learn More</h2>

<ul>
<li><a href="https://www.gnu.org/software/make/manual/html_node/index.html">GNU <code>make</code></a></li>
<li><a href="https://www.gnu.org/software/make/manual/html_node/Value-Function.html"><code>value</code></a></li>
<li><a href="https://www.gnu.org/software/make/manual/html_node/Multi_002dLine.html"><code>define</code></a></li>
<li><a href="https://www.gnu.org/software/make/manual/html_node/Variables_002fRecursion.html"><code>export</code></a></li>
</ul>

]]></content>
  </entry>

  <entry>
    <title type="html"><![CDATA[Go Ahead ‘make’ My Day (Part II)]]></title>
    <link href="http://blog.tbcdevelopmentgroup.com/2025-11-13-post.html"/>
    <published>2025-11-13T00:00:00-05:00</published>
    <updated>2026-03-17T11:23:59-04:00</updated>
    <id>http://blog.tbcdevelopmentgroup.com/2025-11-13-post.html</id>
    <content type="html"><![CDATA[<h1>Go Ahead ‘make’ My Day (Part II)</h1>

<p>In our previous blog post <a href="https://blog.tbcdevelopmentgroup.com/2023-01-12-post.html">“Go Ahead ‘make’ My
Day”</a> we
presented the <em>scriptlet</em>, an advanced <code>make</code> technique for spicing up
your <code>Makefile</code> recipes. In this follow-up, we’ll deconstruct the
scriptlet and detail the ingredients that make up the secret sauce.</p>

<hr />

<h1>Introducing the <em>Scriptlet</em></h1>

<p><code>Makefile</code> <em>scriptlets</em> are an advanced technique that uses
GNU <code>make</code>’s powerful functions to safely embed a multi-line script
(Perl, in our example) into a single, clean shell command. It turns a
complex block of logic into an easily executable template.</p>

<h2>An Example <em>Scriptlet</em></h2>

<pre><code>#-*- mode: makefile; -*-

DARKPAN_TEMPLATE="https://cpan.openbedrock.net/orepan2/authors/D/DU/DUMMY/%s-%s.tar.gz"

define create_requires =
 # scriptlet to create cpanfile from a list of required Perl modules
 # skip comments
 my $DARKPAN_TEMPLATE=$ENV{DARKPAN_TEMPLATE};

 while (s/^#[^\n]+\n//g){};

 # skip blank lines
 while (s/\n\n/\n/) {};

 for (split/\n/) { 
  my ($mod, $v) = split /\s+/;
  next if !$mod;

  my $dist = $mod;
  $dist =~s/::/\-/g;

  my $url = sprintf $DARKPAN_TEMPLATE, $dist, $v;

  print &lt;&lt;"EOF";
requires \"$mod\", \"$v\",
  url =&gt; \"$url\";
EOF
 }

endef

export s_create_requires = $(value create_requires)

cpanfile.darkpan: requires.darkpan
    DARKPAN_TEMPLATE=$(DARKPAN_TEMPLATE); \
    DARKPAN_TEMPLATE=$$DARKPAN_TEMPLATE perl -0ne "$$s_create_requires" $&lt; &gt; $@ || rm $@
</code></pre>

<hr />

<h2>Dissecting the <em>Scriptlet</em></h2>

<h3>1. The Container: Defining the Script (<code>define</code> / <code>endef</code>)</h3>

<p>This section creates the multi-line variable that holds your entire Perl program.</p>

<pre><code>define create_requires =
# Perl code here...
endef
</code></pre>

<ul>
<li><strong><code>define ... endef</code></strong>: This is GNU Make’s mechanism for defining
  a <strong>recursively expanded variable</strong> that spans multiple lines. The
  content is <em>not</em> processed by the shell yet; it’s simply stored by
  <code>make</code>.</li>
<li><strong>The Advantage:</strong> This is the only clean way to write readable,
  indented code (like your <code>while</code> loop and <code>if</code> statements)
  directly inside a <code>Makefile</code>.</li>
</ul>


<h3>2. The Bridge: Passing Environment Data (<code>my $ENV{...}</code>)</h3>

<p>This is a critical step for making your script template portable and
configurable.</p>

<pre><code>my $DARKPAN_TEMPLATE=$ENV{DARKPAN_TEMPLATE};
</code></pre>

<ul>
<li><strong>The Problem:</strong> Your Perl script needs dynamic values (like the
  template URL) that are set by <code>make</code>.</li>
<li><strong>The Solution:</strong> Instead of hardcoding the URL, the Perl code is
  designed to read from the <strong>shell environment variable</strong>
  <code>$ENV{DARKPAN_TEMPLATE}</code>. This makes the script agnostic to its
  calling environment, delegating the data management back to the
  <code>Makefile</code>.</li>
</ul>


<h3>3. The Transformer: Shell Preparation (<code>export</code> and <code>$(value)</code>)</h3>

<p>This is the “magic” that turns the multi-line Make variable into a
single, clean shell command.</p>

<pre><code>export s_create_requires = $(value create_requires)
</code></pre>

<ul>
<li><strong><code>$(value create_requires)</code></strong>: This is a specific Make function
  that returns the variable’s <strong>raw, unexpanded content</strong>. Crucially,
  it hands the whole multi-line block to the environment with every
  <code>$</code>, line break, and special character intact - <code>make</code> never gets
  a chance to mangle the Perl code’s own variables.</li>
<li><strong><code>export s_create_requires = ...</code></strong>: This exports the multi-line
  Perl script content as an <strong>environment variable</strong>
  (<code>s_create_requires</code>) that will be accessible to any shell process
  running in the recipe’s environment.</li>
</ul>
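
<p>If you ever wonder exactly what the shell will receive, a throwaway
debug target (the name is illustrative) prints the exported scriptlet
verbatim:</p>

<pre><code>show-scriptlet:
    @printf '%s\n' "$$s_create_requires"
</code></pre>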


<h3>4. The Execution: One Atomic Recipe (<code>$$</code> and <code>perl -0ne</code>)</h3>

<p>The final recipe executes the entire, complex process as a single,
atomic operation, which is the goal of robust Makefiles.</p>

<pre><code>cpanfile.darkpan: requires.darkpan
    DARKPAN_TEMPLATE=$(DARKPAN_TEMPLATE); \
    DARKPAN_TEMPLATE=$$DARKPAN_TEMPLATE perl -0ne "$$s_create_requires" $&lt; &gt; $@ || rm $@
</code></pre>

<ul>
<li><strong><code>DARKPAN_TEMPLATE=$(DARKPAN_TEMPLATE)</code></strong>: This creates the local
  shell variable. Note that <code>$(...)</code> here is expanded by <code>make</code>
  before the shell ever runs.</li>
<li><strong><code>DARKPAN_TEMPLATE=$$DARKPAN_TEMPLATE perl...</code></strong>: This is the
  clean execution. The leading <code>DARKPAN_TEMPLATE=</code> passes the shell
  variable’s value as an <strong>environment variable</strong> to the <code>perl</code>
  process. The <code>$$</code> is how you write a literal <code>$</code> in a recipe:
  <code>make</code> collapses it to a single <code>$</code>, so the <em>shell</em> - not
  <code>make</code> - expands the variable.</li>
<li><strong><code>perl -0ne "..."</code></strong>: Runs the Perl script:

<ul>
<li><code>-n</code> and <code>-e</code>: run the script supplied on the command line against
  the input.</li>
<li><code>-0</code>: sets the input record separator to the NUL character, so a
  text file (which contains no NUL bytes) is slurped as one single
  block - which the multi-line regexes and <code>split /\n/</code> logic
  require.</li>
</ul></li>
<li><strong><code>|| rm $@</code></strong>: This is the final mark of quality. It makes the
  entire command <strong>transactional</strong> - if the Perl script fails, the
  half-written target file (<code>$@</code>) is deleted, forcing <code>make</code> to try
  again later.</li>
</ul>


<hr />

<h1>Hey Now! You’re a Rockstar!</h1>

<h2>(..get your game on!)</h2>

<p>Mastering build automation using <code>make</code> will <strong>transform</strong> you from
being an average DevOps engineer into a rockstar. GNU <code>make</code> is a Swiss
Army knife with more tools than you might think! The knives are sharp
and the tools are highly targeted to handle all the real-world issues
build automation has encountered over the decades. Learning to use
<code>make</code> effectively will put you head and shoulders above the herd (see
what I did there? 😉).</p>

<h2>Calling All Pythonistas!</h2>

<p>The scriptlet technique creates a powerful, universal pattern for
clean, atomic builds:</p>

<ul>
<li><strong>It’s Language Agnostic:</strong> Pythonistas! Join the fun! The same
<code>define</code>/<code>export</code> technique works perfectly with <code>python -c</code> (see
the sketch after this list).</li>
<li><strong>The Win:</strong> This ensures that every developer - regardless of their
preferred language - can achieve the same clean, atomic build and
<strong>avoid external script chaos</strong>.</li>
</ul>
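
<p>Here’s a quick sketch of the Python flavor - the target, file names,
and output format are illustrative; the <code>define</code>/<code>export</code> pattern is
the point:</p>

<pre><code>define create_manifest =
import sys, json
pairs = []
for line in sys.stdin:
    line = line.split("#")[0].strip()
    if line:
        parts = line.split(None, 1)
        pairs.append((parts[0], parts[1] if len(parts) &gt; 1 else ""))
print(json.dumps(dict(pairs), indent=2))
endef

export s_create_manifest = $(value create_manifest)

manifest.json: requires.txt
    python3 -c "$$s_create_manifest" &lt; $&lt; &gt; $@ || rm $@
</code></pre>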


<p>Learn more about <a href="https://www.gnu.org/software/make/manual/html_node/index.html">GNU
<code>make</code></a>
and move your Makefiles from simple shell commands to precision
instruments of automation.</p>

<p>Thanks for reading.</p>

<h2>Learn More</h2>

<ul>
<li><a href="https://www.gnu.org/software/make/manual/html_node/index.html">GNU <code>make</code></a></li>
<li><a href="https://www.gnu.org/software/make/manual/html_node/Value-Function.html"><code>value</code></a></li>
<li><a href="https://www.gnu.org/software/make/manual/html_node/Multi_002dLine.html"><code>define</code></a></li>
<li><a href="https://www.gnu.org/software/make/manual/html_node/Variables_002fRecursion.html"><code>export</code></a></li>
</ul>

]]></content>
  </entry>

  <entry>
    <title type="html"><![CDATA[Bump Your Semantic Version]]></title>
    <link href="http://blog.tbcdevelopmentgroup.com/2025-04-20-post.html"/>
    <published>2025-04-20T00:00:00-04:00</published>
    <updated>2026-03-17T11:01:05-04:00</updated>
    <id>http://blog.tbcdevelopmentgroup.com/2025-04-20-post.html</id>
    <content type="html"><![CDATA[<h1>Bump Your Semantic Version</h1>

<p>While looking at an old <code>bash</code> script that bumps my semantic
versions, I almost puked at my old ham-handed way of bumping
the version.  That led me to see how I could do it “better”.  Why? I
dunno…bored on a Saturday morning and not motivated enough to
do the NY Times crossword…</p>

<p>So you want to bump a semantic version string like <code>1.2.3</code>
- major, minor, or patch - and you don’t want ceremony. You want <strong>one
line</strong>, no dependencies, and enough arcane flair to scare off
coworkers.</p>

<p>Here’s a single-line Bash–Perl spell that does exactly that:</p>

<pre><code>v=$(cat VERSION | p=$1 perl -a -F[.] -pe \
'$i=$ENV{p};$F[$i]++;$j=$i+1;$F[$_]=0 for $j..2;$"=".";$_="@F"')
</code></pre>

<h2>What It Does</h2>

<ul>
<li>Reads the current version from a <code>VERSION</code> file (<code>1.2.3</code>)</li>
<li>Increments the part you pass (<code>0</code> for major, <code>1</code> for minor, <code>2</code> for patch)</li>
<li>Resets all lower parts to zero</li>
<li>Writes the result to <code>v</code></li>
</ul>


<hr />

<h2>Scriptlet Form</h2>

<p>Wrap it like this in a shell function:</p>

<pre><code class="bash">bump() {
  v=$(cat VERSION | p=$1 perl -a -F[.] -pe \
  '$i=$ENV{p};$F[$i]++;$j=$i+1;$F[$_]=0 for $j..2;$"=".";$_="@F"')
  echo "$v" &gt; VERSION
}
</code></pre>

<p>Then run:</p>

<pre><code>bump 2   # bump patch (1.2.3 =&gt; 1.2.4)
bump 1   # bump minor (1.2.3 =&gt; 1.3.0)
bump 0   # bump major (1.2.3 =&gt; 2.0.0)
</code></pre>

<hr />

<h1>Makefile Integration</h1>

<p>Want to bump right from <code>make</code>?</p>

<pre><code>p ?= 0

bump-major:
    @v=$$(cat VERSION | p=$(p) perl -a -F[.] -pe '$$i=$$ENV{p};$$F[$$i]++;$$j=$$i+1;$$F[$$_]=0 for $$j..2;$$"=".";$$_="@F"') &amp;&amp; \
    echo $$v &gt; VERSION &amp;&amp; echo "New version: $$v"

bump-minor:
    @$(MAKE) bump-major p=1

bump-patch:
    @$(MAKE) bump-major p=2

<p>Or break it out into a <code>.bump-version</code> script and source it from your build tooling.</p>
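
<p>That file might be as small as this - a sketch that assumes
<code>VERSION</code> lives in the current directory:</p>

<pre><code class="bash"># .bump-version - source me, then call: bump {0|1|2}
bump() {
  local v
  v=$(p=$1 perl -a -F[.] -pe \
    '$i=$ENV{p};$F[$i]++;$j=$i+1;$F[$_]=0 for $j..2;$"=".";$_="@F"' VERSION)
  echo "$v" &gt; VERSION
  echo "New version: $v"
}
</code></pre>

<p>Passing <code>VERSION</code> as a file argument instead of piping <code>cat</code> into
<code>perl</code> also preempts the UUOC finger-wag below.</p>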

<h2>How It Works (or …Why I Love Perl)</h2>

<pre><code>-a            # autosplit into @F
-F[.]         # split on literal dot
$i=$ENV{p}    # get part index from environment (e.g., 1 for minor)
$F[$i]++      # bump it
$j=$i+1       # start index for resetting
$F[$_]=0 ...  # zero the rest
$"=".";       # join array with dots
$_="@F"       # set output
</code></pre>

<p>If you have to explain this to some junior dev, just say RTFM, skippy:
<code>perldoc perlrun</code>. Use the force, Luke.</p>

<p>And if the senior dev wags his finger and says UUOC, tell him <strong>Ego
malum edo</strong>.</p>
]]></content>
  </entry>

  <entry>
    <title type="html"><![CDATA[Just a Test]]></title>
    <link href="http://blog.tbcdevelopmentgroup.com/2025-03-18-post.html"/>
    <published>2025-03-18T00:00:00-04:00</published>
    <updated>2026-03-17T11:01:05-04:00</updated>
    <id>http://blog.tbcdevelopmentgroup.com/2025-03-18-post.html</id>
    <content type="html"><![CDATA[<p>This is a test of the new improved Plerd template…that will include
some javascript at some point…</p>

<p>test test
check check</p>
]]></content>
  </entry>

  <entry>
    <title type="html"><![CDATA[<code>END</code> Block Hijacking]]></title>
    <link href="http://blog.tbcdevelopmentgroup.com/2025-03-15-post.html"/>
    <published>2025-03-15T00:00:00-04:00</published>
    <updated>2026-03-17T11:01:05-04:00</updated>
    <id>http://blog.tbcdevelopmentgroup.com/2025-03-15-post.html</id>
    <content type="html"><![CDATA[<p>From <code>perldoc perlmod</code>…</p>

<pre><code>An "END" code block is executed as late as possible, that is, after perl
has finished running the program and just before the interpreter is
being exited, even if it is exiting as a result of a die() function.
(But not if it's morphing into another program via "exec", or being
blown out of the water by a signal--you have to trap that yourself (if
you can).) You may have multiple "END" blocks within a file--they will
execute in reverse order of definition; that is: last in, first out
(LIFO). "END" blocks are not executed when you run perl with the "-c"
switch, or if compilation fails....
</code></pre>

<p>Perl’s <code>END</code> blocks are useful inside your script for doing things
like cleaning up after itself, closing files or disconnecting from
databases. In many cases you use an <code>END</code> block to guarantee certain
behaviors like a commit or rollback of a transaction. You’ll
typically see <code>END</code> blocks in scripts, but occasionally you might find
one in a Perl module.</p>

<p>Over the last four years I’ve done a lot of maintenance work on legacy
Perl applications. I’ve learned more about Perl in these four years
than I learned in the previous 20. <strong>Digging into bad code is the best
way to learn how to write good code.</strong> It’s sometimes hard to decide if
code is good or bad but to paraphrase a supreme court justice, <em>I
can’t always define bad code, but I know it when I see it.</em></p>

<p>One of the gems I’ve stumbled upon was a module that provided needed
functionality for many scripts that included its own <code>END</code> block.</p>

<p>Putting an <code>END</code> block in a Perl module is an anti-pattern and just <strong>bad
mojo</strong>. <strong>A module should never contain an <code>END</code> block.</strong> Here are
some alternatives (a sketch of the destructor approach follows the list):</p>

<ul>
<li>If cleanup is necessary, provide (and document) a <code>cleanup()</code> method</li>
<li>Use <strong>a destructor (<code>DESTROY</code>) in an object-oriented module</strong></li>
</ul>
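
<p>A minimal sketch of the destructor approach - the package name and
its <code>dbh</code> argument are hypothetical, not lifted from any real
module:</p>

<pre><code>package My::TxGuard;

use strict;
use warnings;

# Guard object: cleanup rides along with the object's lifetime
# instead of hijacking global shutdown via an END block.
sub new {
    my ( $class, %args ) = @_;

    return bless { dbh =&gt; $args{dbh}, committed =&gt; 0 }, $class;
}

sub commit {
    my ($self) = @_;

    $self-&gt;{dbh}-&gt;commit;
    $self-&gt;{committed} = 1;

    return;
}

# Runs when the caller's guard goes out of scope - no mystery
# messages at interpreter shutdown.
sub DESTROY {
    my ($self) = @_;

    local $@;  # don't clobber the caller's $@ during destruction

    eval {
        $self-&gt;{dbh}-&gt;rollback
          if $self-&gt;{dbh} &amp;&amp; !$self-&gt;{committed};
    };

    return;
}

1;
</code></pre>

<p>The caller writes <code>my $guard = My::TxGuard-&gt;new(dbh =&gt; $dbh);</code> and the
rollback fires exactly when <code>$guard</code> goes away - no global side
effects, no hijacked shutdown.</p>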


<p>Modules should provide functionality - not take control of your
script’s shutdown behavior.</p>

<h2>Why Would Someone Put an END Block in a Perl Module?</h2>

<p>The first and most obvious answer is that they were unaware of how
<code>DESTROY</code> methods can be employed. If you know something about the
author and you’re convinced they know better, then why else?</p>

<p>I theorize that the author was trying to create a module that would
encapsulate functionality that he would use in <strong>EVERY</strong> script he
wrote for the application. While the <em>faux pas</em> might be forgiven I’m
not ready to put my wagging finger back in its holster.</p>

<p>If you want to write a wrapper for all your scripts and you’ve already
settled on using a Perl module to do so, then please for all that is
good and holy do it right. Here are some potential guidelines for
that wrapper (a bare-bones sketch follows the list):</p>

<ul>
<li>A <code>new()</code> method that instantiates your wrapper</li>
<li>An <code>init()</code> method that encapsulates the common
startup operations with options to control whether some are executed
or not</li>
<li>A <code>run()</code> method that executes the functionality for the script</li>
<li>A <code>finalize()</code> method for executing cleanup procedures</li>
<li>POD that describes the common functionality provided as well as any options
for controlling their invocation</li>
</ul>
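
<p>A bare-bones sketch of those guidelines - package and method names
are illustrative, not a published module:</p>

<pre><code>package My::App::Wrapper;

use strict;
use warnings;

sub new {
    my ( $class, %options ) = @_;

    my $self = bless { options =&gt; \%options }, $class;
    $self-&gt;init;

    return $self;
}

sub init {
    my ($self) = @_;

    # common startup: config, logging, connections - each step
    # individually controllable via %options
    return $self;
}

sub run {
    my ( $self, $main ) = @_;

    my $retval = eval { $main-&gt;($self) };
    warn $@ if $@;

    $self-&gt;finalize;

    return $retval // 1;
}

sub finalize {
    my ($self) = @_;

    # explicit, documented cleanup - the caller decides when
    return;
}

1;
</code></pre>

<p>A script then becomes <code>My::App::Wrapper-&gt;new(%options)-&gt;run(\&amp;main);</code> -
every common behavior in one place, none of it smuggled into an
<code>END</code> block.</p>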


<p>All of these methods could be overridden if you just use a plain ‘ol Perl
module. For those that prefer composition over inheritance, use a
role with something like <code>Role::Tiny</code> to provide those universally
required methods. Using <code>Role::Tiny</code> provides better flexibility
by allowing you to use those methods <em>before</em> or <em>after</em> your
modifications to their behavior.</p>

<h2>How Can We Take Back Control?</h2>

<p>The particular script I was working on included a module (<em>whose name I
shall not speak</em>) that included such an <code>END</code> block. My script
should have exited <strong>cleanly and quietly</strong>. Instead, it produced
mysterious messages during shutdown. Worse yet I feared some
undocumented behaviors and black magic might have been conjured up
during that process! After a bit of debugging, I found the culprit:</p>

<ul>
<li>The module had an <code>END</code> block baked into it</li>
<li>This <code>END</code> block printed debug messages to STDERR while doing other
cleanup operations</li>
<li>Worse, it <strong>ran unconditionally</strong> when my script terminated</li>
</ul>


<h3>The Naive Approach</h3>

<p>My initial attempt to suppress the module’s <code>END</code> block:</p>

<pre><code>END {
    use POSIX;
    POSIX::_exit(0);
}
</code></pre>

<p>This <strong>works</strong> as long as my script exits normally <em>and</em>
successfully. But <code>END</code> blocks also run when a script <strong>dies</strong>, so on
failure this block still fires and <code>POSIX::_exit(0)</code> reports a
<strong>zero exit status</strong> - silently masking the error. Not exactly the
behavior I want.</p>

<h3>A Better Approach</h3>

<p>Here’s what I want to happen:</p>

<ul>
<li>Prevent <strong>any</strong> rogue <code>END</code> block from executing.</li>
<li>Handle errors gracefully.</li>
<li>Ensure my script always exits with a meaningful status code.</li>
</ul>


<p>A better method for scripts that need an <code>END</code> block to claw back
control:</p>


<pre><code>use English qw(-no_match_vars);
use POSIX ();

my $retval = eval {
    return main();
};

END {
  if ($EVAL_ERROR) {
      warn "$EVAL_ERROR";  # Preserve exact error message
  }

  POSIX::_exit( $retval // 1 );
}
</code></pre>

<ul>
<li><strong>Bypasses the rogue <code>END</code> blocks</strong> – our <code>END</code> block is defined
last, so it runs <em>first</em> (LIFO), and <code>POSIX::_exit()</code> terminates the
process <strong>immediately</strong>, before any module’s <code>END</code> block gets a
chance to run</li>
<li><strong>Handles errors cleanly</strong> – If <code>main()</code> throws an exception, we
<strong>log it without modifying the message</strong></li>
<li><strong>Forces explicit return values</strong> – If <code>main()</code> forgets to return a
status, we default to <code>1</code>, ensuring no silent failures.</li>
<li>Future maintainers will see <strong>exactly what’s happening</strong></li>
</ul>


<h2>Caveat Emptor</h2>

<p>Of course, you should know what behavior you are bypassing if you
decide to wrestle control back from some misbehaving module. In my
case, I knew that the behaviors being executed in the <code>END</code> block
could safely be ignored. Even if they couldn’t be ignored, I can still
provide those behaviors in my own cleanup procedures.</p>

<p>Isn’t this what <strong>future me</strong> or the poor wretch tasked with a
dumpster dive into a legacy application would want? Explicitly
seeing the whole shebang without hours of scratching your head looking
for mysterious messages that emanate from the depths is
priceless. <strong>It’s gold, Jerry! Gold!</strong></p>

<p><em>Next up…a better wrapper.</em></p>
]]></content>
  </entry>

  <entry>
    <title type="html"><![CDATA[How to Fix Apache 2.4 Broken Directory Requests (Part III)]]></title>
    <link href="http://blog.tbcdevelopmentgroup.com/2025-03-14-post.html"/>
    <published>2025-03-14T00:00:00-04:00</published>
    <updated>2026-03-17T11:01:05-04:00</updated>
    <id>http://blog.tbcdevelopmentgroup.com/2025-03-14-post.html</id>
    <content type="html"><![CDATA[<h2>Introduction</h2>

<p>In our previous posts, we explored how <strong>Apache 2.4 changed its
handling of directory requests</strong> when <code>DirectorySlash Off</code> is set,
breaking the implicit <code>/dir → /dir/</code> redirect behavior that worked in
Apache 2.2. We concluded that while <strong>an external redirect is the only
reliable fix</strong>, this change in behavior led us to an even bigger
question:</p>

<blockquote><p>Is this a bug or an intentional design change in Apache?</p></blockquote>

<p>After digging deeper, we’ve uncovered something <strong>critically
important</strong> that is not well-documented:</p>

<h3>Apache does not restart the request cycle after an internal rewrite, and this can break expected behaviors like <code>DirectoryIndex</code>.</h3>

<p>This post explores why this happens, whether it’s a feature or a bug,
and why <strong>Apache’s documentation should explicitly clarify this
behavior</strong>.</p>

<hr />

<h2>What Happens When Apache Internally Rewrites a Request?</h2>

<p>Let’s revisit the problem: we tried using <strong>an internal rewrite</strong> to
append a trailing slash for directory requests:</p>

<pre><code>RewriteEngine On
RewriteCond %{REQUEST_URI} !/$
RewriteCond %{DOCUMENT_ROOT}/%{REQUEST_URI} -d
RewriteRule ^(.*)$ /$1/ [L]
</code></pre>

<p><strong>Expected Behavior:</strong></p>

<ul>
<li>Apache should internally rewrite <code>/setup</code> to <code>/setup/</code>.</li>
<li>Since <code>DirectoryIndex index.roc</code> is set, Apache should serve <code>index.roc</code>.</li>
</ul>

<p><strong>Actual Behavior:</strong></p>

<ul>
<li>Apache <strong>internally rewrites</strong> <code>/setup</code> to <code>/setup/</code>, but then
immediately <strong>fails with a 403 Forbidden</strong>.</li>
<li>The error log states:
<code>AH01276: Cannot serve directory /var/www/vhosts/treasurersbriefcase/htdocs/setup/: No matching DirectoryIndex (none) found</code></li>
<li><strong>Apache is treating <code>/setup/</code> as an empty directory instead of
recognizing <code>index.roc</code>.</strong></li>
</ul>
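
<p>If you want to watch this happen in your own logs, Apache 2.4’s
per-module log levels will show the internal rewrite succeed and the
request then die in the directory handler:</p>

<pre><code># httpd.conf - crank up mod_rewrite tracing (Apache 2.4+)
LogLevel alert rewrite:trace3
</code></pre>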

<hr />

<h2>The Key Issue: Apache Does Not Restart Request Processing After an Internal Rewrite</h2>

<p>Unlike what many admins assume, <strong>Apache does not <em>start over</em> after an internal rewrite</strong>.</p>

<h3>How Apache Processes Requests (Simplified)</h3>

<ol>
<li>The request arrives (<code>/setup</code>).</li>
<li>Apache processes <code>mod_rewrite</code>.</li>
<li>Apache determines how to serve the request.</li>
<li>If an index file (<code>index.html</code>, <code>index.php</code>, etc.) exists, <code>DirectoryIndex</code> resolves it.</li>
</ol>


<h3>Why Internal Rewrites Don’t Restart Processing</h3>

<ul>
<li><strong>Apache processes mod_rewrite before it checks <code>DirectoryIndex</code>.</strong></li>
<li>Once a rewrite occurs, Apache <strong>continues processing from where it left off</strong>.</li>
<li>This means <strong>it does not re-check <code>DirectoryIndex</code> after an internal rewrite.</strong></li>
<li>Instead, it sees <code>/setup/</code> as an <strong>empty directory with no default file</strong> and denies access with <em>403 Forbidden</em>.</li>
</ul>


<hr />

<h2>Is This a Bug or a Feature?</h2>

<p>First, let’s discuss why this worked in Apache 2.2.</p>

<p>The key reason internal rewrites worked in Apache 2.2 is that Apache
<strong>restarted the request processing cycle after a rewrite</strong>. This
meant that:</p>

<ul>
<li>After an internal rewrite, Apache treated the rewritten request as a
brand-new request.</li>
<li>As a result, it re-evaluated DirectoryIndex and correctly served
<code>index.html</code>, <code>index.php</code>, or any configured default file.</li>
<li>Since <code>DirectorySlash</code> was handled earlier in the request cycle,
Apache 2.2 still applied directory handling rules properly, even after
an internal rewrite.</li>
</ul>


<p>In Apache 2.4, this behavior changed. Instead of restarting the
request cycle, Apache continues processing the request from where it
left off. This means that after an internal rewrite, <code>DirectoryIndex</code> is
never reprocessed, leading to the <em>403 Forbidden</em> errors we
encountered. This fundamental change explains why no internal solution
works the way it did in Apache 2.2.</p>

<h3>Why It’s Likely an Intentional Feature</h3>

<ul>
<li>In <strong>Apache 2.2</strong>, some rewrites <strong>did restart the request cycle</strong>, which was seen as inefficient.</li>
<li>In <strong>Apache 2.4</strong>, request processing was optimized for
<strong>performance</strong>, meaning it <strong>does not restart</strong> after an internal
rewrite.</li>
<li>The behavior <strong>is consistent</strong> across different Apache 2.4 installations.</li>
<li>Some discussions in Apache’s mailing lists and bug tracker mention this as “expected behavior.”</li>
</ul>


<h3>Why This is Still a Problem</h3>

<ul>
<li><strong>This behavior is not explicitly documented</strong> in <code>mod_rewrite</code> or <code>DirectoryIndex</code> docs.</li>
<li>Most admins <strong>expect Apache to reprocess the request fully after a rewrite</strong>.</li>
<li>The lack of clarity leads to <strong>confusion and wasted debugging time</strong>.</li>
</ul>


<hr />

<h2>Implications for Apache 2.4 Users</h2>

<h3>1. <code>mod_rewrite</code> Behavior is Different From What Many Assume</h3>

<ul>
<li>Internal rewrites <strong>do not restart request processing</strong>.</li>
<li><strong>DirectoryIndex is only evaluated once</strong>, before the rewrite happens.</li>
<li>This is <strong>not obvious</strong> from Apache’s documentation.</li>
</ul>


<h3>2. The Only Reliable Fix is an External Redirect</h3>

<p>Since Apache <strong>won’t reprocess DirectoryIndex</strong>, the only way to
guarantee correct behavior is to <strong>force a new request</strong> via an
external redirect:</p>

<pre><code>RewriteEngine On
RewriteCond %{REQUEST_URI} !/$
RewriteCond %{DOCUMENT_ROOT}/%{REQUEST_URI} -d
RewriteRule ^(.*)$ http://%{HTTP_HOST}/$1/ [R=301,L]
</code></pre>

<p>This <strong>forces Apache to start a completely new request cycle</strong>,
ensuring that <code>DirectoryIndex</code> is evaluated properly.</p>

<h3>3. Apache Should Improve Its Documentation</h3>

<p>We believe this behavior <strong>should be explicitly documented</strong> in:</p>

<ul>
<li><strong>mod_rewrite documentation</strong> (stating that rewrites do not restart request processing).</li>
<li><strong>DirectoryIndex documentation</strong> (noting that it will not be re-evaluated after an internal rewrite).</li>
</ul>

<p>This would prevent confusion and help developers troubleshoot these issues more efficiently.</p>

<hr />

<h2>Conclusion: A Feature, But a Poorly Documented One</h2>

<ul>
<li>The fact that <strong>Apache does not restart processing after an internal
rewrite</strong> is <strong>likely an intentional design choice</strong>.</li>
<li>However, <strong>this is not well-documented</strong>, leading to confusion.</li>
<li>The <strong>only solution</strong> remains an <strong>external redirect</strong> to force a fresh request cycle.</li>
<li>We believe <strong>Apache should update its documentation</strong> to reflect this behavior more clearly.</li>
</ul>

]]></content>
  </entry>


</feed>
