<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">

  <title><![CDATA[BigfootInMouth]]></title>
  <link href="http://blog.tbcdevelopmentgroup.com/atom.xml" rel="self" />
  <link href="http://blog.tbcdevelopmentgroup.com/" />
  <updated>2026-02-23T05:51:11-05:00</updated>
  <id>http://blog.tbcdevelopmentgroup.com/</id>
  <author>
    <name><![CDATA[Rob Lauer]]></name>
    <email><![CDATA[rclauer@gmail.com]]></email>
  </author>
  <generator uri="https://github.com/jmacdotorg/plerd">Plerd</generator>


  <entry>
    <title type="html"><![CDATA[RIP nginx - Long Live Apache]]></title>
    <link href="http://blog.tbcdevelopmentgroup.com/2026-02-23-post.html"/>
    <published>2026-02-23T05:51:11-05:00</published>
    <updated>2026-02-23T05:51:11-05:00</updated>
    <id>http://blog.tbcdevelopmentgroup.com/2026-02-23-post.html</id>
    <content type="html"><![CDATA[<h1>RIP nginx - Long Live Apache</h1>

<p><strong>nginx is dead</strong>. Not metaphorically dead. Not “falling out of favor”
dead. <em>Actually, officially, put-a-date-on-it dead</em>.</p>

<p>In November 2025 the Kubernetes project announced the retirement of
Ingress NGINX — the controller running ingress for a significant
fraction of the world’s Kubernetes clusters. Best-effort maintenance
until March 2026. After that: no releases, no bugfixes, no security
patches. GitHub repositories go read-only. Tombstone in place.</p>

<p>And before the body was even cold, we learned why. IngressNightmare —
five CVEs disclosed in March 2025, headlined by CVE-2025-1974, rated
9.8 critical. Unauthenticated remote code execution. Complete cluster
takeover. No credentials required. Wiz Research found over 6,500
clusters with the vulnerable admission controller publicly exposed to
the internet, including Fortune 500 companies. 43% of cloud
environments vulnerable. The root cause wasn’t a bug that could be
patched cleanly - it was an architectural flaw baked into the design
from the beginning. And the project that ran ingress for millions of
production clusters was, in the end, sustained by one or two people
working in their spare time.</p>

<p>Meanwhile Apache has been quietly running the internet for 30 years,
governed by a foundation, maintained by a community, and looking
increasingly like the adult in the room.</p>

<p>Let’s talk about how we got here.</p>

<h2>Apache Was THE Web Server</h2>

<p>Before we talk about what went wrong, let’s remember what Apache
actually was. Not a web server. THE web server. At its peak Apache
served over 70% of all websites on the internet. It didn’t win that
position by accident - it won it by solving every problem the early
web threw at it. Virtual hosting. SSL. Authentication. Dynamic content
via CGI and then mod_perl. Rewrite rules. Per-directory
configuration. Access control. Compression. Caching. Proxying. One by
one, as the web evolved, Apache evolved with it, and the industry
built on top of it.</p>

<p>Apache wasn’t just infrastructure. It was the platform on which the
commercial internet was built. Every hosting provider ran it. Every
enterprise deployed it. Every web developer learned it. It was as
foundational as TCP/IP - so foundational that most people stopped
thinking about it, the way you stop thinking about running water.</p>

<p>Then nginx showed up with a compelling story at exactly the right
moment.</p>

<h2>The Narrative That Stuck</h2>

<p>The early 2000s brought a new class of problem - massively concurrent
web applications, long-polling, tens of thousands of simultaneous
connections. The C10K problem was real and Apache’s prefork MPM - one
process per connection - genuinely struggled under that specific load
profile. nginx’s event-driven architecture handled it elegantly. The
benchmarks were dramatic. The config was clean and minimal, a breath
of fresh air compared to Apache’s accumulated complexity. nginx felt
modern. Apache felt like your dad’s car.</p>

<p>The “Apache is legacy” narrative took hold and never let go - even
after the evidence for it evaporated.</p>

<p>Apache gained <code>mpm_event</code>, bringing the same non-blocking I/O and
async connection handling that nginx was celebrated for. The
performance gap on concurrent connections essentially closed. Then
CDNs solved the static file problem at the architectural level - your
static files live in S3 now, served from a Cloudflare edge node
milliseconds from your user, and your web server never sees them. The
two pillars of the nginx argument - concurrency and static file
performance - were addressed, one by Apache’s own evolution and one by
infrastructure that any serious deployment should be using regardless
of web server choice.</p>
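<p>If you want to see how small the switch is, here’s a minimal sketch -
assuming a stock httpd 2.4 with the event MPM built as a loadable
module; the module path and tuning numbers are illustrative, not
recommendations:</p>

```apache
# Load the event MPM instead of prefork (module path varies by distribution)
LoadModule mpm_event_module modules/mod_mpm_event.so

<IfModule mpm_event_module>
    # Illustrative sizing only - tune for your own workload
    ServerLimit              16
    ThreadsPerChild          25
    MaxRequestWorkers        400
    AsyncRequestWorkerFactor 2
</IfModule>
```

<p>With that in place, connection handling is event-driven and idle
keep-alive connections no longer pin a worker.</p>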

<p>But nobody reruns the benchmarks. The “legacy” label outlived the
evidence by a decade. A generation of engineers learned nginx first,
taught it to the next generation, and the assumption calcified into
received wisdom. Blog posts from 2012 are still being cited as
architectural guidance in 2025.</p>

<h2>What Apache Does That nginx Can’t</h2>

<p>Strip away the benchmark mythology and look at what these servers
actually do when you need them to do something hard.</p>

<p>Apache’s input filter chain lets you intercept the raw request byte
stream mid-flight - before the body is fully received - and do
something meaningful with it. I’m currently building a multi-server
file upload handler with real-time Redis progress tracking, proper
session authentication, and CSRF protection implemented directly in
the filter chain. Zero JavaScript upload libraries. Zero npm
dependencies. Zero supply chain attack surface. The client sends
bytes. Apache intercepts them. Redis tracks them. Done. nginx needs a
paid commercial module to get close. Or you write C. Or you route
around it to application code and wonder why you needed nginx in the
first place.</p>

<p>Apache’s phase handlers let you hook into the exact right moment of
the request lifecycle - post-read, header parsing, access control,
authentication, response - each phase a precise intervention
point. <code>mod_perl</code> embeds a full Perl runtime in the server with
persistent state, shared memory, and pre-forked workers inheriting
connection pools and compiled code across requests. <code>mod_security</code>
gives you WAF capabilities your “modern” stack is paying a vendor
for. <code>mod_cache</code> is a complete RFC-compliant caching layer that nginx
reserves for paying customers.</p>
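<p>Hooking those phases is plain configuration. A sketch - the
<code>My::*</code> handler packages here are hypothetical stand-ins for
your own <code>mod_perl</code> modules:</p>

```apache
# Hypothetical mod_perl phase hooks
PerlPostReadRequestHandler My::Audit

<Location "/app">
    SetHandler perl-script
    # authentication phase
    PerlAuthenHandler My::Auth
    # content-generation (response) phase
    PerlResponseHandler My::App
</Location>
```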

<p>And LDAP - one of the oldest enterprise authentication requirements
there is. With <code>mod_authnz_ldap</code> it’s a few lines of config:</p>

<pre><code class="apache">AuthType Basic
AuthName "Corporate Login"
AuthBasicProvider ldap
AuthLDAPURL ldap://ldap.company.com/dc=company,dc=com
AuthLDAPBindDN "cn=apache,dc=company,dc=com"
AuthLDAPBindPassword secret
Require ldap-group cn=developers,ou=groups,dc=company,dc=com
</code></pre>

<p>Connection pooling, SSL/TLS to the directory, group membership checks,
credential caching - all native, all in config, no code required. With
nginx you’re reaching for a community module with an inconsistent
maintenance history, writing Lua, or standing up a separate auth
service and proxying to it with <code>auth_request</code> - which is just
<code>mod_authnz_ldap</code> reimplemented badly across two processes with an
HTTP round trip in the middle.</p>

<h2>Apache Includes Everything You’re Now Paying For</h2>

<p>Look at Apache’s feature set and you’re reading the history of web
infrastructure, one solved problem at a time. SSL termination? Apache
had it before cloud load balancers existed to take it off your
plate. Caching? <code>mod_cache</code> predates Redis by years. Load balancing?
<code>mod_proxy_balancer</code> was doing weighted round-robin and health checks
before ELB was a product. Compression, rate limiting, IP-based access
control, bot detection via <code>mod_security</code> - Apache had answers to all
of it before the industry decided each problem deserved its own
dedicated service, its own operations overhead, and its own vendor
relationship.</p>
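<p>For a taste of the built-in balancing, a minimal sketch - the backend
addresses are hypothetical, and <code>mod_proxy</code>,
<code>mod_proxy_http</code>, <code>mod_proxy_balancer</code>, and
<code>mod_lbmethod_byrequests</code> must be loaded:</p>

```apache
# Weighted round-robin across two hypothetical backends
<Proxy "balancer://app">
    BalancerMember "http://10.0.0.11:8080" loadfactor=2
    BalancerMember "http://10.0.0.12:8080" loadfactor=1
    ProxySet lbmethod=byrequests
</Proxy>

ProxyPass        "/app" "balancer://app"
ProxyPassReverse "/app" "balancer://app"
```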

<p>Apache didn’t accumulate features because it was undisciplined. It
accumulated features because the web kept throwing problems at it and
it kept solving them. The fact that your load balancer now handles SSL
termination doesn’t mean Apache was wrong to support it - it means
Apache was right early enough that the rest of the industry eventually
built dedicated infrastructure around the same idea.</p>

<p>Now look at your AWS bill. CloudFront for CDN. ALB for load balancing
and SSL termination. WAF for request filtering. ElastiCache for
caching. Cognito for authentication. API Gateway for routing. Each one
a line item. Each one a managed service wrapping functionality that
Apache has shipped for free since before most of your team was writing
code.</p>

<p>Amazon Web Services is, in a very real sense, Apache’s feature set
repackaged as paid managed infrastructure. They looked at what the web
needed, looked at what Apache had already solved, and built a business
around operating those solutions at scale so you didn’t have
to. That’s a legitimate value proposition - operations is hard and
sometimes paying AWS is absolutely the right answer. But if you’re
running a handful of servers and paying for half a dozen AWS services
to handle concerns that Apache handles natively, maybe set the Wayback
Machine to 2005, spin up Apache, and keep the credit card in your
pocket.</p>

<p>Grandpa wasn’t just ahead of his time. Grandpa was so far ahead that
Amazon built a cloud business catching up to him.</p>

<h2>So Why Did You Choose nginx?</h2>

<p>Be honest. The real reason is that you learned it first, or your last
job used it, or a blog post from 2012 told you it was the modern
choice. Maybe someone at a conference said Apache was legacy and you
nodded along because everyone else was nodding. That’s how technology
adoption works - narrative momentum, not engineering analysis.</p>

<p>But those nginx blinders have a cost. And the Kubernetes ecosystem
just paid it in full.</p>

<h2>The Cost of the nginx Blinders</h2>

<p>The nginx Ingress Controller became the Kubernetes default early in
the ecosystem’s adoption curve and the pattern stuck. Millions of
production clusters. The de-facto standard. Fortune 500 companies. The
Swiss Army knife of Kubernetes networking - and that flexibility was
precisely its undoing.</p>

<p>The “snippets” feature that made it popular - letting users inject raw
nginx config via annotations - turned out to be an unsanitizable
attack surface baked into the design. CVE-2025-1974 exploited this to
achieve unauthenticated RCE via the admission controller, giving
attackers access to all secrets across all namespaces. Complete
cluster takeover from anything on the pod network. In many common
configurations the pod network is accessible to every workload in your
cloud VPC. The blast radius was the entire cluster.</p>

<p>The architectural flaw couldn’t be fixed without gutting the feature
that made the project worth using. So it was retired instead.</p>

<p>Here is the part nobody is saying out loud: <strong>Apache could have been
your Kubernetes ingress controller all along.</strong></p>

<p>The Apache Ingress Controller exists. It supports path and host-based
routing, TLS termination, WebSocket proxying, header manipulation,
rate limiting, mTLS - everything Ingress NGINX offered, built on a
foundation with 30 years of security hardening and a governance model
that doesn’t depend on one person’s spare time. It doesn’t have an
unsanitizable annotation system because Apache’s configuration model
was designed with proper boundaries from the beginning. The full
Apache module ecosystem - <code>mod_security</code>, <code>mod_authnz_ldap</code>, the
filter chain, all of it - available to every ingress request.</p>

<p>The Kubernetes community never seriously considered it. nginx had the
mindshare, nginx got the default recommendation, nginx became the
assumed answer before the question was even finished. Apache was
dismissed as grandpa’s web server by engineers who had never actually
used it for anything hard - and so the ecosystem bet its ingress layer
on a project sustained by volunteers and crossed its fingers.</p>

<p>The nginx blinders cost the industry IngressNightmare, 6,500 exposed
clusters, and a forced migration that will consume engineering hours
across thousands of organizations in 2026. Not because Apache wasn’t
available. Because nobody looked.</p>

<p>nginx is survived by its commercial fork nginx Plus, approximately
6,500 vulnerable Kubernetes clusters, and a generation of engineers
who will spend Q1 2026 migrating to Gateway API - a migration they
could have avoided entirely.</p>

<h2>Who’s Keeping The Lights On</h2>

<p>Here’s the conversation that should happen in every architecture
review but almost never does: who maintains this and what happens when
something goes wrong?</p>

<p>For Apache the answer has been the same for over 30 years. The Apache
Software Foundation - vendor-neutral, foundation-governed, genuinely
open source. Security vulnerabilities found, disclosed responsibly,
patched. A stable API that doesn’t break your modules between
versions. Predictable release cycles. Institutional stability that has
outlasted every company that ever tried to compete with it.</p>

<p>nginx’s history is considerably more complicated. Written by Igor
Sysoev while employed at Rambler, ownership murky for years, acquired
by F5 in 2019. Now a critical piece of infrastructure owned by a
networking hardware vendor whose primary business interests may or may
not align with the open source project. nginx Plus - the version with
the features that actually compete with Apache on a level playing
field - is commercial. OpenResty, the variant most people reach for
when they need real programmability, is a separate project with its
own maintenance trajectory.</p>

<p>The Ingress NGINX project had millions of users and a maintainership
you could count on one hand. That’s not a criticism of the maintainers
- it’s an indictment of an ecosystem that adopted a critical
infrastructure component without asking who was keeping the lights on.</p>

<p>Three decades of adversarial testing by the entire internet is a
security posture no startup’s stack can match. The Apache Software
Foundation will still be maintaining Apache httpd when the company
that owns your current stack has pivoted twice and been acqui-hired
into oblivion.</p>

<h2>Long Live Apache</h2>

<p>The engineers who dismissed Apache as legacy were looking at a 2003
benchmark and calling it a verdict. They missed the server that
anticipated every problem modern infrastructure is still solving, that
powered the internet before AWS existed to charge you for the
privilege, and that was sitting right there in the Kubernetes
ecosystem waiting to be evaluated while the community was busy betting
critical infrastructure on a volunteer project with an architectural
time bomb in its most popular feature.</p>

<p>Grandpa didn’t just know what he was doing. Grandpa was building the
platform you’re still trying to reinvent - badly, in JavaScript, with
a vulnerability disclosure coming next Tuesday and a maintainer
burnout announcement the Tuesday after that.</p>

<p>The server is fine. It was always fine. Touch grass, update your
mental model, and maybe read the Apache docs before your next
architecture meeting.</p>

<p><em>RIP nginx Ingress Controller. 2015-2026. Maintained by one guy in his
spare time. Missed by the 43% of cloud environments that probably
should have asked more questions.</em></p>

<h3>Sources</h3>

<ul>
<li>IngressNightmare - CVE details and exposure statistics<br />
Wiz Research, March 24, 2025<br />
https://www.wiz.io/blog/ingress-nginx-kubernetes-vulnerabilities</li>
<li>Ingress NGINX Retirement Announcement<br />
Kubernetes SIG Network and Security Response Committee, November 11, 2025<br />
https://kubernetes.io/blog/2025/11/11/ingress-nginx-retirement/</li>
<li>Ingress NGINX CVE-2025-1974 - Official Kubernetes Advisory<br />
Kubernetes, March 24, 2025<br />
https://kubernetes.io/blog/2025/03/24/ingress-nginx-cve-2025-1974/</li>
<li>Transitioning Away from Ingress NGINX - Maintainership and architectural analysis<br />
Google Open Source Blog, February 2026<br />
https://opensource.googleblog.com/2026/02/the-end-of-an-era-transitioning-away-from-ingress-nginx.html</li>
<li>F5 Acquisition of nginx<br />
F5 Press Release, March 2019<br />
https://www.f5.com/company/news/press-releases/f5-acquires-nginx-to-bridge-netops-and-devops</li>
</ul>


<p><em>Disclaimer: This article was written with AI assistance during a long
discussion on the features and history of Apache and nginx, drawing on
my experience maintaining and using Apache over the last 20+
years. The opinions, technical observations, and arguments are
entirely my own. I am in no way affiliated with the ASF, nor do I have
any financial interest in promoting Apache. I have been using and
benefiting from Apache since 1998 and continue to discover features
and capabilities that surprise me even to this day.</em></p>
]]></content>
  </entry>

  <entry>
    <title type="html"><![CDATA[Go Ahead ‘make’ My Day (Part III)]]></title>
    <link href="http://blog.tbcdevelopmentgroup.com/2025-11-14-post.html"/>
    <published>2025-11-14T05:23:45-05:00</published>
    <updated>2025-11-14T05:23:45-05:00</updated>
    <id>http://blog.tbcdevelopmentgroup.com/2025-11-14-post.html</id>
    <content type="html"><![CDATA[<h1>Go Ahead ‘make’ My Day (Part III)</h1>

<p>This is the last in a three-part series on <em>Scriptlets</em>. You can catch up
by reading our introduction and dissection of <em>Scriptlets</em>.</p>

<ul>
<li><a href="https://blog.tbcdevelopmentgroup.com/2023-01-12-post.html">Part I - <em>An introduction to scriptlets</em></a></li>
<li><a href="https://blog.tbcdevelopmentgroup.com/2025-11-13-post.html">Part II - <em>The anatomy of a scriptlet</em></a></li>
</ul>


<p>In this final part, we talk about restraint - the discipline that
keeps a clever trick from turning into a maintenance hazard.</p>

<h2>That uneasy feeling…</h2>

<p>So you are starting to write a few scriptlets and it seems pretty
cool. But something doesn’t feel quite right…</p>

<p>You’re editing a <code>Makefile</code> and suddenly you feel anxious.  Ah, you
expected syntax highlighting, linting, proper indentation, and maybe
that warm blanket of static analysis. So when you drop a 20-line
chunk of Perl or Python into your <code>Makefile</code>, your inner OCD alarms go
off. No highlighting. No linting. Just raw text.</p>

<p>The discomfort isn’t a flaw - it’s feedback. It tells you when you’ve
added too much salt to the soup.</p>

<h2>A scriptlet is not a script!</h2>

<p>A <em>scriptlet</em> is a small, focused snippet of code embedded inside a
<code>Makefile</code> that performs one job quickly and
deterministically. The “-let” suffix matters. It’s not a standalone
program. It’s a helper function, a convenience, a single brushstroke
that belongs in the same canvas as the build logic it supports.</p>

<p>If you ever feel the urge to bite your nails, pick at your skin, or
start counting the spaces in your indentation - stop. You’ve crossed the
line. What you’ve written is no longer a scriptlet; it’s a
script. Give it a real file, a shebang, and a test harness. Keep the
build clean.</p>

<h2>Why we use them</h2>

<p>Scriptlets shine where proximity and simplicity matter more than reuse
(not that we can’t throw one in a separate file and <code>include</code> it in our
<code>Makefile</code>).</p>

<ul>
<li><strong>Cleanliness:</strong> prevents a recipe from looking like a shell script.</li>
<li><strong>Locality:</strong> live where they’re used. No path lookups, no installs.</li>
<li><strong>Determinism:</strong>  transform well-defined input into output. Nothing more.</li>
<li><strong>Portability (of the idea):</strong> every CI/CD system that can run
<code>make</code> can run a one-liner.</li>
</ul>


<p>A Makefile that can generate its own dependency file, extract version
numbers, or rewrite a <code>cpanfile</code> doesn’t need a constellation of helper
scripts. It just needs a few lines of inline glue.</p>

<h2>Why they’re sometimes painful</h2>

<p>We lose the comforts that make us feel like professional developers:</p>

<ul>
<li>No syntax highlighting.</li>
<li>No linting or type hints.</li>
<li>No indentation guides.</li>
<li>No “Format on Save.”</li>
</ul>


<p>The trick is to accept that pain as a necessary check on the limits of
the <em>scriptlet</em>. If you’re constantly wishing for linting and editor
help, it’s your subconscious telling you: <em>this doesn’t belong inline
anymore</em>. You’ve outgrown the <code>-let</code>.</p>

<h2>When to promote your <em>scriptlet</em> to a <em>script</em>…</h2>

<p>Promote a scriptlet to a full-blown script when:</p>

<ul>
<li>It exceeds 30-50 lines.</li>
<li>It gains conditionals or error handling.</li>
<li>You need to test it independently.</li>
<li>It uses more than one or two non-core features.</li>
<li>It’s used by more than one target or project.</li>
<li>You’re debugging quoting more than logic.</li>
<li>You’re spending more time fixing indentation than working on the build.</li>
</ul>


<p>At that point, you’re writing software, not glue. Give it a name, a
shebang, and a home in your <code>tools/</code> directory.</p>

<h2>When to keep it inside your <code>Makefile</code>…</h2>

<p>Keep it inline when:</p>

<ul>
<li>It’s short, pure, and single-use.</li>
<li>It depends primarily on the environment already assumed by your build
(Perl, Python, awk, etc.).</li>
<li>It’s faster to read than to reference.</li>
</ul>


<p>A good scriptlet reads like a make recipe: <em>do this transformation
right here, right now.</em></p>

<pre><code>define create_cpanfile =
    while (&lt;STDIN&gt;) {
        s/[#].*//; s/^\s+|\s+$//g; next if $_ eq q{};
        my ($mod,$v) = split /\s+/, $_, 2;
        print qq{requires "$mod", "$v";\n};
    }
endef

export s_create_cpanfile = $(value create_cpanfile)
</code></pre>

<p>That’s a perfect scriptlet: small, readable, deterministic, and local.</p>

<blockquote><p><em><strong>Rule of Thumb:</strong> If it fits on one screen, keep it inline. If it
scrolls, promote it.</em></p></blockquote>

<h2>Tools for the OCD developer</h2>

<p>If you must relieve the OCD symptoms without promoting your
<em>scriptlet</em> to a <em>script</em>…</p>

<ul>
<li>Add a <code>lint-scriptlets</code> target:
<code>perl -c -e "$$s_create_cpanfile"</code> checks syntax without running it.</li>
<li>Some editors (Emacs <code>mmm-mode</code>, Vim <code>polyglot</code>) can treat marked
sections as sub-languages to enable localized language specific
editing features.</li>
<li>Use <code>include</code> to include a scriptlet into your <code>Makefile</code></li>
</ul>
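<p>That first bullet, spelled out - a sketch assuming the
<code>s_create_cpanfile</code> export from the example above:</p>

```makefile
# Syntax-check the exported scriptlet without executing it
lint-scriptlets:
	perl -c -e "$$s_create_cpanfile"
```

<p><code>perl -c</code> compiles the code and reports
<code>-e syntax OK</code> without running it.</p>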


<p>…however try to resist the urge to over-optimize the
tooling. Feeling the uneasiness grow helps identify the boundary
between <em>scriptlets</em> and <em>scripts</em>.</p>

<h2>You’ve been warned!</h2>

<p>Because scriptlets are powerful, flexible, and fast, it’s easy to
reach for them too often or make them the focus of your project.  They
start as a cure for friction - a way to express a small transformation
inline - but left unchecked, they can sometimes grow arms and
legs. Before long, your <code>Makefile</code> turns into a Frankenstein monster.</p>

<p>The great philosopher Basho (or at least I think it was him) once said:</p>

<blockquote><p><em>A single aspirin tablet eases pain. A whole bottle sends you to the
hospital.</em></p></blockquote>

<p>Thanks for reading.</p>

<h2>Learn More</h2>

<ul>
<li><a href="https://www.gnu.org/software/make/manual/html_node/index.html">GNU <code>make</code></a></li>
<li><a href="https://www.gnu.org/software/make/manual/html_node/Value-Function.html"><code>value</code></a></li>
<li><a href="https://www.gnu.org/software/make/manual/html_node/Multi_002dLine.html"><code>define</code></a></li>
<li><a href="https://www.gnu.org/software/make/manual/html_node/Variables_002fRecursion.html"><code>export</code></a></li>
</ul>

]]></content>
  </entry>

  <entry>
    <title type="html"><![CDATA[Go Ahead ‘make’ My Day (Part II)]]></title>
    <link href="http://blog.tbcdevelopmentgroup.com/2025-11-13-post.html"/>
    <published>2025-11-13T09:25:25-05:00</published>
    <updated>2025-11-13T09:25:25-05:00</updated>
    <id>http://blog.tbcdevelopmentgroup.com/2025-11-13-post.html</id>
    <content type="html"><![CDATA[<h1>Go Ahead ‘make’ My Day (Part II)</h1>

<p>In our previous blog post <a href="https://blog.tbcdevelopmentgroup.com/2023-01-12-post.html">“Go Ahead ‘make’ My
Day”</a> we
presented the <em>scriptlet</em>, an advanced <code>make</code> technique for spicing up
your <code>Makefile</code> recipes. In this follow-up, we’ll deconstruct the
scriptlet and detail the ingredients that make up the secret sauce.</p>

<hr />

<h1>Introducing the <em>Scriptlet</em></h1>

<p><code>Makefile</code> <em>scriptlets</em> are an advanced technique that uses
GNU <code>make</code>’s powerful functions to safely embed a multi-line script
(Perl, in our example) into a single, clean shell command. It turns a
complex block of logic into an easily executable template.</p>

<h2>An Example <em>Scriptlet</em></h2>

<pre><code>#-*- mode: makefile; -*-

DARKPAN_TEMPLATE="https://cpan.openbedrock.net/orepan2/authors/D/DU/DUMMY/%s-%s.tar.gz"

define create_requires =
 # scriptlet to create cpanfile from a list of required Perl modules
 my $DARKPAN_TEMPLATE=$ENV{DARKPAN_TEMPLATE};

 # skip comment lines
 while (s/^#[^\n]+\n//g){};

 # skip blank lines
 while (s/\n\n/\n/) {};

 for (split/\n/) { 
  my ($mod, $v) = split /\s+/;
  next if !$mod;

  my $dist = $mod;
  $dist =~s/::/\-/g;

  my $url = sprintf $DARKPAN_TEMPLATE, $dist, $v;

  print &lt;&lt;"EOF";
requires \"$mod\", \"$v\",
  url =&gt; \"$url\";
EOF
 }

endef

export s_create_requires = $(value create_requires)

cpanfile.darkpan: requires.darkpan
    DARKPAN_TEMPLATE=$(DARKPAN_TEMPLATE); \
    DARKPAN_TEMPLATE=$$DARKPAN_TEMPLATE perl -0ne "$$s_create_requires" $&lt; &gt; $@ || rm $@
</code></pre>

<hr />

<h2>Dissecting the <em>Scriptlet</em></h2>

<h3>1. The Container: Defining the Script (<code>define</code> / <code>endef</code>)</h3>

<p>This section creates the multi-line variable that holds your entire Perl program.</p>

<pre><code>define create_requires =
# Perl code here...
endef
</code></pre>

<ul>
<li><strong><code>define ... endef</code></strong>: This is GNU Make’s mechanism for defining
  a <strong>recursively expanded variable</strong> that spans multiple lines. The
  content is <em>not</em> processed by the shell yet; it’s simply stored by
  <code>make</code>.</li>
<li><strong>The Advantage:</strong> This is the only clean way to write readable,
  indented code (like the <code>while</code> loops and multi-line logic in
  the example) directly inside a <code>Makefile</code>.</li>
</ul>


<h3>2. The Bridge: Passing Environment Data (<code>my $ENV{...}</code>)</h3>

<p>This is a critical step for making your script template portable and
configurable.</p>

<pre><code class="perl">my $DARKPAN_TEMPLATE=$ENV{DARKPAN_TEMPLATE};
</code></pre>

<ul>
<li><strong>The Problem:</strong> Your Perl script needs dynamic values (like the
  template URL) that are set by <code>make</code>.</li>
<li><strong>The Solution:</strong> Instead of hardcoding the URL, the Perl code is
  designed to read from the <strong>shell environment variable</strong>
  <code>$ENV{DARKPAN_TEMPLATE}</code>. This makes the script agnostic to its
  calling environment, delegating the data management back to the
  <code>Makefile</code>.</li>
</ul>


<h3>3. The Transformer: Shell Preparation (<code>export</code> and <code>$(value)</code>)</h3>

<p>This is the “magic” that turns the multi-line Make variable into a
single, clean shell command.</p>

<pre><code>export s_create_requires = $(value create_requires)
</code></pre>

<ul>
<li><strong><code>$(value create_requires)</code></strong>: This Make function
  returns the variable’s <strong>raw, unexpanded text</strong>. That is
  crucial here: without it, <code>make</code> would try to expand the
  Perl <code>$</code>-variables (<code>$F</code>, <code>$_</code>, and
  friends) as its own, mangling the script. With it, the multi-line
  block reaches the shell exactly as written.</li>
<li><strong><code>export s_create_requires = ...</code></strong>: This exports the multi-line
  Perl script content as an <strong>environment variable</strong>
  (<code>s_create_requires</code>) that will be accessible to any shell process
  running in the recipe’s environment.</li>
</ul>
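<p>To see why <code>$(value)</code> matters, consider a toy definition -
the names here are hypothetical:</p>

```makefile
define greet =
print "hello $name\n";
endef

# Plain expansion: make treats $n as one of its own (empty) variables,
# mangling the Perl to roughly: print "hello ame\n";
export broken_greet = $(greet)

# $(value) hands over the raw, unexpanded text - Perl sigils intact
export s_greet = $(value greet)
```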


<h3>4. The Execution: One Atomic Operation (<code>$$</code> and <code>perl -0ne</code>)</h3>

<p>The final recipe executes the entire, complex process as a single,
atomic operation, which is the goal of robust Makefiles.</p>

<pre><code>cpanfile.darkpan: requires.darkpan
    DARKPAN_TEMPLATE=$(DARKPAN_TEMPLATE); \
    DARKPAN_TEMPLATE=$$DARKPAN_TEMPLATE perl -0ne "$$s_create_requires" $&lt; &gt; $@ || rm $@
</code></pre>

<ul>
<li><strong><code>DARKPAN_TEMPLATE=$(DARKPAN_TEMPLATE)</code></strong>: This creates the local
  shell variable.</li>
<li><strong><code>DARKPAN_TEMPLATE=$$DARKPAN_TEMPLATE perl...</code></strong>: This is the
  clean execution. The leading <code>DARKPAN_TEMPLATE=</code> passes the
  shell variable’s value as an <strong>environment variable</strong> to
  the <code>perl</code> process. The <code>$$</code> is how a <code>Makefile</code>
  writes a literal <code>$</code>, so the variable is expanded by the
  <em>shell</em> rather than by <code>make</code>.</li>
<li><strong><code>perl -0ne "..."</code></strong>: Runs the Perl script:
<ul>
<li><code>-n</code> and <code>-e</code>: execute the script against the input.</li>
<li><code>-0</code>: tells Perl to read the input as one single block
  (slurping the file), which is necessary for the multi-line
  regexes and <code>split /\n/</code> logic.</li>
</ul></li>
<li><strong><code>|| rm $@</code></strong>: This is the final mark of quality. It makes the
  entire command <strong>transactional</strong>—if the Perl script fails, the
  half-written target file (<code>$@</code>) is deleted, forcing <code>make</code> to try
  again later.</li>
</ul>


<hr />

<h1>Hey Now! You’re a Rockstar!</h1>

<h2>(..get your game on!)</h2>

<p>Mastering build automation using <code>make</code> will <strong>transform</strong> you from
being an average DevOps engineer into a rockstar. GNU <code>make</code> is a Swiss
Army knife with more tools than you might think! The knives are sharp
and the tools are highly targeted to handle all the real-world issues
build automation has encountered over the decades. Learning to use
<code>make</code> effectively will put you head and shoulders above the herd (see
what I did there? 😉).</p>

<h2>Calling All Pythonistas!</h2>

<p>The scriptlet technique creates a powerful, universal pattern for
clean, atomic builds:</p>

<ul>
<li><strong>It’s Language Agnostic:</strong> Pythonistas! Join the fun! The same
<code>define</code>/<code>export</code> technique works perfectly with <code>python -c</code>.</li>
<li><strong>The Win:</strong> This ensures that every developer - regardless of their
preferred language - can achieve the same clean, atomic build and
<strong>avoid external script chaos</strong>.</li>
</ul>
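<p>For instance, the same shape with an inline Python body might look
like this - the target and file names are hypothetical:</p>

```makefile
# Hypothetical Python scriptlet: count non-comment module requirements
define count_requires =
import sys
total = sum(1 for line in sys.stdin
            if line.strip() and not line.lstrip().startswith("#"))
print(f"{total} modules required")
endef

export s_count_requires = $(value count_requires)

count: requires.txt
	python3 -c "$$s_count_requires" < $<
```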


<p>Learn more about <a href="https://www.gnu.org/software/make/manual/html_node/index.html">GNU
<code>make</code></a>
and move your Makefiles from simple shell commands to precision
instruments of automation.</p>

<p>Thanks for reading.</p>

<h2>Learn More</h2>

<ul>
<li><a href="https://www.gnu.org/software/make/manual/html_node/index.html">GNU <code>make</code></a></li>
<li><a href="https://www.gnu.org/software/make/manual/html_node/Value-Function.html"><code>value</code></a></li>
<li><a href="https://www.gnu.org/software/make/manual/html_node/Multi_002dLine.html"><code>define</code></a></li>
<li><a href="https://www.gnu.org/software/make/manual/html_node/Variables_002fRecursion.html"><code>export</code></a></li>
</ul>

]]></content>
  </entry>

  <entry>
    <title type="html"><![CDATA[Bump Your Semantic Version]]></title>
    <link href="http://blog.tbcdevelopmentgroup.com/2025-04-20-post.html"/>
    <published>2025-04-20T11:37:03-04:00</published>
    <updated>2025-04-20T11:37:03-04:00</updated>
    <id>http://blog.tbcdevelopmentgroup.com/2025-04-20-post.html</id>
    <content type="html"><![CDATA[<h1>Bump Your Semantic Version</h1>

<p>While looking at an old <code>bash</code> script that bumps my semantic
versions, I almost puked at my ham-handed approach. That led me to see
how I could do it “better”.  Why? I dunno…bored on a Saturday morning
and not motivated enough to do the NY Times crossword…</p>

<p>So you want to bump a semantic version string like <code>1.2.3</code>
- major, minor, or patch - and you don’t want ceremony. You want <strong>one
line</strong>, no dependencies, and enough arcane flair to scare off
coworkers.</p>

<p>Here’s a single-line Bash–Perl spell that does exactly that:</p>

<pre><code>v=$(cat VERSION | p=$1 perl -a -F[.] -pe \
'$i=$ENV{p};$F[$i]++;$j=$i+1;$F[$_]=0 for $j..2;$"=".";$_="@F"')
</code></pre>

<h2>What It Does</h2>

<ul>
<li>Reads the current version from a <code>VERSION</code> file (<code>1.2.3</code>)</li>
<li>Increments the part you pass (<code>0</code> for major, <code>1</code> for minor, <code>2</code> for patch)</li>
<li>Resets all lower parts to zero</li>
<li>Writes the result to <code>v</code></li>
</ul>


<hr />

<h2>Scriptlet Form</h2>

<p>Wrap it like this in a shell function:</p>

<pre><code class="bash">bump() {
  v=$(cat VERSION | p=$1 perl -a -F[.] -pe \
  '$i=$ENV{p};$F[$i]++;$j=$i+1;$F[$_]=0 for $j..2;$"=".";$_="@F"')
  echo "$v" &gt; VERSION
}
</code></pre>

<p>Then run:</p>

<pre><code>bump 2   # bump patch (1.2.3 =&gt; 1.2.4)
bump 1   # bump minor (1.2.3 =&gt; 1.3.0)
bump 0   # bump major (1.2.3 =&gt; 2.0.0)
</code></pre>

<hr />

<h1>Makefile Integration</h1>

<p>Want to bump right from <code>make</code>?</p>

<pre><code>p ?= 0

bump:
    @v=$$(cat VERSION | p=$(p) perl -a -F[.] -pe '$$i=$$ENV{p};$$F[$$i]++;$$j=$$i+1;$$F[$$_]=0 for $$j..2;$$"=".";$$_="@F"') &amp;&amp; \
    echo $$v &gt; VERSION &amp;&amp; echo "New version: $$v"

bump-major:
    @$(MAKE) bump p=0

bump-minor:
    @$(MAKE) bump p=1

bump-patch:
    @$(MAKE) bump p=2
</code></pre>

<p>Or break it out into a <code>.bump-version</code> script and source it from your build tooling.</p>

<h2>How It Works (or …Why I Love Perl)</h2>

<pre><code>-a            # autosplit into @F
-F[.]         # split on literal dot
$i=$ENV{p}    # get part index from environment (e.g., 1 for minor)
$F[$i]++      # bump it
$j=$i+1       # start index for resetting
$F[$_]=0 ...  # zero the rest
$"=".";       # join array with dots
$_="@F"       # set output
</code></pre>
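<p>You can sanity-check the spell without touching a <code>VERSION</code> file
(purely illustrative):</p>

<pre><code class="bash">printf '1.2.3' | p=1 perl -a -F'[.]' -pe \
'$i=$ENV{p};$F[$i]++;$j=$i+1;$F[$_]=0 for $j..2;$"=".";$_="@F"'
# 1.3.0
</code></pre>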

<p>If you have to explain this to some junior dev, just say RTFM, skippy:
<code>perldoc perlrun</code>. Use the force, Luke.</p>

<p>And if the senior dev wags his finger and says UUOC, tell him <strong>Ego
malum edo</strong>.</p>
]]></content>
  </entry>

  <entry>
    <title type="html"><![CDATA[<code>END</code> Block Hijacking]]></title>
    <link href="http://blog.tbcdevelopmentgroup.com/2025-03-15-post.html"/>
    <published>2025-03-15T00:00:00-04:00</published>
    <updated>2025-04-01T17:04:10-04:00</updated>
    <id>http://blog.tbcdevelopmentgroup.com/2025-03-15-post.html</id>
    <content type="html"><![CDATA[<p>From <code>perldoc perlmod</code>…</p>

<pre><code>An "END" code block is executed as late as possible, that is, after perl
has finished running the program and just before the interpreter is
being exited, even if it is exiting as a result of a die() function.
(But not if it's morphing into another program via "exec", or being
blown out of the water by a signal--you have to trap that yourself (if
you can).) You may have multiple "END" blocks within a file--they will
execute in reverse order of definition; that is: last in, first out
(LIFO). "END" blocks are not executed when you run perl with the "-c"
switch, or if compilation fails....
</code></pre>

<p>Perl’s <code>END</code> blocks are useful inside your script for doing things
like cleaning up after itself, closing files or disconnecting from
databases. In many cases you use an <code>END</code> block to guarantee certain
behaviors like a commit or rollback of a transaction. You’ll
typically see <code>END</code> blocks in scripts, but occasionally you might find
one in a Perl module.</p>

<p>Over the last four years I’ve done a lot of maintenance work on legacy
Perl applications. I’ve learned more about Perl in these four years
than I learned in the previous 20. <strong>Digging into bad code is the best
way to learn how to write good code.</strong> It’s sometimes hard to decide if
code is good or bad, but to paraphrase a Supreme Court justice, <em>I
can’t always define bad code, but I know it when I see it.</em></p>

<p>One of the gems I’ve stumbled upon was a module that provided needed
functionality for many scripts that included its own <code>END</code> block.</p>

<p>Putting an <code>END</code> block in a Perl module is an anti-pattern and just <strong>bad
mojo</strong>. <strong>A module should never contain an <code>END</code> block.</strong> Here are
some alternatives:</p>

<ul>
<li>If cleanup is necessary, provide (and document) a <code>cleanup()</code> method</li>
<li>Use <strong>a destructor (<code>DESTROY</code>) in an object-oriented module</strong></li>
</ul>
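<p>Here’s a tiny, runnable sketch of the <code>DESTROY</code> alternative (the
package name <code>My::Resource</code> is made up for illustration). Cleanup
fires when the object goes out of scope - not at interpreter shutdown - so
the module never hijacks the caller’s exit:</p>

<pre><code class="bash">perl -e '
package My::Resource;
sub DESTROY { print "resource cleaned up\n" }

package main;
{ my $r = bless {}, "My::Resource" }  # DESTROY runs as $r leaves scope
print "after scope\n";
'
</code></pre>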


<p>Modules should provide functionality - not take control of your
script’s shutdown behavior.</p>

<h2>Why Would Someone Put an END Block in a Perl Module?</h2>

<p>The first and most obvious answer is that they were unaware of how a
<code>DESTROY</code> method can be employed. If you know something about the
author and you’re convinced they know better, then why else?</p>

<p>I theorize that the author was trying to create a module that would
encapsulate functionality that he would use in <strong>EVERY</strong> script he
wrote for the application. While the <em>faux pas</em> might be forgiven I’m
not ready to put my wagging finger back in its holster.</p>

<p>If you want to write a wrapper for all your scripts and you’ve already
settled on using a Perl module to do so, then please for all that is
good and holy do it right. Here are some potential guidelines for
that wrapper:</p>

<ul>
<li>A <code>new()</code> method that instantiates your wrapper</li>
<li>An <code>init()</code> method that encapsulates the common
startup operations with options to control whether some are executed
or not</li>
<li>A <code>run()</code> method that executes the functionality for the script</li>
<li>A <code>finalize()</code> method for executing cleanup procedures</li>
<li>POD that describes the common functionality provided as well as any options
for controlling their invocation</li>
</ul>


<p>All of these methods could be overridden if you just use a plain ol’ Perl
module. For those who prefer composition over inheritance, use a
role with something like <code>Role::Tiny</code> to provide those universally
required methods. A role provides better flexibility by letting you hook
in <em>before</em> or <em>after</em> the methods whose behavior you modify.</p>

<h2>How Can We Take Back Control?</h2>

<p>The particular script I was working on included a module (<em>whose name I
shall not speak</em>) containing just such an <code>END</code> block. My script
should have exited <strong>cleanly and quietly</strong>. Instead, it produced
mysterious messages during shutdown. Worse yet, I feared some
undocumented behaviors and black magic might have been conjured up
during that process! After a bit of debugging, I found the culprit:</p>

<ul>
<li>The module had an <code>END</code> block baked into it</li>
<li>This <code>END</code> block printed debug messages to STDERR while doing other
cleanup operations</li>
<li>Worse, it <strong>ran unconditionally</strong> when my script terminated</li>
</ul>


<h3>The Naive Approach</h3>

<p>My initial attempt to suppress the module’s <code>END</code> block:</p>

<pre><code>END {
    use POSIX;
    POSIX::_exit(0);
}
</code></pre>

<p>This <strong>works</strong> as long as my script exits normally. But if my
script <strong>dies</strong> due to an error, the rogue <code>END</code> block <strong>still
executes</strong>. Not exactly the behavior I want.</p>

<h3>A Better Approach</h3>

<p>Here’s what I want to happen:</p>

<ul>
<li>Prevent <strong>any</strong> rogue <code>END</code> block from executing.</li>
<li>Handle errors gracefully.</li>
<li>Ensure my script always exits with a meaningful status code.</li>
</ul>


<p>A better method for scripts that need an <code>END</code> block to claw back
control:</p>


<pre><code>use English qw(-no_match_vars);
use POSIX ();

my $retval = eval {
    return main();
};

END {
  if ($EVAL_ERROR) {
      warn "$EVAL_ERROR";  # Preserve exact error message
  }

  POSIX::_exit( $retval // 1 );
}
</code></pre>

<ul>
<li><strong>Bypasses all <code>END</code> blocks</strong> – Since <code>POSIX::_exit()</code> terminates
the process <strong>immediately</strong></li>
<li><strong>Handles errors cleanly</strong> – If <code>main()</code> throws an exception, we
<strong>log it without modifying the message</strong></li>
<li><strong>Forces explicit return values</strong> – If <code>main()</code> forgets to return a
status, we default to <code>1</code>, ensuring no silent failures.</li>
<li>Future maintainers will see <strong>exactly what’s happening</strong></li>
</ul>
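<p>Here’s a self-contained demonstration (the package name <code>Noisy</code> is
invented to stand in for the misbehaving module). Because <code>END</code> blocks
run LIFO, the script’s <code>END</code> - registered last - runs first, and
<code>POSIX::_exit()</code> terminates the process before the rogue block ever
fires:</p>

<pre><code class="bash">perl -e '
package Noisy;                    # simulates the misbehaving module
END { print "rogue END ran\n" }

package main;
use POSIX ();
$| = 1;                           # flush STDOUT; _exit skips normal cleanup
END { POSIX::_exit(0) }           # registered last, runs first (LIFO)
print "script body done\n";
'
</code></pre>

<p>This prints only <code>script body done</code>; the rogue message never appears.</p>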


<h2>Caveat Emptor</h2>

<p>Of course, you should know what behavior you are bypassing if you
decide to wrestle control back from some misbehaving module. In my
case, I knew that the behaviors being executed in the <code>END</code> block
could safely be ignored. Even if they couldn’t be ignored, I can still
provide those behaviors in my own cleanup procedures.</p>

<p>Isn’t this what <strong>future me</strong> or the poor wretch tasked with a
dumpster dive into a legacy application would want? Explicitly
seeing the whole shebang without hours of scratching your head looking
for mysterious messages that emanate from the depths is
priceless. <strong>It’s gold, Jerry! Gold!</strong></p>

<p><em>Next up…a better wrapper.</em></p>
]]></content>
  </entry>

  <entry>
    <title type="html"><![CDATA[How to Fix Apache 2.4 Broken Directory Requests (Part III)]]></title>
    <link href="http://blog.tbcdevelopmentgroup.com/2025-03-14-post.html"/>
    <published>2025-03-14T00:00:00-04:00</published>
    <updated>2025-04-01T17:04:10-04:00</updated>
    <id>http://blog.tbcdevelopmentgroup.com/2025-03-14-post.html</id>
    <content type="html"><![CDATA[<h2>Introduction</h2>

<p>In our previous posts, we explored how <strong>Apache 2.4 changed its
handling of directory requests</strong> when <code>DirectorySlash Off</code> is set,
breaking the implicit <code>/dir → /dir/</code> redirect behavior that worked in
Apache 2.2. We concluded that while <strong>an external redirect is the only
reliable fix</strong>, this change in behavior led us to an even bigger
question:</p>

<blockquote><p>Is this a bug or an intentional design change in Apache?</p></blockquote>

<p>After digging deeper, we’ve uncovered something <strong>critically
important</strong> that is not well-documented:</p>

<h3>Apache does not restart the request cycle after an internal rewrite, and this can break expected behaviors like <code>DirectoryIndex</code>.</h3>

<p>This post explores why this happens, whether it’s a feature or a bug,
and why <strong>Apache’s documentation should explicitly clarify this
behavior</strong>.</p>

<hr />

<h2>What Happens When Apache Internally Rewrites a Request?</h2>

<p>Let’s revisit the problem: we tried using <strong>an internal rewrite</strong> to
append a trailing slash for directory requests:</p>

<pre><code>RewriteEngine On
RewriteCond %{REQUEST_URI} !/$
RewriteCond %{DOCUMENT_ROOT}/%{REQUEST_URI} -d
RewriteRule ^(.*)$ /$1/ [L]
</code></pre>

<p><strong>Expected Behavior:</strong></p>

<ul>
<li>Apache should internally rewrite <code>/setup</code> to <code>/setup/</code>.</li>
<li>Since <code>DirectoryIndex index.roc</code> is set, Apache should serve <code>index.roc</code>.</li>
</ul>

<p><strong>Actual Behavior:</strong></p>

<ul>
<li>Apache <strong>internally rewrites</strong> <code>/setup</code> to <code>/setup/</code>, but then
immediately <strong>fails with a 403 Forbidden</strong>.</li>
<li>The error log states:
<pre><code>AH01276: Cannot serve directory /var/www/vhosts/treasurersbriefcase/htdocs/setup/: No matching DirectoryIndex (none) found
</code></pre></li>
<li><strong>Apache is treating <code>/setup/</code> as an empty directory instead of recognizing <code>index.roc</code>.</strong></li>
</ul>

<hr />

<h2>The Key Issue: Apache Does Not Restart Request Processing After an Internal Rewrite</h2>

<p>Unlike what many admins assume, <strong>Apache does not <em>start over</em> after an internal rewrite</strong>.</p>

<h3>How Apache Processes Requests (Simplified)</h3>

<ol>
<li>The request arrives (<code>/setup</code>).</li>
<li>Apache processes <code>mod_rewrite</code>.</li>
<li>Apache determines how to serve the request.</li>
<li>If an index file (<code>index.html</code>, <code>index.php</code>, etc.) exists, <code>DirectoryIndex</code> resolves it.</li>
</ol>


<h3>Why Internal Rewrites Don’t Restart Processing</h3>

<ul>
<li><strong>Apache processes mod_rewrite before it checks <code>DirectoryIndex</code>.</strong></li>
<li>Once a rewrite occurs, Apache <strong>continues processing from where it left off</strong>.</li>
<li>This means <strong>it does not re-check <code>DirectoryIndex</code> after an internal rewrite.</strong></li>
<li>Instead, it sees <code>/setup/</code> as an <strong>empty directory with no default file</strong> and denies access with <em>403 Forbidden</em>.</li>
</ul>


<hr />

<h2>Is This a Bug or a Feature?</h2>

<p>First, let’s discuss why this worked in Apache 2.2.</p>

<p>The key reason internal rewrites worked in Apache 2.2 is that Apache
<strong>restarted the request processing cycle after a rewrite</strong>. This
meant that:</p>

<ul>
<li>After an internal rewrite, Apache treated the rewritten request as a
brand-new request.</li>
<li>As a result, it re-evaluated DirectoryIndex and correctly served
<code>index.html</code>, <code>index.php</code>, or any configured default file.</li>
<li>Since <code>DirectorySlash</code> was handled earlier in the request cycle,
Apache 2.2 still applied directory handling rules properly, even after
an internal rewrite.</li>
</ul>


<p>In Apache 2.4, this behavior changed. Instead of restarting the
request cycle, Apache continues processing the request from where it
left off. This means that after an internal rewrite, <code>DirectoryIndex</code> is
never reprocessed, leading to the <em>403 Forbidden</em> errors we
encountered. This fundamental change explains why no internal solution
works the way it did in Apache 2.2.</p>

<h3>Why It’s Likely an Intentional Feature</h3>

<ul>
<li>In <strong>Apache 2.2</strong>, some rewrites <strong>did restart the request cycle</strong>, which was seen as inefficient.</li>
<li>In <strong>Apache 2.4</strong>, request processing was optimized for
<strong>performance</strong>, meaning it <strong>does not restart</strong> after an internal
rewrite.</li>
<li>The behavior <strong>is consistent</strong> across different Apache 2.4 installations.</li>
<li>Some discussions in Apache’s mailing lists and bug tracker mention this as “expected behavior.”</li>
</ul>


<h3>Why This is Still a Problem</h3>

<ul>
<li><strong>This behavior is not explicitly documented</strong> in <code>mod_rewrite</code> or <code>DirectoryIndex</code> docs.</li>
<li>Most admins <strong>expect Apache to reprocess the request fully after a rewrite</strong>.</li>
<li>The lack of clarity leads to <strong>confusion and wasted debugging time</strong>.</li>
</ul>


<hr />

<h2>Implications for Apache 2.4 Users</h2>

<h3>1. Mod_Rewrite Behavior is Different From What Many Assume</h3>

<ul>
<li>Internal rewrites <strong>do not restart request processing</strong>.</li>
<li><strong>DirectoryIndex is only evaluated once</strong>, before the rewrite happens.</li>
<li>This is <strong>not obvious</strong> from Apache’s documentation.</li>
</ul>


<h3>2. The Only Reliable Fix is an External Redirect</h3>

<p>Since Apache <strong>won’t reprocess DirectoryIndex</strong>, the only way to
guarantee correct behavior is to <strong>force a new request</strong> via an
external redirect:</p>

<pre><code>RewriteEngine On
RewriteCond %{REQUEST_URI} !/$
RewriteCond %{DOCUMENT_ROOT}/%{REQUEST_URI} -d
RewriteRule ^(.*)$ http://%{HTTP_HOST}/$1/ [R=301,L]
</code></pre>

<p>This <strong>forces Apache to start a completely new request cycle</strong>,
ensuring that <code>DirectoryIndex</code> is evaluated properly.</p>

<h3>3. Apache Should Improve Its Documentation</h3>

<p>We believe this behavior <strong>should be explicitly documented</strong> in:</p>

<ul>
<li><strong>mod_rewrite documentation</strong> (stating that rewrites do not restart request processing).</li>
<li><strong>DirectoryIndex documentation</strong> (noting that it will not be re-evaluated after an internal rewrite).</li>
</ul>

<p>This would prevent confusion and help developers troubleshoot these issues more efficiently.</p>

<hr />

<h2>Conclusion: A Feature, But a Poorly Documented One</h2>

<ul>
<li>The fact that <strong>Apache does not restart processing after an internal
rewrite</strong> is <strong>likely an intentional design choice</strong>.</li>
<li>However, <strong>this is not well-documented</strong>, leading to confusion.</li>
<li>The <strong>only solution</strong> remains an <strong>external redirect</strong> to force a fresh request cycle.</li>
<li>We believe <strong>Apache should update its documentation</strong> to reflect this behavior more clearly.</li>
</ul>

]]></content>
  </entry>

  <entry>
    <title type="html"><![CDATA[How to Fix Apache 2.4 Broken Directory Requests (Part II)]]></title>
    <link href="http://blog.tbcdevelopmentgroup.com/2025-03-13-post.html"/>
    <published>2025-03-13T00:00:00-04:00</published>
    <updated>2025-04-01T17:04:10-04:00</updated>
    <id>http://blog.tbcdevelopmentgroup.com/2025-03-13-post.html</id>
    <content type="html"><![CDATA[<h2>Introduction</h2>

<p>In our previous blog post, we detailed a clever external redirect
solution to address the <strong>Apache 2.4 regression</strong> that broke automatic
directory handling when <code>DirectorySlash Off</code> was set. We teased the
question: <em>Can we achieve the same behavior with an internal
redirect?</em> Spoiler alert - <strong>Nope</strong>.</p>

<p>In this follow-up post, we’ll explore why internal rewrites fall
short, our failed attempts to make them work, and why the <strong>external
redirect remains the best and only solution</strong>.</p>

<hr />

<h2>The Apache 2.2 vs. 2.4 Behavior Shift</h2>

<h3><strong>How Apache 2.2 Handled Directory Requests</strong></h3>

<ul>
<li>If a user requested <code>/dir</code> <strong>without a trailing slash</strong>, Apache
would automatically <strong>redirect</strong> them to <code>/dir/</code>.</li>
<li>Apache would then serve the directory’s <code>index.html</code> (or any file
specified by <code>DirectoryIndex</code>).</li>
<li>This behavior was <strong>automatic and required no additional configuration</strong>.</li>
</ul>


<h3><strong>What Changed in Apache 2.4</strong></h3>

<ul>
<li>When <code>DirectorySlash Off</code> is set, <strong>Apache stops auto-redirecting
directories</strong>.</li>
<li>Instead of treating <code>/dir</code> as <code>/dir/</code>, Apache <strong>tries to serve
<code>/dir</code> as a file</strong>, which leads to <strong>403 Forbidden</strong> errors.</li>
<li><code>DirectoryIndex</code> <strong>no longer inherits</strong> globally from the document
root - each directory <strong>must be explicitly configured</strong> to serve an
index file.</li>
<li><strong>Apache does not reprocess <code>DirectoryIndex</code> after an internal rewrite.</strong></li>
</ul>


<hr />

<h2>The Internal Rewrite Attempt</h2>

<h3><strong>Our Initial Idea</strong></h3>

<p>Since Apache wasn’t redirecting directories automatically anymore, we
thought we could internally rewrite requests like this:</p>

<pre><code> RewriteEngine On

 # If the request is missing a trailing slash and is a directory, rewrite it internally
 RewriteCond %{REQUEST_URI} !/$
 RewriteCond %{DOCUMENT_ROOT}/%{REQUEST_URI} -d
 RewriteRule ^(.*)$ /$1/ [L]
</code></pre>

<p><strong>Why This Doesn’t Work</strong>:</p>

<ul>
<li>While this <strong>internally rewrites the request</strong>, Apache <strong>does not
reprocess <code>DirectoryIndex</code> after the rewrite</strong>.</li>
<li>If <code>DirectoryIndex</code> is not <strong>explicitly defined for each
directory</strong>, Apache still refuses to serve the file and throws a
<strong>403 Forbidden</strong>.</li>
<li>Unlike Apache 2.2, Apache 2.4 treats <code>/dir</code> as a raw file request instead of checking for an index page.</li>
</ul>


<h3><strong>The Per-Directory Fix (Which Also Fails)</strong></h3>

<p>To make it work, we tried to manually configure every directory:</p>

<pre><code>&lt;Directory "/var/www/vhosts/treasurersbriefcase/htdocs/setup/"&gt;
    Require all granted
    DirectoryIndex index.roc
&lt;/Directory&gt;
</code></pre>

<h3>Why This Also Fails:</h3>

<ul>
<li>Apache does not reprocess <code>DirectoryIndex</code> after an internal
rewrite. Even though <code>DirectoryIndex index.roc</code> is explicitly set,
Apache never reaches this directive after rewriting <code>/setup</code> to <code>/setup/</code>.</li>
<li>Apache still treats <code>/setup/</code> as an empty directory, leading to a <em>403
Forbidden</em> error.</li>
<li>The only way to make this work per directory would be to use an
external redirect to force a new request cycle.</li>
</ul>


<p>This means that even if we were willing to configure every
directory manually, it still wouldn’t work as expected.</p>

<h2>FallbackResource (Why This Was a Dead End)</h2>

<p>We briefly considered whether <code>FallbackResource</code> could help by
redirecting requests for directories to their respective index files:</p>

<pre><code>&lt;Directory "/var/www/vhosts/treasurersbriefcase/htdocs/setup/"&gt;
    DirectoryIndex index.roc
    Options -Indexes
    Require all granted
    FallbackResource /setup/index.roc
&lt;/Directory&gt;
</code></pre>

<h3>Why This Makes No Sense</h3>

<ul>
<li><code>FallbackResource</code> is designed to handle <em>404 Not Found</em> errors, not <em>403 Forbidden</em> errors.</li>
<li>Since Apache already recognizes <code>/setup/</code> as a valid directory but
refuses to serve it, <code>FallbackResource</code> is never triggered.</li>
<li>This does not address the fundamental issue that Apache does not
reprocess <code>DirectoryIndex</code> after an internal rewrite.</li>
</ul>


<p>This was a <strong>red herring</strong> in our troubleshooting; <code>FallbackResource</code> was
never a viable solution.</p>

<hr />

<h2>Sledgehammer Approach: Direct Rewrite to the Index File</h2>

<p>Another way to handle this issue would be to explicitly rewrite
directory requests to their corresponding index file, bypassing
Apache’s <code>DirectoryIndex</code> handling entirely:</p>

<pre><code>RewriteEngine On
RewriteCond %{REQUEST_URI} !/$
RewriteCond %{DOCUMENT_ROOT}/%{REQUEST_URI} -d
RewriteRule ^(.*)$ /$1/index.roc [L]
</code></pre>

<h3>It works…</h3>

<ul>
<li>It completely avoids Apache’s broken <code>DirectoryIndex</code> handling in Apache 2.4.</li>
<li>No need for <code>DirectorySlash Off</code>, since the request is rewritten directly to a file.</li>
<li>It prevents the <em>403 Forbidden</em> issue because Apache is no longer serving a bare directory.</li>
</ul>


<h3>…but it’s not ideal</h3>

<ul>
<li>Every directory would need an explicit rewrite rule to its corresponding index file.</li>
<li>If different directories use different index files (<code>index.html</code>,
<code>index.php</code>, etc.), additional rules would be required.</li>
<li>This does not scale well without complex conditional logic.</li>
</ul>
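<p>For the record, those conditional rules might look something like this
(an untested sketch, with one condition/rule pair per index-file flavor):</p>

<pre><code>RewriteEngine On

RewriteCond %{REQUEST_URI} !/$
RewriteCond %{DOCUMENT_ROOT}%{REQUEST_URI}/index.roc -f
RewriteRule ^(.*)$ /$1/index.roc [L]

RewriteCond %{REQUEST_URI} !/$
RewriteCond %{DOCUMENT_ROOT}%{REQUEST_URI}/index.html -f
RewriteRule ^(.*)$ /$1/index.html [L]
</code></pre>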


<p>While this approach <strong>technically works</strong>, it reinforces our main
conclusion:</p>

<ul>
<li>Apache 2.4 no longer restarts the request cycle after a rewrite, so we need to account for it manually.</li>
<li>The external redirect remains the only scalable solution.</li>
</ul>

<h2>Why the External Redirect is the Best Approach</h2>

<p>Instead of <strong>fighting</strong> Apache’s new behavior, we can <strong>work with it</strong> using an external redirect:</p>

<pre><code>RewriteEngine On
RewriteCond %{REQUEST_URI} !/$
RewriteCond %{DOCUMENT_ROOT}/%{REQUEST_URI} -d
RewriteRule ^(.*)$ http://%{HTTP_HOST}/$1/ [R=301,L]
</code></pre>

<h3><strong>Why This Works Perfectly</strong></h3>

<ul>
<li>Redirects <code>/dir</code> to <code>/dir/</code> <strong>before Apache even tries to serve it</strong>.</li>
<li>Preserves <strong><code>DirectoryIndex</code> behavior</strong> globally, just like Apache 2.2.</li>
<li><strong>No need</strong> for per-directory configuration.</li>
<li>Ensures <strong>SEO-friendly canonical URLs</strong> with the correct trailing slash.</li>
</ul>


<hr />

<h2>Conclusion</h2>

<p>So, can we solve this with an internal redirect?</p>

<ul>
<li><em>NO!</em> … because Apache does not reprocess <code>DirectoryIndex</code> after an internal rewrite.</li>
<li>Even explicit per-directory <code>DirectoryIndex</code> settings fail if Apache
has already decided the request is invalid.</li>
<li><code>FallbackResource</code> never applied, since Apache rejected the request with <em>403 Forbidden</em>, not <em>404
Not Found</em>.</li>
</ul>


<p>Does our external redirect solution still hold up?</p>

<p><strong>Yes!!</strong>, and in fact, it’s not just the <em>best solution</em> - it’s the only <em>reliable one</em>.</p>

<h2>Lessons Learned</h2>

<ul>
<li>Apache 2.4 fundamentally changed how it handles directory requests
when <code>DirectorySlash Off</code> is set.</li>
<li>Internal rewrites cannot fully restore Apache 2.2 behavior because Apache does not restart request
processing after a rewrite.</li>
<li>The only way to ensure correct behavior is to use an external
redirect before Apache attempts to serve the request.</li>
</ul>


<p>If you haven’t read our original post yet, check it out
<a href="https://blog.tbcdevelopmentgroup.com/2025-03-12-post.html">here</a> for
the full explanation of our external redirect fix.</p>

<p>In <a href="https://blog.tbcdevelopmentgroup.com/2025-03-14-post.html">part III of this series</a> we’ll beat this dead horse and potentially
explain why this was a problem in the first place…</p>
]]></content>
  </entry>

  <entry>
    <title type="html"><![CDATA[How to Fix Apache 2.4 Broken Directory Requests]]></title>
    <link href="http://blog.tbcdevelopmentgroup.com/2025-03-12-post.html"/>
    <published>2025-03-12T00:00:00-04:00</published>
    <updated>2025-04-01T17:04:10-04:00</updated>
    <id>http://blog.tbcdevelopmentgroup.com/2025-03-12-post.html</id>
    <content type="html"><![CDATA[<h1>How to Fix Apache 2.4 Broken Directory Requests</h1>

<h2><strong>Introduction</strong></h2>

<p>I’m currently re-architecting my non-profit accounting SaaS (<a href="https://www.treasurersbriefcase.com">Treasurer’s
Briefcase</a>) to use Docker containers for a portable development
environment and eventually for running under Kubernetes. The current
architecture, designed in 2012, uses an EC2-based environment for both
development and run-time execution.</p>

<p>As part of that effort I’m migrating from Apache 2.2 to 2.4. In the
new development environment I’m using my Chromebook to access the
containerized application running on the EC2 dev server. I described
that setup in my <a href="https://blog.tbcdevelopmentgroup.com/2025-03-07-post.html">previous
blog</a>. In
that setup I’m accessing the application on port 8080 as <code>http://local-dev:8080</code>.</p>

<p>If, as in the setup described in my blog, you are running Apache on a
<strong>non-standard port</strong> (e.g., <code>:8080</code>) - perhaps in <strong>Docker, EC2, or
via an SSH tunnel</strong> - you may have noticed an annoying issue after
migrating from <strong>Apache 2.2 to Apache 2.4</strong>…</p>

<h2>Apache 2.4 Drops the Port When Redirecting Directories!</h2>

<p>Previously, when requesting a directory <strong>without a trailing slash</strong>
(e.g., <code>/setup</code>), Apache automatically redirected to <strong><code>/setup/</code></strong>
while preserving the port. However, in Apache 2.4, the redirect is
done <strong>externally</strong> AND <strong>drops the port</strong>, breaking relative URLs and
form submissions.</p>

<p>For example, suppose you have a form under the <code>/setup</code> directory
whose <code>action</code> is “next-step.html”. The expected behavior would be
to post to <code>/setup/next-step.html</code>. But what really happens is
different. You can’t even get to the form in the first place with the
URL <code>http://local-dev:8080/setup</code>!</p>

<ul>
<li><strong>Expected Redirect (Apache 2.2)</strong>:
<code>http://yourserver:8080/setup</code> => <code>http://yourserver:8080/setup/</code></li>
<li><strong>Actual Redirect (Apache 2.4, Broken)</strong>:
<code>http://yourserver:8080/setup</code> => <code>http://yourserver/setup/</code> (<strong>port <code>8080</code> is missing!</strong>)</li>
</ul>


<p>This causes problems for some pages in web applications running behind
<strong>Docker, SSH tunnels, and EC2 environments</strong>, where port forwarding
is typically used.</p>

<h2>Investigating the Issue</h2>

<p>If you’re experiencing this problem, you can confirm it by running:</p>

<pre><code>curl -IL http://yourserver:8080/setup
</code></pre>

<p>You’ll likely see:</p>

<pre><code>HTTP/1.1 301 Moved Permanently
Location: http://yourserver/setup/
</code></pre>

<p>Apache <strong>dropped <code>8080</code> from the redirect</strong>, causing requests to break.</p>

<h2>Workarounds</h2>

<p>Several workarounds exist, but they don’t work in our example.</p>

<ul>
<li><strong>Disabling</strong> <code>DirectorySlash</code>: Prevents redirects but causes <code>403 Forbidden</code> errors when accessing directories.</li>
<li><strong>Using</strong> <code>FallbackResource</code>: Works, but <strong>misroutes unrelated requests</strong>.</li>
<li><strong>Hardcoding the port in rewrite rules</strong>: Not flexible across different environments.</li>
</ul>


<p>Instead, we need a solution that <strong>dynamically preserves the port when necessary.</strong></p>

<h2>The Fix</h2>

<p>To <strong>restore Apache 2.2 behavior</strong>, we can use a rewrite rule that
<strong>only preserves the port if it was in the original request</strong>.</p>

<h3>Apache 2.4 Fix: Port-Preserving Rewrite Rule</h3>

<pre><code>&lt;VirtualHost *:8080&gt;
    ServerName yourserver
    DocumentRoot /var/www/html

    &lt;Directory /var/www/html&gt;
        Options -Indexes +FollowSymLinks
        DirectoryIndex index.html index.php
        Require all granted
        # Keep normal Apache directory behavior
        DirectorySlash On
    &lt;/Directory&gt;

    # Fix Apache 2.4 Directory Redirects: Preserve Non-Standard Ports
    RewriteEngine On
    RewriteCond %{REQUEST_URI} !/$
    RewriteCond %{DOCUMENT_ROOT}/%{REQUEST_URI} -d
    RewriteCond %{SERVER_PORT} !^80$
    RewriteCond %{SERVER_PORT} !^443$
    RewriteRule ^(.*)$ http://%{HTTP_HOST}/$1/ [R=301,L]

    UseCanonicalName Off
&lt;/VirtualHost&gt;
</code></pre>

<h2>Explanation</h2>

<ul>
<li>Automatically appends a missing trailing slash (<strong><code>/setup</code></strong> => <strong><code>/setup/</code></strong>).</li>
<li>Preserves the port only if it’s non-standard (<strong><code>!=80</code></strong> and <strong><code>!=443</code></strong>).</li>
<li>Avoids hardcoding <strong><code>:8080</code></strong>, making it flexible for any non-standard port.</li>
<li>Restores Apache 2.2 behavior while keeping things modern and correct.</li>
</ul>
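
<p>One subtlety worth calling out: the two port conditions must combine
with AND (mod_rewrite’s default when no flag is given), because joining
them with <code>[OR]</code> would make the pair true for <em>every</em>
port (any port is either not 80 or not 443). A quick shell sketch of the
intended logic:</p>

<pre><code class="bash"># Mirror of the RewriteCond port test: preserve the port only when it
# is neither 80 nor 443 (an AND of the two checks, not an OR)
is_nonstandard() {
    case "$1" in
        80|443) return 1 ;;  # standard port: let the redirect omit it
        *)      return 0 ;;  # non-standard port: keep it in the URL
    esac
}

for p in 80 443 8080; do
    if is_nonstandard "$p"; then
        echo "$p: preserve port in redirect"
    else
        echo "$p: standard port, omit it"
    fi
done
</code></pre>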


<h2>Example: Running Apache in Docker on EC2 via SSH Tunnel</h2>

<h3>The Setup (<a href="https://blog.tbcdevelopmentgroup.com/2025-03-07-post.html">see previous blog</a>)</h3>

<ol>
<li><strong>Docker container</strong> running Apache on <strong>port 80</strong> inside an <strong>EC2 instance</strong>.</li>
<li><strong>Firewall rules allow only my home IP</strong> to access the server.</li>
<li><strong>SSH tunnel (jump box) forwards port 80 securely.</strong></li>
<li><strong>Chromebook’s SSH settings forward port </strong><code>8080</code><strong> locally to </strong><code>80</code><strong> on the jump box.</strong></li>
</ol>


<h3>How the Fix Helps</h3>

<ul>
<li>Previously, <code>/setup</code> redirected externally <strong>without the port</strong>, causing failures.</li>
<li>This fix uses <code>mod_rewrite</code> and a <code>RewriteRule</code> that <strong>ensures port </strong><code>8080</code><strong> is preserved</strong>.</li>
</ul>


<h2>Conclusion</h2>

<p>Apache 2.4’s <strong>port-dropping behavior is an unexpected regression</strong>
from 2.2, but we can fix it with a simple rewrite rule that <strong>restores
the expected behavior without breaking anything</strong>.</p>

<p>If you’re running <strong>Docker, EC2, or SSH tunnels</strong>, this fix
saves you from jumping through hoops like altering the application or
changing your networking setup.</p>

<h2>Postamble</h2>

<p>Hmmmm…maybe we can use internal redirects instead of an external
redirect??? Stay tuned.</p>
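
<p>For the curious, one untested direction: rewrite internally so the
trailing slash is added server-side and mod_dir never issues an
external redirect at all. The catch is that the browser’s URL would
stay at <code>/setup</code>, so relative links on the page could break,
which is exactly why mod_dir redirects in the first place. A
hypothetical sketch:</p>

<pre><code># Untested sketch - add the slash internally, no external redirect
RewriteEngine On
RewriteCond %{DOCUMENT_ROOT}%{REQUEST_URI} -d
RewriteRule ^(.*[^/])$ $1/ [PT]
</code></pre>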
]]></content>
  </entry>

  <entry>
    <title type="html"><![CDATA[Web Development on a Chromebook]]></title>
    <link href="http://blog.tbcdevelopmentgroup.com/2025-03-07-post.html"/>
    <published>2025-03-07T19:21:14-05:00</published>
    <updated>2025-04-01T17:04:10-04:00</updated>
    <id>http://blog.tbcdevelopmentgroup.com/2025-03-07-post.html</id>
    <content type="html"><![CDATA[<h1>Using a Chromebook for Remote Web Development</h1>

<h2>Introduction</h2>

<p>If you’re doing some development on a remote server that you access
via a bastion host (e.g., an EC2 instance), and you want a seamless
way to work from a Chromebook, you can set up an SSH tunnel through
the bastion host to access your development server.</p>

<p>This guide outlines how to configure a secure and efficient
development workflow using Docker, SSH tunnels, and a Chromebook.</p>

<h2>Motivation</h2>

<p>Your Chromebook is a great development environment, but truth be told,
the cloud is better. Why? Because you can leverage a bucket load of
functionality, resources and infrastructure that is powerful yet
inexpensive. Did I mention backups? My Chromebook running a version of
Debian rocks, but in general I use it as a conduit to the cloud.</p>

<p>So here’s the best of both worlds. I can use a kick-butt terminal
(<code>terminator</code>) on my Chromie and use its networking mojo to access my
web servers running in the cloud.</p>

<h2>Network Setup Overview</h2>

<p>In this setup:</p>

<ul>
<li>The Chromebook runs Linux and connects via SSH to the bastion host.</li>
<li>The bastion host acts as an intermediary, forwarding requests to the private EC2 development server.</li>
<li>The EC2 instance is firewalled and only accessible from the bastion host.</li>
<li>Port 80 on the EC2 instance is mapped to port 8080 locally on the Chromebook through the SSH tunnel. You’ll need to set that up on your Chromebook in the <strong>Linux development environment</strong> settings.</li>
</ul>


<p><img src="/img/chromebook-setup.png" alt="chromebook setup" width="800px"/></p>

<h3>Network Diagram</h3>

<p>Here’s what it looks like in ASCII art…</p>

<pre><code>    +--------------+    +--------------+    +--------------+
   | Chromebook   |    | Bastion Host |    |     EC2      |
   |              | 22 |              | 22 |              |
   |  Local SSH   |----|   Jump Box   |----| Development  |
   |  Tunnel:8080 | 80 | (Accessible) | 80 |  Server      |
   +--------------+    +--------------+    +--------------+
</code></pre>

<h2>Setting Up the SSH Tunnel</h2>

<p>To create an SSH tunnel through the bastion host:</p>

<pre><code class="bash">ssh -N -L 8080:EC2_PRIVATE_IP:80 user@bastion-host
</code></pre>

<h3>Explanation:</h3>

<ul>
<li><code>-N</code>: Do not execute remote commands, just forward ports.</li>
<li><code>-L 8080:EC2_PRIVATE_IP:80</code>: Forwards local port 8080 to port 80 on the development server (EC2 instance).</li>
<li><code>user@bastion-host</code>: SSH into the bastion host as <code>user</code>.</li>
</ul>


<p>Once connected, any request to <code>localhost:8080</code> on the Chromebook will
be forwarded to port 80 on the EC2 instance.</p>

<h2>Making It Persistent on Your Chromebook</h2>

<p>To maintain the tunnel connection automatically:</p>

<ol>
<li>Use an SSH config file (<code>~/.ssh/config</code>):</li>
</ol>


<pre><code>Host bastion
    HostName bastion-host
    User your-user
    IdentityFile ~/.ssh/id_rsa
    LocalForward 8080 EC2_PRIVATE_IP:80
    ServerAliveInterval 60
    ServerAliveCountMax 3
</code></pre>

<ol start="2">
<li>Start the tunnel in the background:</li>
</ol>


<pre><code>ssh -fN bastion
</code></pre>

<ol start="3">
<li>Verify it is working:</li>
</ol>


<pre><code>curl -I http://localhost:8080
</code></pre>

<p>You should see a response from your EC2 instance’s web server.</p>
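
<p>If the tunnel should also survive network drops, <code>autossh</code>
(assuming it’s installed in your Linux environment) can supervise it
using the same config entry:</p>

<pre><code># -M 0 disables autossh's extra monitor port; the ServerAliveInterval
# settings in ~/.ssh/config handle dead-connection detection instead
autossh -M 0 -fN bastion
</code></pre>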

<h2>Integrating with Docker on EC2</h2>

<p>If your EC2 instance runs a Dockerized web application, expose port 80 from the container:</p>

<pre><code>docker run -d -p 80:80 my-web-app
</code></pre>

<p>Now, accessing <code>http://localhost:8080</code> on your Chromebook browser will
open the web app running inside the Docker container on EC2.</p>

<h2>Final Thoughts</h2>

<p>This setup allows you to securely access a remote development
environment from a Chromebook, leveraging SSH tunneling through a
bastion host.</p>

<ul>
<li>Why This Works:

<ul>
<li>Keeps the EC2 instance private while still making it accessible for development.</li>
<li>Allows seamless local access (<code>localhost:8080</code>) to a remote web app.</li>
<li>Works even when using strict firewall rules.</li>
</ul>
</li>
</ul>


<p>Now you can develop on remote servers with a Chromebook, as if they were local!</p>
]]></content>
  </entry>

  <entry>
    <title type="html"><![CDATA[OrePAN2::S3 Release Announcement]]></title>
    <link href="http://blog.tbcdevelopmentgroup.com/2025-02-25-post.html"/>
    <published>2025-02-25T20:15:58-05:00</published>
    <updated>2025-04-01T17:04:10-04:00</updated>
    <id>http://blog.tbcdevelopmentgroup.com/2025-02-25-post.html</id>
    <content type="html"><![CDATA[<p>I’m excited to announce the release of
<a href="https://github.com/rlauer6/OrePAN2-S3"><strong>OrePAN2::S3</strong></a>, a new Perl
distribution designed to streamline the creation and management of
private CPAN (DarkPAN) repositories using Amazon S3 and
CloudFront. This tool simplifies the deployment of your own Perl
module repository, ensuring efficient distribution and scaling
capabilities.</p>

<p>This effort is the terminal event (I hope) in my <em>adventure in Perl
packaging</em>, which led me down the rabbit hole of <a href="https://blog.tbcdevelopmentgroup.com/2025-02-18-post.html">CloudFront
distributions</a>.</p>

<blockquote><p>My adventure in packaging started many years ago when it seemed like
a good idea to use RPMs. <strong>No really, it was!</strong></p>

<p>The distributions embraced Perl and life was good…until of course it
wasn’t. Slowly but surely, those modules we wanted weren’t available
in <strong>any</strong> of the repos. Well, here we are in 2025 and you can’t even
grovel enough to get AWS to include Perl in its images. Sheesh,
really? I have to <code>yum install perl-core</code>?  Seems like someone has
specifically put Perl on the <strong>shit list</strong>.</p>

<p>I’ve been swimming upstream for years using
<a href="https://github.com/rlauer6/cpanspec"><code>cpanspec</code></a> and learning how to
create my own <code>yum</code> repos with all of the stuff Amazon just didn’t
want to package. I’ve finally given up. CPAN forever! <code>cpanm</code> is
awesome!  Okay, yeah, but I’m still not all in with <code>carton</code>. Maybe
some day?</p></blockquote>

<p><strong>Key Features of <code>OrePAN2::S3</code></strong></p>

<ul>
<li><p><strong>Seamless Integration with AWS</strong>: <code>OrePAN2::S3</code> leverages Amazon S3
for storage and CloudFront for content delivery, providing a highly
available, scalable solution for hosting your private CPAN repository.</p></li>
<li><p><strong>User-Friendly Setup</strong>: The distribution offers straightforward
scripts and configurations, enabling you to set up your DarkPAN with
minimal effort…I hope. If you find some points of friction, log an issue.</p></li>
<li><p><strong>Flexible Deployment Options</strong>: Whether you prefer a simple
S3-backed website or a full-fledged CloudFront distribution,
<code>OrePAN2::S3</code> accommodates both setups to suit your specific
needs. If you really just want a website-enabled S3 bucket to serve
as your DarkPAN, we’ve got your back. But be careful…</p></li>
</ul>


<p><strong>Getting Started:</strong></p>

<p>To begin using <code>OrePAN2::S3</code>, ensure you have the following
prerequisites:</p>

<ol>
<li><p><strong>AWS Account</strong>: An active Amazon Web Services account.</p></li>
<li><p><strong>S3 Bucket</strong>: A designated S3 bucket to store your CPAN modules.</p></li>
<li><p><strong>CloudFront Distribution</strong> (optional): For enhanced content delivery and caching.</p></li>
</ol>
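
<p>As a rough sketch of the workflow (the bucket name, mirror URL, and
module are placeholders, and this uses stock <code>OrePAN2</code> tools
plus the AWS CLI rather than anything <code>OrePAN2::S3</code>-specific;
see the repository for the real interface):</p>

<pre><code># Inject a module into a local DarkPAN directory and build the index
orepan2-inject Some::Module ./darkpan
orepan2-indexer ./darkpan

# Push the repository to the S3 bucket backing your DarkPAN
aws s3 sync ./darkpan s3://my-darkpan-bucket/

# Install from it with cpanm
cpanm --mirror https://my-darkpan.example.com --mirror-only Some::Module
</code></pre>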


<p>Detailed instructions for setting up your S3 bucket and CloudFront
distribution are available in the <a href="https://github.com/rlauer6/OrePAN2-S3">project’s repository</a>.</p>

<p><strong>Why Choose <code>OrePAN2::S3</code>?</strong></p>

<p>Managing a private CPAN repository can be complex, but with
<code>OrePAN2::S3</code>, the process becomes efficient and scalable. By harnessing
the power of AWS services, this distribution ensures your Perl modules
are readily accessible and securely stored.</p>

<p>Oh, and let’s give credit where credit is due. This is all based on
<a href="https://metacpan.org/pod/OrePAN2"><code>OrePAN2</code></a>.</p>

<p>For more information and to access the repository, visit:
<a href="https://github.com/rlauer6/OrePAN2-S3">https://github.com/rlauer6/OrePAN2-S3</a></p>

<p>We look forward to your
<a href="https://github.com/rlauer6/OrePAN2-S3/issues">feedback</a> and
contributions to make <code>OrePAN2::S3</code> even more robust and
user-friendly.</p>
]]></content>
  </entry>


</feed>
