Scribe for Laravel: API Docs That Stay Fresh, and a Calm Way to Upgrade Them

Most Laravel teams reach a point where their API documentation is either out of date, written somewhere it shouldn’t be (Confluence, anyone?), or just doesn’t exist. Scribe is the package that quietly fixes this — it reads your routes, controllers, and docblocks, and turns them into a polished, browsable docs page. Less work for you, fresher docs for whoever consumes your API. 🎉

What Scribe actually does

You annotate controllers with familiar phpDoc tags — @group, @urlParam, @queryParam, @responseFile — and Scribe extracts everything into intermediate YAML files under .scribe/. Those YAMLs are your editable source of truth: you can hand-tweak descriptions, add example values, mark endpoints deprecated, and so on. Scribe then renders them into a Blade view (or static HTML, your choice) that ships with your app.

A typical controller looks like this:

/**
 * @group Campaign Management
 *
 * APIs for creating and managing campaigns.
 */

class CampaignController extends Controller
{
    /**
     * Retrieve All Campaign Data (Paginated).
     *
     * @queryParam page integer Current page number. Example: 1
     * @queryParam per_page integer Items per page (max 100). Example: 10
     * @queryParam filter[status] string Filter by status. Enum: draft, scheduled, in_progress, ended.
     * @responseFile storage/responses/campaigns.index.json
     */
    public function index(Request $request) { /* ... */ }
}

That’s it. The first paragraph of the docblock becomes the title; the rest is the description. @responseFile points to a JSON fixture so your example responses don’t depend on a live database during doc generation. 💡

How scribe:generate works

One command does the whole job:

1
php artisan scribe:generate

Behind the scenes, this is a two-phase pipeline:

  1. Extraction. Scribe walks your routes, parses each controller’s docblocks, and writes .scribe/endpoints.cache/*.yaml (its internal source of truth) and .scribe/endpoints/*.yaml (the user-overridable copies). If you’ve manually edited the non-cache files, Scribe respects your edits on the next run.
  2. Rendering. Scribe takes the YAML, applies its template, and writes the docs. With type: laravel, you get resources/views/scribe/index.blade.php served via a normal Laravel route — usually something like /api/v1/docs behind your auth middleware. With type: static, you get a self-contained HTML bundle in public/docs/.

The neat part: .scribe/ is meant to be checked into git. Every regenerate produces a diff you can review. If a teammate’s PR changes a route’s signature, that diff shows up in code review. Documentation drift becomes visible.
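Because the diff is machine-checkable, you can turn drift into a failing build. A minimal sketch of such a CI guard (the artisan call is Scribe's own; the helper name is mine):

```shell
# Fail when a path in the repo has uncommitted changes after regeneration.
fail_on_diff() {
    # exits non-zero if the given path differs from the committed baseline
    git diff --quiet -- "$1"
}

# In a CI job you would regenerate first, then check (not executed here):
#   php artisan scribe:generate
#   fail_on_diff .scribe/ || { echo "Docs drifted; commit .scribe/" >&2; exit 1; }
```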

Upgrading Scribe without breaking your docs page

Recently I worked through an upgrade from Scribe 5.1 to 5.9 on a real project. The trigger was a Composer warning that nobody had been able to silence:

1
2
Package spatie/data-transfer-object is abandoned, you should avoid using it.
Use spatie/laravel-data instead.

The first instinct was to migrate from spatie/data-transfer-object to spatie/laravel-data. But a quick grep across the codebase showed something interesting: nothing in the app actually used Spatie\DataTransferObject. Zero imports, zero extends. The package was a transitive dependency — pulled in by Scribe itself.

Composer prints abandonment notices for every package in the resolved tree, including transitives. So the right fix wasn’t to migrate our code; it was to upgrade Scribe to a version that had dropped the abandoned dependency. A bit of digging through GitHub tags revealed that Scribe 5.4.0 (released October 2025) was the first release to remove it.
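That digging is easy to script. A hedged sketch (the tag list is illustrative, the curl line needs network access, and the helper simply greps the raw composer.json for the package name):

```shell
# True (exit 0) if a composer.json body still requires the abandoned package.
still_requires_dto() {
    printf '%s' "$1" | grep -q '"spatie/data-transfer-object"'
}

for tag in 5.3.0 5.4.0; do   # illustrative, not an exhaustive tag list
    body=$(curl -fsSL "https://raw.githubusercontent.com/knuckleswtf/scribe/$tag/composer.json" || true)
    if [ -n "$body" ] && ! still_requires_dto "$body"; then
        echo "$tag no longer requires spatie/data-transfer-object"
    fi
done
```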

The actual upgrade — and the safeguard

Once you know which version to target, the upgrade itself is one command:

1
composer require --dev "knuckleswtf/scribe:^5.4" --with-all-dependencies

The --dev flag matters because Scribe lives in require-dev; without it, Composer would happily move the package to require. --with-all-dependencies lets Composer bump anything Scribe depends on. The lockfile gets rewritten cleanly; no separate composer install needed afterward.

But the real question is: does the upgrade break anything? This is where the committed .scribe/ directory pays you back. The whole verification dance is:

# Make sure your baseline is committed first
git status .scribe/

# Upgrade Scribe
composer require --dev "knuckleswtf/scribe:^5.4" --with-all-dependencies

# Regenerate
php artisan scribe:generate

# The diff is your safeguard
git diff .scribe/

In my case, the diff was almost entirely schema additions: every endpoint and parameter gained a deprecated: false field, and the Blade template’s JS asset bumped from theme-default-5.1.0.js to theme-default-5.9.0.js. No endpoints disappeared, no parameter descriptions got mangled, no response examples changed. A clean upgrade. ✅

The smoke test that catches what diffs miss

YAML diffs tell you about extracted data. They don’t tell you what an actual API consumer sees. After every Scribe upgrade, open the live docs URL in a browser:

  1. Does the page render at all?
  2. Is the base URL correct? (If you’ve configured a placeholder like https://mydomain.com and rewrite it at runtime to the tenant host, this is where you find out the rewrite still works.)
  3. Do the example curl commands look right?
  4. Does the auth section show the bearer scheme you configured?

This three-minute check catches everything the YAML diff can’t see — Blade template breakage, missing CSS assets, broken navigation. If the page loads and the example requests look sane, you’re done.

Worth knowing

A few things I’d tell my past self before starting:

  • Commit .scribe/ to git. Treat it like a build artifact you want diffable. Without that baseline, you can’t tell whether a regeneration changed anything meaningful.
  • Faker can introduce noise. Parameters without an explicit Example: value get random Faker output, which differs every run. If your diffs are noisy across regenerations even without a Scribe upgrade, that’s the cause. Pin examples in your docblocks for stable diffs.
  • Packagist’s API can lie. When researching which Scribe version dropped a dependency, I found Packagist’s v2 endpoint serving stale require data for some tags. The authoritative source is the actual composer.json in the GitHub tag — https://raw.githubusercontent.com/knuckleswtf/scribe/<tag>/composer.json.

Scribe is one of those packages that quietly removes a class of recurring chores from your day. Stale docs, undocumented endpoints, the awkward Confluence page nobody updates — all gone, replaced by something that lives next to the code and gets regenerated as part of your normal workflow. And when it’s time to upgrade, the same .scribe/ directory that powers your docs becomes the thing that tells you whether the upgrade was safe. Boring. Useful. Exactly what you want from a tool. 🐘

Posted in Laravel, php

Three Years of the AI Boom: The Stocks That Ran

I have been watching the AI boom unfold for the last three years and figured I should write down what I am seeing before the dust settles. Future-me will thank present-me for the bookmark.

What kicked it off

ChatGPT landed in late 2022. By early 2023 every serious tech company was scrambling to ship something with “AI” stamped on it. The capital markets noticed, and a small group of stocks took off like nothing I had seen in years.

The biggest movers

  • NVIDIA (NVDA) — the obvious one. Their GPUs became the picks-and-shovels of the AI gold rush. Roughly an 11x move since the start of 2023.
  • Palantir (PLTR) — quietly the biggest winner of all. Their AIP platform caught on with enterprise and government clients, and the stock is up around 23x since 2023.
  • Broadcom (AVGO) — custom AI silicon for hyperscalers like Google and Meta. Crossed a trillion-plus market cap on the strength of that business.
  • TSMC (TSM) — manufactures basically every advanced AI chip on the planet. The picks-and-shovels of the picks-and-shovels.
  • AMD (AMD) — the credible second source for AI accelerators. The MI300 line gave hyperscalers a reason to diversify away from NVIDIA.
  • Microsoft (MSFT) — bought a front-row seat via OpenAI and turned Azure into the default place to run frontier models.
  • Meta (META) — not a pure AI play, but their open-weights Llama strategy and ad-targeting wins re-rated the stock dramatically.

What I take away from it

Two patterns keep showing up. First, infrastructure beat applications: the companies selling chips, foundries, and cloud capacity printed money before most application-layer startups had a working business model. Second, the winners traded at valuations that looked insane the whole way up, and went up anyway. That is uncomfortable but worth remembering.

I am not making predictions about 2026 and beyond. I just want to remember what the past three years actually looked like, so when the next cycle starts I have a reference point.

Posted in Uncategorized

Reading Laravel Config From a Queued Job — and the env() Trap That Bites You in Production

Today’s lesson came from a perfectly innocent-looking change in a Laravel app. We had a magic number — a chunk size — sprinkled across three call sites:

foreach (array_chunk($userIds, 100) as $chunk) {
    SendOnboardingEmailJob::dispatch($chunk);
}

One reviewer flagged it: “If we ever need to tune this, three files need to change.” Fair. So my first instinct was the obvious Laravel thing: reach for env(), drop a default in .env.example, and call it a day. 🐘

Then a colleague asked the right question: does that actually work inside a queued job?

The trap nobody talks about

Here’s the bit that catches teams over and over:

env() reads from $_ENV at runtime. That works fine in development. But the moment your deployment runs php artisan config:cache — and most production deployments do, because it’s a 10x boot-time win — Laravel stops loading .env on subsequent requests. Inside a queue worker, env('FOO') often returns null, and you silently fall back to whatever default you passed.

So this:

$chunkSize = (int) env('USER_CHUNK_SIZE', 100);

Looks tunable. Behaves like 100 always. Forever. Because the env var “isn’t there” by the time the worker reads it. 💡

The Laravel docs are explicit about this — env() is meant to be called inside config/*.php files, nowhere else. It’s a real footgun, especially in CI pipelines that already run php artisan optimize as part of deploy.
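You can also check whether a given box is in the dangerous state: config:cache writes its compiled array to bootstrap/cache/config.php, so the file's presence is the tell. A small sketch (the helper name is mine):

```shell
# True (exit 0) when Laravel's config cache is active in the given project.
# While this file exists, .env is not read at runtime, and env() outside
# config/ returns null for anything the cache didn't capture.
config_cached() {
    [ -f "$1/bootstrap/cache/config.php" ]
}

# usage: config_cached /var/www/myapp && echo "cached: env() is a trap here"
```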

The fix: one knob, three guards

The canonical pattern is to centralize the value in a config file and read it via config() from your application code. Like this:

// config/app.php
return [
    // ...
    'campaign_user_chunk_size' => (int) env('CAMPAIGN_USER_CHUNK_SIZE', 100),
    // ...
];
// In a job, controller, service — anywhere outside config/
$chunkSize = (int) config('app.campaign_user_chunk_size', 100);
foreach (array_chunk($userIds, $chunkSize) as $chunk) {
    SomeJob::dispatch($chunk);
}

Three things are happening here, and each one matters:

  1. The default in .env.example. This is documentation — anyone provisioning a new environment can see the var exists and what a sensible value looks like. It’s not load-bearing for behavior; it’s load-bearing for discoverability.
  2. The default in the config file. env('CAMPAIGN_USER_CHUNK_SIZE', 100) means if the env var is missing entirely, the config still has a sane value. This is the layer that survives config:cache, because once cached, the array is committed to disk with that 100 baked in.
  3. The default at the call site. config('app.campaign_user_chunk_size', 100) — yes, also there. “But isn’t that redundant?” Sort of. It’s defense in depth. If someone deletes the key from config/app.php, or forgets to run config:cache after a deploy, your code still works. The cost is one literal integer; the upside is one less way for a deploy to silently break.

Why call-site defaults aren’t “DRY violation”

I went back and forth with myself on this. The DRY instinct says: define the default once, in config/app.php, and reference it from the call site bare. config('app.campaign_user_chunk_size'). Done.

But production deploys are not a controlled environment. They’re a parade of small mistakes — someone edits the config file and removes the key, the cached config is stale because the deploy script didn’t run config:cache, the env var has a typo. The cost of the call-site default is one duplicate literal. The cost of a missing-key crash on a Friday afternoon deploy is significantly higher. Repetition wins. 🛡️

Mental model: env vs config

The rule that stuck with me, and that I’d offer to anyone learning Laravel:

  • env() belongs in config/*.php. Nowhere else.
  • config() is what your app reads everywhere else — controllers, jobs, services, blade views, tests.

Once you internalize that, you stop having to think about whether config:cache is on or off. Your code reads from a stable in-memory array; the config files are the only place env vars get materialized.

Three layers of defaults, one canonical source, no surprises in queue workers. That’s the boring version of “production-ready,” and it’s worth the extra fifteen minutes. 🎉

Posted in Laravel, php

When the third-party PPA goes down: replacing a Dockerfile with a pre-built image on Docker Hub

Three days, three CI failures, all rooted in the same place: the Dockerfile our build runs from rebuilds the world from scratch every single CI run, and every external source it touches is somebody else’s reliability problem. Here’s what that looks like in practice and how we replaced it with a pre-built image hosted on Docker Hub.

The Dockerfile that bites you

Ours is the standard Laravel Sail base image — Ubuntu 22.04, PHP 8.2 + 26 extensions from Ondrej Surý’s PPA, Node, npm, pnpm, bun, Puppeteer with bundled Chromium, PostgreSQL client, MySQL client, Yarn. About 50 packages, ~2.3 GB built. It’s one giant RUN apt-get update && … chain that pulls from six different external sources:

  • Ubuntu archive (archive.ubuntu.com)
  • Ondrej’s PHP PPA (ppa.launchpadcontent.net)
  • NodeSource (deb.nodesource.com)
  • Yarn deb repo (dl.yarnpkg.com)
  • PostgreSQL APT (apt.postgresql.org)
  • Composer installer (getcomposer.org)

If any of those is unreachable for 30 seconds, the whole build dies. And our CI didn’t cache the resulting image — it ran sail build --no-cache laravel.test on every job. So every push to a branch was 4-5 minutes of “please all six of you be up at the same time, please.”

What actually broke

Two days ago, Ondrej’s PPA went unreachable for several hours. The error in the pipeline was nice and clear:

Err:5 https://ppa.launchpadcontent.net/ondrej/php/ubuntu jammy InRelease
  Could not connect to ppa.launchpadcontent.net:443 (185.125.190.80), connection timed out

E: Unable to locate package php8.2-cli

The second line looks like a different error but it’s a consequence of the first — without an InRelease index, apt has no record of php8.2-cli existing, so the install fails with a generic “unable to locate.” The diagnostic clue is the order: connection timeout first, package-not-found second. ⏳

I did the small fix first: wrapped the offending apt-get update in a shell retry loop:

&& (for i in 1 2 3 4 5; do apt-get update -o Acquire::Retries=5 && break || sleep 30; done) \

That helps when the PPA is flapping (down for seconds-to-minutes). It does nothing when the PPA is down for hours. Which is what happened the next day. So the retry wasn’t enough.
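For what it's worth, the inline loop can be factored into a reusable helper with the same ceiling: it absorbs minutes of flapping, never hours of outage. A sketch (not from the actual Dockerfile):

```shell
# Run a command up to N times, sleeping a fixed delay between attempts.
retry() {
    local attempts=$1 delay=$2 i
    shift 2
    for ((i = 1; i <= attempts; i++)); do
        if "$@"; then
            return 0
        fi
        if [ "$i" -lt "$attempts" ]; then
            sleep "$delay"
        fi
    done
    return 1
}

# usage: retry 5 30 apt-get update -o Acquire::Retries=5
```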

The real fix: build once, push to Docker Hub, pull from CI

The mental shift here is small but important. The Dockerfile is your recipe. You don’t need CI to bake the cake fresh on every job; you can bake it once and pass slices around. The recipe stays in the repo unchanged. CI just learns to fetch a slice instead of starting from flour.

The setup is shorter than you’d think. Three steps.

1. Build locally and tag for Docker Hub

From your repo root, with your Docker Hub username (mine is pringadi, hosted at hub.docker.com/r/pringadi/sail-php-8.2):

docker build --platform linux/amd64 \
  --build-arg WWWGROUP=0 \
  -t pringadi/sail-php-8.2:v1 \
  -t pringadi/sail-php-8.2:latest \
  docker/8.2/

Three details worth pointing out:

The --platform linux/amd64 flag. If you’re on Apple Silicon, your Mac defaults to arm64 and Puppeteer 17’s bundled Chromium binary doesn’t exist for arm64. Force amd64 to match what your CI runner uses. The build will be slower under emulation, but the image you push is what production sees.

The --build-arg WWWGROUP=0. Sail’s Dockerfile uses $WWWGROUP to groupadd the container’s sail user. If you don’t pass it, the value is empty and groupadd -g sail errors with “invalid group ID 'sail'.” Match what CI uses (often 0, the root group, in containerized runners).

Two tags in one command. Both names point at the same image bytes. :v1 is your immutable version pin (CI references this — it never moves). :latest is mutable convenience for humans typing docker pull ad hoc. Never reference :latest from CI — it makes builds non-reproducible.

2. Push to Docker Hub

docker push pringadi/sail-php-8.2:v1
docker push pringadi/sail-php-8.2:latest

The big push is the :v1 one (~2.3 GB on first publish; subsequent layer-only changes are much smaller). The :latest push is near-instant — Docker Hub recognizes the layers are already uploaded and just attaches the second tag.

Verify it’s actually accessible to anyone:

docker logout
docker pull pringadi/sail-php-8.2:v1

The logout step is the test. After it, you’re an anonymous client; if the pull succeeds, the image is genuinely public. (Free Docker Hub accounts get unlimited public repos and one private repo. Public is the right call for this use case — the image contains no secrets, just a stock PHP/Node/Postgres install.)

3. Tell CI to pull instead of build

The cleanest pattern: a tiny override compose file that gets layered on top of the main one. Local dev keeps building from the Dockerfile (so contributors don’t need to know any of this); CI gets the registry path.

New file docker-compose.ci.yml:

services:
    laravel.test:
        image: pringadi/sail-php-8.2:v1
        pull_policy: always

And in .gitlab-ci.yml, two changes:

variables:
  COMPOSE_FILE: "docker-compose.yml:docker-compose.ci.yml"

test:
  before_script:
    # ... unchanged ...
    - ./vendor/bin/sail pull laravel.test     # was: sail build --no-cache laravel.test
    - ./vendor/bin/sail up --no-build -d      # --no-build belt-and-suspenders

COMPOSE_FILE is a docker-compose env var that tells it to load multiple files in order; later files override keys from earlier ones. Setting it once at the variables level means every sail command in the job picks up the override consistently — sail pull, sail up, sail down, sail exec, all of them.

The --no-build flag on up is paranoia: it ensures docker-compose can’t accidentally re-trigger a build even if the merged config still contains a build: block (which it does, inherited from the base compose file).

Gotchas worth flagging

The override doesn’t actually delete build:. When you merge two compose files, keys are added, not replaced. So the laravel.test service in the merged config has both an image: and a build:. Compose v2 has a !reset null directive to nullify a key, but compose v1 doesn’t, and many CI environments still ship with docker-compose (the v1 binary). So instead of fighting it, we use docker compose pull to fetch the registry image first, then up --no-build. Both v1 and v2 honor this.
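If your runners are all guaranteed to have Compose v2, there is a tidier alternative: nullify the inherited build: key in the override itself. A sketch (v2-only; same service name as the files above):

```yaml
# docker-compose.ci.yml, Compose v2 only: the !reset tag removes the
# build: block inherited from the base file, leaving image: authoritative.
services:
    laravel.test:
        image: pringadi/sail-php-8.2:v1
        pull_policy: always
        build: !reset null
```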

The Dockerfile is still in the repo. This isn’t a fork-of-fork situation — your Dockerfile remains the source of truth. The pre-built image is its output, snapshotted to Docker Hub. When the Dockerfile changes (you add an extension, bump a Node version), you rebuild + push + bump the version tag in the override:

docker build --platform linux/amd64 --build-arg WWWGROUP=0 \
  -t pringadi/sail-php-8.2:v2 \
  -t pringadi/sail-php-8.2:latest \
  docker/8.2/
docker push pringadi/sail-php-8.2:v2
docker push pringadi/sail-php-8.2:latest

Then change image: pringadi/sail-php-8.2:v1 to :v2 in the override file and commit. The discipline is: image and Dockerfile have to be kept in sync, but you control when that sync happens, not CI.

Reverting is one PR. Set COMPOSE_FILE back to docker-compose.yml (or remove the line), put sail build --no-cache laravel.test back. The override file can stay or go. No hidden state.

What it bought us

  • Pull/build base image step: was sail build --no-cache at 4-5 min (or a failure when Ondrej is down); now sail pull, 30-90 sec on a cold runner, near-instant when cached
  • External sources in the CI build path: was six (Ubuntu archive, Ondrej PPA, NodeSource, Yarn, PostgreSQL APT, Composer installer); now one (Docker Hub)
  • Time saved per CI run: ~3-4 minutes

Three to four minutes per build is significant in absolute terms — over a few hundred runs a quarter, that’s hours of developer wait time you give back to the team. But the bigger win is the tail: when one of those six third-party sources goes down for an afternoon, your CI keeps working because it never touches them anymore. The blast radius of upstream flakiness shrinks to a single dependency you’ve explicitly accepted (Docker Hub), which is itself one of the most reliable pieces of internet plumbing in existence.

The one piece of operational discipline you take on

You become responsible for rebuilding the image when the Dockerfile changes. Forget to, and CI keeps using the stale image — “but I added that extension yesterday, why isn’t the test seeing it?” confusion.

Two mitigations help. First, document the rebuild command somewhere obvious — I put it in a comment block at the top of the override file:

# When the Dockerfile changes, rebuild + push:
#   docker build --platform linux/amd64 --build-arg WWWGROUP=0 \
#     -t pringadi/sail-php-8.2:vN -t pringadi/sail-php-8.2:latest docker/8.2/
#   docker push pringadi/sail-php-8.2:vN
#   docker push pringadi/sail-php-8.2:latest
# Then bump the tag below from vN-1 to vN and commit.

Second, if you have time later, add a CI job that detects Dockerfile changes and triggers a rebuild automatically — fail loudly if the image and Dockerfile drift. We haven’t done that yet.
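For the record, the guard we haven't built could be as small as one GitLab job that trips whenever the Dockerfile changes (a hypothetical sketch; the job name and message are mine):

```yaml
# Trip loudly on pipelines that touch the Dockerfile, as a reminder to
# rebuild, push a new tag, and bump docker-compose.ci.yml in the same MR.
dockerfile-drift-reminder:
  rules:
    - changes:
        - docker/8.2/Dockerfile
  script:
    - echo "Dockerfile changed: rebuild the image and bump the tag in docker-compose.ci.yml"
    - exit 1
```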

Lessons

  • Anything your build pulls from the internet is a reliability dependency. Count them. Six was too many. One you trust is enough.
  • Retry loops absorb minutes, not hours. Useful for the 90% case (transient blips); useless for the 10% (sustained outages). Don’t let them lull you into thinking you’ve solved the problem.
  • Caching your image solves more than speed. The reliability win is bigger than the speed win. You stop being a victim of someone else’s bad afternoon.
  • Public Docker Hub is fine for build environments. Stock OS + language toolchain has no secrets. The application code that depends on private credentials lives elsewhere (Composer install against your private Git, env vars at runtime). Don’t hide image contents that already aren’t sensitive.
  • Local dev shouldn’t have to know any of this. Override files exist precisely so contributors keep using the simple sail up -d from the main compose file. CI complexity belongs in CI configuration, not in everyone’s daily workflow.

Three days of CI failures, one afternoon of work, and the dependency surface area shrinks from six external services to one. That’s a trade I’d take every time. 🐳

Posted in DevOps

When a Composer package vanishes from GitHub: don’t panic, and don’t delete vendor/

Today our CI/CD pipeline went red on a job that hadn’t been touched in months. The .gitlab-ci.yml was untouched. The branch built fine yesterday. composer install exploded.

The relevant chunk of the failure log:

Failed to download acme/some-nova-tool from dist:
  https://api.github.com/repos/old-owner/some-nova-tool/zipball/24bd3d8...
  HTTP/2 404

In Git.php line 657:
  Failed to execute git clone --mirror -- https://github.com/old-owner/some-nova-tool.git ...
  remote: Repository not found.
  fatal: repository 'https://github.com/old-owner/some-nova-tool.git/' not found

The package itself was still listed on Packagist — but the canonical GitHub repo it points to had been deleted. Even better: both the original repo and the namespace-renamed fork it had been moved to were gone. Packagist had quietly marked the package as frozen with a tiny note: “This package’s canonical repository appears to be gone and the package has been frozen as a result.” 💀

The package was tiny (a Laravel Nova permissions tool) but load-bearing — twelve files in our codebase imported a trait from it, plus a service provider registration. Removing it was not an option for today.

Why local dev kept working

Here’s the part I want you to internalize before anything else: do not rm -rf vendor/ when you hit this kind of failure. Not on your laptop, not on the developer machine of whoever first reports the issue. 🛑

The vendor/ directory is your last copy of that package’s source code. Composer downloaded it months ago from a repository that, today, no longer exists. If you blow away vendor/ and re-run composer install, you will get the exact 404 the CI runner got, and now you have no way to recover the source short of finding a teammate whose vendor/ is still warm.

Tell your team the same thing. The instinct on a broken composer install is to nuke vendor/ and try again. That instinct is wrong here. Until you have a plan, treat the existing vendor/ tree like an artifact you’d lose forever if you deleted it — because that’s what it is.
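The very first concrete step, then, is to get a copy out of harm's way. A sketch (the helper name and paths are mine):

```shell
# Snapshot one vendor package into a tarball that survives rm -rf vendor/.
backup_vendor_pkg() {
    # $1 = project root, $2 = package name as a vendor-dir path (author/pkg)
    local root=$1 pkg=$2
    tar -czf "$root/$(basename "$pkg")-rescue.tgz" -C "$root/vendor" "$pkg"
}

# usage: backup_vendor_pkg ~/projects/myapp acme/some-nova-tool
```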

The recovery: copy, fork, host it yourself

Once you have a backed-up copy of the package source, the recovery is straightforward. The shape of the fix:

  1. Copy the package source out of vendor/ into a scratch directory.
  2. Push it to a Git host you control (your company’s GitLab, a personal GitHub org, wherever).
  3. Tag a version on your fork.
  4. Tell composer.json to look at your fork instead of Packagist.

Step one and two:

mkdir /tmp/some-nova-tool && cd /tmp/some-nova-tool
cp -R ~/projects/myapp/vendor/acme/some-nova-tool/. .
git init -b main
git add .
git commit -m "Import acme/some-nova-tool source (upstream deleted)"
git tag v1.0.8-beta.0
git remote add origin https://gitlab.example.com/internal/some-nova-tool.git
git push -u origin main
git push origin v1.0.8-beta.0

A note on the tag. The locked commit in our composer.lock was on the dev-main branch, several months past the package’s last tagged release (v1.0.7). Rather than invent a v4.0.0 from thin air, I anchored the tag to actual upstream history: v1.0.8-beta.0 — “newer than 1.0.7, not stable, exact snapshot of where main was the day upstream disappeared.” The version string is arbitrary as long as it’s valid SemVer; pick one that won’t lie to a future reader. 🪦

Then in composer.json, add a VCS repository entry pointing at your fork and pin the version:

{
  "require": {
    "acme/some-nova-tool": "v1.0.8-beta.0"
  },
  "repositories": [
    {
      "type": "vcs",
      "url": "https://gitlab.example.com/internal/some-nova-tool.git"
    }
  ]
}

Crucially, keep the package name the same — acme/some-nova-tool. Composer’s package name and the autoload PSR-4 namespace are what your application code references. If you change the package name, every use Acme\SomeNovaTool\… statement across your codebase breaks. Keep the name; just change where Composer looks for it.

Regenerate the lockfile with the new source:

composer update acme/some-nova-tool --with-dependencies

Commit composer.json and composer.lock together and your CI runs green again. The next developer to composer install on a cold cache will pull from your fork and never know there was ever a problem.

Two small details that bit us

HTTPS vs SSH. Make sure the repository URL in composer.json is HTTPS, not SSH. Your laptop probably has an SSH key on the host; CI runners don’t, and they almost always authenticate via an HTTPS token (composer config --global gitlab-token.gitlab.example.com $TOKEN). One of them is in your shell config; the other has to work in a fresh container with only env vars. If they don’t agree, CI fails with auth errors that look nothing like the original 404.

Packagist will not save you. The package page may still resolve — the metadata lives on Packagist, not on GitHub — but the dist URL embedded in that metadata points at GitHub. Composer reads the dist URL, fetches it, gets a 404, falls back to a git clone, gets another 404, and gives up. Once the upstream Git host is gone, Packagist is just a tombstone. 🪦

The lessons, in one sentence each

  • Vendor is your backup. A populated vendor/ tree is the only copy of a deleted package you’ll ever have. Treat it like data, not cache.
  • Pin to tags, not branches. Tracking dev-main means “whatever HEAD is” — fine until HEAD is gone. A pinned tag on a fork you control is reproducible forever.
  • Self-host anything load-bearing. If a third-party package is woven into a dozen of your files, the cost of mirroring it on a Git host you control is one afternoon. The cost of not doing it is the day it disappears and your CI is red and you can’t ship.

Software supply-chain rot is a real thing. Repos get deleted, packages get unpublished, maintainers leave platforms, accounts get suspended. The defensive move costs almost nothing and pays off the one day you really need it. 🛡️

Posted in php

Local HTTPS in 5 minutes with Caddy 🔒

I used to dread setting up https for local development. Self-signed certs made the browser scream. Editing nginx.conf for two hostnames felt like building a cathedral. Caddy changed all that for me — it’s a tiny single-binary web server that does automatic HTTPS out of the box. Point it at a hostname, and it either gets a real Let’s Encrypt cert (for public domains) or generates and trusts a local cert (for development) — without you running certbot, openssl req, or anything else.

This post is the cheat sheet I wish I’d had: install it, point it at a local app, get https in five minutes. ⏱️

Install

Caddy is a single static binary. The package managers wrap it nicely:

# macOS
brew install caddy

# Debian / Ubuntu
sudo apt install -y debian-keyring debian-archive-keyring apt-transport-https curl
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/gpg.key' | sudo gpg --dearmor -o /usr/share/keyrings/caddy-stable-archive-keyring.gpg
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/debian.deb.txt' | sudo tee /etc/apt/sources.list.d/caddy-stable.list
sudo apt update && sudo apt install caddy

The Debian install also drops a caddy systemd service and a default Caddyfile at /etc/caddy/Caddyfile. On macOS, Homebrew puts the example Caddyfile under /opt/homebrew/etc/Caddyfile (Apple Silicon) or /usr/local/etc/Caddyfile (Intel).

The simplest possible Caddyfile

Caddy reads a config file called the Caddyfile — a tiny domain-specific format that maps hostnames to behaviours. The smallest useful one:

myapp.local {
    reverse_proxy localhost:8000
}

Three lines. “When something asks for myapp.local, terminate TLS and forward the plaintext request to localhost:8000.” Caddy generates a local certificate, installs the matching root CA into your system trust store the first time it runs (you’ll be prompted for a password), and serves https://myapp.local with a green padlock — provided myapp.local resolves to your machine. Add a line to /etc/hosts:

127.0.0.1   myapp.local

Run it:

# Foreground (good for trying it out)
caddy run --config /opt/homebrew/etc/Caddyfile

# As a background service (Debian / systemd)
sudo systemctl enable --now caddy

# Reload after editing the Caddyfile (no downtime)
sudo systemctl reload caddy

Bring your own cert

Sometimes you don’t want Caddy’s auto-generated cert — maybe you’ve already created one with mkcert, or you’ve been issued a cert by your team’s internal CA. Tell Caddy where the .pem files live with the tls directive:

myapp.example.com {
    tls /path/to/myapp.example.com.pem /path/to/myapp.example.com-key.pem
    reverse_proxy localhost:8000
}

The first argument is the certificate (full chain), the second is the private key. Caddy stops trying to auto-issue and just uses what you gave it.

Generating a development cert with mkcert is the path of least resistance — install it once, run mkcert -install (which adds its CA to your system trust store), then for any hostname:

mkcert myapp.example.com
# Creates myapp.example.com.pem and myapp.example.com-key.pem in the current directory

Multiple sites in one block

If you have several hostnames that should share the same TLS settings and proxy target — common with multi-tenant local development — put them on one line, comma-separated:

myapp.local, one.myapp.local, two.myapp.local {
    tls /path/to/myapp.local+2.pem /path/to/myapp.local+2-key.pem
    reverse_proxy localhost:80 {
        header_up Host {host}
        header_up X-Forwarded-Proto https
    }
}

Two things worth noticing in that block:

  • header_up Host {host} forwards the original Host header to the upstream — important when your app routes by hostname (multi-tenant, virtual hosts, etc.). Without this, the upstream sees localhost and may not know which tenant is being requested.
  • header_up X-Forwarded-Proto https tells the upstream that the original connection was https. Frameworks like Laravel, Django, and Rails need this to generate correct absolute URLs and to enforce secure-cookie flags.

The +2 in the cert filename is an mkcert convention: when you generate a cert for multiple hostnames, mkcert names the file after the first one and appends +N for the count of additional SANs (Subject Alternative Names).
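To see that naming convention in action, here’s what an mkcert run for the three hostnames above would look like (illustrative — paths and hostnames are examples, and mkcert must already be installed):

```
mkcert myapp.local one.myapp.local two.myapp.local
# mkcert names the files after the first hostname and
# appends +2 for the two additional SANs:
#   ./myapp.local+2.pem        (certificate)
#   ./myapp.local+2-key.pem    (private key)
```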

Useful global options

The block at the very top of the Caddyfile, wrapped in plain { … } with no hostname, is the global options block. The two I reach for most:

{
    auto_https disable_redirects
}

By default, Caddy auto-redirects http:// traffic to https://. Useful in production, occasionally annoying locally — for example, if you’re testing a service that’s already running on port 80 with its own non-https endpoint, the redirect gets in the way. disable_redirects turns that off but keeps the auto-cert magic. Other handy globals:

{
    debug                                        # verbose logs while iterating
    email you@example.com                        # used by Let's Encrypt for cert expiry warnings
    storage file_system /var/lib/caddy           # where issued certs are cached
}

The thing that won me over

Once you’ve used Caddy for a week, going back to nginx + certbot for a new project feels strange. The Caddyfile fits on a Post-it. There’s no separate cron job to renew certs — Caddy renews them itself. There’s no special config for HTTP/2 or HTTP/3 — they’re on by default. And when the site doesn’t load, the error message tells you why in one sentence, not via a stack trace from journalctl.

It’s not a replacement for nginx everywhere — at high traffic, behind a CDN, or as a proxy for very specialised workloads, nginx still has the edge. But for personal sites, internal tools, and local development, Caddy is hard to beat. 🎉


Spatie activity_log: which method writes to which column? 🐘

If you’re using spatie/laravel-activitylog, you’ve probably written something like activity()->event(…)->log(…) a hundred times without thinking about where each piece lands in the database. The fluent API is friendly, but the column mapping isn’t obvious until you go look — so here it is in one place.

The package writes to a single table called activity_log. Every chained method on the builder corresponds to one column on that row. 💡

activity()
    ->useLog('SyncCampaignUsersJob')
    ->event('Sync user without detaching')
    ->performedOn($campaign)
    ->causedBy($actor)
    ->withProperties(['chunk' => '3/20', 'count' => 100])
    ->log('Processing chunk 3/20 (100 users).');

That single fluent call writes one row. Here’s the full mapping:

  • useLog("string") → log_name. Filterable bucket like "Auth" or "SyncCampaignUsersJob". Never leave it empty — it defaults to the literal string "default", which makes filtering useless.
  • log("string") → description. Free-form, human-readable message. Returned by $activity->description.
  • event("string") → event. Short verb-ish label like "created", "updated", "Synced user with detaching". Useful for grouping similar actions.
  • performedOn($model) → subject_type + subject_id. Polymorphic reference to the affected model, e.g. "App\Models\Campaign" + 6.
  • causedBy($user) → causer_type + causer_id. Polymorphic reference to the actor. Pass a model instance or just the ID.
  • withProperties([…]) → properties. Arbitrary JSON. Great for structured context: counts, IDs, batch labels, before/after diffs.
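Querying back is symmetric: each where() targets one of the columns above. A sketch, assuming the package’s bundled Activity model and the campaign and log name from the earlier example:

```php
use Spatie\Activitylog\Models\Activity;

// All rows the sync job wrote for one campaign, newest first
$entries = Activity::query()
    ->where('log_name', 'SyncCampaignUsersJob')
    ->where('subject_type', \App\Models\Campaign::class)
    ->where('subject_id', 6)
    ->latest()
    ->get();
```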

The empty-useLog gotcha 🪤

Here’s the failure mode worth burning into memory. This call:

activity()
    ->event('Sync user without detaching')
    ->performedOn($campaign)
    ->causedBy($actor)
    ->log('Synced 100 users.');

…silently writes log_name = “default”. Six months later you open the activity log dashboard, filter by log name, and you’re staring at 47,000 rows in the default bucket. Always add useLog() with a meaningful string. The class name of the job or service writing the log is a perfectly fine default — future-you will thank present-you when grep’ing through audit history.

One more nuance: causer_type

If you pass an integer to causedBy(), the package needs to know what model that ID points to. By default it assumes your auth user model (set in config/auth.php). If your causers are sometimes a User and sometimes a SystemActor or a tenant model, pass the model instance instead of the ID — the polymorphic columns will resolve correctly and querying back becomes painless.
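A minimal sketch of the difference, using a hypothetical SystemActor model:

```php
// Ambiguous: a bare ID resolves against the default auth model,
// so causer_type becomes your User class even for a system actor
activity()->causedBy(42)->log('Nightly cleanup ran.');

// Explicit: the instance carries its own class, so the row gets
// causer_type = App\Models\SystemActor and causer_id = 42
activity()->causedBy(SystemActor::find(42))->log('Nightly cleanup ran.');
```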

That’s the whole mental model: one row, one chained call, one column per method. Keep useLog() populated and you’ll have a queryable audit trail instead of a blob of “default” entries. 🎯


The Null Coalescing Operator: A Small PHP Feature That Quietly Changed Everything

If you’ve been writing PHP for a while, you probably remember the days of nested isset() checks cluttering up every template and controller. Since PHP 7, there’s a much cleaner way — and if you haven’t fully embraced it yet, it’s worth a second look.

The null coalescing operator (??) returns the left operand if it exists and isn’t null, otherwise the right. No warnings, no notices, no ceremony.

<?php
// The old way — verbose and easy to get wrong
$username = isset($_GET['user']) ? $_GET['user'] : 'guest';

// With null coalescing — same behavior, far less noise
$username = $_GET['user'] ?? 'guest';

// It chains too, which is where it really shines
$config = $userConfig['theme'] ?? $siteConfig['theme'] ?? 'default';

PHP 7.4 took it a step further with the null coalescing assignment operator (??=), which only assigns if the variable is currently null or unset:

<?php
$options = ['timeout' => 30];

// Only set 'retries' if it isn't already defined
$options['retries'] ??= 3;
$options['timeout'] ??= 60; // stays 30 — already set

print_r($options);
// Array ( [timeout] => 30 [retries] => 3 )

One subtle thing to keep in mind: ?? only reacts to null or unset — not to falsy values like 0, "", or false. That’s usually what you want, but it’s a meaningful difference from the older ?: (Elvis) operator, which falls back on any falsy value.

<?php
$count = 0;

echo $count ?? 10;  // prints 0 — because 0 is not null
echo $count ?: 10;  // prints 10 — because 0 is falsy

Small syntax, big quality-of-life improvement. If your codebase still has rows of isset() ternaries, refactoring them is one of those low-risk cleanups that pays off every time someone reads the file next. 🐘


Did You Know? Python’s Walrus Operator Can Make Your Code Cleaner

Did you know? Since Python 3.8, you can use the walrus operator ( := ) to assign a value to a variable as part of an expression. It’s a small piece of syntax that can meaningfully tidy up loops and comprehensions where you’d otherwise compute the same value twice.

Here’s a classic example — reading lines from a file until you hit an empty line:

# Without the walrus operator
with open("data.txt") as f:
    line = f.readline()
    while line:
        print(line.strip())
        line = f.readline()

# With the walrus operator — assign and test in one step
with open("data.txt") as f:
    while (line := f.readline()):
        print(line.strip())

It’s also handy in list comprehensions when you want to filter on a computed value without recomputing it:

numbers = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]

# Keep only squares greater than 20, without squaring twice
big_squares = [sq for n in numbers if (sq := n * n) > 20]

print(big_squares)
# [25, 36, 49, 64, 81, 100]
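One more spot where it genuinely removes duplication (a small sketch — the log line and pattern are invented for illustration): capturing a regex match so you can test and use it in one expression.

```python
import re

log_line = "worker-3 finished in 128 ms"

# Assign the match object and test it in the same expression
if (m := re.search(r"(\d+) ms", log_line)):
    print(f"duration: {m.group(1)} ms")  # duration: 128 ms
```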

A word of caution: the walrus operator is powerful but easy to overuse. Reach for it when it genuinely removes duplication or makes intent clearer — not just because it’s clever. 🐍


Did You Know? Python Dictionaries Preserve Insertion Order

Did you know? Since Python 3.7, the built-in dict type officially preserves the order in which keys are inserted. Before that, if you needed ordering guarantees you had to reach for collections.OrderedDict. Today, a plain dictionary is enough for most cases.

Here’s a small demonstration:

# Keys stay in the order they were added
user = {}
user["name"] = "Ada"
user["role"] = "Author"
user["joined"] = 2026

for key, value in user.items():
    print(f"{key}: {value}")

# Output:
# name: Ada
# role: Author
# joined: 2026

This also means dictionary comprehensions and merges keep a predictable order, which is surprisingly useful when serializing to JSON or building config objects:

defaults = {"host": "localhost", "port": 8080}
overrides = {"port": 9090, "debug": True}

# Merge with the | operator (Python 3.9+)
config = defaults | overrides
print(config)
# {'host': 'localhost', 'port': 9090, 'debug': True}

One caveat: ordering is a property of the dictionary, not of equality. Two dicts with the same keys and values are considered equal even if their insertion order differs. 🐍
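That caveat fits in four lines:

```python
a = {"x": 1, "y": 2}
b = {"y": 2, "x": 1}    # same pairs, different insertion order

print(a == b)            # True  — equality ignores order
print(list(a), list(b))  # ['x', 'y'] ['y', 'x'] — iteration order differs
```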
