Local HTTPS in 5 minutes with Caddy 🔒

I used to dread setting up https for local development. Self-signed certs made the browser scream. Editing nginx.conf for two hostnames felt like building a cathedral. Caddy changed all that for me — it’s a tiny single-binary web server that does automatic HTTPS out of the box. Point it at a hostname, and it either gets a real Let’s Encrypt cert (for public domains) or generates and trusts a local cert (for development) — without you running certbot, openssl req, or anything else.

This post is the cheat sheet I wish I’d had: install it, point it at a local app, get https in five minutes. ⏱️

Install

Caddy is a single static binary. The package managers wrap it nicely:

# macOS
brew install caddy

# Debian / Ubuntu
sudo apt install -y debian-keyring debian-archive-keyring apt-transport-https curl
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/gpg.key' | sudo gpg --dearmor -o /usr/share/keyrings/caddy-stable-archive-keyring.gpg
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/debian.deb.txt' | sudo tee /etc/apt/sources.list.d/caddy-stable.list
sudo apt update && sudo apt install caddy

The Debian install also drops a caddy systemd service and a default Caddyfile at /etc/caddy/Caddyfile. On macOS, Homebrew puts the example Caddyfile under /opt/homebrew/etc/Caddyfile (Apple Silicon) or /usr/local/etc/Caddyfile (Intel).

The simplest possible Caddyfile

Caddy reads a config file called the Caddyfile — a tiny domain-specific format that maps hostnames to behaviours. The smallest useful one:

myapp.local {
    reverse_proxy localhost:8000
}

Three lines. “When something asks for myapp.local, terminate TLS and forward the plaintext request to localhost:8000.” Caddy generates a local certificate, installs the matching root CA into your system trust store the first time it runs (you’ll be prompted for a password), and serves https://myapp.local with a green padlock — provided myapp.local resolves to your machine. Add a line to /etc/hosts:

127.0.0.1   myapp.local

Run it:

# Foreground (good for trying it out)
caddy run --config /opt/homebrew/etc/Caddyfile

# As a background service (Debian / systemd)
sudo systemctl enable --now caddy

# Reload after editing the Caddyfile (no downtime)
sudo systemctl reload caddy

Bring your own cert

Sometimes you don’t want Caddy’s auto-generated cert — maybe you’ve already created one with mkcert, or you’ve been issued a cert by your team’s internal CA. Tell Caddy where the .pem files live with the tls directive:

myapp.example.com {
    tls /path/to/myapp.example.com.pem /path/to/myapp.example.com-key.pem
    reverse_proxy localhost:8000
}

The first argument is the certificate (full chain), the second is the private key. Caddy stops trying to auto-issue and just uses what you gave it.
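
A related trick: if you want Caddy's locally-trusted CA for a hostname it would otherwise try to obtain a public certificate for, the tls directive also accepts internal instead of file paths. A minimal sketch (the hostname and port are placeholders):

```caddyfile
myapp.example.com {
    tls internal                     # force Caddy's local CA, skip ACME issuance
    reverse_proxy localhost:8000
}
```

Handy when you point a real domain at 127.0.0.1 in /etc/hosts for local testing and don't want Caddy attempting a Let's Encrypt order that can never succeed.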

Generating a development cert with mkcert is the path of least resistance — install it once, run mkcert -install (which adds its CA to your system trust store), then for any hostname:

mkcert myapp.example.com
# Creates myapp.example.com.pem and myapp.example.com-key.pem in the current directory

Multiple sites in one block

If you have several hostnames that should share the same TLS settings and proxy target — common with multi-tenant local development — put them on one line, comma-separated:

myapp.local, one.myapp.local, two.myapp.local {
    tls /path/to/myapp.local+2.pem /path/to/myapp.local+2-key.pem
    reverse_proxy localhost:80 {
        header_up Host {host}
        header_up X-Forwarded-Proto https
    }
}

Two things worth noticing in that block:

  • header_up Host {host} forwards the original Host header to the upstream — important when your app routes by hostname (multi-tenant, virtual hosts, etc.). Without this, the upstream sees localhost and may not know which tenant is being requested.
  • header_up X-Forwarded-Proto https tells the upstream that the original connection was https. Frameworks like Laravel, Django, and Rails need this to generate correct absolute URLs and to enforce secure-cookie flags.

The +2 in the cert filename is an mkcert convention: when you generate a cert for multiple hostnames, mkcert names the file after the first one and appends +N for the count of additional SANs (Subject Alternative Names).

Useful global options

The block at the very top of the Caddyfile, wrapped in plain { … } with no hostname, is the global options block. The one I reach for most:

{
    auto_https disable_redirects
}

By default, Caddy auto-redirects http:// traffic to https://. Useful in production, occasionally annoying locally — for example, if you’re testing a service that’s already running on port 80 with its own non-https endpoint, the redirect gets in the way. disable_redirects turns that off but keeps the auto-cert magic. Other handy globals:

{
    debug                                        # verbose logs while iterating
    email you@example.com                        # used by Let's Encrypt for cert expiry warnings
    storage file_system /var/lib/caddy           # where issued certs are cached
}

The thing that won me over

Once you’ve used Caddy for a week, going back to nginx + certbot for a new project feels strange. The Caddyfile fits on a Post-it. There’s no separate cron job to renew certs — Caddy renews them itself. There’s no special config for HTTP/2 or HTTP/3 — they’re on by default. And when the site doesn’t load, the error message tells you why in one sentence, not via a stack trace from journalctl.

It’s not a replacement for nginx everywhere — at high traffic, behind a CDN, or as a proxy for very specialised workloads, nginx still has the edge. But for personal sites, internal tools, and local development, Caddy is hard to beat. 🎉

Posted in Web Development

Spatie activity_log: which method writes to which column? 🐘

If you’re using spatie/laravel-activitylog, you’ve probably written something like activity()->event(…)->log(…) a hundred times without thinking about where each piece lands in the database. The fluent API is friendly, but the column mapping isn’t obvious until you go look — so here it is in one place.

The package writes to a single table called activity_log. Every chained method on the builder corresponds to one column on that row. 💡

activity()
    ->useLog('SyncCampaignUsersJob')
    ->event('Sync user without detaching')
    ->performedOn($campaign)
    ->causedBy($actor)
    ->withProperties(['chunk' => '3/20', 'count' => 100])
    ->log('Processing chunk 3/20 (100 users).');

That single fluent call writes one row. Here’s the full mapping:

  • useLog("string") → log_name. Filterable bucket like "Auth" or "SyncCampaignUsersJob". Never leave it empty — it defaults to the literal string "default", which makes filtering useless.
  • log("string") → description. Free-form, human-readable message. Returned by $activity->description.
  • event("string") → event. Short verb-ish label like "created", "updated", "Synced user with detaching". Useful for grouping similar actions.
  • performedOn($model) → subject_type + subject_id. Polymorphic reference to the affected model, e.g. "App\Models\Campaign" + 6.
  • causedBy($user) → causer_type + causer_id. Polymorphic reference to the actor. Pass a model instance or just the ID.
  • withProperties([…]) → properties. Arbitrary JSON. Great for structured context: counts, IDs, batch labels, before/after diffs.

The empty-useLog gotcha 🪤

Here’s the failure mode worth burning into memory. This call:

activity()
    ->event('Sync user without detaching')
    ->performedOn($campaign)
    ->causedBy($actor)
    ->log('Synced 100 users.');

…silently writes log_name = "default". Six months later you open the activity log dashboard, filter by log name, and you’re staring at 47,000 rows in the default bucket. Always add useLog() with a meaningful string. The class name of the job or service writing the log is a perfectly fine default — future-you will thank present-you when grepping through audit history.

One more nuance: causer_type

If you pass an integer to causedBy(), the package needs to know what model that ID points to. By default it assumes your auth user model (set in config/auth.php). If your causers are sometimes a User and sometimes a SystemActor or a tenant model, pass the model instance instead of the ID — the polymorphic columns will resolve correctly and querying back becomes painless.

That’s the whole mental model: one row, one chained call, one column per method. Keep useLog() populated and you’ll have a queryable audit trail instead of a blob of “default” entries. 🎯

Posted in php

The Null Coalescing Operator: A Small PHP Feature That Quietly Changed Everything

If you’ve been writing PHP for a while, you probably remember the days of nested isset() checks cluttering up every template and controller. Since PHP 7, there’s a much cleaner way — and if you haven’t fully embraced it yet, it’s worth a second look.

The null coalescing operator (??) returns the left operand if it exists and isn’t null, otherwise the right. No warnings, no notices, no ceremony.

<?php
// The old way — verbose and easy to get wrong
$username = isset($_GET['user']) ? $_GET['user'] : 'guest';

// With null coalescing — same behavior, far less noise
$username = $_GET['user'] ?? 'guest';

// It chains too, which is where it really shines
$config = $userConfig['theme'] ?? $siteConfig['theme'] ?? 'default';

PHP 7.4 took it a step further with the null coalescing assignment operator (??=), which only assigns if the variable is currently null or unset:

<?php
$options = ['timeout' => 30];

// Only set 'retries' if it isn't already defined
$options['retries'] ??= 3;
$options['timeout'] ??= 60; // stays 30 — already set

print_r($options);
// Array ( [timeout] => 30 [retries] => 3 )

One subtle thing to keep in mind: ?? only reacts to null or unset — not to falsy values like 0, "", or false. That’s usually what you want, but it’s a meaningful difference from the older ?: (Elvis) operator, which falls back on any falsy value.

<?php
$count = 0;

echo $count ?? 10;  // prints 0 — because 0 is not null
echo $count ?: 10;  // prints 10 — because 0 is falsy

Small syntax, big quality-of-life improvement. If your codebase still has rows of isset() ternaries, refactoring them is one of those low-risk cleanups that pays off every time someone reads the file next. 🐘

Posted in php

Did You Know? Python’s Walrus Operator Can Make Your Code Cleaner

Did you know? Since Python 3.8, you can use the walrus operator (:=) to assign a value to a variable as part of an expression. It’s a small piece of syntax that can meaningfully tidy up loops and comprehensions where you’d otherwise compute the same value twice.

Here’s a classic example — reading lines from a file until you hit an empty line:

# Without the walrus operator
with open("data.txt") as f:
    line = f.readline()
    while line:
        print(line.strip())
        line = f.readline()

# With the walrus operator — assign and test in one step
with open("data.txt") as f:
    while (line := f.readline()):
        print(line.strip())

It’s also handy in list comprehensions when you want to filter on a computed value without recomputing it:

numbers = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]

# Keep only squares greater than 20, without squaring twice
big_squares = [sq for n in numbers if (sq := n * n) > 20]

print(big_squares)
# [25, 36, 49, 64, 81, 100]
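
Another spot it earns its keep is regex matching, where re.search returns either a match object or None. A small sketch (the log line is made up for illustration):

```python
import re

log_line = "2026-01-15 ERROR disk full on /dev/sda1"

# Assign the match and test it in a single expression
if (m := re.search(r"ERROR (.+)", log_line)):
    print(m.group(1))  # prints: disk full on /dev/sda1
```

Without the walrus you’d either call re.search twice or split the assignment onto its own line before the if.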

A word of caution: the walrus operator is powerful but easy to overuse. Reach for it when it genuinely removes duplication or makes intent clearer — not just because it’s clever. 🐍

Posted in Python

Did You Know? Python Dictionaries Preserve Insertion Order

Did you know? Since Python 3.7, the built-in dict type officially preserves the order in which keys are inserted. Before that, if you needed ordering guarantees you had to reach for collections.OrderedDict. Today, a plain dictionary is enough for most cases.

Here’s a small demonstration:

# Keys stay in the order they were added
user = {}
user["name"] = "Ada"
user["role"] = "Author"
user["joined"] = 2026

for key, value in user.items():
    print(f"{key}: {value}")

# Output:
# name: Ada
# role: Author
# joined: 2026

This also means dictionary comprehensions and merges keep a predictable order, which is surprisingly useful when serializing to JSON or building config objects:

defaults = {"host": "localhost", "port": 8080}
overrides = {"port": 9090, "debug": True}

# Merge with the | operator (Python 3.9+)
config = defaults | overrides
print(config)
# {'host': 'localhost', 'port': 9090, 'debug': True}
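
Because json.dumps walks the dictionary in iteration order, that predictability carries straight through to serialized output. A quick sketch:

```python
import json

config = {"host": "localhost", "port": 9090, "debug": True}

# Keys come out of the serializer in insertion order
print(json.dumps(config))  # {"host": "localhost", "port": 9090, "debug": true}
```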

One caveat: ordering is a property of the dictionary, not of equality. Two dicts with the same keys and values are considered equal even if their insertion order differs. 🐍
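
That caveat is easy to see in two lines. A quick sketch:

```python
# Same key-value pairs, inserted in different order
a = {"x": 1, "y": 2}
b = {"y": 2, "x": 1}

print(a == b)            # True: equality compares pairs, not order
print(list(a), list(b))  # ['x', 'y'] ['y', 'x']: iteration order differs
```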

Posted in Python

Laravel Sail: a developer’s cheat sheet 🐳

Laravel ships with Sail — a thin command-line wrapper around docker compose that gives you the whole Laravel toolchain (PHP, MySQL, Redis, Mailpit, Node) in containers, without you needing to install any of them on your host. The only thing you need on the laptop is Docker. Everything else lives in containers and goes away when you delete the project.

This is the quick-reference I keep open in another tab while building Laravel apps on macOS. 🍎

What you actually need on the host

  • macOS (these notes target Apple Silicon and Intel Macs equally)
  • Docker Desktop — the only hard prerequisite. Sail uses it for everything else (PHP, Composer, Node, MySQL, Redis).
  • That’s it. You don’t need PHP installed locally. You don’t need Composer locally. You don’t need Node locally. You install them once via Sail’s bootstrap and from then on every command runs inside containers.

Spin up a fresh project (with MySQL and Redis)

The official one-liner uses Laravel’s builder image to scaffold a new app and pre-select the services you want. Tell it mysql and redis in the with query parameter:

curl -s "https://laravel.build/example-app?with=mysql,redis" | bash
cd example-app
./vendor/bin/sail up -d

That brings up four containers — your app, MySQL, Redis, and Mailpit (the dev mail-catcher) — and exposes the app on http://localhost. The first run pulls images and takes a couple of minutes; subsequent sail up calls are fast.

Tip: alias sail so you don’t have to type the long path every time.

alias sail='[ -f sail ] && sh sail || sh vendor/bin/sail'

Drop that into your ~/.zshrc and you can just type sail up -d, sail artisan …, etc., from anywhere inside a Sail project.

The Artisan commands you’ll reach for daily

Anything you’d run as php artisan … on a non-Sail setup, you run as sail artisan …. Sail just shells into the app container and forwards the command. The most common ones:

sail artisan tinker                      # interactive REPL with your app booted
sail artisan route:list                  # show every registered route
sail artisan migrate                     # run pending migrations
sail artisan make:controller UserController
sail artisan make:model Department -m    # model + migration in one shot
sail artisan queue:work                  # start a worker against the default queue

tinker is the standout feature you’ll likely use most — it’s a Laravel-aware PHP REPL with every facade, every model, and your full config() ready to go. Need to check what User::find(1)->roles returns? sail artisan tinker, type the expression, get an answer. Beats writing a controller-and-route just to peek at data.

Mailpit — see every email your app sends

Sail bundles Mailpit, a friendly local SMTP server with a web UI. Any mail your app tries to send (password resets, notifications, queued emails) gets caught and shown at:

http://localhost:8025

No SMTP credentials, no real provider, no actual emails leaving your machine. Just open the inbox and see what your app sent. The .env Sail generates already wires MAIL_MAILER=smtp, MAIL_HOST=mailpit, MAIL_PORT=1025, so it works on first run.

Database workflow: migrate, seed, refresh

The mental model: migrations describe schema changes, seeders insert sample data, and there’s a small family of commands for moving between states while you’re iterating on a feature.

# Wipe the database, re-run every migration from scratch, then run seeders
sail artisan migrate:refresh --seed

# Create a new migration file in database/migrations/
sail artisan make:migration create_departments_table

# Roll back the last batch (or the last N batches) and re-apply forward —
# the fastest way to iterate on a brand-new migration you're still tweaking
sail artisan migrate:rollback --step=1 && sail artisan migrate

The third one is the workhorse for daily development: edit the migration, roll it back one step, run forward, repeat. migrate:refresh --seed is heavier — it nukes everything and re-applies, so save it for when you’ve made many changes and want a clean slate.

Installing dependencies

Composer (PHP) and npm (frontend) both run inside the Sail container. The full “I just pulled a fresh branch” sequence:

sail composer install && sail npm install && sail npm run dev

sail npm run dev starts Vite in dev mode for hot reloading. For a production-style build, use sail npm run build and serve the compiled assets.

Routes and pages

The flow for a new page is short. Define a route, point it at a controller method, render a Blade view.

// routes/web.php
use App\Http\Controllers\DashboardController;

Route::get('/dashboard', [DashboardController::class, 'index'])
    ->name('dashboard');

sail artisan make:controller DashboardController

// app/Http/Controllers/DashboardController.php
public function index()
{
    return view('dashboard', ['user' => auth()->user()]);
}

Then check what’s wired by listing every registered route:

sail artisan route:list

Add --except-vendor to hide the Laravel default routes and see only yours; --name=dashboard filters to a single route by name.

Getting a shell inside a container

Sometimes you need to poke around inside a container — inspect a config file, run a one-off mysql command, check redis state. Sail has shortcuts:

sail shell        # bash inside the app container (as the sail user)
sail mysql        # mysql client connected to the dev database
sail redis        # redis-cli connected to the local redis

Under the hood these are just docker exec calls. The equivalents:

docker exec -it example-app-laravel.test-1 bash    # roughly what 'sail shell' does
docker exec -it example-app-mysql-1 bash           # a shell in the mysql container
docker exec -it example-app-redis-1 sh             # a shell in the redis container

The container names are <project-name>-<service-name>-1, so substitute your project’s directory name for example-app. If you need root in the app container, sail root-shell drops you in as root — Sail’s container is a development sandbox, but a root shell still lets you break things by being careless. Treat it like an SSH session into a dev box.

Tests

Laravel uses PHPUnit under the hood (with Pest as a popular alternative). Sail makes the runner one command:

# Generate a unit test stub
sail artisan make:test UserTest --unit

# Run the whole suite
sail artisan test

# Run with HTML coverage (output goes to ./coverage)
sail artisan test --coverage-html coverage

--unit creates the test under tests/Unit/ (no Laravel app boot, fastest to run). Without it, you get a feature test under tests/Feature/ which boots the application and gives you the full HTTP-style helpers ($this->get('/dashboard')->assertOk()). Use Unit for pure logic, Feature for anything touching routes, models, or services.

The --coverage-html flag requires Xdebug or PCOV in the container. Sail’s image ships PCOV, so this works out of the box on a default Sail setup.

When things misbehave: the cleanup checklist

Laravel caches a lot — config, routes, views, compiled service container. After bigger changes (especially editing config/*.php or env vars), the caches can lie to you. The reset:

sail artisan cache:clear
sail artisan config:clear
sail artisan route:clear
sail artisan view:clear

And of course, the first place to look when something is broken is the application log. Tail it in a separate terminal while you reproduce the bug:

tail -f storage/logs/laravel.log

Stack traces, query logs, anything you’ve Log::info()'d — it all ends up here. If your app is logging to a different channel (configured in config/logging.php), check there instead.

The day-to-day shape

Once you’ve used Sail for a project or two, the daily loop becomes muscle memory: sail up -d in the morning, sail artisan commands as you build, sail artisan test before pushing, sail down when you switch projects. Nothing leaks onto the host, every project’s PHP/MySQL/Redis versions stay independent, and onboarding a new teammate is “install Docker, clone the repo, ./vendor/bin/sail up”.

For most Laravel work I do these days, I never type php directly anymore. ⛵

Posted in Web Development

List open or listening ports

You started a service, you can’t tell whether it actually bound to its port, and you want to see what’s listening — or you want to find out which process is squatting on port 8080. Two one-liners, two operating systems:

macOS

lsof -nP -i4TCP

RedHat / CentOS 7

netstat -tulpn

What the flags do: lsof -nP turns off DNS and port-name resolution (so you see 192.168.1.5:443 instead of app-server.local:https — faster and unambiguous). -i4TCP filters to IPv4 TCP sockets. For netstat -tulpn: t = TCP, u = UDP, l = listening only, p = show the PID/process, n = numeric (no DNS).


A few useful additions.

On modern Linux, prefer ss over netstat. The net-tools package that ships netstat is largely deprecated — most distros have moved to iproute2's ss (socket statistics). It’s faster on busy machines (reads from netlink instead of /proc) and uses the same flags you already know:

ss -tulpn

If you’ve been muscle-memory-typing netstat for years, the migration is one character. Same flags, same shape, modern implementation.

Listening-only on macOS. lsof -i4TCP shows every TCP connection — listeners and established. To narrow to just the things accepting new connections, add -sTCP:LISTEN:

# All listening TCP sockets (IPv4 + IPv6)
lsof -nP -iTCP -sTCP:LISTEN

# Add UDP for the full picture
lsof -nP -iUDP

The question you actually want answered: “what’s on port 8080?” Three flavours of the same question:

# macOS / Linux
lsof -i :8080

# Linux (modern)
ss -tulpn | grep :8080

# Linux (also handy — kill-by-port)
sudo fuser -k 8080/tcp

The last one is the nuclear option: fuser -k kills whoever has the port. Useful when a stale process is holding it and you don’t care about graceful shutdown.

Run it as root if you want to see other users’ processes. Without sudo, lsof, netstat -p, and ss -p only show process names for processes you own. If you see a port listed as LISTEN but the PID column is blank, that’s the symptom — re-run with sudo and the owner pops out.

Windows. The closest equivalent on Windows is netstat -ano from cmd (the -o shows the PID; cross-reference in Task Manager or with tasklist /fi "PID eq 1234"). PowerShell users get something nicer — Get-NetTCPConnection returns proper objects you can pipe and filter:

Get-NetTCPConnection -State Listen | Select-Object LocalAddress, LocalPort, OwningProcess

Pair that with Get-Process -Id $pid to translate OwningProcess back to a process name. 🔌

Posted in Bash, Operating System

MongoDB Notes

If you’re storing binary files inside MongoDB, the convention is called GridFS. It splits each logical file into two collections: a metadata document and a sequence of binary chunks. This post is a cheat sheet for inspecting and tweaking those documents from the Mongo shell. 🍃

When using MongoDB to store files, we have two collections:

  1. The place where MongoDB stores the file metadata: store.files
  2. And the place where MongoDB stores the file content: store.chunks

Depending on the size of the file, one entry in store.files can point to many entries in store.chunks. The bigger the file, the more entries you’ll encounter.

// Show all / list all entries from store.files
db.getCollection('store.files').find({});

// Show only a particular entry from store.files
db.getCollection('store.files').find({ _id: ObjectId("5b02d232cbce1d07e08401c7") });

// The same can be used for store.chunks.
db.getCollection('store.chunks').find({});

The metadata fields in store.files can be augmented at query time (the new field exists only in the result, not in the database):

db.getCollection('store.files').aggregate([
    { $match: { _id: ObjectId("5b02d232cbce1d07e08401c7") } },
    { $addFields: { 'key_reference': '1234' } }
]);

Or we can do an update on store.files, which actually persists the new field into the database:

db.getCollection('store.files').updateMany(
    { _id: ObjectId("5b02d232cbce1d07e08401c7") },
    { $set: { 'key_reference': '1234' } }
);

A few useful additions.

Why files are split into chunks. MongoDB’s per-document hard limit is 16 MB. GridFS works around that by splitting any file larger than the chunk size into many small chunk documents and writing one metadata doc that links them together. The default chunk size is 255 KB, configurable per bucket. So a 10 MB upload becomes one *.files doc and roughly 40 *.chunks docs, all sharing the same files_id. To inspect that relationship for a specific file:

db.getCollection('store.chunks')
    .find({ files_id: ObjectId("5b02d232cbce1d07e08401c7") })
    .sort({ n: 1 });   // n is the chunk index, 0..N-1
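
The chunk count for a given file is simple ceiling arithmetic. A quick check of the 10 MB example above, assuming the default 255 KB chunk size:

```python
import math

file_size = 10 * 1024 * 1024    # 10 MB upload
chunk_size = 255 * 1024         # GridFS default chunk size

chunks = math.ceil(file_size / chunk_size)
print(chunks)  # 41 chunk documents, all sharing one files_id
```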

The bucket name store.* is custom. The default GridFS bucket is named fs, so out of the box you’d see fs.files and fs.chunks. The bucket name is whatever the application set when it opened the GridFS handle. If your app uses store, replace fs with store in any docs example you find online.

Putting and getting files in the first place. The shell snippets above are for inspecting files that are already there — they don’t help you upload or download the binary content. For that, use the mongofiles CLI or the driver-level GridFS API:

# Upload
mongofiles --uri "mongodb://localhost/mydb" --prefix store put /path/to/file.pdf

# Download
mongofiles --uri "mongodb://localhost/mydb" --prefix store get file.pdf

# List
mongofiles --uri "mongodb://localhost/mydb" --prefix store list

From application code, every official driver has a GridFS class — GridFSBucket in Node and Java, GridFS in PyMongo, IGridFSBucket in C#. They handle the chunking and reassembly for you.

Don’t delete files by hand. A common pitfall: deleting a row from store.files directly leaves the matching chunks orphaned in store.chunks, slowly bloating the collection. Either use mongofiles delete <filename>, or your driver’s GridFSBucket.delete(fileId), both of which remove the metadata and the chunks atomically.

Should you actually use GridFS? A practical heads-up: if your files are bigger than 16 MB and you already use MongoDB, GridFS is a reasonable fit and keeps backups simple. But for most modern stacks, putting the bytes in object storage (S3, GCS, MinIO, R2) and keeping only a URL or key in MongoDB is cheaper, faster, and easier to scale. GridFS is most defensible when you genuinely want files transactionally co-located with the database — e.g. mobile/embedded scenarios, or when network egress to S3 is a non-starter. 💡

Posted in Database

CentOS 6 repo Settings

To fix repo settings on CentOS 6:

1. Make sure there are no proxy or other odd settings in /etc/yum.conf:
vi /etc/yum.conf

2. There are a couple of files within /etc/yum.repos.d/. Make sure the URLs are correct (accessible) and enabled=1:
ll /etc/yum.repos.d/

3. Clean the repo metadata, list the repos, and retest:
yum --enablerepo=base clean metadata
yum repolist all
yum search java-1.8.0-openjdk

Posted in Linux

Show Linux Partition Tree Mountpoint and If SSD

lsblk -o TYPE,NAME,KNAME,UUID,MOUNTPOINT,SIZE,ROTA

The ROTA column answers the SSD question: 0 means non-rotational (SSD/NVMe), 1 means a spinning disk.
Posted in Linux