
5 posts tagged with "Docker"


PostgreSQL Authentication Failed in WSL? Docker Is Silently Using Your Port

· 4 min read

TL;DR

Docker Desktop silently occupies port 5432 in WSL2. An SSH tunnel to localhost:5432 actually connects to Docker's PostgreSQL instead of the remote server. The password authentication failed error is misleading — the password is correct, but you're talking to the wrong instance. Fix: change the tunnel to local port 5433 and isolate the dev config with .env.local.


Problem

Connecting to remote PostgreSQL via SSH tunnel fails with:

PostgresError: password authentication failed for user "postgres"
severity: 'FATAL'
code: '28P01'
file: 'auth.c'
routine: 'auth_failed'

The tunnel command appears to succeed without errors:

ssh -L 5432:localhost:5432 -L 3003:localhost:3003 user@server -N &

No error from the tunnel itself, but the connection always fails authentication. The password is verified correct — direct login on the remote server works fine.


Root Cause

In WSL2's network architecture, Docker Desktop creates virtual network interfaces inside WSL2. When a Docker container maps 5432:5432, Docker listens on port 5432 in the WSL2 network layer.

The SSH tunnel -L 5432:localhost:5432 forwards local port 5432 to the remote server's port 5432. But local port 5432 is already taken by Docker. By default ssh only prints a warning when a forward fails to bind and keeps running, so the failure is easy to miss, and connections to localhost:5432 are answered by Docker instead.

The result: localhost:5432 connects to the PostgreSQL instance inside the Docker container, not the remote server. That container has different user credentials, so it throws password authentication failed.

What makes this deceptive: the message says "wrong password," not "port occupied" or "tunnel failed." You end up re-checking the password repeatedly while the real problem is connecting to the wrong machine.
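You can make this failure loud instead of silent. ssh's ExitOnForwardFailure option aborts the session when a requested forward cannot bind, and a quick pre-check shows whether a local port is already taken. A sketch (the helper is bash-specific, and `user@server` plus the ports are the article's placeholders):

```shell
#!/usr/bin/env bash
# Abort instead of running with a dead forward:
#   ssh -o ExitOnForwardFailure=yes -L 5432:localhost:5432 user@server -N

# Pre-check: is something already listening on a local port?
port_in_use() {
  # /dev/tcp is a bash feature: the redirect succeeds only if a
  # listener accepts the connection on 127.0.0.1:$1
  (exec 3<>"/dev/tcp/127.0.0.1/$1") 2>/dev/null
}

if port_in_use 5432; then
  echo "port 5432 is taken (likely Docker), tunnel on another local port"
fi
```

With ExitOnForwardFailure=yes, a conflicting port kills the ssh process immediately instead of leaving a tunnel that quietly forwards nothing.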


Solution

Step 1: Identify the conflict

# Inside WSL2
ss -tlnp | grep 5432

WSL may not show it (Docker ports map from the Windows side). Cross-check from PowerShell:

netstat -ano | findstr :5432

If you see docker-proxy on port 5432, the conflict is confirmed.

Step 2: Change the tunnel port

# Before: local port 5432 (conflicts with Docker)
ssh -L 5432:localhost:5432 user@server -N &

# After: local port 5433 (no conflict)
ssh -L 5433:localhost:5432 user@server -N &

The -L format is local_port:remote_host:remote_port. Only the local port changed — the remote server still uses 5432.

Step 3: Create .env.local for dev config

# server/.env.local (add to .gitignore)
DATABASE_URL=postgresql://postgres:your_password@localhost:5433/your_db
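With the tunnel up on 5433, it is easy to confirm you are reaching the remote instance rather than Docker's. For example (assuming psql is installed locally; the connection strings reuse the article's placeholders), the two instances will usually report different server versions:

```shell
# Docker's local PostgreSQL (if its container is running)
psql "postgresql://postgres@localhost:5432/postgres" -c 'SHOW server_version;'

# Remote PostgreSQL through the SSH tunnel
psql "postgresql://postgres@localhost:5433/postgres" -c 'SHOW server_version;'
```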

Step 4: Modify dotenv loading to prioritize .env.local

// Before:
import 'dotenv/config';

// After:
import dotenv from 'dotenv';
import { existsSync } from 'fs';
import { dirname, resolve } from 'path';
import { fileURLToPath } from 'url';

// __dirname does not exist in ES modules; derive it from import.meta.url
const __dirname = dirname(fileURLToPath(import.meta.url));
const localEnv = resolve(__dirname, '../.env.local');

if (existsSync(localEnv)) {
  dotenv.config({ path: localEnv });
} else {
  dotenv.config();
}

Development uses .env.local (port 5433), production continues using .env (port 5432). Two configs, zero interference.


Important Notes

Error messages can mislead you

password authentication failed does not always mean the password is wrong. When connected to a different instance (like Docker's PostgreSQL), that instance may not have the same user or password, producing the same error. If the password is verified correct, prioritize checking whether you're connecting to the right instance.

Don't hardcode a different port in app code

Change the local port at the SSH tunnel level and manage it through .env.local. Don't change the default port in application code — that breaks production and container connections. Environment differences should be resolved through environment variables, not hardcoded values.

WSL port issues: check the Windows side

Docker Desktop in WSL2 mode maps container ports directly in the Windows network layer. ss and lsof inside WSL won't show the PID. When encountering mysterious port conflicts, check Windows side with netstat -ano.


Container Port Unreachable from WSL2? The Docker Desktop network_mode:host Trap

· 3 min read

Encountered this issue while building an AI data analytics platform (Airflow + PostgreSQL) for a client. Here's the root cause and solution.

TL;DR

On Docker Desktop for Windows (WSL2 backend), network_mode: host container ports are unreachable from the WSL2 host. The container shows the port listening, but curl localhost:PORT returns connection refused. The fix: use network_mode: !reset in your override file to remove host mode, then switch to bridge + external network + port mapping.

Problem

A project uses docker-compose.yml with host networking for Airflow:

x-airflow-common:
  &airflow-common
  image: airflow-ai-dag:latest
  network_mode: host  # Works on servers, breaks on WSL2
  volumes:
    - ..:/opt/airflow/project

services:
  airflow-webserver:
    <<: *airflow-common
    command: webserver
    environment:
      - AIRFLOW__WEBSERVER__WEB_SERVER_PORT=8082

Container starts normally, logs confirm port binding:

[INFO] Listening at: http://0.0.0.0:8082

But unreachable from the host:

$ ss -tlnp | grep 8082
# Empty, nothing listening

$ curl localhost:8082/health
curl: (7) Failed to connect to localhost port 8082

Meanwhile, a pgAdmin container with -p 8080:80 works fine at localhost:8080.

Root Cause

Docker Desktop for Windows has a different network architecture than native Linux Docker:

Windows Browser
      ↕
WSL2 Host (your terminal)
      ↕  Docker Desktop pipeline
Docker Desktop Utility VM  ← network_mode: host points HERE
      ↕
Containers

On a Linux server, network_mode: host shares the host's network directly — container ports = host ports. But on Docker Desktop WSL2:

  • "host" in host mode = the Docker Desktop utility VM, not WSL2
  • Container's /proc/net/tcp shows the port listening
  • WSL2 host's /proc/net/tcp has no matching entry
  • Containers with -p port mapping are unaffected (Docker Desktop auto-forwards them)

Verification: Compare network namespaces:

# Inside container: 8082 (hex 1F92) is listening
$ docker exec webserver grep '1F92' /proc/net/tcp
4: 00000000:1F92 00000000:0000 0A ...

# On WSL2 host: 8082 doesn't exist
$ grep '1F92' /proc/net/tcp
# No output
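The hex port numbers in /proc/net/tcp are simply the port in 4-digit uppercase hexadecimal; a tiny helper (my addition, not part of the original toolchain) does the conversion:

```shell
# /proc/net/tcp stores ports as 4-digit uppercase hex
port_to_hex() { printf '%04X\n' "$1"; }

port_to_hex 8082   # 1F92
port_to_hex 5432   # 1538
```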

Solution

Step 1: Create docker-compose.override.yml

Use !reset to remove the base's network_mode: host (requires Compose v2.24+):

# docker-compose.override.yml
services:
  airflow-webserver:
    network_mode: !reset  # Remove base's host mode
    networks:
      - app_network       # Join external network
    ports:
      - "8082:8082"       # Port mapping
    environment:
      - DB_HOST=postgres-db  # Use container name for DB

  airflow-scheduler:
    network_mode: !reset
    networks:
      - app_network
    environment:
      - DB_HOST=postgres-db

networks:
  app_network:
    external: true  # Reference existing external network
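Before restarting, you can check that the merge actually removed host mode by rendering the final configuration (a sketch; service names come from the article's compose files):

```shell
# Render the merged base + override configuration
docker compose config > merged.yml

# host mode should be gone from the webserver/scheduler services
grep -n 'network_mode' merged.yml || echo "network_mode removed"
```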

Step 2: Ensure DB container is on the same network

# Check which network the DB container uses
$ docker inspect postgres-db --format '{{json .NetworkSettings.Networks}}'
# {"app_network": {...}}

# Create the network if it doesn't exist
$ docker network create app_network # Skip if already exists

Step 3: Restart and verify

$ docker compose down
$ docker compose up -d

Verify:

$ ss -tlnp | grep 8082
LISTEN 0 4096 *:8082 *:*

$ curl -s -o /dev/null -w "%{http_code}" http://localhost:8082/health
200

Key Point: !reset Must Be in Service Definitions

This does NOT work:

# ❌ Wrong: inside an anchor
x-airflow-local:
  &airflow-local
  network_mode: !reset  # Doesn't work!
  networks:
    - app_network

services:
  airflow-webserver:
    <<: *airflow-local  # Base's network_mode: host still present after merge

Correct:

# ✅ Correct: in each service definition
services:
  airflow-webserver:
    network_mode: !reset  # Must be here
    networks:
      - app_network

Caveats

  • network_mode: !reset requires Docker Compose v2.24+. Check with docker compose version
  • !reset only works on scalar fields (like network_mode), not lists or dictionaries
  • If the base uses YAML anchors (<<: *anchor), !reset must be at the service definition level, not inside another anchor
  • After switching to bridge mode, containers can't reach each other via localhost — use container names or a shared Docker network
  • network_mode and networks are mutually exclusive; declaring both on a service makes Compose fail with a "mutually exclusive" error

WordPress Block Theme Changes Not Taking Effect? FSE Development Troubleshooting Guide

· 7 min read

Encountered these five issues repeatedly while developing WordPress Block Themes for clients. Each one took significant debugging time. This guide covers the root causes and provides ready-to-use solutions.

TL;DR

Five issues ranked by frequency: file changes not applying (database cache overrides files), block nesting errors (unclosed comments), child theme content not rendering (missing post-content block), SVG icons disappearing (WP_Filesystem polluted by plugins), and WP-CLI mail failures (SMTP plugins don't hook in CLI). Each scenario includes copy-paste diagnostic commands.

Scenario 1: Changed Theme Files, But the Page Looks the Same

Problem

You modified theme.json, templates/*.html, or parts/*.html in your theme directory, but the page shows no change. Even after git pull updates the code, the frontend still renders the old version.

Root Cause

FSE themes have their templates and global styles saved to the database via the Site Editor—stored as wp_template, wp_template_part, and wp_global_styles custom post types. The database version takes priority over file versions. Even if you update the file, as long as a database record exists, WordPress uses that instead.

Solution

Different file types require different cleanup:

Modified content   | Cleanup method
-------------------|---------------------------------------------
templates/*.html   | Clear wp_template
parts/*.html       | Clear wp_template_part
theme.json         | Clear wp_global_styles + flush cache
patterns/*.php     | Takes effect immediately, no cleanup needed

One-command cleanup for all database template caches:

# Local Docker environment
docker exec wp_cli bash -c 'wp post delete $(wp post list --post_type=wp_template --format=ids --allow-root) --force --allow-root'
docker exec wp_cli bash -c 'wp post delete $(wp post list --post_type=wp_template_part --format=ids --allow-root) --force --allow-root'
docker exec wp_cli bash -c 'wp post delete $(wp post list --post_type=wp_global_styles --format=ids --allow-root) --force --allow-root'
docker exec wp_cli wp cache flush --allow-root

If theme.json changes still don't apply, verify the JSON has no syntax errors (e.g., trailing commas are invalid in JSON):

docker exec wp_cli wp eval 'echo json_decode(file_get_contents(get_template_directory() . "/theme.json")) === null ? "INVALID" : "OK";' --allow-root
# INVALID → JSON syntax error (json_decode() returns null on invalid JSON)
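WP-CLI aside, any strict JSON parser catches the same class of error. A quick local check (the helper name and file path are mine, not part of WordPress; Python's json module rejects trailing commas just like PHP's json_decode):

```shell
# Validate a theme.json copy without WordPress
check_json() {
  if python3 -m json.tool "$1" > /dev/null 2>&1; then
    echo "valid JSON"
  else
    echo "syntax error"
  fi
}

check_json theme.json   # "syntax error" for e.g. a trailing comma
```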

Important

  • Never save in the Site Editor during development—this prevents database overrides
  • Before clearing in production, verify no custom modifications need to be preserved from the Site Editor
  • patterns/*.php is unaffected—Pattern registration runs through PHP code, not the database
  • Corrupted wp_global_styles JSON from the Site Editor can cause site-wide WP_Theme_JSON_Resolver errors, breaking all page styles

Scenario 2: Site Editor Shows "Attempt Recovery" and the Layout Breaks

Problem

The Site Editor shows "Attempt Recovery" for a Pattern. After saving, the page layout is completely broken. Some blocks are incorrectly nested inside others, and the hierarchy doesn't match the source code.

Root Cause

WordPress block editor uses HTML comments to mark block boundaries:

<!-- wp:group {"layout":{"type":"constrained"}} -->
<div class="wp-block-group">
  <!-- content -->
</div>
<!-- /wp:group -->

When a container block (like wp:group or wp:columns) is missing its closing comment <!-- /wp:group -->, Gutenberg's parse_blocks() treats all subsequent blocks as children of that container. The consequences:

  1. The parent block's save output is empty
  2. Block validation fails
  3. All subsequent blocks have incorrect nesting

Solution

Diagnose: Check block tree hierarchy in the browser console:

wp.data.select('core/block-editor').getBlocks()

Inspect the returned block tree. If a group block contains blocks that shouldn't be inside it, a previous container is likely missing its closing comment.

Fix: Open the Pattern source file and verify every <!-- wp:xxx --> has a matching <!-- /wp:xxx -->. Search for <!-- wp: and <!-- /wp: and count to confirm the numbers match, remembering that self-closing blocks such as <!-- wp:post-content /--> have no closing comment and should be excluded from the count.
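The counting can be scripted. A rough helper (mine, not from the article's toolchain; it assumes one block comment per line and skips self-closing blocks, which need no closing comment):

```shell
count_block_comments() {
  # opening comments, excluding self-closing ones like <!-- wp:post-content /-->
  opens=$(grep -oE '<!-- wp:[^>]*-->' "$1" | grep -cv '/-->')
  closes=$(grep -cE '<!-- /wp:' "$1")
  echo "opens=$opens closes=$closes"
}
```

If opens and closes differ, some container block is missing its closing comment.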

Prevention Tip

Use an editor with bracket matching (like VS Code) with a Block Comment highlight extension to catch unclosed comments immediately. For complex Patterns, write the skeleton first (all open/close comment pairs), then fill in the content.

Scenario 3: Child Theme Override Hides Page Editor Content

Problem

After creating a child theme to override a parent template, content entered in the WordPress page editor (text, images, etc.) is completely blank on the frontend. However, hardcoded Patterns in the template (hero sections, CTAs) display normally.

Root Cause

FSE templates render the page editor's post_content through the <!-- wp:post-content /--> block. If the child theme's template file doesn't include this block, WordPress has nowhere to output the page content.

The result: the template's fixed structure (header, hero, sidebar) renders correctly, but everything written in the editor is lost.

Solution

Ensure your child theme template includes the post-content block:

<!-- wp:group {"layout":{"type":"constrained"}} -->
<div class="wp-block-group">

<!-- Template fixed structure (hero, sidebar, etc.) -->

<!-- wp:post-content {"layout":{"type":"constrained"}} /-->

<!-- More fixed structure (CTA, footer reference, etc.) -->

</div>
<!-- /wp:group -->

When troubleshooting "changes not showing", confirm in this order:

  1. Does the template file include <!-- wp:post-content /-->?
  2. Are you modifying the template file or the page content? They control different content areas
  3. Inline cover blocks in the template are controlled by the template file, not the database post_content

Scenario 4: SVG Icons Disappeared But the Files Are Still There

Problem

Your theme uses WP_Filesystem to read SVG icon files. Suddenly all SVG icons disappear. Accessing the SVG file URL directly returns normal content, but the icon positions on pages are empty.

Root Cause

WordPress's $wp_filesystem global defaults to WP_Filesystem_Direct (direct local file access). Some plugins (backup, security) replace $wp_filesystem with WP_Filesystem_ftpsockets or WP_Filesystem_SSH2 during initialization.

FTP/SSH adapters read files through remote connections. For local paths (like /var/www/html/wp-content/themes/...), they can't access the files correctly and return empty strings. Since the replacement happens in the global scope, all theme and plugin code using WP_Filesystem is affected.

Solution

Step 1 — Diagnose: Check the actual $wp_filesystem type:

# Local Docker environment
docker exec wp_cli wp eval 'global $wp_filesystem; echo get_class($wp_filesystem);' --allow-root

# Returns WP_Filesystem_Direct → normal
# Returns WP_Filesystem_ftpsockets or other → polluted

Step 2 — Identify the source: Disable plugins one by one to find which one replaces the adapter:

docker exec wp_cli wp plugin deactivate <plugin-name> --allow-root
docker exec wp_cli wp eval 'global $wp_filesystem; echo get_class($wp_filesystem);' --allow-root
# Repeat until it returns WP_Filesystem_Direct

Step 3 — Code fallback: Add file_get_contents() as a safety net in your theme:

function mytheme_get_svg( $path ) {
	global $wp_filesystem;

	// Prefer WP_Filesystem
	if ( $wp_filesystem && method_exists( $wp_filesystem, 'get_contents' ) ) {
		$content = $wp_filesystem->get_contents( $path );
		if ( $content ) {
			return $content;
		}
	}

	// Fallback to direct file read
	if ( file_exists( $path ) ) {
		return file_get_contents( $path );
	}

	return '';
}

Important

  • file_get_contents() may be disabled via disable_functions on restricted hosts, but VPS and Docker environments typically support it
  • The root fix is to identify and handle the polluting plugin; the code fallback is a temporary measure
  • This issue is deceptive—the SVG files are intact and accessible via URL, but return empty only when read through WP_Filesystem in PHP

Scenario 5: Email Sending Fails from Command Line, Works from Browser

Problem

Calling wp_mail() via wp eval on the command line always fails. However, emails triggered by web requests (user registration, contact forms) send normally. SMTP plugins like WP Mail SMTP are correctly configured.

Root Cause

SMTP plugins intercept wp_mail() via hooks to switch the sending channel from PHP sendmail to an SMTP service. But these hooks depend on WordPress's full bootstrap sequence—particularly stages after wp_loaded.

WP-CLI's wp eval loads the WordPress core, but some plugin hooks don't register in the CLI environment. wp_mail() falls back to PHP sendmail, and most servers don't have sendmail configured, causing the failure.

Solution

Method 1 — Web request test: Add a temporary test route in your theme, trigger via browser:

// Add to functions.php temporarily, remove after testing
add_action( 'wp_loaded', function() {
	if ( ! isset( $_GET['test_mail'] ) ) return;
	if ( '1' !== $_GET['test_mail'] ) return;

	$result = wp_mail( '[email protected]', 'SMTP Test', 'Test email body' );
	var_dump( $result ); // true = sent successfully
	exit;
} );

Visit https://yoursite.com/?test_mail=1 to trigger the test.

Method 2 — eval-file with full bootstrap:

cat > /tmp/test-smtp.php << 'EOF'
<?php
require_once ABSPATH . 'wp-load.php';
do_action('wp_loaded');

$result = wp_mail('[email protected]', 'CLI SMTP Test', 'Test body');
echo $result ? "Sent\n" : "Failed\n";
EOF

docker exec wp_cli wp eval-file /tmp/test-smtp.php --allow-root

Best Practice

In production, always test email sending through web requests. WP-CLI is great for cron jobs and batch operations, but not for verifying features that depend on the full WordPress hook chain (email, cache warming, etc.).


Docker Container Can't See Host Files? Anonymous Volume Overrides Bind Mount

· 3 min read

Encountered this issue while deploying a WordPress site for a client. Here's the root cause and solution.

TL;DR

The VOLUME declaration in an image's Dockerfile creates anonymous volumes with higher mount priority than docker-compose.yml bind mounts. Solution: Stop container → Delete anonymous volume → Restart.

Problem

After CI deployment, new static files or PHP code don't exist inside the container:

  • assets/images/logo.png exists on host, missing in container → Logo doesn't display
  • inc/setup.php has new filter code, container has old version → Filter doesn't work
  • Files updated after git pull, container still has old content

Root Cause

WordPress official image's Dockerfile includes:

VOLUME /var/www/html

Even if your docker-compose.yml configures bind mount:

volumes:
  - ./wordpress/wp-content:/var/www/html/wp-content

Docker still creates an anonymous volume for the VOLUME declared path. Anonymous volumes have higher mount priority than bind mounts, causing:

  1. /var/www/html is taken over by anonymous volume
  2. Your bind mount targets /var/www/html/wp-content
  3. But the anonymous volume already "occupies" the parent directory, bind mount gets overridden

Verify with docker inspect:

docker inspect prod_wordpress --format '{{range .Mounts}}{{.Type}}: {{.Source}} -> {{.Destination}}{{"\n"}}{{end}}'

Output similar to:

volume: /var/lib/docker/volumes/wp-prod_wp_html/_data -> /var/www/html  # Anonymous volume!
bind: /var/www/wp-prod/wordpress/wp-content -> /var/www/html/wp-content

Solution

# 1. Stop containers
cd /var/www/wp-prod && docker compose down

# 2. Delete anonymous volume
docker volume rm wp-prod_wp_html

# 3. Restart
docker compose up -d

Verification

docker inspect prod_wordpress --format '{{range .Mounts}}{{.Type}}: {{.Source}} -> {{.Destination}}{{"\n"}}{{end}}'

Should only show bind mount, no anonymous volume:

bind: /var/www/wp-prod/wordpress/wp-content -> /var/www/html/wp-content

Prevention

When using bind mount deployment, check if the image declares VOLUME. If declared:

  1. Before first start, confirm no residual anonymous volumes exist
  2. Or modify docker-compose.yml to make bind mount path match VOLUME path (mount to same level)
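To check for a VOLUME declaration up front (the first prevention point), inspect the image's config. For the official WordPress image, the declared path is exactly the one causing trouble here:

```shell
# List the VOLUME paths an image declares
docker inspect wordpress:latest --format '{{json .Config.Volumes}}'
# e.g. {"/var/www/html":{}}
```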

After fixing, git pull updates are automatically read by the container—no docker cp or container restart needed.

Important

  • Backup data before deleting anonymous volumes or ensure it can be restored via git pull
  • Test in staging environment before production operations
  • If container has critical data, backup with docker cp first

Installing WP-CLI in Docker WordPress Container

· 2 min read

TL;DR

The official WordPress Docker image doesn't include WP-CLI. Add a command configuration in docker-compose.yml to auto-install WP-CLI on container startup—no manual container entry required.

Problem

Running wp command inside WordPress Docker container:

docker exec -it wordpress_container wp --version

Returns:

bash: wp: command not found

Root Cause

The official WordPress Docker image is built on php:apache with a focus on minimal size. WP-CLI is a separate CLI tool that requires additional installation—it's not included in the default image.

Solution

Add a command to the WordPress service in docker-compose.yml to auto-install WP-CLI on startup:

services:
  wordpress:
    image: wordpress:latest
    volumes:
      - ./wordpress:/var/www/html
    command: >
      bash -c "curl -sO https://raw.githubusercontent.com/wp-cli/builds/gh-pages/phar/wp-cli.phar &&
      chmod +x wp-cli.phar &&
      mv wp-cli.phar /usr/local/bin/wp &&
      docker-entrypoint.sh apache2-foreground"

Key points:

  1. curl -sO - Silently download WP-CLI phar package
  2. chmod +x - Add execute permission
  3. mv ... /usr/local/bin/wp - Move to PATH directory for global access
  4. docker-entrypoint.sh apache2-foreground - Execute original image entrypoint to start Apache
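As a small hardening step for the download above, the phar can be sanity-checked before it is moved into PATH (a sketch; --info is WP-CLI's built-in self-report and fails if the file is truncated or corrupt):

```shell
curl -sO https://raw.githubusercontent.com/wp-cli/builds/gh-pages/phar/wp-cli.phar
php wp-cli.phar --info   # prints PHP and WP-CLI details only if the phar is intact
chmod +x wp-cli.phar && mv wp-cli.phar /usr/local/bin/wp
```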

Verify after restart:

docker-compose down
docker-compose up -d
docker exec -it wordpress_container wp --version
# Output: WP-CLI 2.x.x
