
Align Heading Icons in Docusaurus Docs with Flexbox

· 3 min read

TL;DR

To align SVG icons with text in Docusaurus document headings, use display: flex + align-items: center + gap, combined with the .theme-doc-markdown selector to target docs pages only without affecting blog.

Problem

When using inline SVG icons as heading decorations in Docusaurus docs:

## 🚀 Quick Start

Or via MDX components:

## <RocketIcon /> Quick Start

By default, SVG icons align to the text baseline, appearing visually offset upward:

🚀 Quick Start     ← Icon sits high, aligned to top of text

The traditional approach uses vertical-align: middle + margin-right, but has issues:

  1. Margin needs adjustment when icon size changes
  2. Alignment may break with different line-heights
  3. Multi-line headings have inconsistent alignment

Root Cause

SVG elements are inline by default, participating in inline layout. vertical-align: middle is calculated based on the x-height of the current line, affected by font, line-height, and icon size—making precise control difficult.

A deeper issue is selector scope. Docusaurus applies the .markdown class to both docs and blog pages, so direct modifications affect everything globally.

Solution

1. Use Flexbox Layout

Flexbox align-items: center calculates based on container height, independent of font metrics, providing more stable alignment:

/* Docs page heading icon alignment */
.theme-doc-markdown h1,
.theme-doc-markdown h2,
.theme-doc-markdown h3,
.theme-doc-markdown h4 {
  display: flex;
  align-items: center;
  gap: 0.75rem;
}

2. Reset SVG Original Styles

Override global .markdown styles for margin-right and vertical-align:

.theme-doc-markdown h1 svg,
.theme-doc-markdown h2 svg,
.theme-doc-markdown h3 svg,
.theme-doc-markdown h4 svg {
  margin-right: 0;
  vertical-align: baseline;
  flex-shrink: 0; /* Prevent icon compression */
}

3. Selector Scoping

Docusaurus provides page-specific class names:

Selector              Scope
.markdown             docs + blog globally
.theme-doc-markdown   docs pages only
article               blog post pages only

Use .theme-doc-markdown to precisely target docs pages, leaving blog styling untouched.

Complete Code

/* ========== Docs Page Styles ========== */

/* Docs heading icon alignment */
.theme-doc-markdown h1,
.theme-doc-markdown h2,
.theme-doc-markdown h3,
.theme-doc-markdown h4 {
  display: flex;
  align-items: center;
  gap: 0.75rem;
}

.theme-doc-markdown h1 svg,
.theme-doc-markdown h2 svg,
.theme-doc-markdown h3 svg,
.theme-doc-markdown h4 svg {
  margin-right: 0;
  vertical-align: baseline;
  flex-shrink: 0;
}

FAQ

Q: What's the difference between .markdown and .theme-doc-markdown in Docusaurus?

.markdown is Docusaurus's global content styling class, applied to both docs and blog pages. .theme-doc-markdown is a docs-page-specific container class that only affects pages under /docs/* paths, ideal for docs-only styling.

Q: Why use gap instead of margin-right?

gap is a Flexbox/Grid spacing property that works naturally with align-items: center without depending on element margins. When an icon is hidden or absent, gap produces no extra whitespace, whereas margin-right would.

Q: What does flex-shrink: 0 do?

It prevents flex children from shrinking when container space is insufficient. SVG icons typically have fixed dimensions—shrinking would cause distortion and blurriness. Setting flex-shrink: 0 ensures icons maintain their original size.

Fix CancelledError on FastAPI SSE Client Disconnect

· 2 min read

Hit this issue while building an AI customer-service automation system for a client; documenting the root cause and fix.

TL;DR

FastAPI's StreamingResponse cancels the generator task when the client disconnects, raising asyncio.CancelledError. The correct approach is to catch the exception inside the generator and re-raise it; otherwise you get polluted exception logs and leaked resources.

Problem Symptoms

When implementing streaming chat with SSE (Server-Sent Events), the server logs fill with exceptions after clients disconnect:

ERROR:    Exception in ASGI application
...
asyncio.CancelledError

The original code:

async def event_stream():
    async for event in engine.execute(body.message):
        yield event

return StreamingResponse(event_stream(), media_type="text/event-stream")

Root Cause

When the client disconnects, FastAPI/Starlette's StreamingResponse cancels the running generator task. A cancelled async for loop raises asyncio.CancelledError.

If this exception goes unhandled, it propagates upward and is caught and logged as an error by the ASGI server. Worse, resources held inside the generator (such as database connections and HTTP clients) may never be released.

Solution

Catch CancelledError inside the generator, log it, then re-raise (the re-raise is mandatory):

import asyncio
import logging

logger = logging.getLogger(__name__)

async def event_stream():
    try:
        async for event in engine.execute(body.message):
            yield event
    except asyncio.CancelledError:
        # Client disconnected; this is normal behavior
        logger.info("Client disconnected")
        raise  # Must re-raise so the generator terminates correctly

return StreamingResponse(event_stream(), media_type="text/event-stream")

Why Must You Re-raise?

CancelledError is Python's standard mechanism for cancelling coroutines. If you catch it without re-raising:

  1. The generator never terminates properly
  2. StreamingResponse believes the response completed normally
  3. Resources may leak
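The cancellation mechanics above can be reproduced with plain asyncio, independent of FastAPI. A minimal sketch, where worker stands in for the streaming generator:

```python
import asyncio

async def worker() -> None:
    try:
        await asyncio.sleep(10)  # stands in for `async for event in engine.execute(...)`
    except asyncio.CancelledError:
        # cleanup would go here (release DB connections, HTTP clients, ...)
        raise  # re-raise so the task finishes in the cancelled state

async def main() -> str:
    task = asyncio.create_task(worker())
    await asyncio.sleep(0.01)  # let the worker start
    task.cancel()              # this is what Starlette does on client disconnect
    try:
        await task
    except asyncio.CancelledError:
        return "cancelled"     # correct outcome: the task reports cancellation
    return "completed"         # what you would see if worker swallowed the error

outcome = asyncio.run(main())
print(outcome)
```

With the re-raise in place, awaiting the task surfaces CancelledError and the caller sees the cancellation; remove the raise and the task appears to complete normally, which is exactly the silent-failure mode described above.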

FAQ

Q: Why does FastAPI SSE report CancelledError after the client disconnects?

A: This is asyncio's designed behavior. When the client disconnects, Starlette cancels the generator task, triggering CancelledError. The correct handling is to catch it and re-raise.

Q: What happens if I catch CancelledError without re-raising?

A: The generator cannot terminate correctly, which may leak database connections, HTTP clients, and other resources. StreamingResponse will also mistakenly treat the response as completed normally.

Q: How do I tell a normal disconnect from an abnormal one?

A: CancelledError itself is the signal for a normal disconnect. If you need cleanup logic on disconnect (such as updating state), run it in the except block, then re-raise.

Complete Guide to Google Analytics 4 in React SPA

· 3 min read

TL;DR

Key points for GA4 in React SPA: 1) Set send_page_view: false to prevent duplicate counts; 2) Use useLocation to track route changes and send pageviews manually; 3) Set user_id after login for cross-device tracking.

Problem

Using GA4 default configuration in React SPA causes:

  1. Duplicate page_view counts on initial load
  2. No page_view triggered on route changes
  3. Unable to track logged-in users across devices

Root Cause

GA4 automatically sends a page_view event when the script loads. But SPA route changes don't refresh the page, so GA4 can't detect URL changes. Also, User-ID must be set manually after login; the default configuration can't identify users.

Solution

1. Disable Auto Page View

When loading GA4 in index.html, set send_page_view: false:

<script async src="https://www.googletagmanager.com/gtag/js?id=G-XXXXXXXXXX"></script>
<script>
  window.dataLayer = window.dataLayer || [];
  function gtag(){dataLayer.push(arguments);}
  gtag('js', new Date());
  gtag('config', 'G-XXXXXXXXXX', { send_page_view: false });
</script>

2. Create Analytics Component for Route Tracking

// src/components/Analytics.tsx
import { useEffect } from 'react'
import { useLocation } from 'react-router-dom'
import { useAuthStore } from '@/stores/authStore'

declare global {
  interface Window {
    gtag: (
      command: 'config' | 'event' | 'js' | 'set',
      targetIdOrDate: string | Date,
      params?: Record<string, unknown>
    ) => void
  }
}

export function Analytics() {
  const location = useLocation()
  const user = useAuthStore((state) => state.user)

  useEffect(() => {
    if (typeof window.gtag === 'function') {
      const params: Record<string, unknown> = {
        page_path: location.pathname + location.search,
      }
      // Add user_id for logged-in users
      if (user?.id) {
        params.user_id = user.id
      }
      window.gtag('config', 'G-XXXXXXXXXX', params)
    }
  }, [location, user?.id])

  return null
}

3. Wrap Router Root

// src/app/routes.tsx
import { createBrowserRouter, Outlet } from 'react-router-dom'
import { Analytics } from '@/components/Analytics'

function RootLayout() {
  return (
    <>
      <Analytics />
      <Outlet />
    </>
  )
}

export const router = createBrowserRouter([
  {
    element: <RootLayout />,
    children: [
      // your route config...
    ],
  },
])

4. Set User-ID on Login (Optional Enhancement)

// src/hooks/useAuth.ts
import { useEffect } from 'react'
import { supabase } from '@/services/supabase'

export function useAuth() {
  useEffect(() => {
    const { data: { subscription } } = supabase.auth.onAuthStateChange(
      (event, session) => {
        if (event === 'SIGNED_IN' && session) {
          // Set GA4 User-ID
          if (typeof window.gtag === 'function') {
            window.gtag('config', 'G-XXXXXXXXXX', {
              user_id: session.user.id
            })
          }
        }
      }
    )
    return () => subscription.unsubscribe()
  }, [])
}

FAQ

Q: Why doesn't GA4 track route changes in React SPA?

A: GA4 only sends page_view on page load by default. SPA route changes don't refresh the page, so you need to manually call gtag('config', ...) to send pageviews.

Q: What is GA4 User-ID used for?

A: User-ID links user behavior across different devices, enabling cross-device analytics, user retention analysis, and other advanced features. You need to enable User-ID in GA4 admin settings.

Q: How to verify GA4 configuration is correct?

A: Use Chrome extension "Google Tag Assistant" or GA4 DebugView (requires debug_mode). Check if page_view events fire on each route change and if user_id is set correctly.

Fix Tailwind Preflight Resetting Docusaurus Breadcrumbs Styles

· 2 min read

TL;DR

After adding Tailwind CSS to a Docusaurus project, Preflight's CSS Reset strips <ul> elements of their list-style, margin, and padding, breaking the breadcrumbs navigation. Fix by adding explicit override styles in custom.css.

Problem

After integrating Tailwind CSS into Docusaurus, the breadcrumbs navigation on doc pages displays incorrectly:

  • List styles are lost (list-style reset to none)
  • Spacing disappears (margin, padding reset to 0)
  • Layout may break (display may be affected)

Checking browser DevTools, the .breadcrumbs computed styles show these properties are reset by Preflight:

/* Tailwind Preflight reset */
ul, ol {
  list-style: none;
  margin: 0;
  padding: 0;
}

Root Cause

Tailwind Preflight is a CSS Reset based on modern-normalize, injected during the @tailwind base stage. It provides a consistent cross-browser baseline.

The problem: Docusaurus's .breadcrumbs component uses a <ul> element and relies on default list spacing plus Docusaurus's own layout styles. Preflight's reset rules sit later in the cascade at comparable specificity, so they override Docusaurus's defaults.

Since Preflight is injected globally, any third-party component using <ul>/<ol> may be affected.

Solution

Add explicit override styles in src/css/custom.css, using !important for specificity:

/* ========== Breadcrumbs ========== */
.theme-doc-breadcrumbs {
  margin-bottom: 1.5rem;
}

.breadcrumbs {
  display: flex !important;
  flex-wrap: wrap;
  align-items: center;
  list-style: none;
  margin: 0;
  padding: 0;
}

.breadcrumbs__item {
  display: flex !important;
  align-items: center;
  gap: 0.5rem;
}

Key points:

  1. .breadcrumbs uses display: flex !important to ensure horizontal layout
  2. list-style: none is expected behavior (breadcrumbs don't need bullets)
  3. .breadcrumbs__item adds gap: 0.5rem for element spacing

FAQ

Q: Why is !important needed?

Tailwind Preflight is injected during @tailwind base, and its selectors can tie with Docusaurus's default styles in specificity, so cascade order decides which wins. Using !important ensures the custom styles take effect and avoids specificity wars.

Q: What other components might be affected?

Any component using <ul>/<ol> may be affected, such as:

  • Navigation menus
  • Pagination components
  • Custom lists

How to check: In browser DevTools, search for list-style: none sources and confirm if it comes from Preflight.

Q: Can I disable Preflight?

Yes, but not recommended. In tailwind.config.js:

module.exports = {
  corePlugins: {
    preflight: false,
  },
}

Disabling it means you'll need to handle cross-browser consistency yourself, which may cause more issues.

Fix the Hidden Pitfall of httpx async with client.post()

· 2 min read

Encountered this issue while building a multi-service SaaS system. Documenting the root cause and solution.

TL;DR

Don't use async with client.post() pattern with httpx.AsyncClient. Create the client first, then call methods: response = await client.post().

Problem Symptoms

import httpx

async def call_api():
    async with httpx.AsyncClient() as client:
        async with client.post(url, json=data) as response:  # Problem code
            return response.json()

This code sometimes works, sometimes errors:

httpx.RemoteProtocolError: cannot write to closing transport
RuntimeError: Session is closed

Root Cause

The async with client.post() Trap

client.post() is a coroutine that resolves to a Response object; neither the coroutine nor the Response is an async context manager. Wrapping the call in async with causes:

  1. Premature connection closure: The connection closes immediately when the async with block ends, but the response may still be reading
  2. Resource contention: With concurrent requests, connection pool state becomes chaotic

Understanding httpx Context Managers Correctly

# ✅ Correct: client is the context manager
async with httpx.AsyncClient() as client:
    response = await client.post(url, json=data)
    return response.json()

# ❌ Wrong: treating response as context manager
async with client.post(url) as response:
    ...

Solution

Option 1: Single Request (Simple Scenarios)

async def call_api(url: str, data: dict) -> dict:
    async with httpx.AsyncClient() as client:
        response = await client.post(url, json=data)
        response.raise_for_status()
        return response.json()

Option 2: Reuse Client (High-Frequency Requests)

# Global or dependency injection
_client = httpx.AsyncClient(timeout=30.0)

async def call_api(url: str, data: dict) -> dict:
    response = await _client.post(url, json=data)
    response.raise_for_status()
    return response.json()

# On app shutdown
async def shutdown():
    await _client.aclose()

Option 3: FastAPI Dependency Injection

from collections.abc import AsyncIterator

from fastapi import Depends
from httpx import AsyncClient

async def get_http_client() -> AsyncIterator[AsyncClient]:
    async with AsyncClient(timeout=30.0) as client:
        yield client

@router.post("/proxy")
async def proxy(
    data: dict,
    client: AsyncClient = Depends(get_http_client)
):
    response = await client.post("https://external.api/endpoint", json=data)
    return response.json()

FAQ

Q: How should httpx async with be used correctly?

A: async with is only for managing AsyncClient lifecycle, not wrapping individual requests. Correct pattern: async with AsyncClient() as client: response = await client.post(...).

Q: Why does async with client.post() sometimes work?

A: It may work by chance in single-threaded, low-concurrency scenarios, but will fail under high concurrency or network latency. This is a hidden bug—don't rely on it.

Q: How to configure httpx timeout?

A: AsyncClient(timeout=30.0) or AsyncClient(timeout=httpx.Timeout(connect=5.0, read=30.0)).

Fix Pydantic v2 ORM Mode model_config Override Error

· 2 min read

TL;DR

Pydantic v2 no longer supports class Config. Use model_config = ConfigDict(from_attributes=True) instead. If your model has a field named model_config, you must rename it to avoid conflict with the reserved attribute.

Problem Symptoms

Error 1: class Config Not Working

from pydantic import BaseModel

class AgentResponse(BaseModel):
    id: str
    name: str

    class Config:
        orm_mode = True  # v1 style

PydanticUserError: `orm_mode` is not a valid config option. Did you mean `from_attributes`?

Error 2: model_config Field Conflict

class Agent(BaseModel):
    id: str
    model_config: dict  # Business field storing LLM config

    model_config = ConfigDict(from_attributes=True)
    # TypeError: 'dict' object is not callable

Your model has a business field called model_config (storing LLM configuration), which conflicts with Pydantic v2's reserved name.

Root Cause

1. Pydantic v2 Configuration Syntax Change

Pydantic v2 uses model_config as the configuration attribute name, no longer supporting nested class Config:

Pydantic v1                           Pydantic v2
class Config: orm_mode = True         model_config = ConfigDict(from_attributes=True)
class Config: schema_extra = {...}    model_config = ConfigDict(json_schema_extra={...})

2. model_config is a Reserved Name

model_config is a special attribute in Pydantic v2 and cannot be used as a business field name simultaneously.

Solution

1. Update ORM Mode Configuration

from pydantic import BaseModel, ConfigDict

class AgentResponse(BaseModel):
    model_config = ConfigDict(from_attributes=True)  # New syntax

    id: str
    name: str

2. Rename Conflicting Field

Rename the business field model_config to llm_config (or any non-reserved name):

# models/agent.py
class Agent(BaseModel):
    __tablename__ = "agent_agents"

    id: str
    llm_config: dict  # Renamed to avoid conflict

# schemas/agent.py
class AgentResponse(BaseModel):
    model_config = ConfigDict(from_attributes=True)

    agent_id: str
    llm_config: LlmConfig  # Keep consistent with model

3. Database Migration (If Needed)

If the database column also needs renaming:

# alembic/versions/xxx_rename_model_config.py
def upgrade():
    op.alter_column('agent_agents', 'model_config', new_column_name='llm_config')

def downgrade():
    op.alter_column('agent_agents', 'llm_config', new_column_name='model_config')

FAQ

Q: What did Pydantic v2 replace orm_mode with?

A: It's now from_attributes=True, and the configuration syntax changed from class Config to model_config = ConfigDict(...).

Q: Why is my model_config field causing errors?

A: model_config is a reserved attribute name in Pydantic v2 for configuring model behavior. If your business code has a field with the same name, you need to rename it.

Q: What other common ConfigDict options exist?

A: from_attributes (ORM mode), json_schema_extra (schema extension), str_strip_whitespace (auto strip whitespace), validate_assignment (validate on assignment).
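A quick sketch of two of those options in action; the User model here is purely illustrative:

```python
from pydantic import BaseModel, ConfigDict

class User(BaseModel):
    # str_strip_whitespace trims string inputs; validate_assignment re-runs
    # validation whenever a field is assigned after construction
    model_config = ConfigDict(str_strip_whitespace=True, validate_assignment=True)

    name: str

u = User(name="  alice  ")
print(u.name)  # whitespace stripped at construction time

u.name = " bob "
print(u.name)  # assignment is validated (and stripped) as well
```

Without validate_assignment, the second assignment would store " bob " verbatim; Pydantic only validates at construction by default.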

Vite Path Alias Configuration - Why You Need Two Configs

· 2 min read

TL;DR

Vite path aliases require simultaneous configuration in both vite.config.ts and tsconfig.json—neither works alone. Vite handles bundler resolution, TypeScript handles type checking and IDE intellisense.

Problem Symptoms

Only Configured vite.config.ts

// vite.config.ts
import { defineConfig } from 'vite'
import path from 'path'

export default defineConfig({
  resolve: {
    alias: {
      '@': path.resolve(__dirname, './src')
    }
  }
})

Build runs fine, but IDE shows errors:

Cannot find module '@/components/Button' or its corresponding type declarations.

Only Configured tsconfig.json

// tsconfig.json
{
  "compilerOptions": {
    "baseUrl": ".",
    "paths": {
      "@/*": ["src/*"]
    }
  }
}

IDE is happy, but Vite build fails:

[vite] Internal server error: Failed to resolve import "@/services/api"

Root Cause

Two Configs, Two Responsibilities

Config File       Owner           Purpose
vite.config.ts    Vite/esbuild    Path resolution during build
tsconfig.json     TypeScript      Type checking, IDE intellisense

Configuring only one:

  • Vite can build, but IDE shows red lines everywhere, no go-to-definition
  • IDE works, but vite dev / vite build can't find modules

Solution

Complete Configuration (Both Required)

// vite.config.ts
import { defineConfig } from 'vite'
import react from '@vitejs/plugin-react'
import path from 'path'

export default defineConfig({
  plugins: [react()],
  resolve: {
    alias: {
      '@': path.resolve(__dirname, './src')
    }
  }
})

// tsconfig.json
{
  "compilerOptions": {
    "target": "ES2020",
    "module": "ESNext",
    "moduleResolution": "bundler",
    "baseUrl": ".",
    "paths": {
      "@/*": ["src/*"]
    }
  },
  "include": ["src"]
}

Verify Configuration Works

// src/services/api.ts
export const api = { ... }

// src/App.tsx - Should have go-to-definition, intellisense, and build correctly
import { api } from '@/services/api'

Multiple Aliases Example

// vite.config.ts
resolve: {
  alias: {
    '@': path.resolve(__dirname, './src'),
    '@components': path.resolve(__dirname, './src/components'),
    '@hooks': path.resolve(__dirname, './src/hooks'),
  }
}

// tsconfig.json
"paths": {
  "@/*": ["src/*"],
  "@components/*": ["src/components/*"],
  "@hooks/*": ["src/hooks/*"]
}

FAQ

Q: Why does Vite path alias need two configurations?

A: Vite (based on esbuild/rollup) and TypeScript are independent tools. Vite handles module resolution during bundling, TypeScript handles compile-time type checking and IDE support. They don't share configuration.

Q: Still getting errors after configuration?

A: Restart IDE and Vite dev server. VSCode: Cmd+Shift+P → "TypeScript: Restart TS Server", Terminal: Ctrl+C to restart npm run dev.

Q: path.resolve errors with __dirname?

A: __dirname is not defined in ES modules. Make sure path is imported (import path from 'path'), and when your config runs as an ES module, derive __dirname from import.meta.url: import { fileURLToPath } from 'url', then const __dirname = path.dirname(fileURLToPath(import.meta.url)).

Enable VSCode Copilot Agent Mode for Automated Programming

· 3 min read

TL;DR

VSCode Copilot Agent Mode is an experimental feature that lets AI automatically execute multi-step tasks (including editing files and running terminal commands). Enable it by adding "github.copilot.chat.agent.enabled": true to your settings.json. Perfect for repetitive refactoring and batch file modifications.

Problem

Traditional Copilot Chat only suggests code snippets, requiring you to:

  1. Manually copy the code
  2. Switch to the target file
  3. Paste and adjust
  4. Repeat for each change

This workflow becomes extremely inefficient when modifying multiple files.

Root Cause

Copilot's Ask Mode is designed as a "suggester": it outputs code but doesn't execute actions. This is a safety feature, but for developers who trust AI, it adds significant manual overhead.

Agent Mode acts as an "executor": AI can directly edit files and run commands, enabling true automated programming.

Solution

1. Enable Agent Mode

Add to VSCode settings.json:

{
  "github.copilot.chat.agent.enabled": true
}

Or search for @id:github.copilot.chat.agent.enabled in Settings and check the box.

2. Switch to Agent Mode

In the Copilot Chat panel, click the mode dropdown and switch from "Ask" to "Agent":

┌─────────────────────────────┐
│ Ask ▼ │ Agent ▼ │ Edit │
└─────────────────────────────┘

3. Usage Examples

Scenario: Batch Rename Function

Rename getUserName to fetchUserProfile in all files under src/utils

Agent Mode will automatically:

  1. Scan the src/utils directory
  2. Find all files containing getUserName
  3. Modify each file and save

Scenario: Add TypeScript Types

Add return type annotations to all exported functions in src/api/*.ts

4. Tool Permission Control

Agent Mode requests confirmation before executing sensitive operations. Adjust in settings:

{
  "github.copilot.chat.agent.autoToolConfirmation": {
    "readFile": true,       // Auto-allow file reading
    "editFile": false,      // Require confirmation for edits
    "runInTerminal": false  // Require confirmation for commands
  }
}

5. Available Tools

Agent Mode can call these tools:

Tool             Function
readFile         Read file contents
editFile         Edit files
createFile       Create new files
deleteFile       Delete files
runInTerminal    Execute terminal commands
listDirectory    List directory contents
search           Search code

FAQ

Q: What's the difference between Agent Mode and Ask Mode?

Ask Mode only suggests code, requiring manual copy-paste; Agent Mode can directly execute file edits and terminal commands for automation.

Q: Is Agent Mode safe?

Agent Mode requests confirmation before sensitive operations (like deleting files or running commands). Always use it in version-controlled repositories for easy rollback.

Q: Why can't I find the Agent Mode option?

Ensure you have the latest Copilot Chat extension (v0.15+) and enable github.copilot.chat.agent.enabled in settings.

Q: What terminal commands can Agent Mode execute?

Theoretically any command, but stick to safe development commands (like npm install, npm run build). Avoid high-risk operations like deletions or deployments.

Implementing Cascade Select Dropdowns in React

· 3 min read

TL;DR

The key to cascade selection: when parent changes, reset child to a valid value. Use Record<string, Option[]> for type-safe data mapping, and update child state inside onValueChange callback.

Problem

When implementing Provider → Model cascade selection, after switching Provider:

// Before: provider = "openai", model = "gpt-4o"
// After: provider = "anthropic", model = "gpt-4o" ❌

// Model dropdown shows blank because "gpt-4o" is not in anthropic's model list
<Select value={model}> // value not in options, displays blank

Or when submitting the form, Model value is from the previous Provider, causing backend validation to fail.

Root Cause

In React controlled components, the value must exist in options. When Provider changes, Model's options list updates, but model state retains the old value. If the old value isn't in the new options, the Select component displays blank.

The key issue: only updated the options data, didn't sync the state value.

Solution

1. Define Data Structure

const AVAILABLE_PROVIDERS = [
  { value: 'deepseek', label: 'DeepSeek' },
  { value: 'openai', label: 'OpenAI' },
  { value: 'anthropic', label: 'Anthropic' },
]

// Use Record type for mapping
const AVAILABLE_MODELS: Record<string, { value: string; label: string }[]> = {
  deepseek: [
    { value: 'deepseek-chat', label: 'DeepSeek Chat' },
    { value: 'deepseek-reasoner', label: 'DeepSeek Reasoner' },
  ],
  openai: [
    { value: 'gpt-4o', label: 'GPT-4o' },
    { value: 'gpt-4o-mini', label: 'GPT-4o Mini' },
  ],
  anthropic: [
    { value: 'claude-sonnet-4-20250514', label: 'Claude Sonnet 4' },
    { value: 'claude-3-5-sonnet-20241022', label: 'Claude 3.5 Sonnet' },
  ],
}

2. Initialize State

const [provider, setProvider] = useState('deepseek')
const [model, setModel] = useState('deepseek-chat') // Must be valid for initial provider

3. Key: Reset Model When Provider Changes

const handleProviderChange = (value: string | null) => {
  if (value) {
    setProvider(value)
    // Core: reset model to first option of new provider
    const models = AVAILABLE_MODELS[value]
    if (models && models.length > 0) {
      setModel(models[0].value)
    }
  }
}

4. Complete Component Example

import { useState } from 'react'
import { Select, SelectContent, SelectItem, SelectTrigger, SelectValue } from '@/components/ui/select'

function CascadeSelect() {
  const [provider, setProvider] = useState('deepseek')
  const [model, setModel] = useState('deepseek-chat')

  const handleProviderChange = (value: string | null) => {
    if (value) {
      setProvider(value)
      const models = AVAILABLE_MODELS[value]
      if (models && models.length > 0) {
        setModel(models[0].value)
      }
    }
  }

  return (
    <>
      {/* Provider Select */}
      <Select value={provider} onValueChange={handleProviderChange}>
        <SelectTrigger>
          <SelectValue placeholder="Select provider" />
        </SelectTrigger>
        <SelectContent>
          {AVAILABLE_PROVIDERS.map((p) => (
            <SelectItem key={p.value} value={p.value}>
              {p.label}
            </SelectItem>
          ))}
        </SelectContent>
      </Select>

      {/* Model Select - dynamic options based on provider */}
      <Select value={model} onValueChange={(v) => v && setModel(v)}>
        <SelectTrigger>
          <SelectValue placeholder="Select model" />
        </SelectTrigger>
        <SelectContent>
          {(AVAILABLE_MODELS[provider] || []).map((m) => (
            <SelectItem key={m.value} value={m.value}>
              {m.label}
            </SelectItem>
          ))}
        </SelectContent>
      </Select>
    </>
  )
}

5. Form Reset

Reset form when closing Dialog to avoid stale state:

const resetForm = () => {
  setProvider('deepseek')
  setModel('deepseek-chat') // Reset to default for provider
}

const handleOpenChange = (newOpen: boolean) => {
  if (!newOpen) {
    resetForm()
  }
  onOpenChange(newOpen)
}

FAQ

Q: Why does my cascade select child dropdown show blank after parent changes?

A: In controlled components, value must exist in options. After the parent changes, the old child value is no longer in the new options list, so the Select renders blank. Fix it in the parent's onValueChange callback by syncing the child state to the first value of the new options list.

Q: How to type cascade select data in TypeScript?

A: Use Record<string, Option[]> to map parent to children, e.g., Record<string, { value: string; label: string }[]>. This is type-safe and easy to extend.

Q: What happens when Select value doesn't match any option?

A: Most UI libraries (Radix, MUI, Ant Design) display blank or placeholder without errors. This is expected behavior for controlled components—ensure value is always a valid option.

Three Pitfalls of Integrating Supabase Auth with FastAPI

· 4 min read

Hit these issues while building a SaaS authentication system for a client; documenting root causes and fixes.

TL;DR

Supabase Auth + FastAPI integration has three common pitfalls: the JWKS path is non-standard, ES256 signatures must be converted to DER format, and first-time users have no record in the local database. This post provides complete solutions.

Problem Symptoms

Pitfall 1: JWKS Path Returns 404

GET https://xxx.supabase.co/.well-known/jwks.json
# 404 Not Found

All JWT verification requests return 401 Invalid Token.

Pitfall 2: ES256 Signature Verification Fails

from jose import jwt
payload = jwt.decode(token, key, algorithms=["ES256"])
# JWTError: Signature verification failed

The public key is definitely correct, yet signature verification keeps failing.

Pitfall 3: No Local Record on First Login

# When creating an Agent
agent = Agent(user_id=current_user["user_id"], ...)
db.add(agent)
# ForeignKeyViolation: user_id does not exist

The Supabase Auth user passed JWT verification, but the local agent_users table has no record for that user.

Root Cause

Pitfall 1: Supabase's Non-standard JWKS Path

Standard OAuth/OIDC servers serve JWKS at /.well-known/jwks.json, but Supabase nests its auth service under the /auth/v1/ subpath:

Standard path              Supabase path
/.well-known/jwks.json     /auth/v1/.well-known/jwks.json

Pitfall 2: ES256 Raw Signature vs DER Format

Supabase JWTs are signed with ES256 (the P-256 curve). The signature inside a JWT is in raw format (r || s concatenated, 64 bytes), but the verify() method of Python's cryptography library expects DER-encoded ASN.1:

Raw:     r (32 bytes) || s (32 bytes) = 64 bytes
DER: 0x30 <len> 0x02 <r_len> <r> 0x02 <s_len> <s>

python-jose's jwt.decode() has compatibility issues with ES256, so the signature must be verified manually.
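The raw-to-DER conversion can be sanity-checked in isolation with helpers from the cryptography library. A minimal sketch; the r and s values below are made-up stand-ins for a real signature pair:

```python
from cryptography.hazmat.primitives.asymmetric.utils import (
    decode_dss_signature,
    encode_dss_signature,
)

# Hypothetical r and s values standing in for a real P-256 signature
r, s = 12345, 67890

der = encode_dss_signature(r, s)
print(hex(der[0]))                          # 0x30: the DER SEQUENCE tag shown above
print(decode_dss_signature(der) == (r, s))  # the encoding round-trips losslessly
```

encode_dss_signature is the same helper the full verification code below relies on, so this round-trip confirms the format the verify() call expects.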

Pitfall 3: Auth and Data Are Separate

Supabase Auth is a standalone service: after sign-up/login, the user exists only in Supabase's auth.users table. The local database's agent_users table must be synced manually.

Solution

1. The Correct JWKS URL

# config.py
class Settings(BaseSettings):
    supabase_url: str = "https://xxx.supabase.co"

    @property
    def jwks_url(self) -> str:
        # Key: the /auth/v1/ prefix
        return f"{self.supabase_url}/auth/v1/.well-known/jwks.json"

2. ES256 Signature Verification (Complete Code)

import json
import base64
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives.asymmetric.utils import encode_dss_signature

def _base64url_decode(data: str) -> bytes:
    """Base64url decode, restoring padding automatically."""
    rem = len(data) % 4
    if rem > 0:
        data += "=" * (4 - rem)
    return base64.urlsafe_b64decode(data)

def _raw_to_der_signature(raw_sig: bytes) -> bytes:
    """Convert a raw ECDSA signature (r||s) to DER format."""
    # P-256: r and s are 32 bytes each
    r = int.from_bytes(raw_sig[:32], "big")
    s = int.from_bytes(raw_sig[32:], "big")
    return encode_dss_signature(r, s)

def verify_es256_signature(token: str, public_key_jwk: dict) -> dict:
    """Verify an ES256 JWT signature and return the payload."""
    parts = token.split(".")
    if len(parts) != 3:
        raise ValueError("Invalid JWT format")

    header_b64, payload_b64, signature_b64 = parts

    # 1. Build the EC public key
    x = _base64url_decode(public_key_jwk["x"])
    y = _base64url_decode(public_key_jwk["y"])
    x_int = int.from_bytes(x, "big")
    y_int = int.from_bytes(y, "big")

    public_key = ec.EllipticCurvePublicNumbers(
        x_int, y_int, ec.SECP256R1()
    ).public_key(default_backend())

    # 2. Verify the signature
    message = f"{header_b64}.{payload_b64}".encode()
    raw_signature = _base64url_decode(signature_b64)
    der_signature = _raw_to_der_signature(raw_signature)

    public_key.verify(
        der_signature,
        message,
        ec.ECDSA(hashes.SHA256())
    )

    # 3. Return the payload
    return json.loads(_base64url_decode(payload_b64))

3. User Sync Service

# app/services/user_service.py
from sqlalchemy import select
from sqlalchemy.ext.asyncio import AsyncSession
from app.models.user import AgentUser

async def ensure_user_exists(
    db: AsyncSession,
    user_id: str,
    email: str,
    plan: str = "free"
) -> AgentUser:
    """Make sure the user exists in the local database (synced from Supabase Auth)."""
    # Check whether the user already exists
    result = await db.execute(
        select(AgentUser).where(AgentUser.user_id == user_id)
    )
    user = result.scalar_one_or_none()

    if user:
        return user

    # Create a new user
    user = AgentUser(
        user_id=user_id,
        email=email,
        plan=plan,
        role="user"
    )
    db.add(user)
    await db.commit()
    await db.refresh(user)
    return user

4. Call It Before Creating Resources

# app/routers/agents.py
@router.post("/")
async def create_agent(
    input: CreateAgentInput,
    db: AsyncSession = Depends(get_db),
    current_user: dict = Depends(get_current_user)
):
    # Key: make sure the user exists
    user = await ensure_user_exists(
        db,
        user_id=current_user["user_id"],
        email=current_user["email"],
        plan=current_user["plan"]
    )

    # Now it is safe to create the Agent
    agent = Agent(
        user_id=user.user_id,
        name=input.name,
        llm_config=input.llm_config.model_dump()
    )
    ...

FAQ

Q: What if Supabase JWT verification returns 404?

A: Supabase's JWKS path is /auth/v1/.well-known/jwks.json, not the standard /.well-known/jwks.json. Check your JWKS URL configuration.

Q: How do I fix python-jose failing to verify ES256 signatures?

A: python-jose's ES256 support is incomplete. Verify manually with the cryptography library, converting the JWT's raw signature (r||s, 64 bytes) to DER format.

Q: How does FastAPI sync Supabase Auth users to the local database?

A: Call ensure_user_exists() at the entry point of APIs that need a user record (such as resource creation), extracting user info from the JWT and syncing it to the local table.

Q: Which field holds user_id in a Supabase JWT?

A: The sub field contains the user UUID, email holds the email address, and app_metadata.plan holds the subscription plan (a custom field).