Merged
27 changes: 15 additions & 12 deletions CLAUDE.md
@@ -88,10 +88,10 @@ PGPASSWORD=testpwd1
**Core Packages**:
- `ir/` - Intermediate Representation (IR) package - separate Go module
- Schema objects (tables, indexes, functions, procedures, triggers, policies, etc.)
- SQL parser using pg_query_go
- Database inspector using pgx
- Database inspector using pgx (queries pg_catalog for schema extraction)
- Schema normalizer
- Identifier quoting utilities
- Note: Parser removed in favor of embedded-postgres approach

**Internal Packages** (`internal/`):
- `diff/` - Schema comparison and migration DDL generation
@@ -105,13 +105,15 @@ PGPASSWORD=testpwd1

### Key Architecture Patterns

**Schema Representation**: Uses an Intermediate Representation (IR) to normalize schema objects from both parsed SQL files and live database introspection. This allows comparing schemas from different sources.
**Schema Representation**: Uses an Intermediate Representation (IR) to normalize schema objects from database introspection. Both desired state (from user SQL files) and current state (from target database) are extracted by inspecting PostgreSQL databases.
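A minimal sketch of what comparing two IR values looks like. The struct and field names here are hypothetical simplifications; the real definitions in `ir/ir.go` carry far more detail (constraints, defaults, comments, owners):

```go
package main

import "fmt"

// Hypothetical, trimmed-down IR shapes for illustration only.
type Column struct {
	Name     string
	Type     string
	Nullable bool
}

type Table struct {
	Schema  string
	Name    string
	Columns []Column
}

// sameTable reports whether two tables are structurally identical.
// Because both sides come from the same inspector, comparison is a
// straightforward field-by-field check on normalized IR values.
func sameTable(a, b Table) bool {
	if a.Schema != b.Schema || a.Name != b.Name || len(a.Columns) != len(b.Columns) {
		return false
	}
	for i := range a.Columns {
		if a.Columns[i] != b.Columns[i] {
			return false
		}
	}
	return true
}

func main() {
	desired := Table{Schema: "public", Name: "users", Columns: []Column{{Name: "id", Type: "bigint"}}}
	current := Table{Schema: "public", Name: "users", Columns: []Column{{Name: "id", Type: "integer"}}}
	fmt.Println(sameTable(desired, current)) // type differs, so not equal
}
```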

**Embedded Postgres for Desired State**: The `plan` command spins up a temporary embedded PostgreSQL instance, applies the user's SQL files to it, then inspects that database to get the desired state IR. This ensures both desired and current states come from the same source (database inspection), eliminating parser/inspector format differences.

**Migration Planning**: The `diff` package compares IR representations to generate a sequence of migration steps with proper dependency ordering (topological sort).
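The dependency ordering can be sketched with Kahn's algorithm. This is an illustrative stand-alone version, not the actual implementation in `internal/diff/`:

```go
package main

import "fmt"

// topoSort orders steps so every dependency runs before its dependents.
// deps maps a step to the steps it depends on. Returns an error if the
// dependency graph contains a cycle.
func topoSort(steps []string, deps map[string][]string) ([]string, error) {
	indegree := make(map[string]int, len(steps))
	dependents := make(map[string][]string)
	for _, s := range steps {
		indegree[s] = 0
	}
	for s, ds := range deps {
		for _, d := range ds {
			indegree[s]++
			dependents[d] = append(dependents[d], s)
		}
	}
	var queue, order []string
	for _, s := range steps { // iterate the slice, not the map, for determinism
		if indegree[s] == 0 {
			queue = append(queue, s)
		}
	}
	for len(queue) > 0 {
		s := queue[0]
		queue = queue[1:]
		order = append(order, s)
		for _, dep := range dependents[s] {
			indegree[dep]--
			if indegree[dep] == 0 {
				queue = append(queue, dep)
			}
		}
	}
	if len(order) != len(steps) {
		return nil, fmt.Errorf("dependency cycle detected")
	}
	return order, nil
}

func main() {
	// The table must be created before the index and view that reference it.
	order, _ := topoSort(
		[]string{"create index idx", "create view v", "create table t"},
		map[string][]string{
			"create index idx": {"create table t"},
			"create view v":    {"create table t"},
		},
	)
	fmt.Println(order[0]) // → create table t
}
```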

**Database Integration**: Uses `pgx/v5` for database connections and `embedded-postgres` for integration testing against real PostgreSQL instances (no Docker required).
**Database Integration**: Uses `pgx/v5` for database connections and `embedded-postgres` (v1.29.0) for both the plan command (temporary instances) and integration testing (no Docker required).

**SQL Parsing**: Leverages `pg_query_go/v6` (libpg_query bindings) for parsing PostgreSQL DDL statements. For understanding PostgreSQL syntax, see the **PostgreSQL Syntax Reference** skill.
**SQL Parsing**: Uses `pg_query_go/v6` (libpg_query bindings) for limited SQL expression parsing within the inspector (e.g., view definitions, CHECK constraints). The parser module was removed in favor of the embedded-postgres approach.

**Modular Architecture**: The IR package is a separate Go module that can be versioned and used independently.

@@ -121,10 +123,11 @@ PGPASSWORD=testpwd1

1. Add IR representation in `ir/ir.go`
2. Add database introspection logic in `ir/inspector.go` (consult **pg_dump Reference** skill for system catalog queries)
3. Add parsing logic in `ir/parser.go` (consult **PostgreSQL Syntax Reference** skill for grammar)
4. Add diff logic in `internal/diff/`
5. Add test cases in `testdata/diff/create_[object_type]/` (see **Run Tests** skill)
6. Validate with live database (see **Validate with Database** skill)
3. Add diff logic in `internal/diff/`
4. Add test cases in `testdata/diff/create_[object_type]/` (see **Run Tests** skill)
5. Validate with live database (see **Validate with Database** skill)

Note: Parser logic is no longer needed; both desired and current states come from database inspection.
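For the diff step, the typical pattern is a function that compares desired and current IR maps and emits DDL for the differences. A hypothetical sketch for a made-up `Sequence` type (the real shapes and generators live in `ir/ir.go` and `internal/diff/`):

```go
package main

import "fmt"

// Sequence is a stand-in for a new IR object type.
type Sequence struct {
	Name      string
	Increment int64
}

// diffSequences returns DDL for sequences that must be dropped or created
// to move the database from current to desired state. Modification
// handling (ALTER) is omitted from this sketch.
func diffSequences(desired, current map[string]Sequence) []string {
	var ddl []string
	for name := range current {
		if _, ok := desired[name]; !ok {
			ddl = append(ddl, fmt.Sprintf("DROP SEQUENCE %s;", name))
		}
	}
	for name := range desired {
		if _, ok := current[name]; !ok {
			ddl = append(ddl, fmt.Sprintf("CREATE SEQUENCE %s;", name))
		}
	}
	return ddl
}

func main() {
	ddl := diffSequences(
		map[string]Sequence{"order_id_seq": {Name: "order_id_seq", Increment: 1}},
		map[string]Sequence{},
	)
	fmt.Println(ddl[0]) // → CREATE SEQUENCE order_id_seq;
}
```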

### Debugging Schema Extraction

@@ -197,10 +200,10 @@ The tool supports comprehensive PostgreSQL schema objects (see `ir/ir.go` for co

**IR Package** (separate Go module at `./ir`):
- `ir/ir.go` - Core IR data structures for all schema objects
- `ir/parser.go` - SQL DDL parsing using pg_query_go
- `ir/inspector.go` - Database introspection using pgx
- `ir/normalizer.go` - Schema normalization
- `ir/inspector.go` - Database introspection using pgx (queries pg_catalog)
- `ir/normalize.go` - Schema normalization (version-specific differences, type mappings)
- `ir/quote.go` - Identifier quoting utilities
- Note: `ir/parser.go` removed - now using embedded-postgres for desired state
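The quoting utility follows PostgreSQL's standard identifier rules: wrap the name in double quotes and double any embedded quotes. A simplified sketch (the real code in `ir/quote.go` may also skip quoting for already-safe lowercase identifiers):

```go
package main

import (
	"fmt"
	"strings"
)

// quoteIdentifier quotes a PostgreSQL identifier: embedded double quotes
// are doubled and the whole name is wrapped in double quotes, making the
// identifier safe regardless of case or special characters.
func quoteIdentifier(name string) string {
	return `"` + strings.ReplaceAll(name, `"`, `""`) + `"`
}

func main() {
	fmt.Println(quoteIdentifier(`weird"name`)) // → "weird""name"
}
```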

**Diff Package** (`internal/diff/`):
- `diff.go` - Main diff logic, topological sorting
200 changes: 139 additions & 61 deletions cmd/apply/apply.go
@@ -66,65 +66,60 @@ func init() {
ApplyCmd.MarkFlagsMutuallyExclusive("file", "plan")
}

// RunApply executes the apply command logic. Exported for testing.
func RunApply(cmd *cobra.Command, args []string) error {
// Validate that either --file or --plan is provided
if applyFile == "" && applyPlan == "" {
return fmt.Errorf("either --file or --plan must be specified")
}

// Derive final password: use provided password or check environment variable
finalPassword := applyPassword
if finalPassword == "" {
if envPassword := os.Getenv("PGPASSWORD"); envPassword != "" {
finalPassword = envPassword
}
}
// ApplyConfig holds configuration for apply execution
type ApplyConfig struct {
Host string
Port int
DB string
User string
Password string
Schema string
File string // Desired state file (optional, used with embeddedPG)
Plan *plan.Plan // Pre-generated plan (optional, alternative to File)
AutoApprove bool
NoColor bool
LockTimeout string
ApplicationName string
}

// ApplyMigration applies a migration plan to update a database schema.
// The caller must provide either:
// - A pre-generated plan in config.Plan, OR
// - A desired state file in config.File with a non-nil embeddedPG instance
//
// If config.File is provided, embeddedPG is used to generate the plan.
// The caller is responsible for managing the embeddedPG lifecycle (creation and cleanup).
func ApplyMigration(config *ApplyConfig, embeddedPG *util.EmbeddedPostgres) error {
var migrationPlan *plan.Plan
var err error

if applyPlan != "" {
// Load plan from JSON file
planData, err := os.ReadFile(applyPlan)
if err != nil {
return fmt.Errorf("failed to read plan file: %w", err)
}

migrationPlan, err = plan.FromJSON(planData)
if err != nil {
return fmt.Errorf("failed to load plan: %w", err)
// Either use provided plan or generate from file
if config.Plan != nil {
migrationPlan = config.Plan
} else if config.File != "" {
// Generate plan from file (requires embeddedPG)
if embeddedPG == nil {
return fmt.Errorf("embeddedPG is required when generating plan from file")
}

// Validate that the plan was generated by the same pgschema version
currentVersion := version.App()
if migrationPlan.PgschemaVersion != currentVersion {
return fmt.Errorf("plan version mismatch: plan was generated by pgschema version %s, but current version is %s. Please regenerate the plan with the current version", migrationPlan.PgschemaVersion, currentVersion)
}

// Validate that the plan format version is supported (forward compatibility)
supportedPlanVersion := version.PlanFormat()
if migrationPlan.Version != supportedPlanVersion {
return fmt.Errorf("unsupported plan format version: plan uses format version %s, but this pgschema version only supports format version %s. Please upgrade pgschema to apply this plan", migrationPlan.Version, supportedPlanVersion)
}
} else {
// Generate plan from file (existing logic)
config := &planCmd.PlanConfig{
Host: applyHost,
Port: applyPort,
DB: applyDB,
User: applyUser,
Password: finalPassword,
Schema: applySchema,
File: applyFile,
ApplicationName: applyApplicationName,
planConfig := &planCmd.PlanConfig{
Host: config.Host,
Port: config.Port,
DB: config.DB,
User: config.User,
Password: config.Password,
Schema: config.Schema,
File: config.File,
ApplicationName: config.ApplicationName,
}

// Generate plan using shared logic
migrationPlan, err = planCmd.GeneratePlan(config)
migrationPlan, err = planCmd.GeneratePlan(planConfig, embeddedPG)
if err != nil {
return err
}
} else {
return fmt.Errorf("either config.Plan or config.File must be provided")
}

// Load ignore configuration for fingerprint validation
@@ -135,7 +130,7 @@ func RunApply(cmd *cobra.Command, args []string) error {

// Validate schema fingerprint if plan has one
if migrationPlan.SourceFingerprint != nil {
err := validateSchemaFingerprint(migrationPlan, applyHost, applyPort, applyDB, applyUser, finalPassword, applySchema, applyApplicationName, ignoreConfig)
err := validateSchemaFingerprint(migrationPlan, config.Host, config.Port, config.DB, config.User, config.Password, config.Schema, config.ApplicationName, ignoreConfig)
if err != nil {
return err
}
@@ -148,10 +143,10 @@ func RunApply(cmd *cobra.Command, args []string) error {
}

// Display the plan
fmt.Print(migrationPlan.HumanColored(!applyNoColor))
fmt.Print(migrationPlan.HumanColored(!config.NoColor))

// Prompt for approval if not auto-approved
if !applyAutoApprove {
if !config.AutoApprove {
fmt.Print("\nDo you want to apply these changes? (yes/no): ")
reader := bufio.NewReader(os.Stdin)
response, err := reader.ReadString('\n')
@@ -171,13 +166,13 @@ func RunApply(cmd *cobra.Command, args []string) error {

// Build database connection for applying changes
connConfig := &util.ConnectionConfig{
Host: applyHost,
Port: applyPort,
Database: applyDB,
User: applyUser,
Password: finalPassword,
Host: config.Host,
Port: config.Port,
Database: config.DB,
User: config.User,
Password: config.Password,
SSLMode: "prefer",
ApplicationName: applyApplicationName,
ApplicationName: config.ApplicationName,
}

conn, err := util.Connect(connConfig)
@@ -189,19 +184,19 @@ func RunApply(cmd *cobra.Command, args []string) error {
ctx := context.Background()

// Set lock timeout before executing changes
if applyLockTimeout != "" {
_, err = conn.ExecContext(ctx, fmt.Sprintf("SET lock_timeout = '%s'", applyLockTimeout))
if config.LockTimeout != "" {
_, err = conn.ExecContext(ctx, fmt.Sprintf("SET lock_timeout = '%s'", config.LockTimeout))
if err != nil {
return fmt.Errorf("failed to set lock timeout: %w", err)
}
}

// Set search_path to target schema for unqualified table references
if applySchema != "" && applySchema != "public" {
quotedSchema := ir.QuoteIdentifier(applySchema)
if config.Schema != "" && config.Schema != "public" {
quotedSchema := ir.QuoteIdentifier(config.Schema)
_, err = conn.ExecContext(ctx, fmt.Sprintf("SET search_path TO %s, public", quotedSchema))
if err != nil {
return fmt.Errorf("failed to set search_path to target schema '%s': %w", applySchema, err)
return fmt.Errorf("failed to set search_path to target schema '%s': %w", config.Schema, err)
}
fmt.Printf("Set search_path to: %s, public\n", quotedSchema)
}
@@ -229,6 +224,89 @@ func RunApply(cmd *cobra.Command, args []string) error {
return nil
}

// RunApply executes the apply command logic. Exported for testing.
func RunApply(cmd *cobra.Command, args []string) error {
// Validate that either --file or --plan is provided
if applyFile == "" && applyPlan == "" {
return fmt.Errorf("either --file or --plan must be specified")
}

// Derive final password: use provided password or check environment variable
finalPassword := applyPassword
if finalPassword == "" {
if envPassword := os.Getenv("PGPASSWORD"); envPassword != "" {
finalPassword = envPassword
}
}

// Build configuration
config := &ApplyConfig{
Host: applyHost,
Port: applyPort,
DB: applyDB,
User: applyUser,
Password: finalPassword,
Schema: applySchema,
AutoApprove: applyAutoApprove,
NoColor: applyNoColor,
LockTimeout: applyLockTimeout,
ApplicationName: applyApplicationName,
}

var embeddedPG *util.EmbeddedPostgres
var err error

// If using --plan flag, load plan from JSON file
if applyPlan != "" {
planData, err := os.ReadFile(applyPlan)
if err != nil {
return fmt.Errorf("failed to read plan file: %w", err)
}

migrationPlan, err := plan.FromJSON(planData)
if err != nil {
return fmt.Errorf("failed to load plan: %w", err)
}

// Validate that the plan was generated by the same pgschema version
currentVersion := version.App()
if migrationPlan.PgschemaVersion != currentVersion {
return fmt.Errorf("plan version mismatch: plan was generated by pgschema version %s, but current version is %s. Please regenerate the plan with the current version", migrationPlan.PgschemaVersion, currentVersion)
}

// Validate that the plan format version is supported (forward compatibility)
supportedPlanVersion := version.PlanFormat()
if migrationPlan.Version != supportedPlanVersion {
return fmt.Errorf("unsupported plan format version: plan uses format version %s, but this pgschema version only supports format version %s. Please upgrade pgschema to apply this plan", migrationPlan.Version, supportedPlanVersion)
}

config.Plan = migrationPlan
} else {
// Using --file flag, will need embedded postgres
config.File = applyFile

// Create embedded PostgreSQL for desired state validation
planConfig := &planCmd.PlanConfig{
Host: applyHost,
Port: applyPort,
DB: applyDB,
User: applyUser,
Password: finalPassword,
Schema: applySchema,
File: applyFile,
ApplicationName: applyApplicationName,
}
embeddedPG, err = planCmd.CreateEmbeddedPostgresForPlan(planConfig)
if err != nil {
return err
}
defer embeddedPG.Stop()
}

// Apply the migration
return ApplyMigration(config, embeddedPG)
}

// validateSchemaFingerprint validates that the current database schema matches the expected fingerprint
func validateSchemaFingerprint(migrationPlan *plan.Plan, host string, port int, db, user, password, schema, applicationName string, ignoreConfig *ir.IgnoreConfig) error {
// Get current state from target database with ignore config