Merged
143 changes: 143 additions & 0 deletions .github/copilot-instructions.md
@@ -0,0 +1,143 @@
# Copilot Instructions for Repogen

## Project Overview

Repogen is a CLI tool written in Go that generates static repository structures for multiple package managers. It scans directories for packages, generates appropriate metadata files, and signs repositories with GPG/RSA keys.

### Supported Package Types

- **Debian/APT** (.deb packages)
- **Yum/RPM** (.rpm packages)
- **Alpine/APK** (.apk packages)
- **Arch Linux/Pacman** (.pkg.tar.zst, .pkg.tar.xz, .pkg.tar.gz)
- **Homebrew** (bottle files)
- **systemd-sysext** (.raw, .raw.zst, .raw.xz, .raw.gz)

### Project Structure

```
cmd/repogen/ # CLI entry point
internal/
cli/ # Command-line interface (Cobra commands)
generator/ # Repository generators for each package type
apk/ # Alpine APK repository generator
deb/ # Debian APT repository generator
homebrew/ # Homebrew bottle repository generator
pacman/ # Arch Linux Pacman repository generator
rpm/ # RPM/Yum repository generator
sysext/ # systemd-sysext repository generator
models/ # Data models (Package, RepositoryConfig, errors)
scanner/ # Package detection and file scanning
signer/ # GPG and RSA signing utilities
utils/ # Shared utilities (checksums, compression, file ops)
test/ # Integration tests and fixtures
```

## Go Development Best Practices

### Code Style

- Follow standard Go conventions and idioms
- Use meaningful variable and function names
- Keep functions focused and small
- Add comments for exported functions and types (godoc style)
- Use `context.Context` for cancellation where appropriate
- Handle errors explicitly; never ignore errors
- Use structured logging with logrus

### Error Handling

- Return errors rather than panicking
- Wrap errors with context using `fmt.Errorf("context: %w", err)`
- Define custom error types in `models/errors.go` when appropriate

### Testing

- Write unit tests for new functionality
- Use table-driven tests where appropriate
- Test both success and error cases
- Place tests in the same package as the code being tested (`*_test.go`)

## After Every Code Change

After making any code changes, you MUST run:

```bash
make build
```

Then format the code using:

```bash
make fmt
```

Then run the linter:

```bash
make lint
```

Fix any linting errors before considering the task complete. Common linting issues include:

- Unused imports or variables
- Missing error checks
- Ineffective assignments
- Formatting issues

## Documentation Requirements

When making changes, update relevant documentation:

1. **README.md**: Update if you add/modify:

- New package type support
- New CLI flags or commands
- New features or workflows
- Repository structure changes

2. **Code Comments**: Add/update godoc comments for:

- Exported functions and types
- Complex logic that needs explanation
- Configuration options

3. **Inline Comments**: Add brief comments for:
- Non-obvious code decisions
- Workarounds or edge cases

## Adding New Package Type Support

When adding support for a new package type:

1. Add detection logic in `internal/scanner/detector.go`
2. Add the new `PackageType` constant in `internal/scanner/scanner.go`
3. Create a new generator package under `internal/generator/<type>/`
4. Implement the `generator.Generator` interface:
- `Generate(ctx, config, packages) error`
- `ValidatePackages(packages) error`
- `GetSupportedType() scanner.PackageType`
- `ParseExistingMetadata(config) ([]Package, error)`
5. Register the generator in `internal/cli/generate.go`
6. Add package identity support in `internal/utils/package_identity.go`
7. Write comprehensive tests
8. Update README.md with new package type documentation
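
The interface in step 4 and its skeleton implementation can be sketched as follows. Here `PackageType`, `Package`, and `RepositoryConfig` are simplified stand-ins for the real types in `internal/scanner` and `internal/models`, and `flatpak` is a purely hypothetical example type:

```go
package main

import (
	"context"
	"fmt"
)

// Simplified stand-ins for the real types; actual fields differ.
type PackageType string

type Package struct{ Name, Version string }

type RepositoryConfig struct{ OutputDir, BaseURL string }

// TypeFlatpak is a hypothetical new package type used for illustration.
const TypeFlatpak PackageType = "flatpak"

// Generator mirrors the interface described in step 4.
type Generator interface {
	Generate(ctx context.Context, config *RepositoryConfig, packages []Package) error
	ValidatePackages(packages []Package) error
	GetSupportedType() PackageType
	ParseExistingMetadata(config *RepositoryConfig) ([]Package, error)
}

// flatpakGenerator is the skeleton a new internal/generator/<type>/
// package would provide.
type flatpakGenerator struct{}

func (g *flatpakGenerator) Generate(ctx context.Context, config *RepositoryConfig, packages []Package) error {
	// Real implementation: write repository metadata under config.OutputDir.
	return nil
}

func (g *flatpakGenerator) ValidatePackages(packages []Package) error {
	for _, p := range packages {
		if p.Name == "" || p.Version == "" {
			return fmt.Errorf("invalid package %+v: name and version are required", p)
		}
	}
	return nil
}

func (g *flatpakGenerator) GetSupportedType() PackageType { return TypeFlatpak }

func (g *flatpakGenerator) ParseExistingMetadata(config *RepositoryConfig) ([]Package, error) {
	// Real implementation: read existing index files from config.OutputDir.
	return nil, nil
}

// Compile-time check that the skeleton satisfies the interface.
var _ Generator = (*flatpakGenerator)(nil)

func main() {
	g := &flatpakGenerator{}
	fmt.Println(g.GetSupportedType()) // flatpak
}
```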

## Common Make Targets

- `make build` - Build the repogen binary
- `make test-unit` - Run unit tests (fast)
- `make test` - Run all tests including integration
- `make fmt` - Format code with go fmt
- `make lint` - Run golangci-lint
- `make install` - Install to /usr/local/bin
- `make clean` - Clean build artifacts

## Checklist Before Completing a Task

- [ ] Code compiles (`make build`)
- [ ] Code is formatted (`make fmt`)
- [ ] Linter passes (`make lint`)
- [ ] Tests pass (`make test-unit`)
- [ ] Documentation updated if needed
- [ ] New features have tests
48 changes: 33 additions & 15 deletions README.md
@@ -437,38 +437,56 @@ repo/
└── ext/
└── docker/
├── SHA256SUMS # Checksum file for systemd-sysupdate
├── docker.transfer # systemd-sysupdate transfer configuration
├── docker_24.0.5_x86-64.raw.zst
└── docker_25.0.0_x86-64.raw.zst
```

**Note:** The `--base-url` flag is required when generating sysext repositories. It is used to write the correct source URL into the generated `.transfer` configuration files.

```bash
repogen generate \
--input-dir ./extensions \
--output-dir ./repo \
--base-url https://example.com/repo
```

**Using with systemd-sysupdate:**

Repogen generates a `.transfer` file for each extension that can be copied to `/etc/sysupdate.d/`:

```bash
# Copy the generated transfer file
sudo cp repo/ext/docker/docker.transfer /etc/sysupdate.d/50-docker.conf

# Check for updates
systemd-sysupdate list

# Download and apply updates
systemd-sysupdate update
```

The generated transfer file looks like:

```ini
[Transfer]
Verify=false

[Source]
Type=url-file
Path=https://example.com/repo/ext/docker/
MatchPattern=docker_@v_@a.raw.zst \
docker_@v_@a.raw.xz \
docker_@v_@a.raw.gz \
docker_@v_@a.raw

[Target]
Type=regular-file
Path=/var/lib/extensions/
MatchPattern=docker_@v_@a.raw.zst \
docker_@v_@a.raw.xz \
docker_@v_@a.raw.gz \
docker_@v_@a.raw
```

## GPG Key Setup
4 changes: 2 additions & 2 deletions internal/cli/generate.go
@@ -67,7 +67,7 @@ structures with appropriate metadata files and signatures.`,
cmd.Flags().StringSliceVar(&config.Arches, "arch", []string{"amd64"}, "Architectures to support")

// Type-specific options
cmd.Flags().StringVar(&config.BaseURL, "base-url", "", "Base URL for Homebrew bottles and RPM .repo files")
cmd.Flags().StringVar(&config.BaseURL, "base-url", "", "Base URL for Homebrew bottles, RPM .repo files, and sysext transfer files (required for sysext)")
cmd.Flags().StringVar(&config.GPGKeyURL, "gpg-key-url", "", "GPG key URL for RPM .repo files (supports $releasever/$basearch variables)")
cmd.Flags().StringVar(&config.DistroVariant, "distro", "fedora", "Distribution variant for RPM repos (fedora, centos, rhel)")
cmd.Flags().StringVar(&config.Version, "version", "", "Release version for RPM repos (e.g., 40 for Fedora 40). Auto-detected from RPM metadata if not provided")
@@ -222,7 +222,7 @@ func runGeneration(ctx context.Context, config *models.RepositoryConfig) error {
generators[scanner.TypeApk] = apk.NewGenerator(rsaSigner, config.RSAKeyName)
generators[scanner.TypePacman] = pacman.NewGenerator(gpgSigner)
generators[scanner.TypeHomebrewBottle] = homebrew.NewGenerator(config.BaseURL)
generators[scanner.TypeSysext] = sysext.NewGenerator()
generators[scanner.TypeSysext] = sysext.NewGenerator(config.BaseURL)

for pkgType, newPackages := range packagesByType {
gen, ok := generators[pkgType]
20 changes: 14 additions & 6 deletions internal/generator/apk/generator_test.go
@@ -23,12 +23,16 @@ func TestIncrementalModeCopiesNewPackages(t *testing.T) {
if err != nil {
t.Fatalf("Failed to create temp dir: %v", err)
}
defer os.RemoveAll(tmpDir)
defer func() { _ = os.RemoveAll(tmpDir) }()

inputDir := filepath.Join(tmpDir, "input")
outputDir := filepath.Join(tmpDir, "output")
os.MkdirAll(inputDir, 0755)
os.MkdirAll(outputDir, 0755)
if err := os.MkdirAll(inputDir, 0755); err != nil {
t.Fatalf("Failed to create input dir: %v", err)
}
if err := os.MkdirAll(outputDir, 0755); err != nil {
t.Fatalf("Failed to create output dir: %v", err)
}

gen := NewGenerator(nil, "")
config := &models.RepositoryConfig{
@@ -38,7 +42,9 @@

// Step 1: Create initial repo with package A
initialPkg := filepath.Join(inputDir, "pkga-1.0-r1.apk")
os.WriteFile(initialPkg, []byte("fake apk package A"), 0644)
if err := os.WriteFile(initialPkg, []byte("fake apk package A"), 0644); err != nil {
t.Fatalf("Failed to write initial package: %v", err)
}

packagesA := []models.Package{
{
@@ -70,7 +76,7 @@ func TestIncrementalModeCopiesNewPackages(t *testing.T) {
files, _ := os.ReadDir(archDir)
for _, file := range files {
if strings.HasSuffix(file.Name(), ".apk") {
os.Remove(filepath.Join(archDir, file.Name()))
_ = os.Remove(filepath.Join(archDir, file.Name()))
}
}

@@ -81,7 +87,9 @@ func TestIncrementalModeCopiesNewPackages(t *testing.T) {

// Step 3: Create new package B
newPkg := filepath.Join(inputDir, "pkgb-1.0-r1.apk")
os.WriteFile(newPkg, []byte("fake apk package B"), 0644)
if err := os.WriteFile(newPkg, []byte("fake apk package B"), 0644); err != nil {
t.Fatalf("Failed to write new package: %v", err)
}

// Step 4: Parse existing metadata (simulating incremental mode)
existingPackages, err := gen.ParseExistingMetadata(config)
8 changes: 4 additions & 4 deletions internal/generator/apk/parser.go
@@ -55,14 +55,14 @@ func extractPKGINFO(path string) ([]byte, error) {
if err != nil {
return nil, err
}
defer f.Close()
defer func() { _ = f.Close() }()

// APK files are gzipped tar archives
gr, err := gzip.NewReader(f)
if err != nil {
return nil, err
}
defer gr.Close()
defer func() { _ = gr.Close() }()

tr := tar.NewReader(gr)

@@ -162,13 +162,13 @@ func parseAPKINDEX(path string) ([]models.Package, error) {
if err != nil {
return nil, err
}
defer f.Close()
defer func() { _ = f.Close() }()

gz, err := gzip.NewReader(f)
if err != nil {
return nil, err
}
defer gz.Close()
defer func() { _ = gz.Close() }()

tr := tar.NewReader(gz)

32 changes: 22 additions & 10 deletions internal/generator/deb/generator_test.go
@@ -16,7 +16,7 @@ func TestGenerateReleaseUnsigned(t *testing.T) {
if err != nil {
t.Fatalf("Failed to create temp dir: %v", err)
}
defer os.RemoveAll(tmpDir)
defer func() { _ = os.RemoveAll(tmpDir) }()

// Create generator without signer (unsigned)
gen := NewGenerator(nil)
@@ -33,14 +33,20 @@

// Create required directory structure
distsDir := filepath.Join(tmpDir, "dists", "testing", "main", "binary-amd64")
os.MkdirAll(distsDir, 0755)
if err := os.MkdirAll(distsDir, 0755); err != nil {
t.Fatalf("Failed to create dists dir: %v", err)
}

// Create dummy Packages file
packagesPath := filepath.Join(distsDir, "Packages")
os.WriteFile(packagesPath, []byte("Package: test\n"), 0644)
if err := os.WriteFile(packagesPath, []byte("Package: test\n"), 0644); err != nil {
t.Fatalf("Failed to write Packages: %v", err)
}

packagesGzPath := filepath.Join(distsDir, "Packages.gz")
os.WriteFile(packagesGzPath, []byte{}, 0644)
if err := os.WriteFile(packagesGzPath, []byte{}, 0644); err != nil {
t.Fatalf("Failed to write Packages.gz: %v", err)
}

// Generate repository files
err = gen.Generate(context.Background(), config, []models.Package{})
@@ -95,12 +101,16 @@ func TestIncrementalModeCopiesNewPackages(t *testing.T) {
if err != nil {
t.Fatalf("Failed to create temp dir: %v", err)
}
defer os.RemoveAll(tmpDir)
defer func() { _ = os.RemoveAll(tmpDir) }()

inputDir := filepath.Join(tmpDir, "input")
outputDir := filepath.Join(tmpDir, "output")
os.MkdirAll(inputDir, 0755)
os.MkdirAll(outputDir, 0755)
if err := os.MkdirAll(inputDir, 0755); err != nil {
t.Fatalf("Failed to create input dir: %v", err)
}
if err := os.MkdirAll(outputDir, 0755); err != nil {
t.Fatalf("Failed to create output dir: %v", err)
}

gen := NewGenerator(nil)
config := &models.RepositoryConfig{
@@ -115,7 +125,9 @@

// Step 1: Create initial repo with package A
initialPkg := filepath.Join(inputDir, "pkga_1.0_amd64.deb")
os.WriteFile(initialPkg, []byte("fake deb package A"), 0644)
if err := os.WriteFile(initialPkg, []byte("fake deb package A"), 0644); err != nil {
t.Fatalf("Failed to write initial package: %v", err)
}

packagesA := []models.Package{
{
@@ -145,7 +157,7 @@ func TestIncrementalModeCopiesNewPackages(t *testing.T) {
// Step 2: Simulate S3 sync - keep only metadata, remove package files
// Remove pool directory to simulate only having metadata
poolDir := filepath.Join(outputDir, "pool")
os.RemoveAll(poolDir)
_ = os.RemoveAll(poolDir)

// Verify package A is gone (simulating S3 scenario)
if _, err := os.Stat(pkgAPath); !os.IsNotExist(err) {
@@ -154,7 +166,7 @@ func TestIncrementalModeCopiesNewPackages(t *testing.T) {

// Step 3: Create new package B
newPkg := filepath.Join(inputDir, "pkgb_1.0_amd64.deb")
os.WriteFile(newPkg, []byte("fake deb package B"), 0644)
_ = os.WriteFile(newPkg, []byte("fake deb package B"), 0644)

// Step 4: Parse existing metadata (simulating incremental mode)
existingPackages, err := gen.ParseExistingMetadata(config)