Pupsourcing¶
A production-ready event sourcing library for Go, built on clean architecture principles.
What is Event Sourcing?¶
Event sourcing is a powerful architectural pattern that stores state changes as an immutable sequence of events rather than maintaining only the current state. Instead of updating records (CRUD), your system appends events that describe what happened.
Think of it Like a Bank Statement¶
Imagine your bank account. The bank doesn't just store your current balance; it keeps a complete record of every transaction:
Jan 1: Deposit +$1000 → Balance: $1000
Jan 5: Withdraw -$200 → Balance: $800
Jan 10: Deposit +$500 → Balance: $1300
Jan 15: Withdraw -$300 → Balance: $1000
If you wanted to know your balance on January 10th, the bank could replay all transactions up to that date. This is exactly how event sourcing works - instead of storing the final balance, you store every transaction (event) and calculate the current state by replaying them.
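To make the replay idea concrete, here is a minimal, self-contained Go sketch of the statement above (the Transaction type and sample data are illustrative, not part of any library):

package main

import "fmt"

// Transaction is an illustrative "event" in the account's history.
type Transaction struct {
    Date   string
    Amount int // positive = deposit, negative = withdrawal
}

func main() {
    // The append-only log: every transaction, in order.
    transactions := []Transaction{
        {"Jan 1", 1000},
        {"Jan 5", -200},
        {"Jan 10", 500},
        {"Jan 15", -300},
    }

    // Current state is derived by replaying the whole log.
    balance := 0
    for _, t := range transactions {
        balance += t.Amount
    }
    fmt.Println("Current balance:", balance) // 1000

    // Historical state: replay only the events up to a cutoff.
    onJan10 := 0
    for _, t := range transactions[:3] { // Jan 1 through Jan 10
        onJan10 += t.Amount
    }
    fmt.Println("Balance on Jan 10:", onJan10) // 1300
}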
How Does This Work in Software?¶
In event sourcing, you never update or delete data. Instead, you:
- Write events when something happens (UserRegistered, EmailChanged, OrderPlaced)
- Store events in an append-only log (events can't be changed or deleted)
- Read events and replay them to reconstruct the current state
- Build projections (read models) by processing events into formats optimized for querying
The Traditional Approach vs Event Sourcing¶
Traditional CRUD - Updates destroy history:
User table:
┌────┬───────────────────┬───────┬────────┐
│ id │ email             │ name  │ status │
├────┼───────────────────┼───────┼────────┤
│ 1  │ alice@example.com │ Alice │ active │
└────┴───────────────────┴───────┴────────┘
-- UPDATE loses history - no way to know what changed
UPDATE users SET email='new@email.com' WHERE id=1;
Event Sourcing - Preserves complete history:
Events (append-only log):
┌────────────────────────────────────────────────────────┐
│ 1. UserCreated                                         │
│    {id: 1, email: "alice@example.com", name: "Alice"}  │
├────────────────────────────────────────────────────────┤
│ 2. EmailVerified                                       │
│    {id: 1}                                             │
├────────────────────────────────────────────────────────┤
│ 3. EmailChanged                                        │
│    {id: 1, from: "alice@example.com",                  │
│     to: "alice@newdomain.com"}                         │
├────────────────────────────────────────────────────────┤
│ 4. UserDeactivated                                     │
│    {id: 1, reason: "account closed"}                   │
└────────────────────────────────────────────────────────┘
Current state = Apply events 1-4 in sequence
Historical state = Apply events up to any point in time
Real-World Example: E-commerce Order¶
Let's see a concrete example of how event sourcing works with an online shopping order:
// Traditional approach - single row gets updated repeatedly
Order {
    ID:     "order-123",
    Status: "delivered", // Lost history: was it created? paid? shipped?
    Items:  [...],
    Total:  99.99,
}
// Event sourcing - complete audit trail
Events for order-123:
1. OrderCreated { items: [...], total: 99.99 }
2. PaymentProcessed { method: "credit_card", amount: 99.99 }
3. OrderShipped { carrier: "FedEx", tracking: "123456789" }
4. OrderDelivered { deliveredAt: "2024-01-15T14:30:00Z" }
// Replay events to get current state
order := Order{}
for _, event := range events {
    switch e := event.(type) {
    case OrderCreated:
        order.Items = e.Items
        order.Total = e.Total
        order.Status = "created"
    case PaymentProcessed:
        order.Status = "paid"
    case OrderShipped:
        order.Status = "shipped"
        order.TrackingNumber = e.Tracking
    case OrderDelivered:
        order.Status = "delivered"
    }
}
// Result: Final state
Order {
    Items:          [...],
    Total:          99.99,
    Status:         "delivered",
    TrackingNumber: "123456789",
}
How to Query Orders?¶
This is a common question for newcomers: "If everything is stored as events, how do I get a simple list of orders?"
The answer is projections. You process events to build read models (tables optimized for querying):
Events (append-only):
1. OrderCreated { id: 1, items: [...], total: 99.99 }
2. OrderCreated { id: 2, items: [...], total: 49.99 }
3. PaymentProcessed { id: 1, method: "credit_card" }
4. OrderShipped { id: 1, tracking: "123456" }
5. OrderDelivered { id: 2 }
Projection (orders_view table):
Process each event and update a regular database table:
┌────┬───────┬──────────┬───────────┐
│ id │ total │ tracking │ status    │
├────┼───────┼──────────┼───────────┤
│ 1  │ 99.99 │ 123456   │ shipped   │
│ 2  │ 49.99 │ -        │ delivered │
└────┴───────┴──────────┴───────────┘
Now you can query: SELECT * FROM orders_view WHERE status = 'shipped' - fast and simple!
Key insight: You keep both the events (for history and replaying) and projections (for fast queries). The projections are built by processing events and can be rebuilt at any time.
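As a toy illustration of that insight, this self-contained Go sketch folds the five events above into the orders_view rows; the event and row shapes are illustrative, not pupsourcing types:

package main

import "fmt"

// OrderEvent mirrors the example events above (illustrative shape).
type OrderEvent struct {
    Type     string
    OrderID  int
    Total    float64
    Tracking string
}

// OrderRow is one row of the orders_view read model.
type OrderRow struct {
    Total    float64
    Tracking string
    Status   string
}

// project folds events into a queryable view. Rebuilding the view
// is just running this again from the start of the log.
func project(events []OrderEvent) map[int]OrderRow {
    view := map[int]OrderRow{}
    for _, e := range events {
        row := view[e.OrderID]
        switch e.Type {
        case "OrderCreated":
            row.Total, row.Status = e.Total, "created"
        case "PaymentProcessed":
            row.Status = "paid"
        case "OrderShipped":
            row.Status, row.Tracking = "shipped", e.Tracking
        case "OrderDelivered":
            row.Status = "delivered"
        }
        view[e.OrderID] = row
    }
    return view
}

func main() {
    view := project([]OrderEvent{
        {Type: "OrderCreated", OrderID: 1, Total: 99.99},
        {Type: "OrderCreated", OrderID: 2, Total: 49.99},
        {Type: "PaymentProcessed", OrderID: 1},
        {Type: "OrderShipped", OrderID: 1, Tracking: "123456"},
        {Type: "OrderDelivered", OrderID: 2},
    })
    fmt.Printf("%+v\n", view) // map[1:{99.99 123456 shipped} 2:{49.99  delivered}]
}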
Why Event Sourcing?¶
Event sourcing provides powerful capabilities that are difficult or impossible with traditional CRUD:
✅ Complete Audit Trail
- Every state change is recorded with full context
- Perfect for compliance (financial, healthcare, legal)
- Natural debugging: see exactly what happened and when

✅ Temporal Queries (see the sketch after this list)
- "What was the user's email on January 1st?"
- "Show me all orders that were pending last week"
- Reconstruct past state at any point in time

✅ Flexible Read Models
- Build new views from existing events without migrations
- Multiple projections from the same event stream
- Add new read models without touching the write side

✅ Event Replay
- Fix bugs by replaying events with corrected logic
- Test new features on production data
- Generate new projections from historical events

✅ Business Intelligence
- Rich analytics from complete event history
- Answer questions that weren't anticipated
- "How many users changed their email in the last month?"
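Here is the temporal-query sketch referenced in the list above. Replaying with a cutoff is all it takes to answer "what was the email on a given date?" (the UserEvent shape is illustrative, not the library's type):

package main

import (
    "fmt"
    "time"
)

// UserEvent is an illustrative event shape.
type UserEvent struct {
    Type       string
    Email      string
    OccurredAt time.Time
}

// emailAt replays the stream and returns the email that was current
// at the given instant; events after the cutoff are ignored.
func emailAt(events []UserEvent, at time.Time) string {
    email := ""
    for _, e := range events {
        if e.OccurredAt.After(at) {
            break
        }
        if e.Type == "UserCreated" || e.Type == "EmailChanged" {
            email = e.Email
        }
    }
    return email
}

func main() {
    day := func(d int) time.Time { return time.Date(2024, 1, d, 0, 0, 0, 0, time.UTC) }
    events := []UserEvent{
        {"UserCreated", "alice@example.com", day(1)},
        {"EmailChanged", "alice@newdomain.com", day(10)},
    }
    fmt.Println(emailAt(events, day(5)))  // alice@example.com
    fmt.Println(emailAt(events, day(15))) // alice@newdomain.com
}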
When to Use Event Sourcing¶
✅ Great fit:
- Systems requiring audit trails (finance, healthcare, legal)
- Complex business domains with rich behavior
- Applications needing temporal queries
- Microservices publishing domain events
- Multiple read models from the same data

⚠️ Consider carefully:
- Simple CRUD applications (may be overkill)
- Prototypes without event sourcing requirements
- Teams new to event sourcing (learning curve)
- Strict low-latency requirements everywhere
How Pupsourcing Helps¶
Pupsourcing makes event sourcing in Go simple, clean, and production-ready. Here's what sets it apart:
Clean Architecture¶
Your domain model stays free of infrastructure concerns:
// Your domain events are plain Go structs
type UserCreated struct {
    Email string
    Name  string
}
// No annotations, no framework inheritance
// Pure domain logic
Database Flexibility¶
Support for multiple databases with the same API:
- PostgreSQL (recommended for production)
- SQLite (perfect for testing and development)
- MySQL/MariaDB
Switch databases without changing your application code:
// PostgreSQL
store := postgres.NewStore(postgres.DefaultStoreConfig())
// SQLite
store := sqlite.NewStore(sqlite.DefaultStoreConfig())
// MySQL
store := mysql.NewStore(mysql.DefaultStoreConfig())
Bounded Context Support¶
Align with Domain-Driven Design (DDD):
// Events are scoped to bounded contexts
event := es.Event{
    BoundedContext: "Identity", // Clear domain boundaries
    AggregateType:  "User",
    AggregateID:    userID,
    EventType:      "UserCreated",
    // ...
}
Optimistic Concurrency¶
Automatic conflict detection prevents lost updates:
// Append with expected version
result, err := store.Append(ctx, tx,
    es.Exact(3), // Expects version 3
    []es.Event{event},
)
// If another process already wrote version 4, this fails
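To see why the expected-version check prevents lost updates, here is a self-contained toy version of the mechanism; an in-memory stand-in, not the library's implementation:

package main

import (
    "errors"
    "fmt"
)

var errConflict = errors.New("version conflict")

// memStream is a toy in-memory stream illustrating the check the
// event store performs on Append.
type memStream struct {
    version int
    events  []string
}

func (s *memStream) append(expected int, event string) error {
    if s.version != expected {
        return errConflict // another writer got there first
    }
    s.events = append(s.events, event)
    s.version++
    return nil
}

func main() {
    s := &memStream{}
    _ = s.append(0, "UserCreated")     // ok: stream is at version 0
    err := s.append(0, "EmailChanged") // fails: stream is now at 1
    fmt.Println(err)                   // version conflict
    _ = s.append(1, "EmailChanged")    // retry with fresh version: ok
}

On a conflict, the usual response is to re-read the stream, re-apply your business logic against the fresh state, and retry the append with the new version.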
Powerful Projections¶
Transform events into query-optimized read models:
// Scoped projection - only User events from Identity context
type UserReadModel struct{}
func (p *UserReadModel) AggregateTypes() []string {
    return []string{"User"}
}

func (p *UserReadModel) BoundedContexts() []string {
    return []string{"Identity"}
}

func (p *UserReadModel) Handle(ctx context.Context, tx *sql.Tx, event es.PersistedEvent) error {
    // Update your read model using the processor's transaction
    switch event.EventType {
    case "UserCreated":
        // Use tx for atomic updates
        _, err := tx.ExecContext(ctx, "INSERT INTO user_read_model ...")
        return err
    case "EmailChanged":
        // All changes run in the same transaction as the checkpoint
        _, err := tx.ExecContext(ctx, "UPDATE user_read_model ...")
        return err
    }
    return nil
}
Horizontal Scaling¶
Built-in support for scaling projections across multiple workers:
// Partition projections across 4 workers
config := projection.ProcessorConfig{
    PartitionKey:    0, // This worker handles partition 0
    TotalPartitions: 4,
}
processor := postgres.NewProcessor(db, store, &config)
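The idea behind partitioning is that each worker claims a deterministic slice of the aggregate space. A minimal sketch of hash-based partitioning follows; pupsourcing's internal scheme may differ, this only shows the shape of the idea:

package main

import (
    "fmt"
    "hash/fnv"
)

// partitionFor maps an aggregate ID to a stable partition so that
// every event of one aggregate is handled by the same worker.
func partitionFor(aggregateID string, totalPartitions uint32) uint32 {
    h := fnv.New32a()
    h.Write([]byte(aggregateID))
    return h.Sum32() % totalPartitions
}

func main() {
    for _, id := range []string{"user-1", "user-2", "user-3"} {
        fmt.Printf("%s -> partition %d of 4\n", id, partitionFor(id, 4))
    }
}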
Code Generation¶
Optional type-safe event mapping:
# Generate strongly-typed event mappers
go run github.com/getpup/pupsourcing/cmd/eventmap-gen \
-input internal/domain/events \
-output internal/infrastructure/generated
Minimal Dependencies¶
- Go standard library
- Database driver (your choice)
- That's it!
Quick Start¶
Installation¶
go get github.com/getpup/pupsourcing
# Choose your database driver
go get github.com/lib/pq # PostgreSQL
Your First Event¶
import (
    "fmt"
    "log"
    "time"

    "github.com/getpup/pupsourcing/es"
    "github.com/getpup/pupsourcing/es/adapters/postgres"
    "github.com/google/uuid"
    _ "github.com/lib/pq" // PostgreSQL driver
)

// Assumes db is an open *sql.DB and ctx is a context.Context.

// Create store
store := postgres.NewStore(postgres.DefaultStoreConfig())

// Create event
event := es.Event{
    BoundedContext: "Identity",
    AggregateType:  "User",
    AggregateID:    uuid.New().String(),
    EventID:        uuid.New(),
    EventType:      "UserCreated",
    EventVersion:   1,
    Payload:        []byte(`{"email":"alice@example.com","name":"Alice"}`),
    Metadata:       []byte(`{}`),
    CreatedAt:      time.Now(),
}

// Append to event store
tx, err := db.BeginTx(ctx, nil)
if err != nil {
    log.Fatal(err)
}
result, err := store.Append(ctx, tx, es.NoStream(), []es.Event{event})
if err != nil {
    tx.Rollback()
    log.Fatal(err)
}
if err := tx.Commit(); err != nil {
    log.Fatal(err)
}

fmt.Printf("Event stored at position: %d\n", result.GlobalPositions[0])
Read Events¶
// Read all events for an aggregate
stream, err := store.ReadAggregateStream(
    ctx, tx,
    "Identity",  // bounded context
    "User",      // aggregate type
    aggregateID, // aggregate ID
    nil, nil,    // from/to version
)
if err != nil {
    log.Fatal(err)
}

// Process events
for _, event := range stream.Events {
    fmt.Printf("Event: %s at version %d\n",
        event.EventType, event.AggregateVersion)
}
Documentation Structure¶
Getting Started¶
Start here if you're new to pupsourcing:
- Getting Started - Installation, setup, and your first event-sourced application
- Prerequisites and installation
- Database schema generation
- Creating and storing your first event
- Reading events back
- Building a simple projection
Core Concepts¶
Understand the fundamentals:
- Core Concepts - Deep dive into event sourcing principles with pupsourcing
- Event sourcing fundamentals
- Core components (Events, Aggregates, Event Store, Projections)
- Key concepts (Optimistic Concurrency, Global Position, Idempotency)
- Design principles (Library vs Framework, Explicit Dependencies)
- Common patterns (Read-Your-Writes, Event Upcasting, Aggregate Reconstruction)
Database Adapters¶
Choose and configure your database:
- Database Adapters - PostgreSQL, SQLite, and MySQL adapter documentation
- PostgreSQL (production-ready, recommended)
- SQLite (embedded, perfect for testing)
- MySQL/MariaDB (production-ready)
- Adapter comparison and migration strategies
Projections and Scaling¶
Build read models and scale your system:
- Projections - Building and managing projections
- Scoped vs Global projections
- Basic implementation
- Idempotency patterns
- Scaling - Horizontal scaling patterns for projections
- When and how to scale
- Hash-based partitioning
- Running multiple projections
- Performance tuning (batch size, connection pooling, poll interval)
- Production patterns (gradual scaling, prioritization, hot/cold separation)
- Advanced topics (projection rebuilding, database partitioning)
Code Generation¶
Type-safe event mapping:
- Event Mapping Code Generation - Strongly-typed conversion between domain and ES events
- Why this tool exists
- Installation and quick start
- Versioned events (schema evolution)
- Clean architecture integration
- Repository adapter pattern
Production Operations¶
Deploy and monitor:
- Deployment - Production deployment patterns and operational best practices
- Deployment patterns (Docker Compose, Kubernetes, systemd)
- Configuration management
- Monitoring and metrics
- Graceful shutdown
- Security considerations
- Troubleshooting
- Observability - Logging, tracing, and monitoring
- Logger interface and integration
- Distributed tracing (TraceID, CorrelationID, CausationID)
- Metrics integration (Prometheus examples)
- Best practices
API Reference¶
Complete API documentation:
- API Reference - Complete API documentation
- Core types (Event, PersistedEvent, Stream, ExpectedVersion)
- Event Store interface
- Projection interfaces
- PostgreSQL adapter
- Error types
About¶
Learn more about the project:
- About - Project philosophy, history, and community
Quick Links¶
For Beginners¶
- Start with Getting Started
- Understand Core Concepts
- Explore Examples
For Production¶
- Review Database Adapters and choose your database
- Plan your Scaling Strategy
- Set up Observability
- Follow Deployment Guide
For Advanced Users¶
- Implement Event Mapping Code Generation
- Study Scaling Patterns
- Review API Reference
Examples Repository¶
The pupsourcing repository includes comprehensive working examples:
- basic - Complete PostgreSQL example with projections
- sqlite-basic - SQLite embedded database example
- mysql-basic - MySQL/MariaDB example
- single-worker - Single projection worker
- multiple-projections - Running multiple projections together
- worker-pool - Worker pool with partitioned projections
- partitioned - Separate processes with partitioning
- scaling - Dynamic scaling demonstration
- scoped-projections - Scoped projection filtering
- stop-resume - Checkpoint management and resumption
- with-logging - Observability integration
- eventmap-codegen - Type-safe event mapping generation
Each example includes a README with setup instructions and explanation of concepts.
Community and Support¶
- GitHub Repository: github.com/getpup/pupsourcing
- Issues & Discussions: GitHub Issues
- Documentation: getpup.github.io/pupsourcing-website
What's Next?¶
- Getting Started - Complete setup guide and first steps
- Core Concepts - Deep dive into event sourcing principles
- Database Adapters - Choosing and configuring your database
- Scaling & Projections - Building read models and horizontal scaling
- API Reference - Complete API documentation
Production Ready¶
Pupsourcing is designed for production use with:
- Comprehensive test coverage - Unit and integration tests
- Battle-tested patterns - Based on proven event sourcing principles
- Clear documentation - Extensive guides and examples
- Active maintenance - Regular updates and bug fixes
- Clean codebase - Easy to understand and extend
License¶
This project is licensed under the MIT License - see the LICENSE file for details.