Lustra
Lustra is an ORM built specifically for PostgreSQL in Crystal.
It's probably the most advanced ORM for PG on Crystal in terms of features offered. It provides Active Record pattern models and a low-level SQL builder.
You can deal out of the box with jsonb, tsvector, cursors, CTEs, bcrypt passwords,
arrays, uuid primary keys, foreign key constraints... and other things!
It also has a powerful DSL to construct `where` and `having` clauses.
The philosophy behind it is to please me (and you!), with emphasis on business-code readability and minimal setup.
The project is quite active and well maintained, too!
Lustra started as a fork of Clear at version 0.8, and it is not compatible with later Clear releases. Over time it evolved into an independent project. To keep it compatible with newer Crystal versions, I continued development, added missing features, improved existing ones, and expanded test coverage. Today Lustra is far beyond its upstream origins — a distinct, mature project in its own right.
Why use Lustra?
In a few seconds, you want to use Lustra if:
- [x] You want an expressive ORM. Put your thoughts straight into your code!
- [x] You'd like to use advanced Postgres features without hassle
- [x] You are aware of the pros and cons of the Active Record pattern
You don't want to use Lustra if:
- [ ] You're not willing to use PostgreSQL
- [ ] You look for a minimalist ORM / Data Mapper
Features
- Active Record pattern based ORM
- Expressiveness as a mantra, even with advanced features like jsonb and regexp
# Like ...
Product.query.where { (type == "Book") & (metadata.jsonb("author.full_name") == "Philip K. Dick") }
# ^--- will use the @> operator, to rely on your GIN index. For real.
Product.query.where { (products.type == "Book") & (products.metadata.jsonb("author.full_name") != "Philip K. Dick") }
# ^--- this time will use the -> notation, because no optimization is possible :/
# Or...
User.query.where { created_at.in? 5.days.ago .. 1.day.ago }
# Or even...
ORM.query.where { ( description =~ /(^| )awesome($| )/i ) }.first!.name
Core ORM Features
Model & Database Management
- Complete migration system with versioning and rollbacks
- Comprehensive validation system with custom validators
- Model lifecycle hooks (before/after callbacks for create, update, delete, validate, save)
- Primary key support (auto-incrementing integers, UUIDs)
- Timestamps (created_at, updated_at) with automatic touch functionality
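As a quick sketch of how several of these features combine in a model definition (the `Article` model is illustrative, and the model-level `timestamps` macro declaring `created_at`/`updated_at` is an assumption):

```crystal
class Article
  include Lustra::Model

  column id : Int64, primary: true, presence: false # filled by pg serial after save
  column title : String

  timestamps # assumed macro: declares created_at / updated_at, maintained on save
end
```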
Associations & Relations
- belongs_to, has_many, has_one relationships with full support
- Through associations for complex relationships
- Polymorphic associations and Single Table Inheritance (STI)
- Counter cache with atomic updates and reset functionality
- Touch functionality for automatic timestamp updates on related models
Query Interface
- Chainable and expressive query builder
- Scopes for reusable query fragments with parameter support
- Advanced WHERE clauses with complex conditions
- JOIN operations (inner, left, right, full outer)
- Subqueries and CTEs (Common Table Expressions)
- Window functions and advanced SQL features
- Pagination with limit/offset
- Ordering and grouping
- Aggregation functions (count, sum, avg, etc.)
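A few of these building blocks combined, based on the query API shown throughout this document (the `users` columns are illustrative):

```crystal
# Page 3 of active users, 20 per page, newest first
User.query
  .where { active == true }
  .order_by(created_at: :desc)
  .limit(20)
  .offset(40)

# Grouping with an aggregate
User.query.select("role, COUNT(*) AS total").group_by("role")
```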
Performance & Optimization
- N+1 query avoidance with eager loading
- Query result caching
- Database connection pooling
- Lazy loading and batch processing
- Query optimization and SQL analysis
Data Types & Storage
- Full PostgreSQL JSON and JSONB support with complex queries
- Array columns (strings, integers, booleans)
- UUID columns and primary keys
- Enum support with type-safe database integration
- Custom data type converters
- Null handling and presence validation
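For instance, array and UUID columns can be declared like any other column (a sketch; the `Document` model and its columns are hypothetical):

```crystal
class Document
  include Lustra::Model

  column id : UUID, primary: true, presence: false # uuid primary key, set by the database
  column tags : Array(String)                      # maps to a text[] column
  column metadata : JSON::Any                      # maps to a jsonb column
end
```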
Advanced Features
Full-Text Search
- PostgreSQL TSVector integration
- Natural language query parsing
- Weighted search with relevance scoring
- Complex search operators (AND, OR, NOT, phrases)
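Since the dedicated search helpers aren't detailed here, a portable sketch is to match a `tsvector` column through a raw SQL condition (the `Article` model and its `tsv` column are hypothetical, and the raw `where(String, args)` form with `?` placeholders is an assumption):

```crystal
# Find articles matching a natural-language query against the tsv column
Article.query.where("tsv @@ plainto_tsquery('english', ?)", "crystal orm")
```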
Database Features
- Transaction support with rollback and savepoint
- Database locking (optimistic and pessimistic)
- Multiple database connections
- Database views as models
- Raw SQL execution when needed
- Stored procedures and functions support
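Transactions appear later in this document; savepoints are assumed to follow the same pattern (a sketch; `with_savepoint` is an assumption mirroring Clear's API):

```crystal
Lustra::SQL.transaction do
  user = User.create!({email: "a@example.com"})

  Lustra::SQL.with_savepoint do # assumed helper
    user.update_column(:status, "trial")
    # Raising Lustra::SQL::RollbackError here would undo only the savepoint,
    # not the outer transaction.
  end
end
```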
Developer Experience
- Comprehensive error messages and debugging
- Query logging with colorized output
- Crash reporting with last executed query
- Compile-time type checking and validation
- Intuitive API design following ActiveRecord conventions
Data Management
- Database seeding utilities
- Model factories for testing
- Bulk operations and batch processing
- Data import/export capabilities
Installation
In `shard.yml`:
dependencies:
  lustra:
    github: crystal-garage/lustra
    branch: develop
    version: ">= 0.12.0"
Then:
require "lustra"
Model definition
Lustra offers some mixins, just include them in your classes:
Column mapping
class User
  include Lustra::Model

  column id : Int64, primary: true

  column email : String

  column first_name : String?
  column last_name : String?

  column encrypted_password : Crypto::Bcrypt::Password

  def password=(x)
    self.encrypted_password = Crypto::Bcrypt::Password.create(x)
  end
end
Column types
`Number`, `String`, `Time`, `Boolean` and `Jsonb` structures are already mapped. `Array` of primitives too. For other types of data, just create your own converter!
class Lustra::Model::Converter::MyClassConversion
  def self.to_column(x) : MyClass?
    case x
    when Nil
      nil
    when String
      MyClass.from_string(x)
    when Slice(UInt8)
      MyClass.from_slice(x)
    else
      raise "Cannot convert from #{x.class} to MyClass"
    end
  end

  def self.to_db(x : MyClass?)
    x.to_s
  end
end
Lustra::Model::Converter.add_converter("MyClass", Lustra::Model::Converter::MyClassConversion)
Column presence
Most Crystal ORMs map column types as `Type | Nil` unions.
This makes sense, since it allows selecting only some columns of a model.
However, this has a caveat: columns are still accessible and will return `nil`,
even if the real value of the column is not null!
Moreover, most developers will enforce presence only at the language level via validation, but not in the database, leading to inconsistency.
Therefore, we chose to raise an exception whenever a column is accessed before it has been initialized, and to enforce presence through Crystal's union system.
Lustra offers this through column wrappers.
A wrapper can hold the column's type as in Postgres, or be in an `UNKNOWN` state.
This approach offers more flexibility:
User.query.select("last_name").each do |usr|
puts usr.first_name # Will raise an exception, as first_name hasn't been fetched.
end
u = User.new
u.first_name_column.defined? # Returns false
u.first_name_column.value("") # Returns the value, or an empty string if not defined :-)
u.first_name = "bonjour"
u.first_name_column.defined? # Returns true now!
Wrappers also provide some pretty useful features:
u = User.new
u.email = "me@myaddress.com"
u.email_column.changed? # TRUE
u.email_column.revert
u.email_column.defined? # No more
Associations
Lustra offers `has_many`, `has_one`, `belongs_to` and `has_many through` associations:
class Security::Action
belongs_to role : Role
end
class Security::Role
has_many user : User
end
class User
include Lustra::Model
has_one user_info : UserInfo
has_many posts : Post
belongs_to role : Security::Role
# Use of the standard keys (users_id <=> security_role_id)
has_many actions : Security::Action, through: Security::Role
end
Querying
Lustra offers a collection system for your models. The collection system is built on top of the lower-level `Lustra::SQL` API, used to build requests.
Simple query
Fetch a model
To fetch one model:
# 1. Get the first user
User.query.first # Get the first user, ordered by primary key
# Get a specific user by primary key
User.find!(1) # Returns user with id=1, or raises exception if not found
User.find(1) # Returns user with id=1, or nil if not found
# Find multiple users by array of IDs
users = User.find([1, 2, 3]) # Returns Array(User), may be partial if some IDs don't exist
users = User.find!([1, 2, 3]) # Raises error if ANY ID is not found
# Find by other columns
user = User.find_by(email: "test@example.com") # Returns nil if not found
user = User.find_by!(email: "test@example.com") # Raises error if not found
# Find by multiple columns
user = User.find_by(first_name: "John", last_name: "Doe")
# Using query with expression engine
u : User? = User.query.find { email =~ /yacine/i }
Fetch multiple models
To prepare a collection, just use `Model#query`.
Collections include the `SQL::Select` object, so all of the low-level API (`where`, `where.not`, `where.or`, `join`, `group_by`, `lock`...) can be used in this context.
# Basic filtering with where
User.query.where { (id >= 100) & (id <= 200) }.each do |user|
# Do something with user !
end
# Negative filtering with where.not
User.query.where.not { active == false }.each do |user|
# Get all users that are not inactive
end
# Chaining where, where.not, and where.or conditions
User.query
.where { active == true }
.not { role == "admin" }
.or(id: [1, 2, 3])
.each do |user|
# Complex filtering with chained conditions
end
# Generated SQL:
# SELECT * FROM "users" WHERE ((("active" = TRUE) AND NOT ("role" = 'admin')) OR "id" IN (1, 2, 3))
# Check if any records exist
if User.query.where { active == true }.exists?
puts "There are active users!"
end
# Extract specific column values
user_names = User.query.pluck_col("first_name")
user_data = User.query.pluck("first_name", "last_name")
# Get array of IDs (shortcut for pluck_col primary key)
active_user_ids = User.query.where { active == true }.ids # => [1, 2, 3, 4, 5]
# Bulk update without loading models (bypasses validations and callbacks)
affected = User.query.where { active == false }.update_all(status: "inactive")
puts "Updated #{affected} users"
# Update multiple columns at once
User.query.where { role == "guest" }.update_all(role: "user", verified: true)
# In case you know there are millions of rows, use a cursor to avoid memory issues!
User.query.where { (id >= 1) & (id <= 20_000_000) }.each_cursor(batch: 100) do |user|
# Do something with user; only 100 users will be stored in memory
# This method uses a PG cursor, so it's 100% transaction-safe
end
JOIN operations
Lustra supports automatic join detection from associations, as well as manual joins with custom conditions.
class User
include Lustra::Model
has_many posts : Post
has_many categories : Category, through: Post
has_one info : UserInfo
end
class Post
include Lustra::Model
belongs_to user : User
belongs_to category : Category
end
class UserInfo
include Lustra::Model
belongs_to user : User
end
class Category
include Lustra::Model
has_many posts : Post
has_many users : User, through: Post
end
Auto-joins
Simply pass the association name and Lustra will auto-detect the join conditions:
# has_many association
User.query.join(:posts)
# Equivalent to:
User.query.join(:posts) { posts.user_id == users.id }
# SQL: `INNER JOIN "posts" ON ("posts"."user_id" = "users"."id")`
# belongs_to association
Post.query.join(:user)
# Equivalent to:
Post.query.join(:users) { posts.user_id == users.id }
# SQL: `INNER JOIN "users" ON ("posts"."user_id" = "users"."id")`
# has_one association
User.query.join(:info)
# Equivalent to:
User.query.join(:user_infos) { user_infos.user_id == users.id }
# SQL: `INNER JOIN "user_infos" ON ("user_infos"."user_id" = "users"."id")`
# has_many through (automatically generates TWO joins!)
User.query.join(:categories) # has_many :categories, through: Post
# Equivalent to:
User.query
.join(:posts) { posts.user_id == users.id }
.join(:categories) { categories.id == posts.category_id }
# SQL: `INNER JOIN "posts" ON ("posts"."user_id" = "users"."id")
# INNER JOIN "categories" ON ("categories"."id" = "posts"."category_id")`
# All join types supported
User.query.left_join(:posts)
User.query.right_join(:posts)
User.query.full_outer_join(:posts)
User.query.inner_join(:posts)
# Chaining joins
Post.query
.join(:user)
.join(:category)
.where { (users.active == true) & (categories.name == "Tech") }
# Equivalent to:
Post.query
.join(:user) { posts.user_id == users.id }
.join(:categories) { posts.category_id == categories.id }
.where { (users.active == true) & (categories.name == "Tech") }
Manual joins with custom conditions
For complex joins or when you need custom conditions, use the block syntax:
# Custom join condition
User.query.join("infos") { infos.user_id == users.id }
# Multiple joins with custom conditions
Post.query
.join("users") { users.id == posts.user_id }
.join("categories") { categories.id == posts.category_id }
.where { users.active == true }
# Mix auto-join and manual joins
User.query
.join(:posts)
.join("custom_table") { custom_table.user_id == users.id }
Aggregate functions
Calling aggregate functions from the query is possible. For complex aggregation,
I would recommend using the `SQL::View` API (note: not yet developed),
and keeping the model query for fetching models only.
# count
users_on_gmail = User.query.where { email.ilike "%@gmail.com" }.count # Note: count returns Int64
# min/max
max_id = User.query.where { email.ilike "%@gmail.com" }.max("id", Int32)
# your own aggregate
weighted_avg = User.query.agg("SUM(performance_weight * performance_score) / SUM(performance_weight)", Float64)
Fetching associations
Associations are basically getters which build predefined SQL. To access an association, just call it!
User.query.each do |user|
puts "User #{user.id} posts:"
user.posts.each do |post| # Works, but will trigger a request for each user.
puts "• #{post.id}"
end
end
Caching associations to avoid N+1 queries
For every association, you can tell Lustra to cache the results and avoid
N+1 queries, using `with_XXX` on the collection:
# Will call two requests only.
User.query.with_posts.each do |user|
puts "User #{user.id} posts:"
user.posts.each do |post|
puts "• #{post.id}"
end
end
Note: For association eager loading (like `with_posts`), Lustra uses separate queries with the `IN` operator rather than JOINs, for optimal performance.
In the case above:
- The first request will be:
`SELECT * FROM users;`
- Thanks to the cache, a second request will be issued before returning the users:
`SELECT * FROM posts WHERE user_id IN ( SELECT id FROM users )`

I plan in the future to offer different query strategies for the cache (e.g. joins, unions...).
Associations caching examples
When you use the association caching system, using filters on the association will invalidate the cache, and N+1 queries will happen.
For example:
User.query.with_posts.each do |user|
puts "User #{user.id} published posts:"
# Here: The cache system will not work. The cache on association
# is invalidated by the filter `where`.
user.posts.where({published: true}).each do |post|
puts "• #{post.id}"
end
end
The way to fix it is to filter on the association itself:
User.query.with_posts(&.where({published: true})).each do |user|
puts "User #{user.id} published posts:"
# The posts collection of user is already encached with the published filter
user.posts.each do |post|
puts "• #{post.id}"
end
end
Note that, of course, in this example `user.posts` contains not ALL the posts but only the published posts.
Thanks to this system, we can stack calls to cache long-distance relations:
# Will cache users<=>posts & posts<=>category
# Total: 3 requests !
User.query.with_posts(&.with_category).each do |user|
#...
end
Querying computed or foreign columns
In case you want columns computed by Postgres, or stored in another table, you can use `fetch_columns`.
By default, for performance reasons, the `fetch_columns` option is set to `false`.
users = User.query.select(email: "users.email", remark: "infos.remark")
.join("infos") { infos.user_id == users.id }.to_a(fetch_columns: true)
# Now the column "remark" will be fetched into each user object.
# Access can be made using `[]` operator on the model.
users.each do |u|
puts "email: `#{u.email}`, remark: `#{u["remark"]?}`"
end
Scopes and Default Scopes
Scopes
Scopes allow you to define reusable query fragments that make your code more readable and maintainable:
class Post
include Lustra::Model
column title : String
column published : Bool
column view_count : Int32
# Simple scope
scope("published") { where(published: true) }
# Scope with parameter
scope("popular") { |min_views| where { view_count >= min_views } }
# Scope that chains multiple conditions
scope("recent") { where { created_at > 7.days.ago }.order_by(created_at: :desc) }
end
# Usage
Post.published # All published posts
Post.popular(100) # Posts with 100+ views
Post.published.recent # Published posts from last 7 days
Post.published.popular(50).first # Most popular published post
Default Scopes
Default scopes automatically apply conditions to all queries on a model. This is particularly useful for soft deletes and multi-tenancy:
class Post
include Lustra::Model
column title : String
column deleted_at : Time?
# This filter is applied to ALL queries automatically
default_scope { where { deleted_at == nil } }
end
# All these queries automatically exclude deleted posts:
Post.find(1) # WHERE id = 1 AND deleted_at IS NULL
Post.query.first # WHERE deleted_at IS NULL ORDER BY id LIMIT 1
Post.query.where(...) # WHERE ... AND deleted_at IS NULL
# To bypass the default scope when needed:
Post.query.unscoped.count # Count all posts including deleted
Post.query.unscoped.first # Get first record ignoring scope
Post.query.unscoped.where(...) # Build query without default scope
Warning: Default scopes can be confusing because they're implicit. Use them sparingly and document them clearly. `unscoped` is always available as an escape hatch when you need to bypass them.
Inspection & SQL logging
Inspection
`inspect` on a model offers debugging insights:
p # => #<Post:0x10c5f6720
@attributes={},
@cache=
#<Lustra::Model::QueryCache:0x10c6e8100
@cache={},
@cache_activation=Set{}>,
@content_column=
"...",
@errors=[],
@id_column=38,
@persisted=true,
@published_column=true,
@read_only=false,
@title_column="Lorem ipsum torquent inceptos"*,
@user_id_column=5>
In this case, the `*` means the column has changed and the object is dirty, diverging from the database.
Query Performance Analysis
Lustra provides PostgreSQL `EXPLAIN` support to analyze and optimize your queries:
# Get query execution plan (doesn't modify data, but may read for planning)
plan = User.query.where { active == true }.explain
puts plan
# Output:
# Seq Scan on users (cost=0.00..35.50 rows=10 width=116)
# Filter: (active = true)
# Get actual execution statistics (RUNS the query)
plan = User.query.where { active == true }.join(:posts).explain_analyze
puts plan
# Output includes:
# - Actual execution time
# - Rows processed
# - Memory usage
# - Index usage details
# - Join algorithms used
# Optimize complex queries
slow_query = Post.query
.join(:user)
.join(:category)
.where { published == true }
.order_by(created_at: :desc)
# Analyze to find bottlenecks
puts slow_query.explain_analyze
# Use insights to add indexes, rewrite query, etc.
Common use cases:
- Finding missing indexes: Look for "Seq Scan" on large tables
- Understanding join performance: See which join algorithms are used
- Debugging slow queries: Get actual timing and row counts
- Capacity planning: Understand query costs before deploying
Difference between `explain` and `explain_analyze`:

| Method | Modifies Data | Shows Actual Stats | Use When |
|--------|---------------|--------------------|----------|
| `explain` | No | No (estimated) | Safe for all queries; get the execution plan |
| `explain_analyze` | Yes* | Yes (actual) | You need actual performance data |

- `explain` is safe: it shows the estimated plan without modifying data
- `explain_analyze` EXECUTES the query fully, including INSERT/UPDATE/DELETE
- For write operations with `explain_analyze`, wrap in a transaction with rollback if testing
Safe pattern for testing write operations:
# Analyze a DELETE or UPDATE without permanently modifying data
Lustra::SQL.transaction do
plan = User.query.where { inactive == true }.to_delete.explain_analyze
puts plan # See actual execution statistics
# Rollback to undo changes
raise Lustra::SQL::RollbackError.new
end
# Data is unchanged - safe for production analysis!
SQL Logging
One thing very important for a good ORM is to offer visibility into the SQL called under the hood. Lustra offers SQL logging tools, with SQL syntax colorizing in your terminal.
To activate it, simply set the logger to `DEBUG` level!
Log.builder.bind "lustra.*", :debug, Log::IOBackend.new(STDOUT)
Save & validation
Save
Objects can be persisted, saved and updated:
u = User.new
u.email = "test@example.com"
u.save! # Save or throw if unsavable (validation failed).
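The non-bang `save` is assumed to return a boolean instead of raising, so validation failures can be inspected through `errors` (the `column`/`reason` accessors are assumptions based on Clear's error objects):

```crystal
u = User.new
u.email = "test@example.com"

if u.save
  puts "User persisted with id #{u.id}"
else
  # Each error is assumed to carry the failing column and a message
  puts u.errors.map { |e| "#{e.column}: #{e.reason}" }.join(", ")
end
```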
Attribute Change Tracking
Lustra automatically tracks changes to model attributes:
user = User.create!({first_name: "John", last_name: "Doe", email: "john@test.com"})
# Make some changes
user.first_name = "Jane"
user.email = "jane@test.com"
# Get change for specific attribute as {old, new} tuple
user.first_name_column.change # => {"John", "Jane"}
user.email_column.change # => {"john@test.com", "jane@test.com"}
user.last_name_column.change # => nil (not changed)
# Get all changes at once
user.changes
# => {"first_name" => {"John", "Jane"}, "email" => {"john@test.com", "jane@test.com"}}
# Get list of changed attribute names
user.changed # => ["first_name", "email"]
# After saving, changes are cleared
user.save!
user.changed # => []
user.changes # => {}
Column-level access:
# Access change tracking via column objects
user.email_column.changed? # Check if changed
user.email_column.old_value # Get raw old value
user.email_column.change # Get {old, new} tuple (nil if not changed)
user.email_column.revert # Revert to old value
Atomic Counter Updates
Increment or decrement numeric columns atomically without running validations or callbacks:
# Increment/decrement with immediate save (bypasses validations and callbacks)
user.increment!(:login_count) # Increment by 1
user.increment!(:score, 10) # Increment by custom amount
user.decrement!(:attempts_left) # Decrement by 1
user.decrement!(:balance, 5.5) # Decrement by custom amount
# Increment/decrement in-memory only (requires save! to persist)
user.increment(:login_count)
user.decrement(:attempts_left, 2)
user.save! # Persist both changes together
# Thread-safe atomic updates (using SQL: SET column = column + amount)
user.increment!(:view_count) # Safe for concurrent requests
The `!` versions update the database immediately using atomic SQL operations, making them thread-safe for counters like views, likes, or login counts.
Direct Column Updates
Update columns directly without running validations or callbacks. Useful for performance-critical updates when you know the data is valid:
# Update single column (bypasses validations, callbacks, and timestamp updates)
user.update_column(:login_count, 10)
user.update_column(:last_login_at, Time.utc)
# Update multiple columns at once
user.update_columns(login_count: 5, status: "active", verified: true)
# With NamedTuple or Hash
user.update_columns({admin: true, role: "superuser"})
Warning: `update_column` and `update_columns` bypass:
- Validations
- Callbacks (before/after hooks)
- Automatic timestamp updates (`updated_at` will NOT change)
Use these methods only when you need raw performance and are certain the data is valid.
Deleting Records
Lustra provides two ways to delete records:
# delete - Fast deletion WITHOUT callbacks (just removes from DB)
user.delete
user.persisted? # => false
# destroy - Safe deletion WITH callbacks (triggers before/after :destroy hooks)
user.destroy
user.persisted? # => false
# Bulk operations on collections
User.query.where { active == false }.delete_all # Fast bulk delete, NO callbacks
User.query.where { active == false }.destroy_all # Loads each record, calls destroy, HAS callbacks
Example with callbacks:
class Post
include Lustra::Model
belongs_to user : User, counter_cache: true
after(:destroy) do |model|
post = model.as(Post)
# Clean up associated data
Comment.query.where(post_id: post.id).delete_all
end
end
# This will trigger the callback and clean up comments
post.destroy
# This will NOT trigger callbacks - comments remain orphaned
post.delete
Validation
Presence validator
Presence validation is done using the type of the column:
class User
include Lustra::Model
column first_name : String # Must be present
column last_name : String? # Can be null
end
The `NOT NULL DEFAULT ...` case
There's a case where a column CAN be null inside Crystal, if not yet persisted, but CANNOT be null inside Postgres.
It's for example the case of the `id` column, which takes its value after saving!
In this case, you can write:
class User
column id : Int64, primary: true, presence: false # id will be set using pg serial !
end
Thus, in any case, this will fail:
u = User.new
u.id # raise error
Other validators
When you save your model, Lustra will first call the presence validators, then
your custom validators. All you have to do is reimplement
the `validate` method:
class MyModel
#...
def validate
# Your code goes here
end
end
Validation fails if `model#errors` is not empty:
class MyModel
#...
def validate
if first_name_column.defined? && first_name != "ABCD" # < See below why `defined?` must be called.
add_error("first_name", "must be ABCD!")
end
end
end
Unique validator
Please use the `UNIQUE` constraint feature of Postgres. A uniqueness validator at the Crystal level is a
no-go and leads to terrible race conditions if you deploy on multiple nodes/pods.
It's an anti-pattern and must be avoided at all costs.
The validation and the presence system
If you run validation on a column which has not been initialized, Lustra will complain, telling you that you cannot access the column. Let's see an example:
class MyModel
#...
def validate
add_error("first_name", "should not be empty") if first_name == ""
end
end
MyModel.new.save! # < Raises an unexpected exception, not a validation failure :(
This validator will raise an exception, because `first_name` has never been initialized. To avoid this, we have several ways:
# 1. Check presence:
def validate
if first_name_column.defined? # Ensure we have a value here.
add_error("first_name", "should not be empty") if first_name == ""
end
end
# 2. Use column object + default value
def validate
add_error("first_name", "should not be empty") if first_name_column.value("") == ""
end
# 3. Use the helper macro `on_presence`
def validate
on_presence(first_name) do
add_error("first_name", "should not be empty") if first_name == ""
end
end
# 4. Use the helper macro `ensure_than`
def validate
ensure_than(first_name, "should not be empty", &.!=(""))
end
# 5. Use the `ensure_than` helper (but with block notation) !
def validate
ensure_than(first_name, "should not be empty") do |column|
column != ""
end
end
I recommend the 4th method in most of the cases you will face. Simple to write and easy to read!
Lifecycle Callbacks
Lustra provides a comprehensive callback system to hook into model lifecycle events. Callbacks allow you to execute code at specific points during a model's lifecycle.
Available Callback Events
- `:validate` - Triggered during validation
- `:save` - Triggered for any save operation (wraps create/update)
- `:create` - Triggered when a new record is inserted
- `:update` - Triggered when an existing record is updated
- `:destroy` - Triggered when a record is destroyed (via the `destroy` method, not `delete`)
Callback Directions
- `before` - Executed before the event
- `after` - Executed after the event
Basic Usage
class User
include Lustra::Model
column email : String
column name : String
# Block syntax
before(:validate) do |model|
user = model.as(User)
user.email = user.email.downcase
end
after(:create) do |model|
user = model.as(User)
puts "New user created: #{user.name}"
end
# Method syntax (cleaner for complex logic - auto-casts for you)
before(:save, :normalize_email)
after(:destroy, :cleanup_user_data)
def normalize_email
self.email = email.strip.downcase
end
def cleanup_user_data
# Custom cleanup logic
puts "User #{id} deleted, cleaning up..."
end
end
Callback Execution Order
Before callbacks: Last defined → First defined (reverse order)
before(:save) { puts "1" }
before(:save) { puts "2" }
before(:save) { puts "3" }
# Execution order: 3, 2, 1
After callbacks: First defined → Last defined (normal order)
after(:save) { puts "1" }
after(:save) { puts "2" }
after(:save) { puts "3" }
# Execution order: 1, 2, 3
Common Patterns
Sanitizing data before validation:
before(:validate) do |model|
user = model.as(User)
user.email = user.email.strip.downcase if user.email_column.defined?
end
Sending notifications after creation:
after(:create) do |model|
user = model.as(User)
WelcomeMailer.send(user.email)
end
Cleanup after deletion:
after(:destroy) do |model|
user = model.as(User)
FileStorage.delete(user.avatar_path) if user.avatar_path
end
Auditing changes:
after(:update) do |model|
user = model.as(User)
AuditLog.create!(
model_type: "User",
model_id: user.id,
action: "update"
)
end
Callbacks with Associations
Callbacks work seamlessly with associations like `counter_cache` and `touch`:
class Post
include Lustra::Model
belongs_to user : User, counter_cache: true # Uses after(:create) and after(:destroy)
belongs_to category : Category, touch: true # Uses after(:create) and after(:update)
end
The `counter_cache` option automatically registers `after(:create)` and `after(:destroy)` callbacks to increment/decrement the counter on the parent model.
Migration
Lustra of course offers a migration system.
Migrations must have an order number.
This number can be written at the end of the class name itself:
class Migration1
include Lustra::Migration
def change(dir)
#...
end
end
Using the filename
Another way is to write all your migrations one file per migration,
naming each file using the `[number]_migration_description.cr` pattern.
In this case, the migration class name doesn't need to have a number at the end.
# in src/db/migrations/1234_create_table.cr
class CreateTable
include Lustra::Migration
def change(dir)
#...
end
end
Migration examples
Migrations must implement the method `change(dir : Migration::Direction)`.
`Direction` is the current direction of the migration (up or down).
It provides a few methods: `up?`, `down?`, `up(&block)`, `down(&block)`.
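For irreversible or direction-specific operations, the direction object lets you branch explicitly (a sketch; `execute` for raw SQL inside migrations is an assumption):

```crystal
def change(dir)
  dir.up do
    execute("CREATE INDEX users_email_idx ON users (lower(email))") # assumed raw-SQL helper
  end

  dir.down do
    execute("DROP INDEX users_email_idx")
  end
end
```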
You can create a table:
def change(dir)
create_table(:test) do |t|
t.column :first_name, :string, index: true
t.column :last_name, :string, unique: true
t.index "lower(first_name || ' ' || last_name)", using: :btree
t.timestamps
end
end
Constraints
I strongly encourage you to use Postgres foreign key constraints for your references:
t.references to: "users", on_delete: "cascade", null: false
There's no plan to offer the `on_delete` feature at the Crystal level, like
`dependent` in ActiveRecord. That's a standard PG feature; just set it
up in your migration.
Licensing
This shard is provided under the MIT license.
Contribution
All contributions are welcome! As Lustra is a specialized ORM for PostgreSQL, be sure a great contribution on a very specific PG feature will be incorporated into this shard. I hope one day we will cover all the features of PG here!
Running Tests
In order to run the test suite, you will need the PostgreSQL service locally available via a socket for access with `psql`. `psql` will attempt to use the `postgres` user to create the test database. If you are working with a newly installed database that does not have the `postgres` user, it can be created with `createuser -s postgres`.