Lapis comes with a set of classes and functions for working with either PostgreSQL or MySQL. In the future other databases will be directly supported. In the meantime, you're free to use other OpenResty database drivers; you just won't have access to Lapis' query API.
Every query is performed asynchronously through the OpenResty cosocket API. A request will yield and resume automatically, so there's no need to code with callbacks; queries can be written sequentially as if they were in a synchronous environment. Additionally, connections to the server are automatically pooled for optimal performance.
Depending on which database you use, a different library is used:
pgmoon is the driver used to run PostgreSQL queries. It has the advantage of being usable both within OpenResty's cosocket API and on the command line using LuaSocket's synchronous API.
When in the context of the server, lua-resty-mysql is the driver used to run MySQL queries. When on the command line, LuaSQL with MySQL is used.
You'll need to configure Lapis so it can connect to the database. If you're using PostgreSQL, create a postgres block in your config.moon or config.lua file.
-- config.lua
local config = require("lapis.config")

config("development", {
  postgres = {
    host = "127.0.0.1",
    user = "pg_user",
    password = "the_password",
    database = "my_database"
  }
})
-- config.moon
import config from require "lapis.config"

config "development", ->
  postgres ->
    host "127.0.0.1"
    user "pg_user"
    password "the_password"
    database "my_database"
host defaults to 127.0.0.1 and user defaults to postgres, so you can leave those fields out if they aren't different from the defaults. If a non-default port is required it can be appended to the host with colon syntax: my_host:1234 (otherwise 5432, the PostgreSQL default, is used).
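For instance, building on the config above, connecting on a non-default port can be sketched like this (5433 is a placeholder port, not something the docs prescribe):

```lua
-- config.lua (sketch): appending a non-default PostgreSQL port with colon syntax
local config = require("lapis.config")

config("development", {
  postgres = {
    host = "127.0.0.1:5433", -- connect to port 5433 instead of the default 5432
    user = "pg_user",
    password = "the_password",
    database = "my_database"
  }
})
```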
If you're using MySQL the approach is similar, but you will define a mysql block:
-- config.lua
local config = require("lapis.config")

config("development", {
  mysql = {
    host = "127.0.0.1",
    user = "mysql_user",
    password = "the_password",
    database = "my_database"
  }
})
-- config.moon
import config from require "lapis.config"

config "development", ->
  mysql ->
    host "127.0.0.1"
    user "mysql_user"
    password "the_password"
    database "my_database"
You're now ready to start making queries.
There are two ways to make queries:

The raw query interface, a collection of functions for making queries directly.
The Model class, a wrapper around a Lua table that helps you synchronize it with a row in a database table.

The Model class is the preferred way to interact with the database. The raw query interface is for achieving things the Model class is unable to do easily.
Here’s an example of the raw query interface:
local lapis = require("lapis")
local db = require("lapis.db")

local app = lapis.Application()

app:match("/", function()
  local res = db.query("select * from my_table where id = ?", 10)
  return "ok!"
end)

return app
lapis = require "lapis"
db = require "lapis.db"

class extends lapis.Application
  "/": =>
    res = db.query "select * from my_table where id = ?", 10
    "ok!"
And the same query represented with the Model class:
local lapis = require("lapis")
local Model = require("lapis.db.model").Model

local app = lapis.Application()

local MyTable = Model:extend("my_table")

app:match("/", function()
  local row = MyTable:find(10)
  return "ok!"
end)

return app
lapis = require "lapis"
import Model from require "lapis.db.model"

class MyTable extends Model

class extends lapis.Application
  "/": =>
    row = MyTable\find 10
    "ok!"
By default all queries will log to the Nginx notice log. You'll be able to see each query as it happens.
local db = require("lapis.db")
db = require "lapis.db"
The db module provides the following functions:
query(query, params...)
Performs a raw query. Returns the result set if successful.
The first argument is the query to perform. If the query contains any ? markers, they are replaced in the order they appear with the remaining arguments. The remaining arguments are escaped with escape_literal before being interpolated, making SQL injection impossible.
local res
res = db.query("SELECT * FROM hello")
res = db.query("UPDATE things SET color = ?", "blue")
res = db.query("INSERT INTO cats (age, name, alive) VALUES (?, ?, ?)", 25, "dogman", true)
res = db.query "SELECT * FROM hello"
res = db.query "UPDATE things SET color = ?", "blue"
res = db.query "INSERT INTO cats (age, name, alive) VALUES (?, ?, ?)", 25, "dogman", true
SELECT * FROM hello
UPDATE things SET color = 'blue'
INSERT INTO cats (age, name, alive) VALUES (25, 'dogman', TRUE)
A query that fails to execute will raise a Lua error. The error will contain the message from the database along with the query.
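If you need to recover from a failed query rather than let the error propagate, the standard Lua pcall idiom applies. A minimal sketch, using a stand-in function in place of a real db.query call (the error text is hypothetical):

```lua
-- Sketch: recovering from a query error with pcall.
-- failing_query stands in for a db.query call that raises on failure.
local function failing_query()
  error("ERROR: relation \"missing_table\" does not exist")
end

local ok, err = pcall(failing_query)
if not ok then
  -- err holds the database's message along with the failing query
  print("query failed: " .. tostring(err))
end
```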
select(query, params...)
The same as query except it appends "SELECT" to the front of the query.
local res = db.select("* from hello where active = ?", db.FALSE)
res = db.select "* from hello where active = ?", db.FALSE
SELECT * from hello where active = FALSE
insert(table, values, returning...)
Inserts a row into table. values is a Lua table of column names and values.
db.insert("my_table", {
age = 10,
name = "Hello World"
})
db.insert "my_table", {
age: 10
name: "Hello World"
}
INSERT INTO "my_table" ("age", "name") VALUES (10, 'Hello World')
A list of column names to be returned can be given after the value table:
local res = db.insert("some_other_table", {
  name = "Hello World"
}, "id")
res = db.insert "some_other_table", {
  name: "Hello World"
}, "id"
INSERT INTO "some_other_table" ("name") VALUES ('Hello World') RETURNING "id"
RETURNING is a PostgreSQL feature, and is not available when using MySQL.
update(table, values, conditions, params...)
Updates table with values on all rows that match conditions.
db.update("the_table", {
name = "Dogbert 2.0",
active = true
}, {
id = 100
})
db.update "the_table", {
name: "Dogbert 2.0"
active: true
}, {
id: 100
}
UPDATE "the_table" SET "name" = 'Dogbert 2.0', "active" = TRUE WHERE "id" = 100
conditions can also be a string, and params will be interpolated into it:
db.update("the_table", {
count = db.raw("count + 1")
}, "count < ?", 10)
db.update "the_table", {
count: db.raw"count + 1"
}, "count < ?", 10
UPDATE "the_table" SET "count" = count + 1 WHERE count < 10
When using the table form for conditions, all the extra arguments are used for
the RETURNING
clause:
db.update("cats", {
count = db.raw("count + 1")
}, {
id = 1200
}, "count")
db.update "cats", {
count: db.raw "count + 1"
}, {
id: 1200
}, "count"
UPDATE "cats" SET "count" = count + 1, WHERE "id" = 1200 RETURNING count
RETURNING is a PostgreSQL feature, and is not available when using MySQL.
delete(table, conditions, params...)
Deletes rows from table that match conditions.
db.delete("cats", { name = "Roo" })
db.delete "cats", name: "Roo"
DELETE FROM "cats" WHERE "name" = 'Roo'
conditions can also be a string:
db.delete("cats", "name = ?", "Gato")
db.delete "cats", "name = ?", "Gato"
DELETE FROM "cats" WHERE name = 'Gato'
raw(str)
Returns a special value that will be inserted verbatim into the query without being escaped:
db.update("the_table", {
count = db.raw("count + 1")
})
db.select("* from another_table where x = ?", db.raw("now()"))
db.update "the_table", {
count: db.raw"count + 1"
}
db.select "* from another_table where x = ?", db.raw"now()"
UPDATE "the_table" SET "count" = count + 1
SELECT * from another_table where x = now()
list({values...})
Returns a special value that will be inserted into the query using SQL’s list syntax. It takes a single argument of an array table.
The return value of this function can be used in place of any regular value
passed to a SQL query function. Each item in the list will be escaped with
escape_literal
before being inserted into the query.
Note we can use it both in interpolation and in the conditions table of a db.update call:
local ids = db.list({3, 2, 1, 5})
local res = db.select("* from another_table where id in ?", ids)

db.update("the_table", {
  height = 55
}, {
  id = ids
})
ids = db.list {3, 2, 1, 5}
res = db.select "* from another_table where id in ?", ids

db.update "the_table", {
  height: 55
}, {
  id: ids
}
SELECT * from another_table where id in (3, 2, 1, 5)
UPDATE "the_table" SET "height" = 55 WHERE "id" IN (3, 2, 1, 5)
array({values...})
Converts the argument passed to an array type that will be inserted/updated using PostgreSQL’s array syntax. This function does not exist for MySQL.
The return value of this function can be used in place of any regular value
passed to a SQL query function. Each item in the list will be escaped with
escape_literal
before being inserted into the query.
The argument is converted, not copied. If you need to avoid modifying the argument then create a copy before passing it to this function.
db.insert("some_table", {
tags = db.array({"hello", "world"})
})
db.insert "some_table", {
tags: db.array {"hello", "world"}
}
INSERT INTO "some_table" ("tags") VALUES (ARRAY['hello','world'])
escape_literal(value)
Escapes a value for use in a query. A value is any type that can be stored in a column. Numbers, strings, and booleans will be escaped accordingly.
local escaped = db.escape_literal(value)
local res = db.query("select * from hello where id = " .. escaped)
escaped = db.escape_literal value
res = db.query "select * from hello where id = #{escaped}"
escape_literal is not appropriate for escaping column or table names. See escape_identifier.
escape_identifier(str)
Escapes a string for use in a query as an identifier. An identifier is a column or table name.
local table_name = db.escape_identifier("table")
local res = db.query("select * from " .. table_name)
table_name = db.escape_identifier "table"
res = db.query "select * from #{table_name}"
escape_identifier is not appropriate for escaping values. See escape_literal for escaping values.
interpolate_query(query, ...)
Interpolates a query containing ? markers with the rest of the arguments escaped via escape_literal.
local q = "select * from table "
q = q .. db.interpolate_query("where value = ?", 42)
local res = db.query(q)
q = "select * from table"
q ..= db.interpolate_query "where value = ?", 42
res = db.query q
The following constants are also available:
NULL — represents NULL in SQL
TRUE — represents TRUE in SQL
FALSE — represents FALSE in SQL

db.update("the_table", {
  name = db.NULL
})
db.update "the_table", {
name: db.NULL
}
Lapis comes with a collection of tools for creating your database schema inside
of the lapis.db.schema
module.
create_table(table_name, { table_declarations... })
The first argument to create_table
is the name of the table and the second
argument is an array table that describes the table.
local schema = require("lapis.db.schema")
local types = schema.types

schema.create_table("users", {
  {"id", types.serial},
  {"username", types.varchar},
  "PRIMARY KEY (id)"
})
schema = require "lapis.db.schema"
import create_table, types from schema

create_table "users", {
  {"id", types.serial}
  {"username", types.varchar}
  "PRIMARY KEY (id)"
}
In MySQL you should use types.id to get an autoincrementing primary key ID. Additionally, you should not specify PRIMARY KEY (id).
This will generate the following SQL:
CREATE TABLE IF NOT EXISTS "users" (
  "id" serial NOT NULL,
  "username" character varying(255) NOT NULL,
  PRIMARY KEY (id)
);
The items in the second argument to create_table
can either be a table, or a
string. When the value is a table it is treated as a column/type tuple:
{ column_name, column_type }
They are both plain strings. The column name will be escaped automatically. The column type will be inserted verbatim after it is passed through tostring. schema.types has a collection of common types that can be used. For example, schema.types.varchar evaluates to character varying(255) NOT NULL. See more about types below.
If the value to the second argument is a string then it is inserted directly
into the CREATE TABLE
statement, that’s how we create the primary key above.
drop_table(table_name)
Drops a table.
schema.drop_table("users")
import drop_table from schema
drop_table "users"
DROP TABLE IF EXISTS "users";
create_index(table_name, col1, col2..., [options])
create_index
is used to add new indexes to a table. The first argument is a
table, the rest of the arguments are the ordered columns that make up the
index. Optionally the last argument can be a Lua table of options.
There are two options: unique: BOOL and where: clause_string.
create_index
will also check if the index exists before attempting to create
it. If the index exists then nothing will happen.
Here are some example indexes:
local create_index = schema.create_index
create_index("users", "created_at")
create_index("users", "username", { unique = true })
create_index("posts", "category", "title")
create_index("uploads", "name", { where = "not deleted" })
import create_index from schema
create_index "users", "created_at"
create_index "users", "username", unique: true
create_index "posts", "category", "title"
create_index "uploads", "name", where: "not deleted"
This will generate the following SQL:
CREATE INDEX ON "users" (created_at);
CREATE UNIQUE INDEX ON "users" (username);
CREATE INDEX ON "posts" (category, title);
CREATE INDEX ON "uploads" (name) WHERE not deleted;
drop_index(table_name, col1, col2...)
Drops an index from a table. It calculates the name of the index from the table name and columns. This is the same as the default index name generated by the database on creation.
local drop_index = schema.drop_index
drop_index("users", "created_at")
drop_index("posts", "title", "published")
import drop_index from schema
drop_index "users", "created_at"
drop_index "posts", "title", "published"
This will generate the following SQL:
DROP INDEX IF EXISTS "users_created_at_idx"
DROP INDEX IF EXISTS "posts_title_published_idx"
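The naming pattern above can be sketched in plain Lua (an illustration of the table_col1_col2_idx convention, not Lapis' actual implementation):

```lua
-- Sketch: how a default index name is derived from the table name and columns.
local function index_name(table_name, ...)
  -- join the table name, each column, and the "idx" suffix with underscores
  return table_name .. "_" .. table.concat({...}, "_") .. "_idx"
end

print(index_name("users", "created_at"))         --> users_created_at_idx
print(index_name("posts", "title", "published")) --> posts_title_published_idx
```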
add_column(table_name, column_name, column_type)
Adds a column to a table.
schema.add_column("users", "age", types.integer)
import add_column, types from schema
add_column "users", "age", types.integer
Generates the SQL:
ALTER TABLE "users" ADD COLUMN "age" integer NOT NULL DEFAULT 0
drop_column(table_name, column_name)
Removes a column from a table.
schema.drop_column("users", "age")
import drop_column from schema
drop_column "users", "age"
Generates the SQL:
ALTER TABLE "users" DROP COLUMN "age"
rename_column(table_name, old_name, new_name)
Changes the name of a column.
schema.rename_column("users", "age", "lifespan")
import rename_column from schema
rename_column "users", "age", "lifespan"
Generates the SQL:
ALTER TABLE "users" RENAME COLUMN "age" TO "lifespan"
rename_table(old_name, new_name)
Changes the name of a table.
schema.rename_table("users", "members")
import rename_table from schema
rename_table "users", "members"
Generates the SQL:
ALTER TABLE "users" RENAME TO "members"
All of the column type generators are stored in schema.types
. All the types
are special objects that can either be turned into a type declaration string
with tostring
, or called like a function to be customized.
Here are all the default values:
local types = require("lapis.db.schema").types
print(types.boolean) --> boolean NOT NULL DEFAULT FALSE
print(types.date) --> date NOT NULL
print(types.double) --> double precision NOT NULL DEFAULT 0
print(types.foreign_key) --> integer NOT NULL
print(types.integer) --> integer NOT NULL DEFAULT 0
print(types.numeric) --> numeric NOT NULL DEFAULT 0
print(types.real) --> real NOT NULL DEFAULT 0
print(types.serial) --> serial NOT NULL
print(types.text) --> text NOT NULL
print(types.time) --> timestamp without time zone NOT NULL
print(types.varchar) --> character varying(255) NOT NULL
print(types.enum) --> smallint NOT NULL
import types from require "lapis.db.schema"
types.boolean --> boolean NOT NULL DEFAULT FALSE
types.date --> date NOT NULL
types.double --> double precision NOT NULL DEFAULT 0
types.foreign_key --> integer NOT NULL
types.integer --> integer NOT NULL DEFAULT 0
types.numeric --> numeric NOT NULL DEFAULT 0
types.real --> real NOT NULL DEFAULT 0
types.serial --> serial NOT NULL
types.text --> text NOT NULL
types.time --> timestamp without time zone NOT NULL
types.varchar --> character varying(255) NOT NULL
types.enum --> smallint NOT NULL
You'll notice everything is NOT NULL
by default, and the numeric types have
defaults of 0 and boolean false.
When a type is called like a function it takes one argument, a table of options. The options include:
default: value — sets default value
null: boolean — determines if the column is NOT NULL
unique: boolean — determines if the column has a unique index
primary_key: boolean — determines if the column is the primary key
array: bool|number — makes the type an array (PostgreSQL only); pass a number to set how many dimensions the array is, true == 1
Here are some examples:
types.integer({ default = 1, null = true }) --> integer DEFAULT 1
types.integer({ primary_key = true }) --> integer NOT NULL DEFAULT 0 PRIMARY KEY
types.text({ null = true }) --> text
types.varchar({ primary_key = true }) --> character varying(255) NOT NULL PRIMARY KEY
types.real({ array = true }) --> real[]
types.integer default: 1, null: true --> integer DEFAULT 1
types.integer primary_key: true --> integer NOT NULL DEFAULT 0 PRIMARY KEY
types.text null: true --> text
types.varchar primary_key: true --> character varying(255) NOT NULL PRIMARY KEY
types.real array: true --> real[]
types.text array: 2 --> text[][]
MySQL has a completely different type set than PostgreSQL; see MySQL types.
Because requirements typically change over the lifespan of a web application it’s useful to have a system to make incremental schema changes to the database.
We define migrations in our code as a table of functions where the key of each function in the table is the name of the migration. You are free to name the migrations anything but it’s suggested to give them Unix timestamps as names:
local schema = require("lapis.db.schema")

return {
  [1368686109] = function()
    schema.add_column("my_table", "hello", schema.types.integer)
  end,

  [1368686843] = function()
    schema.create_index("my_table", "hello")
  end
}
import add_column, create_index, types from require "lapis.db.schema"

{
  [1368686109]: =>
    add_column "my_table", "hello", types.integer

  [1368686843]: =>
    create_index "my_table", "hello"
}
A migration function is a plain function. Generally they will call the schema functions described above, but they don’t have to.
Only the functions that haven’t already been executed will be called when we tell our migrations to run. The migrations that have already been run are stored in the migrations table, a database table that holds the names of the migrations that have already been run. Migrations are run in the order of their keys sorted ascending.
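The run order can be sketched in plain Lua (an illustration of the idea, not Lapis' implementation): collect the keys, sort them ascending, and call only the migrations that haven't been recorded yet.

```lua
-- Sketch: run migrations in ascending key order, skipping ones already run.
local migrations = {
  [1368686843] = function() return "second" end,
  [1368686109] = function() return "first" end,
}

local already_run = {} -- stand-in for names stored in the migrations table

-- collect and sort the keys ascending
local keys = {}
for k in pairs(migrations) do keys[#keys + 1] = k end
table.sort(keys)

-- execute only the pending migrations, in sorted order
local ran = {}
for _, k in ipairs(keys) do
  if not already_run[k] then
    ran[#ran + 1] = migrations[k]()
  end
end
-- ran is { "first", "second" }
```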
The Lapis command line tool has a special command for running migrations. It's called lapis migrate.
This command expects a module called migrations that returns a table of migrations in the format described above.
Let’s create this file with a single migration as an example.
-- migrations.lua
local schema = require("lapis.db.schema")
local types = schema.types

return {
  [1] = function()
    schema.create_table("articles", {
      { "id", types.serial },
      { "title", types.text },
      { "content", types.text },
      "PRIMARY KEY (id)"
    })
  end
}
-- migrations.moon
import create_table, types from require "lapis.db.schema"

{
  [1]: =>
    create_table "articles", {
      { "id", types.serial }
      { "title", types.text }
      { "content", types.text }
      "PRIMARY KEY (id)"
    }
}
After creating the file, ensure that it is compiled to Lua and run lapis migrate. The command will first create the migrations table if it doesn't exist yet, then it will run every migration that hasn't been executed.
Read more about the migrate command.
We can manually create the migrations table using the following code:
local migrations = require("lapis.db.migrations")
migrations.create_migrations_table()
migrations = require "lapis.db.migrations"
migrations.create_migrations_table!
It will execute the following SQL:
CREATE TABLE IF NOT EXISTS "lapis_migrations" (
  "name" character varying(255) NOT NULL,
  PRIMARY KEY(name)
);
Then we can manually run migrations with the following code:
local migrations = require("lapis.db.migrations")
migrations.run_migrations(require("migrations"))
import run_migrations from require "lapis.db.migrations"
run_migrations require "migrations"
These are additional helper functions from the db module that aren't directly related to the query interface.
format_date(time)
Returns a date string formatted properly for insertion in the database.
The time argument is optional and defaults to the current UTC time.
local date = db.format_date()
db.query("update things set published_at = ?", date)
date = db.format_date!
db.query "update things set published_at = ?", date