I built this little helper to provide an easy way of describing a database schema, and to automate database initialization when installing a project - or migrating the database to newer versions. It works on top of knex, and most functions require passing a knex instance.
The concept is like this:
- You have a folder in your project containing the description of the schema (see the example layout after this list):
  - `schema.json` - Contains the full schema for a fresh installation (used also for data in upgrade schemas)
  - `upgrade.####.json` - Contains an upgrade script for a single version
  - `version.json` - Contains the current DB version. A version is a whole number, and is 1-based. You could start with any number you like.
- You call either `install` or `upgrade` in order to install a fresh database or migrate.
- You can call `isInstallNeeded` and `isUpgradeNeeded` to determine if you need to call `install` or `upgrade`. Maybe use it to automatically redirect to a screen telling the admin that a fresh installation or an upgrade process is about to begin...
- You can manually call individual helper functions to create a table, column etc. (i.e. when you create tables dynamically with a predefined schema...)
- Call `setTablePrefix(prefix)` before any `install` or `upgrade`, if you want to prefix the table names with something. (Use `{table_prefix}` as a placeholder in raw statements.)
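For example, a schema folder might look like this (the `db_schema` name and the upgrade versions are just an illustration):

```
db_schema/
    schema.json
    upgrade.2.json
    upgrade.3.json
    version.json
```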
Usage example:
```js
var path = require('path'),
    knex = require('knex'),
    schemaInstaller = require('knex-schema-builder'),
    db = knex(require('./db-config.js')),
    schemaPath = path.join(__dirname, './db_schema');

// In order to initialize a fresh db
schemaInstaller.install(db, schemaPath, function (err) { ... });

// In order to upgrade a db... We can call "upgrade" directly,
// or we can tell the user that an upgrade is needed and that
// he should authorize the upgrade process.
schemaInstaller.isUpgradeNeeded(db, schemaPath, function (err, required) {
    if (err) {
        // Handle error...
    } else {
        if (required) {
            schemaInstaller.upgrade(db, schemaPath, function (err) {
                if (err) {
                    // An error occurred...
                    // Please take care of the problem manually,
                    // and then try to run the upgrade routine again.
                } else {
                    // Your database has been upgraded successfully!
                }
            });
        } else {
            // Your database is up to date! No upgrade needed
        }
    }
});
```
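A fresh-install check works the same way. Here is a minimal sketch continuing the example above, which also applies a table prefix; it assumes `setTablePrefix` is exported on the module like `install`/`upgrade`, and that `isInstallNeeded` mirrors the signature of `isUpgradeNeeded`:

```js
// Prefix all table names; raw statements can reference the prefix via {table_prefix}
schemaInstaller.setTablePrefix('myapp_');

// Assumption: isInstallNeeded(db, schemaPath, callback) mirrors isUpgradeNeeded
schemaInstaller.isInstallNeeded(db, schemaPath, function (err, required) {
    if (err) {
        // Handle error...
    } else if (required) {
        // A fresh installation is about to begin - maybe show a setup screen first
        schemaInstaller.install(db, schemaPath, function (err) {
            if (!err) {
                // Fresh schema installed, with tables named myapp_*
            }
        });
    }
});
```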
The `version.json` file is simply:

```json
{ "version": 1 }
```
The upgrade files work like this:

- The name of each file contains the version to which you upgrade. i.e. `upgrade.2.json` contains the schema for upgrading from version 1 to version 2.
- The schema is an array of actions; each action has the key `action` set, along with its options.
- Each action can optionally have a `min_version` and/or `max_version` to limit whether that specific action will be executed. (i.e. if upgrading from an older version, you might not want to create certain columns, as they have already been created due to a `createTable` action.)
- Each action can optionally have `"ignore_errors": true` specified to ignore errors on that specific action.
upgrade.2.json:

```json
[
    { "action": "dropForeign", "table": "user", "name": "fk_user_to_some_old_table" },
    { "action": "dropTable", "table": "some_old_table" },
    { "action": "createTable", "table": "user_special_data" },
    { "action": "createTableIndexes", "table": "user_special_data" },
    { "action": "createTableForeignKeys", "table": "user_special_data" },
    { "action": "execute", "query": "INSERT INTO user_special_data (id) SELECT id FROM user" }
]
```
upgrade.8.json:

```json
[
    { "action": "addColumn", "table": "user_special_data", "column": "estimated_age", "min_version": 2 }
]
```
What happens here is that in the upgrade to version 2, the table `user_special_data` was created. If a user then upgrades from version 1 to version 10, the table will be created with all of its columns right away, and the `addColumn` in upgrade step 8 would fail because the column already exists.
The solution is `min_version`: that action will only take place if the version you are upgrading from is at least 2, which means the table already existed but does not yet have the new column.
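`max_version` works the same way from the other direction. For example, a hypothetical action that should only run when upgrading from a sufficiently old version (the column name is made up, and this assumes the `{table_prefix}` placeholder is honored in `execute` queries like in other raw statements):

```json
[
    { "action": "execute", "query": "UPDATE {table_prefix}user SET needs_migration = 1", "max_version": 5, "ignore_errors": true }
]
```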
The available actions are:

- `execute (query)`: Executes the query in the `query` key
- `createTable (table)`: Creates the table named `table`, without its indexes and foreign keys. You usually want to postpone those to the end of the script.
- `createTableIndexes (table)`: Creates the indexes for the table named `table`
- `createTableForeignKeys (table)`: Creates the foreign keys for the table named `table`
- `addColumn (table, column)`: Creates the specified column (`column`) in the table named `table`
- `alterColumn (table, column)`: Alters the specified column (`column`) in the table named `table`
- `renameColumn (table, from, to)`: Renames the `from` column to `to` in the table named `table`
- `createIndex (table, name, columns, unique)`: Creates an index on the specified `table`, using the same syntax as in the schema file
- `createForeign (table, columns, foreign_table, foreign_columns, on_delete, on_update)`: Creates a foreign key on the specified `table`, using the same syntax as in the schema file
- `dropColumn (table, column)`: Drops the specified column (`column`) in the table named `table`
- `dropTable (table)`: Drops the table named `table`
- `dropPrimary (table)`: Drops the primary key in the table named `table`
- `dropIndex (table, column)`: Drops the index on the specified column/columns (`column` can be an array or a single string) in the table named `table`
- `dropIndex (table, name)`: Drops the index named `name` in the table named `table`
- `dropForeign (table, column)`: Drops the foreign key on the specified column/columns (`column` can be an array or a single string) in the table named `table`
- `dropForeign (table, name)`: Drops the foreign key named `name` in the table named `table`
- `dropUnique (table, column)`: Drops the unique constraint on the specified column/columns (`column` can be an array or a single string) in the table named `table`
- `dropUnique (table, name)`: Drops the unique constraint named `name` in the table named `table`
- `addTimestamps (table)`: Adds the timestamps (`created_at` and `updated_at`) to the table named `table`
- `dropTimestamps (table)`: Drops the timestamps (`created_at` and `updated_at`) from the table named `table`
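For illustration, here is how a few of these actions might appear together in an upgrade script. The table, index, and column names are hypothetical, but the option keys map to the parameters listed above:

```json
[
    { "action": "createIndex", "table": "user_special_data", "name": "idx_estimated_age", "columns": ["estimated_age"], "unique": false },
    { "action": "createForeign", "table": "user_special_data", "columns": ["id"], "foreign_table": "user", "foreign_columns": ["id"], "on_delete": "CASCADE", "on_update": "CASCADE" },
    { "action": "dropColumn", "table": "user_special_data", "column": "obsolete_flag", "ignore_errors": true }
]
```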
The `schema.json` file has the following structure:

```js
{
    "schema": {
        "<TABLE_NAME>": {
            "columns": [
                {
                    "name": "<COLUMN_NAME>",
                    "type": "<TYPE>",
                    "length": <LENGTH>,
                    "text_type": "<TEXT_TYPE>",
                    "precision": <PRECISION>,
                    "scale": <SCALE>,
                    "default": <DEFAULT VALUE>,
                    "raw_default": <RAW DEFAULT VALUE>,
                    "unique": true/false,
                    "primary_key": true/false,
                    "nullable": true/false,
                    "enum_values": ["option1", "option2", ...],
                    "collate": "<COLLATION>"
                },
                ...
            ],
            "indexes": [
                {
                    "name": "<INDEX_NAME>",
                    "columns": "<COLUMN_NAME>" or ["<COLUMN_NAME>", ...],
                    "unique": true/false
                }
            ],
            "foreign_keys": [
                {
                    "columns": "<COLUMN_NAME>" or ["<COLUMN_NAME>", ...],
                    "foreign_table": "<FOREIGN_TABLE_NAME>",
                    "foreign_columns": "<COLUMN_NAME>" or ["<COLUMN_NAME>", ...],
                    "on_delete": "<FOREIGN_COMMAND>",
                    "on_update": "<FOREIGN_COMMAND>"
                }
            ],
            "primary_key": ["<COLUMN_NAME>", ...],
            "engine": "<MYSQL_ENGINE_TYPE>",
            "charset": "<CHARSET>",
            "collate": "<COLLATION>",
            "timestamps": true/false // Adds a created_at and updated_at column on the table, setting these each to dateTime types.
        }
    },
    "raw": [
        "A raw query here",
        [
            "multiline raw query",
            "separated by a comma"
        ]
    ]
}
```
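A minimal concrete sketch of such a file, using a hypothetical `user` table (the column and index names are just for illustration):

```json
{
    "schema": {
        "user": {
            "columns": [
                { "name": "id", "type": "increments" },
                { "name": "email", "type": "string", "length": 128, "nullable": false },
                { "name": "balance", "type": "decimal", "precision": 10, "scale": 2, "default": 0 }
            ],
            "indexes": [
                { "name": "idx_user_email", "columns": "email", "unique": true }
            ],
            "timestamps": true
        }
    },
    "raw": [
        "UPDATE {table_prefix}user SET balance = 0"
    ]
}
```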
Available types:

- `unsigned <TYPE>` (Makes an unsigned type)
- `increments` / `bigIncrements` (These are unsigned!)
- `integer`
- `bigInteger`
- `text` (Use the optional `text_type` for specifying a specific type. Default is `text`)
- `tinytext`, `mediumtext`, `longtext`
- `char`
- `string`, `varchar` (Is a `VARCHAR`, and defaults to 255 in length)
- `float`
- `double`
- `decimal`
- `boolean`
- `date`
- `dateTime`
- `time`
- `timestamp` / `timestamptz`
- `binary`
- `enum` (Use with `enum_values`)
- `json` / `jsonb`
- `uuid`
- `:<OTHER_TYPE>` will use a db-specific type that is not in the predefined list above
Column options:

- `length`: Specifies the length of a `string` column. Defaults to 255.
- `text_type`: Specifies a specific native text type for the `text` column. Defaults to `"text"`.
- `precision`: How many digits this number can hold in total. i.e. a precision of 5 will be able to hold "12345", "123.45", or "0.1234".
- `scale`: How many digits (out of the precision) are used for the decimal part. i.e. a precision of 5 and a scale of 2 will be able to hold "123.45" or "123.4", but not "123.456" and not "12.345".
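So, for example, a money-style column could be declared like this (the `price` column is purely for illustration); with a precision of 10 and a scale of 2 it can hold values up to 99999999.99:

```json
{ "name": "price", "type": "decimal", "precision": 10, "scale": 2 }
```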
Available engine types (only applicable to MySql databases):

- `InnoDB`
- `MyISAM`
- `Memory`
- `Merge`
- `Archive`
- `Federated`
- `NDB`
If you have anything to contribute, or functionality that you lack - you are more than welcome to participate in this! If anyone wishes to contribute unit tests - that also would be great :-)
- Hi! I am Daniel Cohen Gindi. Or in short, Daniel.
- danielgindi@gmail.com is my email address.
- That's all you need to know.
All the code here is under the MIT license, which means you can do virtually anything with the code. I will appreciate it very much if you keep an attribution where appropriate.
The MIT License (MIT)
Copyright (c) 2013 Daniel Cohen Gindi (danielgindi@gmail.com)
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.