✨ Add SeaORM integration for database access

parent b2b0d8bd3e
commit 3160a9e8dc
18 changed files with 3587 additions and 14 deletions

Cargo.lock (generated): 1488 changes
File diff suppressed because it is too large.
Workspace manifest: 1 addition

```diff
@@ -11,6 +11,7 @@ members = [
     # Packages
     "packages/pagetop-aliner",
     "packages/pagetop-bootsier",
+    "packages/pagetop-seaorm",

     # App
     "drust",
```
packages/pagetop-seaorm/CREDITS.md (new file, 67 lines)

# 🔃 Dependencies

PageTop is developed in the [Rust programming language](https://www.rust-lang.org/) and stands on
the shoulders of true giants, using some of the most stable and renowned libraries (*crates*) from
the [Rust ecosystem](https://lib.rs), such as:

* [Actix Web](https://actix.rs/) for web services and server management.
* [Tracing](https://github.com/tokio-rs/tracing) for the diagnostic system and structured logging.
* [Fluent templates](https://github.com/XAMPPRocky/fluent-templates), which incorporates
  [Fluent](https://projectfluent.org/) for project internationalization.
* [SeaORM](https://www.sea-ql.org/SeaORM/), which employs [SQLx](https://docs.rs/sqlx/latest/sqlx/)
  for database access and modeling.
* Among others, which you can review in the PageTop
  [`Cargo.toml`](https://github.com/manuelcillero/pagetop/blob/main/Cargo.toml) file.

# ⌨️ Code

PageTop integrates code from various renowned crates to enhance functionality:

* [**Config (v0.11.0)**](https://github.com/mehcode/config-rs/tree/0.11.0): Includes code from
  [config-rs](https://crates.io/crates/config) by [Ryan Leckey](https://crates.io/users/mehcode),
  chosen for its advantages in reading configuration settings and delegating assignment to safe
  types, tailored to the specific needs of each package, theme, or application.

* [**Maud (v0.25.0)**](https://github.com/lambda-fairy/maud/tree/v0.25.0/maud): An adapted version
  of the excellent [maud](https://crates.io/crates/maud) crate by
  [Chris Wong](https://crates.io/users/lambda-fairy), incorporated to leverage its functionality
  without requiring a reference to `maud` in the `Cargo.toml` files.

* **SmartDefault (v0.7.1)**: Embeds [SmartDefault](https://crates.io/crates/smart_default) by
  [Idan Arye](https://crates.io/users/idanarye) as `AutoDefault` to simplify the documentation of
  `Default` implementations and to remove the need to explicitly list `smart_default` in the
  `Cargo.toml` files.

* **Database Operations**: PageTop employs [SQLx](https://github.com/launchbadge/sqlx) and
  [SeaQuery](https://github.com/SeaQL/sea-query), complemented by a custom version of
  [SeaORM Migration](https://github.com/SeaQL/sea-orm/tree/master/sea-orm-migration) (version
  [0.12.8](https://github.com/SeaQL/sea-orm/tree/0.12.8/sea-orm-migration/src)). This modification
  confines migration processes to specific packages, enhancing modularity and maintainability.

# 🗚 FIGfonts

PageTop uses the [figlet-rs](https://crates.io/crates/figlet-rs) package by *yuanbohan* to display a
presentation banner in the terminal with the application's name using
[FIGlet](http://www.figlet.org) characters. The fonts included in `src/app` are:

* [slant.flf](http://www.figlet.org/fontdb_example.cgi?font=slant.flf) by *Glenn Chappell*
* [small.flf](http://www.figlet.org/fontdb_example.cgi?font=small.flf) by *Glenn Chappell* (default)
* [speed.flf](http://www.figlet.org/fontdb_example.cgi?font=speed.flf) by *Claude Martins*
* [starwars.flf](http://www.figlet.org/fontdb_example.cgi?font=starwars.flf) by *Ryan Youck*

# 📰 Templates

* The default welcome homepage design is based on the
  [Zinc](https://themewagon.com/themes/free-bootstrap-5-html5-business-website-template-zinc)
  template created by [inovatik](https://inovatik.com/) and distributed by
  [ThemeWagon](https://themewagon.com).

# 🎨 Icon

"The creature" smiling is a fun creation by [Webalys](https://www.iconfinder.com/webalys). It can be
found in their [Nasty Icons](https://www.iconfinder.com/iconsets/nasty) collection available on
[ICONFINDER](https://www.iconfinder.com).
packages/pagetop-seaorm/Cargo.toml (new file, 35 lines)

```toml
[package]
name = "pagetop-seaorm"
version = "0.0.1"
edition = "2021"

description = """\
Integrate SeaORM as the database framework for PageTop applications.\
"""
categories = ["web-programming", "database"]
keywords = ["pagetop", "database", "sql", "orm"]

homepage = { workspace = true }
repository = { workspace = true }
authors = { workspace = true }
license = { workspace = true }

[dependencies]
pagetop.workspace = true

async-trait = "0.1.83"
futures = "0.3.31"
serde.workspace = true
static-files.workspace = true
url = "2.5.4"

[dependencies.sea-orm]
version = "1.1.1"
features = [
    "debug-print", "macros", "runtime-async-std-native-tls",
    "sqlx-mysql", "sqlx-postgres", "sqlx-sqlite",
]
default-features = false

[dependencies.sea-schema]
version = "0.16.0"
```
packages/pagetop-seaorm/README.md (new file, 127 lines)

<div align="center">

<img src="https://raw.githubusercontent.com/manuelcillero/pagetop/main/static/banner.png" />

<h1>PageTop</h1>

<p>An opinionated web framework to build modular <em>Server-Side Rendering</em> web solutions.</p>

[](#-license)
[](https://docs.rs/pagetop)
[](https://crates.io/crates/pagetop)
[](https://crates.io/crates/pagetop)

</div>

## Overview

The PageTop core API provides a comprehensive toolkit for extending its functionalities to specific
requirements and application scenarios through actions, components, packages, and themes:

* **Actions** serve as a mechanism to customize PageTop's internal behavior by intercepting its
  execution flow.
* **Components** encapsulate HTML, CSS, and JavaScript into functional, configurable, and
  well-defined units.
* **Packages** extend or customize existing functionality by interacting with PageTop APIs or
  third-party package APIs.
* **Themes** enable developers to alter the appearance of pages and components without affecting
  their functionality.

# ⚡️ Quick start

```rust
use pagetop::prelude::*;

struct HelloWorld;

impl PackageTrait for HelloWorld {
    fn configure_service(&self, scfg: &mut service::web::ServiceConfig) {
        scfg.route("/", service::web::get().to(hello_world));
    }
}

async fn hello_world(request: HttpRequest) -> ResultPage<Markup, ErrorPage> {
    Page::new(request)
        .with_component(Html::with(html! { h1 { "Hello World!" } }))
        .render()
}

#[pagetop::main]
async fn main() -> std::io::Result<()> {
    Application::prepare(&HelloWorld).run()?.await
}
```

This program features a `HelloWorld` package, providing a service that serves a greeting web page
accessible via `http://localhost:8088` under default settings.

# 📂 Repository Structure

This repository is organized into a workspace that includes several subprojects, each serving a
distinct role within the PageTop ecosystem:

## Application

* [drust](https://github.com/manuelcillero/pagetop/tree/latest/drust):
  A simple Content Management System (CMS) built on PageTop, which enables the creation, editing,
  and maintenance of dynamic, fast, and modular websites. It uses the following essential packages
  to provide standard CMS functionalities.

## Helpers

* [pagetop-macros](https://github.com/manuelcillero/pagetop/tree/latest/helpers/pagetop-macros):
  A collection of procedural macros that enhance the development experience within PageTop.

* [pagetop-build](https://github.com/manuelcillero/pagetop/tree/latest/helpers/pagetop-build):
  Simplifies the process of embedding resources directly into binary files for PageTop applications.

## Packages

* [pagetop-user](https://github.com/manuelcillero/pagetop/tree/latest/packages/pagetop-user):
  Facilitates user management, including roles, permissions, and session handling, for applications
  built on PageTop.

* [pagetop-admin](https://github.com/manuelcillero/pagetop/tree/latest/packages/pagetop-admin):
  Provides a unified interface for administrators to configure and manage package settings.

* [pagetop-node](https://github.com/manuelcillero/pagetop/tree/latest/packages/pagetop-node):
  Enables the creation and customization of content types, enhancing website content management.

## Themes

* [pagetop-bootsier](https://github.com/manuelcillero/pagetop/tree/latest/packages/pagetop-bootsier):
  Utilizes the *[Bootstrap](https://getbootstrap.com/)* framework to offer versatile page layouts
  and component stylings.

* [pagetop-bulmix](https://github.com/manuelcillero/pagetop/tree/latest/packages/pagetop-bulmix):
  Utilizes the *[Bulma](https://bulma.io/)* framework for sleek, responsive design elements.

# 🚧 Warning

The **PageTop** framework is currently in active development. The API is unstable and subject to
frequent changes. Production use is not recommended until version **0.1.0**.

# 📜 License

PageTop is free, open source and permissively licensed! Except where noted (below and/or in
individual files), all code in this project is dual-licensed under either:

* MIT License
  ([LICENSE-MIT](LICENSE-MIT) or https://opensource.org/licenses/MIT)

* Apache License, Version 2.0,
  ([LICENSE-APACHE](LICENSE-APACHE) or https://www.apache.org/licenses/LICENSE-2.0)

at your option. This means you can select the license you prefer! This dual-licensing approach is
the de-facto standard in the Rust ecosystem.

# ✨ Contributions

Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in the
work by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any
additional terms or conditions.
packages/pagetop-seaorm/src/config.rs (new file, 70 lines)

```rust
//! Configuration settings for the SeaORM PageTop package.
//!
//! Example:
//!
//! ```toml
//! [database]
//! db_type = "mysql"
//! db_name = "db"
//! db_user = "user"
//! db_pass = "password"
//! db_host = "localhost"
//! db_port = 3306
//! max_pool_size = 5
//! ```
//!
//! Usage:
//!
//! ```rust
//! use pagetop_seaorm::config;
//!
//! assert_eq!(config::SETTINGS.database.db_host, "localhost");
//! ```
//!
//! See [`pagetop::config`](pagetop::config) to learn how **PageTop** reads configuration files and
//! uses settings.

use pagetop::prelude::*;

use serde::Deserialize;

include_config!(SETTINGS: Settings => [
    // [database]
    "database.db_type" => "",
    "database.db_name" => "",
    "database.db_user" => "",
    "database.db_pass" => "",
    "database.db_host" => "localhost",
    "database.db_port" => 0,
    "database.max_pool_size" => 5,
]);

#[derive(Debug, Deserialize)]
/// Type for database configuration settings, section [`[database]`](Database) (used by
/// [`SETTINGS`]).
pub struct Settings {
    pub database: Database,
}

#[derive(Debug, Deserialize)]
/// Struct for section `[database]` of the [`Settings`] type.
pub struct Database {
    /// Database type: *"mysql"*, *"postgres"*, or *"sqlite"*.
    /// Default: *""*.
    pub db_type: String,
    /// Name (for mysql/postgres) or file reference (for sqlite) of the database.
    /// Default: *""*.
    pub db_name: String,
    /// User for the database connection (for mysql/postgres).
    /// Default: *""*.
    pub db_user: String,
    /// Password for the database connection (for mysql/postgres).
    /// Default: *""*.
    pub db_pass: String,
    /// Host for the database connection (for mysql/postgres).
    /// Default: *"localhost"*.
    pub db_host: String,
    /// Port for the database connection, typically 3306 (mysql) or 5432 (postgres).
    /// Default: *0*.
    pub db_port: u16,
    /// Maximum number of enabled connections.
    /// Default: *5*.
    pub max_pool_size: u32,
}
```
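For SQLite the `[database]` section only needs a type and a file reference, since the user,
password, host, and port settings apply to mysql/postgres. A minimal sketch (the path here is a
hypothetical example):

```toml
[database]
db_type = "sqlite"
db_name = "data/app.db"   # file reference; db_user/db_pass/db_host/db_port are not used
max_pool_size = 5
```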
packages/pagetop-seaorm/src/db.rs (new file, 132 lines)

```rust
use pagetop::trace;
use pagetop::util::TypeInfo;

pub use url::Url as DbUri;

pub use sea_orm::error::{DbErr, RuntimeErr};
pub use sea_orm::{DatabaseConnection as DbConn, ExecResult, QueryResult};

use sea_orm::{ConnectionTrait, DatabaseBackend, Statement};

mod dbconn;
pub(crate) use dbconn::{run_now, DBCONN};

// The migration module is a customized version of the sea_orm_migration module (v1.0.0)
// https://github.com/SeaQL/sea-orm/tree/1.0.0/sea-orm-migration to avoid errors caused by the
// package paradigm of PageTop. Files integrated from the original:
//
// lib.rs => db/migration.rs . . . . . . . . . . . . . . (excluding some modules and exports)
// connection.rs => db/migration/connection.rs . . . . . . . . . . . . . . (full integration)
// manager.rs => db/migration/manager.rs . . . . . . . . . . . . . . . . . (full integration)
// migrator.rs => db/migration/migrator.rs . . . . . . . . . . . .(omitting error management)
// prelude.rs => db/migration/prelude.rs . . . . . . . . . . . . . . . . . . . (avoiding CLI)
// seaql_migrations.rs => db/migration/seaql_migrations.rs . . . . . . . . (full integration)
//
mod migration;
pub use migration::prelude::*;
pub use migration::schema::*;

pub async fn query<Q: QueryStatementWriter>(stmt: &mut Q) -> Result<Vec<QueryResult>, DbErr> {
    let dbconn = &*DBCONN;
    let dbbackend = dbconn.get_database_backend();
    dbconn
        .query_all(Statement::from_string(
            dbbackend,
            match dbbackend {
                DatabaseBackend::MySql => stmt.to_string(MysqlQueryBuilder),
                DatabaseBackend::Postgres => stmt.to_string(PostgresQueryBuilder),
                DatabaseBackend::Sqlite => stmt.to_string(SqliteQueryBuilder),
            },
        ))
        .await
}

pub async fn exec<Q: QueryStatementWriter>(stmt: &mut Q) -> Result<Option<QueryResult>, DbErr> {
    let dbconn = &*DBCONN;
    let dbbackend = dbconn.get_database_backend();
    dbconn
        .query_one(Statement::from_string(
            dbbackend,
            match dbbackend {
                DatabaseBackend::MySql => stmt.to_string(MysqlQueryBuilder),
                DatabaseBackend::Postgres => stmt.to_string(PostgresQueryBuilder),
                DatabaseBackend::Sqlite => stmt.to_string(SqliteQueryBuilder),
            },
        ))
        .await
}

pub async fn exec_raw(stmt: String) -> Result<ExecResult, DbErr> {
    let dbconn = &*DBCONN;
    let dbbackend = dbconn.get_database_backend();
    dbconn
        .execute(Statement::from_string(dbbackend, stmt))
        .await
}

pub trait MigratorBase {
    fn run_up();

    fn run_down();
}

#[rustfmt::skip]
impl<M: MigratorTrait> MigratorBase for M {
    fn run_up() {
        if let Err(e) = run_now(Self::up(SchemaManagerConnection::Connection(&DBCONN), None)) {
            trace::error!("Migration upgrade failed ({})", e);
        };
    }

    fn run_down() {
        if let Err(e) = run_now(Self::down(SchemaManagerConnection::Connection(&DBCONN), None)) {
            trace::error!("Migration downgrade failed ({})", e);
        };
    }
}

impl<M: MigrationTrait> MigrationName for M {
    fn name(&self) -> &str {
        TypeInfo::NameTo(-2).of::<M>()
    }
}

pub type MigrationItem = Box<dyn MigrationTrait>;

#[macro_export]
macro_rules! install_migrations {
    ( $($migration_module:ident),+ $(,)? ) => {{
        use $crate::db::{MigrationItem, MigratorBase, MigratorTrait};

        struct Migrator;
        impl MigratorTrait for Migrator {
            fn migrations() -> Vec<MigrationItem> {
                let mut m = Vec::<MigrationItem>::new();
                $(
                    m.push(Box::new(migration::$migration_module::Migration));
                )*
                m
            }
        }
        Migrator::run_up();
    }};
}

#[macro_export]
macro_rules! uninstall_migrations {
    ( $($migration_module:ident),+ $(,)? ) => {{
        use $crate::db::{MigrationItem, MigratorBase, MigratorTrait};

        struct Migrator;
        impl MigratorTrait for Migrator {
            fn migrations() -> Vec<MigrationItem> {
                let mut m = Vec::<MigrationItem>::new();
                $(
                    m.push(Box::new(migration::$migration_module::Migration));
                )*
                m
            }
        }
        Migrator::run_down();
    }};
}
```
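Both macros above expand a list of migration modules into a `Vec` of boxed migrations and then run
the migrator in one direction. The collection pattern itself can be sketched self-contained with
plain stdlib types (the trait and module names below are illustrative, not the pagetop-seaorm API):

```rust
// Illustrative stand-in for MigrationTrait.
trait Migration {
    fn name(&self) -> &'static str;
}

// Hypothetical migration modules, each exposing a `Migration` unit struct.
mod m_create_users {
    pub struct Migration;
    impl super::Migration for Migration {
        fn name(&self) -> &'static str { "m_create_users" }
    }
}

mod m_create_posts {
    pub struct Migration;
    impl super::Migration for Migration {
        fn name(&self) -> &'static str { "m_create_posts" }
    }
}

// Expand a module list into boxed trait objects, like install_migrations! does.
macro_rules! collect_migrations {
    ( $($module:ident),+ $(,)? ) => {{
        let mut m: Vec<Box<dyn Migration>> = Vec::new();
        $( m.push(Box::new($module::Migration)); )+
        m
    }};
}

fn main() {
    let migrations = collect_migrations!(m_create_users, m_create_posts);
    let names: Vec<&str> = migrations.iter().map(|m| m.name()).collect();
    assert_eq!(names, ["m_create_users", "m_create_posts"]);
    println!("{} migrations collected", migrations.len());
}
```

The real macros additionally wrap the collected list in a local `Migrator` type so the migration
run stays confined to the calling package.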
packages/pagetop-seaorm/src/db/dbconn.rs (new file, 69 lines)

```rust
use pagetop::trace;

use crate::config;
use crate::db::{DbConn, DbUri};

use std::sync::LazyLock;

use sea_orm::{ConnectOptions, Database};

pub use futures::executor::block_on as run_now;

pub static DBCONN: LazyLock<DbConn> = LazyLock::new(|| {
    trace::info!(
        "Connecting to database \"{}\" using a pool of {} connections",
        &config::SETTINGS.database.db_name,
        &config::SETTINGS.database.max_pool_size
    );

    let db_uri = match config::SETTINGS.database.db_type.as_str() {
        "mysql" | "postgres" => {
            let mut tmp_uri = DbUri::parse(
                format!(
                    "{}://{}/{}",
                    &config::SETTINGS.database.db_type,
                    &config::SETTINGS.database.db_host,
                    &config::SETTINGS.database.db_name
                )
                .as_str(),
            )
            .unwrap();
            tmp_uri
                .set_username(config::SETTINGS.database.db_user.as_str())
                .unwrap();
            // https://github.com/launchbadge/sqlx/issues/1624
            tmp_uri
                .set_password(Some(config::SETTINGS.database.db_pass.as_str()))
                .unwrap();
            if config::SETTINGS.database.db_port != 0 {
                tmp_uri
                    .set_port(Some(config::SETTINGS.database.db_port))
                    .unwrap();
            }
            tmp_uri
        }
        "sqlite" => DbUri::parse(
            format!(
                "{}://{}",
                &config::SETTINGS.database.db_type,
                &config::SETTINGS.database.db_name
            )
            .as_str(),
        )
        .unwrap(),
        _ => {
            trace::error!(
                "Unrecognized database type \"{}\"",
                &config::SETTINGS.database.db_type
            );
            DbUri::parse("").unwrap()
        }
    };

    run_now(Database::connect::<ConnectOptions>({
        let mut db_opt = ConnectOptions::new(db_uri.to_string());
        db_opt.max_connections(config::SETTINGS.database.max_pool_size);
        db_opt
    }))
    .unwrap_or_else(|_| panic!("Failed to connect to database"))
});
```
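For mysql/postgres, the parse-then-set sequence above yields the usual
`scheme://user:pass@host:port/name` connection URI. A stdlib-only sketch of that final shape (the
values are hypothetical; the real code builds it through `url::Url`, which also percent-encodes the
credentials):

```rust
// Shape of the connection URI assembled by DBCONN for mysql/postgres.
fn build_uri(db_type: &str, user: &str, pass: &str, host: &str, port: u16, name: &str) -> String {
    format!("{db_type}://{user}:{pass}@{host}:{port}/{name}")
}

fn main() {
    // Values mirroring the [database] example in src/config.rs.
    let uri = build_uri("mysql", "user", "password", "localhost", 3306, "db");
    assert_eq!(uri, "mysql://user:password@localhost:3306/db");
    println!("{uri}");
}
```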
packages/pagetop-seaorm/src/db/migration.rs (new file, 33 lines)

```rust
//pub mod cli;
pub mod connection;
pub mod manager;
pub mod migrator;
pub mod prelude;
pub mod schema;
pub mod seaql_migrations;
//pub mod util;

pub use connection::*;
pub use manager::*;
//pub use migrator::*;

pub use async_trait;
//pub use sea_orm;
//pub use sea_orm::sea_query;
use sea_orm::DbErr;

pub trait MigrationName {
    fn name(&self) -> &str;
}

/// The migration definition.
#[async_trait::async_trait]
pub trait MigrationTrait: MigrationName + Send + Sync {
    /// Define actions to perform when applying the migration.
    async fn up(&self, manager: &SchemaManager) -> Result<(), DbErr>;

    /// Define actions to perform when rolling back the migration.
    async fn down(&self, _manager: &SchemaManager) -> Result<(), DbErr> {
        Err(DbErr::Migration("We Don't Do That Here".to_owned()))
    }
}
```
packages/pagetop-seaorm/src/db/migration/connection.rs (new file, 148 lines)

```rust
use futures::Future;
use sea_orm::{
    AccessMode, ConnectionTrait, DatabaseConnection, DatabaseTransaction, DbBackend, DbErr,
    ExecResult, IsolationLevel, QueryResult, Statement, TransactionError, TransactionTrait,
};
use std::pin::Pin;

pub enum SchemaManagerConnection<'c> {
    Connection(&'c DatabaseConnection),
    Transaction(&'c DatabaseTransaction),
}

#[async_trait::async_trait]
impl<'c> ConnectionTrait for SchemaManagerConnection<'c> {
    fn get_database_backend(&self) -> DbBackend {
        match self {
            SchemaManagerConnection::Connection(conn) => conn.get_database_backend(),
            SchemaManagerConnection::Transaction(trans) => trans.get_database_backend(),
        }
    }

    async fn execute(&self, stmt: Statement) -> Result<ExecResult, DbErr> {
        match self {
            SchemaManagerConnection::Connection(conn) => conn.execute(stmt).await,
            SchemaManagerConnection::Transaction(trans) => trans.execute(stmt).await,
        }
    }

    async fn execute_unprepared(&self, sql: &str) -> Result<ExecResult, DbErr> {
        match self {
            SchemaManagerConnection::Connection(conn) => conn.execute_unprepared(sql).await,
            SchemaManagerConnection::Transaction(trans) => trans.execute_unprepared(sql).await,
        }
    }

    async fn query_one(&self, stmt: Statement) -> Result<Option<QueryResult>, DbErr> {
        match self {
            SchemaManagerConnection::Connection(conn) => conn.query_one(stmt).await,
            SchemaManagerConnection::Transaction(trans) => trans.query_one(stmt).await,
        }
    }

    async fn query_all(&self, stmt: Statement) -> Result<Vec<QueryResult>, DbErr> {
        match self {
            SchemaManagerConnection::Connection(conn) => conn.query_all(stmt).await,
            SchemaManagerConnection::Transaction(trans) => trans.query_all(stmt).await,
        }
    }

    fn is_mock_connection(&self) -> bool {
        match self {
            SchemaManagerConnection::Connection(conn) => conn.is_mock_connection(),
            SchemaManagerConnection::Transaction(trans) => trans.is_mock_connection(),
        }
    }
}

#[async_trait::async_trait]
impl<'c> TransactionTrait for SchemaManagerConnection<'c> {
    async fn begin(&self) -> Result<DatabaseTransaction, DbErr> {
        match self {
            SchemaManagerConnection::Connection(conn) => conn.begin().await,
            SchemaManagerConnection::Transaction(trans) => trans.begin().await,
        }
    }

    async fn begin_with_config(
        &self,
        isolation_level: Option<IsolationLevel>,
        access_mode: Option<AccessMode>,
    ) -> Result<DatabaseTransaction, DbErr> {
        match self {
            SchemaManagerConnection::Connection(conn) => {
                conn.begin_with_config(isolation_level, access_mode).await
            }
            SchemaManagerConnection::Transaction(trans) => {
                trans.begin_with_config(isolation_level, access_mode).await
            }
        }
    }

    async fn transaction<F, T, E>(&self, callback: F) -> Result<T, TransactionError<E>>
    where
        F: for<'a> FnOnce(
                &'a DatabaseTransaction,
            ) -> Pin<Box<dyn Future<Output = Result<T, E>> + Send + 'a>>
            + Send,
        T: Send,
        E: std::error::Error + Send,
    {
        match self {
            SchemaManagerConnection::Connection(conn) => conn.transaction(callback).await,
            SchemaManagerConnection::Transaction(trans) => trans.transaction(callback).await,
        }
    }

    async fn transaction_with_config<F, T, E>(
        &self,
        callback: F,
        isolation_level: Option<IsolationLevel>,
        access_mode: Option<AccessMode>,
    ) -> Result<T, TransactionError<E>>
    where
        F: for<'a> FnOnce(
                &'a DatabaseTransaction,
            ) -> Pin<Box<dyn Future<Output = Result<T, E>> + Send + 'a>>
            + Send,
        T: Send,
        E: std::error::Error + Send,
    {
        match self {
            SchemaManagerConnection::Connection(conn) => {
                conn.transaction_with_config(callback, isolation_level, access_mode)
                    .await
            }
            SchemaManagerConnection::Transaction(trans) => {
                trans
                    .transaction_with_config(callback, isolation_level, access_mode)
                    .await
            }
        }
    }
}

pub trait IntoSchemaManagerConnection<'c>: Send
where
    Self: 'c,
{
    fn into_schema_manager_connection(self) -> SchemaManagerConnection<'c>;
}

impl<'c> IntoSchemaManagerConnection<'c> for SchemaManagerConnection<'c> {
    fn into_schema_manager_connection(self) -> SchemaManagerConnection<'c> {
        self
    }
}

impl<'c> IntoSchemaManagerConnection<'c> for &'c DatabaseConnection {
    fn into_schema_manager_connection(self) -> SchemaManagerConnection<'c> {
        SchemaManagerConnection::Connection(self)
    }
}

impl<'c> IntoSchemaManagerConnection<'c> for &'c DatabaseTransaction {
    fn into_schema_manager_connection(self) -> SchemaManagerConnection<'c> {
        SchemaManagerConnection::Transaction(self)
    }
}
```
167
packages/pagetop-seaorm/src/db/migration/manager.rs
Normal file
167
packages/pagetop-seaorm/src/db/migration/manager.rs
Normal file
|
|
@ -0,0 +1,167 @@
use super::{IntoSchemaManagerConnection, SchemaManagerConnection};

use sea_orm::sea_query::{
    extension::postgres::{TypeAlterStatement, TypeCreateStatement, TypeDropStatement},
    ForeignKeyCreateStatement, ForeignKeyDropStatement, IndexCreateStatement, IndexDropStatement,
    TableAlterStatement, TableCreateStatement, TableDropStatement, TableRenameStatement,
    TableTruncateStatement,
};
use sea_orm::{ConnectionTrait, DbBackend, DbErr, StatementBuilder};
use sea_schema::{mysql::MySql, postgres::Postgres, probe::SchemaProbe, sqlite::Sqlite};

/// Helper struct for writing migration scripts in a migration file.
pub struct SchemaManager<'c> {
    conn: SchemaManagerConnection<'c>,
}

impl<'c> SchemaManager<'c> {
    pub fn new<T>(conn: T) -> Self
    where
        T: IntoSchemaManagerConnection<'c>,
    {
        Self {
            conn: conn.into_schema_manager_connection(),
        }
    }

    pub async fn exec_stmt<S>(&self, stmt: S) -> Result<(), DbErr>
    where
        S: StatementBuilder,
    {
        let builder = self.conn.get_database_backend();
        self.conn.execute(builder.build(&stmt)).await.map(|_| ())
    }

    pub fn get_database_backend(&self) -> DbBackend {
        self.conn.get_database_backend()
    }

    pub fn get_connection(&self) -> &SchemaManagerConnection<'c> {
        &self.conn
    }
}

/// Schema creation.
impl<'c> SchemaManager<'c> {
    pub async fn create_table(&self, stmt: TableCreateStatement) -> Result<(), DbErr> {
        self.exec_stmt(stmt).await
    }

    pub async fn create_index(&self, stmt: IndexCreateStatement) -> Result<(), DbErr> {
        self.exec_stmt(stmt).await
    }

    pub async fn create_foreign_key(&self, stmt: ForeignKeyCreateStatement) -> Result<(), DbErr> {
        self.exec_stmt(stmt).await
    }

    pub async fn create_type(&self, stmt: TypeCreateStatement) -> Result<(), DbErr> {
        self.exec_stmt(stmt).await
    }
}

/// Schema mutation.
impl<'c> SchemaManager<'c> {
    pub async fn alter_table(&self, stmt: TableAlterStatement) -> Result<(), DbErr> {
        self.exec_stmt(stmt).await
    }

    pub async fn drop_table(&self, stmt: TableDropStatement) -> Result<(), DbErr> {
        self.exec_stmt(stmt).await
    }

    pub async fn rename_table(&self, stmt: TableRenameStatement) -> Result<(), DbErr> {
        self.exec_stmt(stmt).await
    }

    pub async fn truncate_table(&self, stmt: TableTruncateStatement) -> Result<(), DbErr> {
        self.exec_stmt(stmt).await
    }

    pub async fn drop_index(&self, stmt: IndexDropStatement) -> Result<(), DbErr> {
        self.exec_stmt(stmt).await
    }

    pub async fn drop_foreign_key(&self, stmt: ForeignKeyDropStatement) -> Result<(), DbErr> {
        self.exec_stmt(stmt).await
    }

    pub async fn alter_type(&self, stmt: TypeAlterStatement) -> Result<(), DbErr> {
        self.exec_stmt(stmt).await
    }

    pub async fn drop_type(&self, stmt: TypeDropStatement) -> Result<(), DbErr> {
        self.exec_stmt(stmt).await
    }
}

/// Schema inspection.
impl<'c> SchemaManager<'c> {
    pub async fn has_table<T>(&self, table: T) -> Result<bool, DbErr>
    where
        T: AsRef<str>,
    {
        has_table(&self.conn, table).await
    }

    pub async fn has_column<T, C>(&self, table: T, column: C) -> Result<bool, DbErr>
    where
        T: AsRef<str>,
        C: AsRef<str>,
    {
        let stmt = match self.conn.get_database_backend() {
            DbBackend::MySql => MySql.has_column(table, column),
            DbBackend::Postgres => Postgres.has_column(table, column),
            DbBackend::Sqlite => Sqlite.has_column(table, column),
        };

        let builder = self.conn.get_database_backend();
        let res = self
            .conn
            .query_one(builder.build(&stmt))
            .await?
            .ok_or_else(|| DbErr::Custom("Failed to check column exists".to_owned()))?;

        res.try_get("", "has_column")
    }

    pub async fn has_index<T, I>(&self, table: T, index: I) -> Result<bool, DbErr>
    where
        T: AsRef<str>,
        I: AsRef<str>,
    {
        let stmt = match self.conn.get_database_backend() {
            DbBackend::MySql => MySql.has_index(table, index),
            DbBackend::Postgres => Postgres.has_index(table, index),
            DbBackend::Sqlite => Sqlite.has_index(table, index),
        };

        let builder = self.conn.get_database_backend();
        let res = self
            .conn
            .query_one(builder.build(&stmt))
            .await?
            .ok_or_else(|| DbErr::Custom("Failed to check index exists".to_owned()))?;

        res.try_get("", "has_index")
    }
}

pub(crate) async fn has_table<C, T>(conn: &C, table: T) -> Result<bool, DbErr>
where
    C: ConnectionTrait,
    T: AsRef<str>,
{
    let stmt = match conn.get_database_backend() {
        DbBackend::MySql => MySql.has_table(table),
        DbBackend::Postgres => Postgres.has_table(table),
        DbBackend::Sqlite => Sqlite.has_table(table),
    };

    let builder = conn.get_database_backend();
    let res = conn
        .query_one(builder.build(&stmt))
        .await?
        .ok_or_else(|| DbErr::Custom("Failed to check table exists".to_owned()))?;

    res.try_get("", "has_table")
}
593
packages/pagetop-seaorm/src/db/migration/migrator.rs
Normal file
@@ -0,0 +1,593 @@
use futures::Future;
use std::collections::HashSet;
use std::fmt::Display;
use std::pin::Pin;
use std::time::SystemTime;

use pagetop::trace::info;

use sea_orm::sea_query::{
    self, extension::postgres::Type, Alias, Expr, ForeignKey, IntoIden, JoinType, Order, Query,
    SelectStatement, SimpleExpr, Table,
};
use sea_orm::{
    ActiveModelTrait, ActiveValue, Condition, ConnectionTrait, DbBackend, DbErr, DeriveIden,
    DynIden, EntityTrait, FromQueryResult, Iterable, QueryFilter, Schema, Statement,
    TransactionTrait,
};
use sea_schema::{mysql::MySql, postgres::Postgres, probe::SchemaProbe, sqlite::Sqlite};

use super::{seaql_migrations, IntoSchemaManagerConnection, MigrationTrait, SchemaManager};

/// Status of a migration.
#[derive(Copy, Clone, Debug, PartialEq, Eq)]
pub enum MigrationStatus {
    /// Not yet applied.
    Pending,
    /// Applied.
    Applied,
}

impl Display for MigrationStatus {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        let status = match self {
            MigrationStatus::Pending => "Pending",
            MigrationStatus::Applied => "Applied",
        };
        write!(f, "{status}")
    }
}

pub struct Migration {
    migration: Box<dyn MigrationTrait>,
    status: MigrationStatus,
}

impl Migration {
    /// Get the migration name from the `MigrationName` trait implementation.
    pub fn name(&self) -> &str {
        self.migration.name()
    }

    /// Get the migration status.
    pub fn status(&self) -> MigrationStatus {
        self.status
    }
}

/// Performs migrations on a database.
#[async_trait::async_trait]
pub trait MigratorTrait: Send {
    /// Vector of migrations in time sequence.
    fn migrations() -> Vec<Box<dyn MigrationTrait>>;

    /// Name of the migration table; `seaql_migrations` by default.
    fn migration_table_name() -> DynIden {
        seaql_migrations::Entity.into_iden()
    }

    /// Get the list of migrations wrapped in the `Migration` struct.
    fn get_migration_files() -> Vec<Migration> {
        Self::migrations()
            .into_iter()
            .map(|migration| Migration {
                migration,
                status: MigrationStatus::Pending,
            })
            .collect()
    }

    /// Get the list of applied migrations from the database.
    async fn get_migration_models<C>(db: &C) -> Result<Vec<seaql_migrations::Model>, DbErr>
    where
        C: ConnectionTrait,
    {
        Self::install(db).await?;
        let stmt = Query::select()
            .table_name(Self::migration_table_name())
            .columns(seaql_migrations::Column::iter().map(IntoIden::into_iden))
            .order_by(seaql_migrations::Column::Version, Order::Asc)
            .to_owned();
        let builder = db.get_database_backend();
        seaql_migrations::Model::find_by_statement(builder.build(&stmt))
            .all(db)
            .await
    }

    /// Get the list of migrations with their status.
    async fn get_migration_with_status<C>(db: &C) -> Result<Vec<Migration>, DbErr>
    where
        C: ConnectionTrait,
    {
        Self::install(db).await?;
        let mut migration_files = Self::get_migration_files();
        let migration_models = Self::get_migration_models(db).await?;

        let migration_in_db: HashSet<String> = migration_models
            .into_iter()
            .map(|model| model.version)
            .collect();
        let migration_in_fs: HashSet<String> = migration_files
            .iter()
            .map(|file| file.migration.name().to_string())
            .collect();

        let pending_migrations = &migration_in_fs - &migration_in_db;
        for migration_file in migration_files.iter_mut() {
            if !pending_migrations.contains(migration_file.migration.name()) {
                migration_file.status = MigrationStatus::Applied;
            }
        }
        /*
        let missing_migrations_in_fs = &migration_in_db - &migration_in_fs;
        let errors: Vec<String> = missing_migrations_in_fs
            .iter()
            .map(|missing_migration| {
                format!("Migration file of version '{missing_migration}' is missing, this migration has been applied but its file is missing")
            }).collect();

        if !errors.is_empty() {
            Err(DbErr::Custom(errors.join("\n")))
        } else { */
        Ok(migration_files)
        /* } */
    }

    /// Get the list of pending migrations.
    async fn get_pending_migrations<C>(db: &C) -> Result<Vec<Migration>, DbErr>
    where
        C: ConnectionTrait,
    {
        Self::install(db).await?;
        Ok(Self::get_migration_with_status(db)
            .await?
            .into_iter()
            .filter(|file| file.status == MigrationStatus::Pending)
            .collect())
    }

    /// Get the list of applied migrations.
    async fn get_applied_migrations<C>(db: &C) -> Result<Vec<Migration>, DbErr>
    where
        C: ConnectionTrait,
    {
        Self::install(db).await?;
        Ok(Self::get_migration_with_status(db)
            .await?
            .into_iter()
            .filter(|file| file.status == MigrationStatus::Applied)
            .collect())
    }

    /// Create the migration table `seaql_migrations` in the database.
    async fn install<C>(db: &C) -> Result<(), DbErr>
    where
        C: ConnectionTrait,
    {
        let builder = db.get_database_backend();
        let table_name = Self::migration_table_name();
        let schema = Schema::new(builder);
        let mut stmt = schema
            .create_table_from_entity(seaql_migrations::Entity)
            .table_name(table_name);
        stmt.if_not_exists();
        db.execute(builder.build(&stmt)).await.map(|_| ())
    }

    /// Check the status of all migrations.
    async fn status<C>(db: &C) -> Result<(), DbErr>
    where
        C: ConnectionTrait,
    {
        Self::install(db).await?;

        info!("Checking migration status");

        for Migration { migration, status } in Self::get_migration_with_status(db).await? {
            info!("Migration '{}'... {}", migration.name(), status);
        }

        Ok(())
    }

    /// Drop all tables from the database, then reapply all migrations.
    async fn fresh<'c, C>(db: C) -> Result<(), DbErr>
    where
        C: IntoSchemaManagerConnection<'c>,
    {
        exec_with_connection::<'_, _, _>(db, move |manager| {
            Box::pin(async move { exec_fresh::<Self>(manager).await })
        })
        .await
    }

    /// Roll back all applied migrations, then reapply all migrations.
    async fn refresh<'c, C>(db: C) -> Result<(), DbErr>
    where
        C: IntoSchemaManagerConnection<'c>,
    {
        exec_with_connection::<'_, _, _>(db, move |manager| {
            Box::pin(async move {
                exec_down::<Self>(manager, None).await?;
                exec_up::<Self>(manager, None).await
            })
        })
        .await
    }

    /// Roll back all applied migrations.
    async fn reset<'c, C>(db: C) -> Result<(), DbErr>
    where
        C: IntoSchemaManagerConnection<'c>,
    {
        exec_with_connection::<'_, _, _>(db, move |manager| {
            Box::pin(async move { exec_down::<Self>(manager, None).await })
        })
        .await
    }

    /// Apply pending migrations.
    async fn up<'c, C>(db: C, steps: Option<u32>) -> Result<(), DbErr>
    where
        C: IntoSchemaManagerConnection<'c>,
    {
        exec_with_connection::<'_, _, _>(db, move |manager| {
            Box::pin(async move { exec_up::<Self>(manager, steps).await })
        })
        .await
    }

    /// Roll back applied migrations.
    async fn down<'c, C>(db: C, steps: Option<u32>) -> Result<(), DbErr>
    where
        C: IntoSchemaManagerConnection<'c>,
    {
        exec_with_connection::<'_, _, _>(db, move |manager| {
            Box::pin(async move { exec_down::<Self>(manager, steps).await })
        })
        .await
    }
}

async fn exec_with_connection<'c, C, F>(db: C, f: F) -> Result<(), DbErr>
where
    C: IntoSchemaManagerConnection<'c>,
    F: for<'b> Fn(
        &'b SchemaManager<'_>,
    ) -> Pin<Box<dyn Future<Output = Result<(), DbErr>> + Send + 'b>>,
{
    let db = db.into_schema_manager_connection();

    match db.get_database_backend() {
        DbBackend::Postgres => {
            let transaction = db.begin().await?;
            let manager = SchemaManager::new(&transaction);
            f(&manager).await?;
            transaction.commit().await
        }
        DbBackend::MySql | DbBackend::Sqlite => {
            let manager = SchemaManager::new(db);
            f(&manager).await
        }
    }
}

async fn exec_fresh<M>(manager: &SchemaManager<'_>) -> Result<(), DbErr>
where
    M: MigratorTrait + ?Sized,
{
    let db = manager.get_connection();

    M::install(db).await?;
    let db_backend = db.get_database_backend();

    // Temporarily disable the foreign key check.
    if db_backend == DbBackend::Sqlite {
        info!("Disabling foreign key check");
        db.execute(Statement::from_string(
            db_backend,
            "PRAGMA foreign_keys = OFF".to_owned(),
        ))
        .await?;
        info!("Foreign key check disabled");
    }

    // Drop all foreign keys.
    if db_backend == DbBackend::MySql {
        info!("Dropping all foreign keys");
        let stmt = query_mysql_foreign_keys(db);
        let rows = db.query_all(db_backend.build(&stmt)).await?;
        for row in rows.into_iter() {
            let constraint_name: String = row.try_get("", "CONSTRAINT_NAME")?;
            let table_name: String = row.try_get("", "TABLE_NAME")?;
            info!(
                "Dropping foreign key '{}' from table '{}'",
                constraint_name, table_name
            );
            let mut stmt = ForeignKey::drop();
            stmt.table(Alias::new(table_name.as_str()))
                .name(constraint_name.as_str());
            db.execute(db_backend.build(&stmt)).await?;
            info!("Foreign key '{}' has been dropped", constraint_name);
        }
        info!("All foreign keys dropped");
    }

    // Drop all tables.
    let stmt = query_tables(db).await;
    let rows = db.query_all(db_backend.build(&stmt)).await?;
    for row in rows.into_iter() {
        let table_name: String = row.try_get("", "table_name")?;
        info!("Dropping table '{}'", table_name);
        let mut stmt = Table::drop();
        stmt.table(Alias::new(table_name.as_str()))
            .if_exists()
            .cascade();
        db.execute(db_backend.build(&stmt)).await?;
        info!("Table '{}' has been dropped", table_name);
    }

    // Drop all types.
    if db_backend == DbBackend::Postgres {
        info!("Dropping all types");
        let stmt = query_pg_types(db);
        let rows = db.query_all(db_backend.build(&stmt)).await?;
        for row in rows {
            let type_name: String = row.try_get("", "typname")?;
            info!("Dropping type '{}'", type_name);
            let mut stmt = Type::drop();
            stmt.name(Alias::new(&type_name));
            db.execute(db_backend.build(&stmt)).await?;
            info!("Type '{}' has been dropped", type_name);
        }
    }

    // Restore the foreign key check.
    if db_backend == DbBackend::Sqlite {
        info!("Restoring foreign key check");
        db.execute(Statement::from_string(
            db_backend,
            "PRAGMA foreign_keys = ON".to_owned(),
        ))
        .await?;
        info!("Foreign key check restored");
    }

    // Reapply all migrations.
    exec_up::<M>(manager, None).await
}

async fn exec_up<M>(manager: &SchemaManager<'_>, mut steps: Option<u32>) -> Result<(), DbErr>
where
    M: MigratorTrait + ?Sized,
{
    let db = manager.get_connection();

    M::install(db).await?;
    /*
    if let Some(steps) = steps {
        info!("Applying {} pending migrations", steps);
    } else {
        info!("Applying all pending migrations");
    }
    */
    let migrations = M::get_pending_migrations(db).await?.into_iter();
    /*
    if migrations.len() == 0 {
        info!("No pending migrations");
    }
    */
    for Migration { migration, .. } in migrations {
        if let Some(steps) = steps.as_mut() {
            if steps == &0 {
                break;
            }
            *steps -= 1;
        }
        info!("Applying migration '{}'", migration.name());
        migration.up(manager).await?;
        info!("Migration '{}' has been applied", migration.name());
        let now = SystemTime::now()
            .duration_since(SystemTime::UNIX_EPOCH)
            .expect("SystemTime before UNIX EPOCH!");
        seaql_migrations::Entity::insert(seaql_migrations::ActiveModel {
            version: ActiveValue::Set(migration.name().to_owned()),
            applied_at: ActiveValue::Set(now.as_secs() as i64),
        })
        .table_name(M::migration_table_name())
        .exec(db)
        .await?;
    }

    Ok(())
}
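Both `exec_up` above and `exec_down` below limit how many migrations they process with the same `Option<u32>` countdown: `None` means "process everything", while `Some(n)` stops after `n` items. A minimal, std-only sketch of that loop pattern (function and names are hypothetical, not part of the package):

```rust
// Sketch of the Option<u32> step-limiting loop used by exec_up/exec_down.
// `None` applies every item; `Some(n)` stops after n items.
fn apply_limited(items: &[&str], mut steps: Option<u32>) -> Vec<String> {
    let mut applied = Vec::new();
    for item in items {
        if let Some(steps) = steps.as_mut() {
            if *steps == 0 {
                break; // limit reached before this item
            }
            *steps -= 1;
        }
        applied.push(item.to_string());
    }
    applied
}

fn main() {
    let migrations = ["m0001", "m0002", "m0003"];
    assert_eq!(apply_limited(&migrations, None).len(), 3);
    assert_eq!(apply_limited(&migrations, Some(2)), vec!["m0001", "m0002"]);
    println!("ok");
}
```

Note that the counter is decremented before applying each item, so `Some(0)` is a no-op rather than "apply one".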
async fn exec_down<M>(manager: &SchemaManager<'_>, mut steps: Option<u32>) -> Result<(), DbErr>
where
    M: MigratorTrait + ?Sized,
{
    let db = manager.get_connection();

    M::install(db).await?;

    if let Some(steps) = steps {
        info!("Rolling back {} applied migrations", steps);
    } else {
        info!("Rolling back all applied migrations");
    }

    let migrations = M::get_applied_migrations(db).await?.into_iter().rev();
    if migrations.len() == 0 {
        info!("No applied migrations");
    }
    for Migration { migration, .. } in migrations {
        if let Some(steps) = steps.as_mut() {
            if steps == &0 {
                break;
            }
            *steps -= 1;
        }
        info!("Rolling back migration '{}'", migration.name());
        migration.down(manager).await?;
        info!("Migration '{}' has been rolled back", migration.name());
        seaql_migrations::Entity::delete_many()
            .filter(Expr::col(seaql_migrations::Column::Version).eq(migration.name()))
            .table_name(M::migration_table_name())
            .exec(db)
            .await?;
    }

    Ok(())
}

async fn query_tables<C>(db: &C) -> SelectStatement
where
    C: ConnectionTrait,
{
    match db.get_database_backend() {
        DbBackend::MySql => MySql.query_tables(),
        DbBackend::Postgres => Postgres.query_tables(),
        DbBackend::Sqlite => Sqlite.query_tables(),
    }
}

fn get_current_schema<C>(db: &C) -> SimpleExpr
where
    C: ConnectionTrait,
{
    match db.get_database_backend() {
        DbBackend::MySql => MySql::get_current_schema(),
        DbBackend::Postgres => Postgres::get_current_schema(),
        DbBackend::Sqlite => unimplemented!(),
    }
}

#[derive(DeriveIden)]
enum InformationSchema {
    #[sea_orm(iden = "information_schema")]
    Schema,
    #[sea_orm(iden = "TABLE_NAME")]
    TableName,
    #[sea_orm(iden = "CONSTRAINT_NAME")]
    ConstraintName,
    TableConstraints,
    TableSchema,
    ConstraintType,
}

fn query_mysql_foreign_keys<C>(db: &C) -> SelectStatement
where
    C: ConnectionTrait,
{
    let mut stmt = Query::select();
    stmt.columns([
        InformationSchema::TableName,
        InformationSchema::ConstraintName,
    ])
    .from((
        InformationSchema::Schema,
        InformationSchema::TableConstraints,
    ))
    .cond_where(
        Condition::all()
            .add(Expr::expr(get_current_schema(db)).equals((
                InformationSchema::TableConstraints,
                InformationSchema::TableSchema,
            )))
            .add(
                Expr::col((
                    InformationSchema::TableConstraints,
                    InformationSchema::ConstraintType,
                ))
                .eq("FOREIGN KEY"),
            ),
    );
    stmt
}

#[derive(DeriveIden)]
enum PgType {
    Table,
    Typname,
    Typnamespace,
    Typelem,
}

#[derive(DeriveIden)]
enum PgNamespace {
    Table,
    Oid,
    Nspname,
}

fn query_pg_types<C>(db: &C) -> SelectStatement
where
    C: ConnectionTrait,
{
    let mut stmt = Query::select();
    stmt.column(PgType::Typname)
        .from(PgType::Table)
        .join(
            JoinType::LeftJoin,
            PgNamespace::Table,
            Expr::col((PgNamespace::Table, PgNamespace::Oid))
                .equals((PgType::Table, PgType::Typnamespace)),
        )
        .cond_where(
            Condition::all()
                .add(
                    Expr::expr(get_current_schema(db))
                        .equals((PgNamespace::Table, PgNamespace::Nspname)),
                )
                .add(Expr::col((PgType::Table, PgType::Typelem)).eq(0)),
        );
    stmt
}

trait QueryTable {
    type Statement;

    fn table_name(self, table_name: DynIden) -> Self::Statement;
}

impl QueryTable for SelectStatement {
    type Statement = SelectStatement;

    fn table_name(mut self, table_name: DynIden) -> SelectStatement {
        self.from(table_name);
        self
    }
}

impl QueryTable for sea_query::TableCreateStatement {
    type Statement = sea_query::TableCreateStatement;

    fn table_name(mut self, table_name: DynIden) -> sea_query::TableCreateStatement {
        self.table(table_name);
        self
    }
}

impl<A> QueryTable for sea_orm::Insert<A>
where
    A: ActiveModelTrait,
{
    type Statement = sea_orm::Insert<A>;

    fn table_name(mut self, table_name: DynIden) -> sea_orm::Insert<A> {
        sea_orm::QueryTrait::query(&mut self).into_table(table_name);
        self
    }
}

impl<E> QueryTable for sea_orm::DeleteMany<E>
where
    E: EntityTrait,
{
    type Statement = sea_orm::DeleteMany<E>;

    fn table_name(mut self, table_name: DynIden) -> sea_orm::DeleteMany<E> {
        sea_orm::QueryTrait::query(&mut self).from_table(table_name);
        self
    }
}
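The migrator's `get_migration_with_status` derives each migration's status from a set difference between the migration names declared in code and the versions recorded in the `seaql_migrations` table: anything on the code side but not in the database is pending. The core computation, isolated as a std-only sketch (the `pending` helper is illustrative, not part of the package):

```rust
use std::collections::HashSet;

// Pending = declared in code but not yet recorded in the database.
fn pending(in_fs: &[&str], in_db: &[&str]) -> HashSet<String> {
    let fs: HashSet<String> = in_fs.iter().map(|s| s.to_string()).collect();
    let db: HashSet<String> = in_db.iter().map(|s| s.to_string()).collect();
    &fs - &db // HashSet implements Sub as set difference
}

fn main() {
    let p = pending(&["m0001", "m0002", "m0003"], &["m0001"]);
    assert!(p.contains("m0002") && p.contains("m0003"));
    assert!(!p.contains("m0001"));
    println!("{} pending", p.len());
}
```

The reverse difference (recorded in the database but missing on disk) is computed in the commented-out block above and would surface orphaned migration files as an error if re-enabled.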
13
packages/pagetop-seaorm/src/db/migration/prelude.rs
Normal file
@@ -0,0 +1,13 @@
//pub use super::cli;

pub use super::connection::IntoSchemaManagerConnection;
pub use super::connection::SchemaManagerConnection;
pub use super::manager::SchemaManager;
pub use super::migrator::MigratorTrait;
pub use super::{MigrationName, MigrationTrait};
pub use async_trait;
pub use sea_orm;
pub use sea_orm::sea_query;
pub use sea_orm::sea_query::*;
pub use sea_orm::DeriveIden;
pub use sea_orm::DeriveMigrationName;
608
packages/pagetop-seaorm/src/db/migration/schema.rs
Normal file
@@ -0,0 +1,608 @@
//! > Adapted from <https://github.com/loco-rs/loco/blob/master/src/schema.rs>
//!
//! # Database Table Schema Helpers
//!
//! This module defines functions and helpers for creating database table
//! schemas using the `sea-orm` and `sea-query` libraries.
//!
//! # Example
//!
//! The following example shows what a user migration file should look like,
//! using the schema helpers to create the DB fields.
//!
//! ```rust
//! use sea_orm_migration::{prelude::*, schema::*};
//!
//! #[derive(DeriveMigrationName)]
//! pub struct Migration;
//!
//! #[async_trait::async_trait]
//! impl MigrationTrait for Migration {
//!     async fn up(&self, manager: &SchemaManager) -> Result<(), DbErr> {
//!         let table = table_auto(Users::Table)
//!             .col(pk_auto(Users::Id))
//!             .col(uuid(Users::Pid))
//!             .col(string_uniq(Users::Email))
//!             .col(string(Users::Password))
//!             .col(string(Users::Name))
//!             .col(string_null(Users::ResetToken))
//!             .col(timestamp_null(Users::ResetSentAt))
//!             .to_owned();
//!         manager.create_table(table).await?;
//!         Ok(())
//!     }
//!
//!     async fn down(&self, manager: &SchemaManager) -> Result<(), DbErr> {
//!         manager
//!             .drop_table(Table::drop().table(Users::Table).to_owned())
//!             .await
//!     }
//! }
//!
//! #[derive(Iden)]
//! pub enum Users {
//!     Table,
//!     Id,
//!     Pid,
//!     Email,
//!     Name,
//!     Password,
//!     ResetToken,
//!     ResetSentAt,
//! }
//! ```

use crate::prelude::Iden;
use sea_orm::sea_query::{
    self, Alias, ColumnDef, ColumnType, Expr, IntoIden, PgInterval, Table, TableCreateStatement,
};

#[derive(Iden)]
enum GeneralIds {
    CreatedAt,
    UpdatedAt,
}

/// Wrapping table schema creation.
pub fn table_auto<T: IntoIden + 'static>(name: T) -> TableCreateStatement {
    timestamps(Table::create().table(name).if_not_exists().take())
}

/// Create a primary key column with auto-increment feature.
pub fn pk_auto<T: IntoIden>(name: T) -> ColumnDef {
    integer(name).auto_increment().primary_key().take()
}

pub fn char_len<T: IntoIden>(col: T, length: u32) -> ColumnDef {
    ColumnDef::new(col).char_len(length).not_null().take()
}

pub fn char_len_null<T: IntoIden>(col: T, length: u32) -> ColumnDef {
    ColumnDef::new(col).char_len(length).null().take()
}

pub fn char_len_uniq<T: IntoIden>(col: T, length: u32) -> ColumnDef {
    char_len(col, length).unique_key().take()
}

pub fn char<T: IntoIden>(col: T) -> ColumnDef {
    ColumnDef::new(col).char().not_null().take()
}

pub fn char_null<T: IntoIden>(col: T) -> ColumnDef {
    ColumnDef::new(col).char().null().take()
}

pub fn char_uniq<T: IntoIden>(col: T) -> ColumnDef {
    char(col).unique_key().take()
}

pub fn string_len<T: IntoIden>(col: T, length: u32) -> ColumnDef {
    ColumnDef::new(col).string_len(length).not_null().take()
}

pub fn string_len_null<T: IntoIden>(col: T, length: u32) -> ColumnDef {
    ColumnDef::new(col).string_len(length).null().take()
}

pub fn string_len_uniq<T: IntoIden>(col: T, length: u32) -> ColumnDef {
    string_len(col, length).unique_key().take()
}

pub fn string<T: IntoIden>(col: T) -> ColumnDef {
    ColumnDef::new(col).string().not_null().take()
}

pub fn string_null<T: IntoIden>(col: T) -> ColumnDef {
    ColumnDef::new(col).string().null().take()
}

pub fn string_uniq<T: IntoIden>(col: T) -> ColumnDef {
    string(col).unique_key().take()
}
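Each helper above ends in `.take()` because sea_query's `ColumnDef` builder methods mutate in place and return `&mut Self`; `take()` moves the finished definition out of the chain as an owned value. A std-only analog of that pattern (the `ColDef` type and helpers here are hypothetical stand-ins, not the sea_query API):

```rust
// Minimal stand-in showing the mutate-in-place builder + take() style
// used by the schema helpers above.
#[derive(Default, Clone, Debug, PartialEq)]
struct ColDef {
    name: String,
    not_null: bool,
    unique: bool,
}

impl ColDef {
    fn new(name: &str) -> Self {
        ColDef { name: name.into(), ..Default::default() }
    }
    fn not_null(&mut self) -> &mut Self {
        self.not_null = true;
        self
    }
    fn unique_key(&mut self) -> &mut Self {
        self.unique = true;
        self
    }
    // Returns an owned copy of the built definition, leaving a default behind.
    fn take(&mut self) -> Self {
        std::mem::take(self)
    }
}

// Mirrors string()/string_uniq(): the *_uniq variant composes the base helper.
fn string(name: &str) -> ColDef {
    ColDef::new(name).not_null().take()
}
fn string_uniq(name: &str) -> ColDef {
    let mut c = string(name);
    c.unique_key().take()
}

fn main() {
    let c = string_uniq("email");
    assert!(c.not_null && c.unique);
    println!("{}", c.name);
}
```

This also shows why the `*_uniq` helpers are defined in terms of the base helpers: `NOT NULL` is set once, and the unique flag is layered on top.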
pub fn text<T: IntoIden>(col: T) -> ColumnDef {
    ColumnDef::new(col).text().not_null().take()
}

pub fn text_null<T: IntoIden>(col: T) -> ColumnDef {
    ColumnDef::new(col).text().null().take()
}

pub fn text_uniq<T: IntoIden>(col: T) -> ColumnDef {
    text(col).unique_key().take()
}

pub fn tiny_integer<T: IntoIden>(col: T) -> ColumnDef {
    ColumnDef::new(col).tiny_integer().not_null().take()
}

pub fn tiny_integer_null<T: IntoIden>(col: T) -> ColumnDef {
    ColumnDef::new(col).tiny_integer().null().take()
}

pub fn tiny_integer_uniq<T: IntoIden>(col: T) -> ColumnDef {
    tiny_integer(col).unique_key().take()
}

pub fn small_integer<T: IntoIden>(col: T) -> ColumnDef {
    ColumnDef::new(col).small_integer().not_null().take()
}

pub fn small_integer_null<T: IntoIden>(col: T) -> ColumnDef {
    ColumnDef::new(col).small_integer().null().take()
}

pub fn small_integer_uniq<T: IntoIden>(col: T) -> ColumnDef {
    small_integer(col).unique_key().take()
}

pub fn integer<T: IntoIden>(col: T) -> ColumnDef {
    ColumnDef::new(col).integer().not_null().take()
}

pub fn integer_null<T: IntoIden>(col: T) -> ColumnDef {
    ColumnDef::new(col).integer().null().take()
}

pub fn integer_uniq<T: IntoIden>(col: T) -> ColumnDef {
    integer(col).unique_key().take()
}

pub fn big_integer<T: IntoIden>(col: T) -> ColumnDef {
    ColumnDef::new(col).big_integer().not_null().take()
}

pub fn big_integer_null<T: IntoIden>(col: T) -> ColumnDef {
    ColumnDef::new(col).big_integer().null().take()
}

pub fn big_integer_uniq<T: IntoIden>(col: T) -> ColumnDef {
    big_integer(col).unique_key().take()
}

pub fn tiny_unsigned<T: IntoIden>(col: T) -> ColumnDef {
    ColumnDef::new(col).tiny_unsigned().not_null().take()
}

pub fn tiny_unsigned_null<T: IntoIden>(col: T) -> ColumnDef {
    ColumnDef::new(col).tiny_unsigned().null().take()
}

pub fn tiny_unsigned_uniq<T: IntoIden>(col: T) -> ColumnDef {
    tiny_unsigned(col).unique_key().take()
}

pub fn small_unsigned<T: IntoIden>(col: T) -> ColumnDef {
    ColumnDef::new(col).small_unsigned().not_null().take()
}

pub fn small_unsigned_null<T: IntoIden>(col: T) -> ColumnDef {
    ColumnDef::new(col).small_unsigned().null().take()
}

pub fn small_unsigned_uniq<T: IntoIden>(col: T) -> ColumnDef {
|
||||||
|
small_unsigned(col).unique_key().take()
|
||||||
|
}
|
||||||
|
|
||||||
|
pub fn unsigned<T: IntoIden>(col: T) -> ColumnDef {
|
||||||
|
ColumnDef::new(col).unsigned().not_null().take()
|
||||||
|
}
|
||||||
|
|
||||||
|
pub fn unsigned_null<T: IntoIden>(col: T) -> ColumnDef {
|
||||||
|
ColumnDef::new(col).unsigned().null().take()
|
||||||
|
}
|
||||||
|
|
||||||
|
pub fn unsigned_uniq<T: IntoIden>(col: T) -> ColumnDef {
|
||||||
|
unsigned(col).unique_key().take()
|
||||||
|
}
|
||||||
|
|
||||||
|
pub fn big_unsigned<T: IntoIden>(col: T) -> ColumnDef {
|
||||||
|
ColumnDef::new(col).big_unsigned().not_null().take()
|
||||||
|
}
|
||||||
|
|
||||||
|
pub fn big_unsigned_null<T: IntoIden>(col: T) -> ColumnDef {
|
||||||
|
ColumnDef::new(col).big_unsigned().null().take()
|
||||||
|
}
|
||||||
|
|
||||||
|
pub fn big_unsigned_uniq<T: IntoIden>(col: T) -> ColumnDef {
|
||||||
|
big_unsigned(col).unique_key().take()
|
||||||
|
}
|
||||||
|
|
||||||
|
pub fn float<T: IntoIden>(col: T) -> ColumnDef {
|
||||||
|
ColumnDef::new(col).float().not_null().take()
|
||||||
|
}
|
||||||
|
|
||||||
|
pub fn float_null<T: IntoIden>(col: T) -> ColumnDef {
|
||||||
|
ColumnDef::new(col).float().null().take()
|
||||||
|
}
|
||||||
|
|
||||||
|
pub fn float_uniq<T: IntoIden>(col: T) -> ColumnDef {
|
||||||
|
float(col).unique_key().take()
|
||||||
|
}
|
||||||
|
|
||||||
|
pub fn double<T: IntoIden>(col: T) -> ColumnDef {
|
||||||
|
ColumnDef::new(col).double().not_null().take()
|
||||||
|
}
|
||||||
|
|
||||||
|
pub fn double_null<T: IntoIden>(col: T) -> ColumnDef {
|
||||||
|
ColumnDef::new(col).double().null().take()
|
||||||
|
}
|
||||||
|
|
||||||
|
pub fn double_uniq<T: IntoIden>(col: T) -> ColumnDef {
|
||||||
|
double(col).unique_key().take()
|
||||||
|
}
|
||||||
|
|
||||||
|
pub fn decimal_len<T: IntoIden>(col: T, precision: u32, scale: u32) -> ColumnDef {
|
||||||
|
ColumnDef::new(col)
|
||||||
|
.decimal_len(precision, scale)
|
||||||
|
.not_null()
|
||||||
|
.take()
|
||||||
|
}
|
||||||
|
|
||||||
|
pub fn decimal_len_null<T: IntoIden>(col: T, precision: u32, scale: u32) -> ColumnDef {
|
||||||
|
ColumnDef::new(col)
|
||||||
|
.decimal_len(precision, scale)
|
||||||
|
.null()
|
||||||
|
.take()
|
||||||
|
}
|
||||||
|
|
||||||
|
pub fn decimal_len_uniq<T: IntoIden>(col: T, precision: u32, scale: u32) -> ColumnDef {
|
||||||
|
decimal_len(col, precision, scale).unique_key().take()
|
||||||
|
}
|
||||||
|
|
||||||
|
pub fn decimal<T: IntoIden>(col: T) -> ColumnDef {
|
||||||
|
ColumnDef::new(col).decimal().not_null().take()
|
||||||
|
}
|
||||||
|
|
||||||
|
pub fn decimal_null<T: IntoIden>(col: T) -> ColumnDef {
|
||||||
|
ColumnDef::new(col).decimal().null().take()
|
||||||
|
}
|
||||||
|
|
||||||
|
pub fn decimal_uniq<T: IntoIden>(col: T) -> ColumnDef {
|
||||||
|
decimal(col).unique_key().take()
|
||||||
|
}
|
||||||
|
|
||||||
|
pub fn date_time<T: IntoIden>(col: T) -> ColumnDef {
|
||||||
|
ColumnDef::new(col).date_time().not_null().take()
|
||||||
|
}
|
||||||
|
|
||||||
|
pub fn date_time_null<T: IntoIden>(col: T) -> ColumnDef {
|
||||||
|
ColumnDef::new(col).date_time().null().take()
|
||||||
|
}
|
||||||
|
|
||||||
|
pub fn date_time_uniq<T: IntoIden>(col: T) -> ColumnDef {
|
||||||
|
date_time(col).unique_key().take()
|
||||||
|
}
|
||||||
|
|
||||||
|
pub fn interval<T: IntoIden>(
|
||||||
|
col: T,
|
||||||
|
fields: Option<PgInterval>,
|
||||||
|
precision: Option<u32>,
|
||||||
|
) -> ColumnDef {
|
||||||
|
ColumnDef::new(col)
|
||||||
|
.interval(fields, precision)
|
||||||
|
.not_null()
|
||||||
|
.take()
|
||||||
|
}
|
||||||
|
|
||||||
|
pub fn interval_null<T: IntoIden>(
|
||||||
|
col: T,
|
||||||
|
fields: Option<PgInterval>,
|
||||||
|
precision: Option<u32>,
|
||||||
|
) -> ColumnDef {
|
||||||
|
ColumnDef::new(col)
|
||||||
|
.interval(fields, precision)
|
||||||
|
.null()
|
||||||
|
.take()
|
||||||
|
}
|
||||||
|
|
||||||
|
pub fn interval_uniq<T: IntoIden>(
|
||||||
|
col: T,
|
||||||
|
fields: Option<PgInterval>,
|
||||||
|
precision: Option<u32>,
|
||||||
|
) -> ColumnDef {
|
||||||
|
interval(col, fields, precision).unique_key().take()
|
||||||
|
}
|
||||||
|
|
||||||
|
pub fn timestamp<T: IntoIden>(col: T) -> ColumnDef {
|
||||||
|
ColumnDef::new(col).timestamp().not_null().take()
|
||||||
|
}
|
||||||
|
|
||||||
|
pub fn timestamp_null<T: IntoIden>(col: T) -> ColumnDef {
|
||||||
|
ColumnDef::new(col).timestamp().null().take()
|
||||||
|
}
|
||||||
|
|
||||||
|
pub fn timestamp_uniq<T: IntoIden>(col: T) -> ColumnDef {
|
||||||
|
timestamp(col).unique_key().take()
|
||||||
|
}
|
||||||
|
|
||||||
|
pub fn timestamp_with_time_zone<T: IntoIden>(col: T) -> ColumnDef {
|
||||||
|
ColumnDef::new(col)
|
||||||
|
.timestamp_with_time_zone()
|
||||||
|
.not_null()
|
||||||
|
.take()
|
||||||
|
}
|
||||||
|
|
||||||
|
pub fn timestamp_with_time_zone_null<T: IntoIden>(col: T) -> ColumnDef {
|
||||||
|
ColumnDef::new(col).timestamp_with_time_zone().null().take()
|
||||||
|
}
|
||||||
|
|
||||||
|
pub fn timestamp_with_time_zone_uniq<T: IntoIden>(col: T) -> ColumnDef {
|
||||||
|
timestamp_with_time_zone(col).unique_key().take()
|
||||||
|
}
|
||||||
|
|
||||||
|
pub fn time<T: IntoIden>(col: T) -> ColumnDef {
|
||||||
|
ColumnDef::new(col).time().not_null().take()
|
||||||
|
}
|
||||||
|
|
||||||
|
pub fn time_null<T: IntoIden>(col: T) -> ColumnDef {
|
||||||
|
ColumnDef::new(col).time().null().take()
|
||||||
|
}
|
||||||
|
|
||||||
|
pub fn time_uniq<T: IntoIden>(col: T) -> ColumnDef {
|
||||||
|
time(col).unique_key().take()
|
||||||
|
}
|
||||||
|
|
||||||
|
pub fn date<T: IntoIden>(col: T) -> ColumnDef {
|
||||||
|
ColumnDef::new(col).date().not_null().take()
|
||||||
|
}
|
||||||
|
|
||||||
|
pub fn date_null<T: IntoIden>(col: T) -> ColumnDef {
|
||||||
|
ColumnDef::new(col).date().null().take()
|
||||||
|
}
|
||||||
|
|
||||||
|
pub fn date_uniq<T: IntoIden>(col: T) -> ColumnDef {
|
||||||
|
date(col).unique_key().take()
|
||||||
|
}
|
||||||
|
|
||||||
|
pub fn year<T: IntoIden>(col: T) -> ColumnDef {
|
||||||
|
ColumnDef::new(col).year().not_null().take()
|
||||||
|
}
|
||||||
|
|
||||||
|
pub fn year_null<T: IntoIden>(col: T) -> ColumnDef {
|
||||||
|
ColumnDef::new(col).year().null().take()
|
||||||
|
}
|
||||||
|
|
||||||
|
pub fn year_uniq<T: IntoIden>(col: T) -> ColumnDef {
|
||||||
|
year(col).unique_key().take()
|
||||||
|
}
|
||||||
|
|
||||||
|
pub fn binary_len<T: IntoIden>(col: T, length: u32) -> ColumnDef {
|
||||||
|
ColumnDef::new(col).binary_len(length).not_null().take()
|
||||||
|
}
|
||||||
|
|
||||||
|
pub fn binary_len_null<T: IntoIden>(col: T, length: u32) -> ColumnDef {
|
||||||
|
ColumnDef::new(col).binary_len(length).null().take()
|
||||||
|
}
|
||||||
|
|
||||||
|
pub fn binary_len_uniq<T: IntoIden>(col: T, length: u32) -> ColumnDef {
|
||||||
|
binary_len(col, length).unique_key().take()
|
||||||
|
}
|
||||||
|
|
||||||
|
pub fn binary<T: IntoIden>(col: T) -> ColumnDef {
|
||||||
|
ColumnDef::new(col).binary().not_null().take()
|
||||||
|
}
|
||||||
|
|
||||||
|
pub fn binary_null<T: IntoIden>(col: T) -> ColumnDef {
|
||||||
|
ColumnDef::new(col).binary().null().take()
|
||||||
|
}
|
||||||
|
|
||||||
|
pub fn binary_uniq<T: IntoIden>(col: T) -> ColumnDef {
|
||||||
|
binary(col).unique_key().take()
|
||||||
|
}
|
||||||
|
|
||||||
|
pub fn var_binary<T: IntoIden>(col: T, length: u32) -> ColumnDef {
|
||||||
|
ColumnDef::new(col).var_binary(length).not_null().take()
|
||||||
|
}
|
||||||
|
|
||||||
|
pub fn var_binary_null<T: IntoIden>(col: T, length: u32) -> ColumnDef {
|
||||||
|
ColumnDef::new(col).var_binary(length).null().take()
|
||||||
|
}
|
||||||
|
|
||||||
|
pub fn var_binary_uniq<T: IntoIden>(col: T, length: u32) -> ColumnDef {
|
||||||
|
var_binary(col, length).unique_key().take()
|
||||||
|
}
|
||||||
|
|
||||||
|
pub fn bit<T: IntoIden>(col: T, length: Option<u32>) -> ColumnDef {
|
||||||
|
ColumnDef::new(col).bit(length).not_null().take()
|
||||||
|
}
|
||||||
|
|
||||||
|
pub fn bit_null<T: IntoIden>(col: T, length: Option<u32>) -> ColumnDef {
|
||||||
|
ColumnDef::new(col).bit(length).null().take()
|
||||||
|
}
|
||||||
|
|
||||||
|
pub fn bit_uniq<T: IntoIden>(col: T, length: Option<u32>) -> ColumnDef {
|
||||||
|
bit(col, length).unique_key().take()
|
||||||
|
}
|
||||||
|
|
||||||
|
pub fn varbit<T: IntoIden>(col: T, length: u32) -> ColumnDef {
|
||||||
|
ColumnDef::new(col).varbit(length).not_null().take()
|
||||||
|
}
|
||||||
|
|
||||||
|
pub fn varbit_null<T: IntoIden>(col: T, length: u32) -> ColumnDef {
|
||||||
|
ColumnDef::new(col).varbit(length).null().take()
|
||||||
|
}
|
||||||
|
|
||||||
|
pub fn varbit_uniq<T: IntoIden>(col: T, length: u32) -> ColumnDef {
|
||||||
|
varbit(col, length).unique_key().take()
|
||||||
|
}
|
||||||
|
|
||||||
|
pub fn blob<T: IntoIden>(col: T) -> ColumnDef {
|
||||||
|
ColumnDef::new(col).blob().not_null().take()
|
||||||
|
}
|
||||||
|
|
||||||
|
pub fn blob_null<T: IntoIden>(col: T) -> ColumnDef {
|
||||||
|
ColumnDef::new(col).blob().null().take()
|
||||||
|
}
|
||||||
|
|
||||||
|
pub fn blob_uniq<T: IntoIden>(col: T) -> ColumnDef {
|
||||||
|
blob(col).unique_key().take()
|
||||||
|
}
|
||||||
|
|
||||||
|
pub fn boolean<T: IntoIden>(col: T) -> ColumnDef {
|
||||||
|
ColumnDef::new(col).boolean().not_null().take()
|
||||||
|
}
|
||||||
|
|
||||||
|
pub fn boolean_null<T: IntoIden>(col: T) -> ColumnDef {
|
||||||
|
ColumnDef::new(col).boolean().null().take()
|
||||||
|
}
|
||||||
|
|
||||||
|
pub fn boolean_uniq<T: IntoIden>(col: T) -> ColumnDef {
|
||||||
|
boolean(col).unique_key().take()
|
||||||
|
}
|
||||||
|
|
||||||
|
pub fn money_len<T: IntoIden>(col: T, precision: u32, scale: u32) -> ColumnDef {
|
||||||
|
ColumnDef::new(col)
|
||||||
|
.money_len(precision, scale)
|
||||||
|
.not_null()
|
||||||
|
.take()
|
||||||
|
}
|
||||||
|
|
||||||
|
pub fn money_len_null<T: IntoIden>(col: T, precision: u32, scale: u32) -> ColumnDef {
|
||||||
|
ColumnDef::new(col)
|
||||||
|
.money_len(precision, scale)
|
||||||
|
.null()
|
||||||
|
.take()
|
||||||
|
}
|
||||||
|
|
||||||
|
pub fn money_len_uniq<T: IntoIden>(col: T, precision: u32, scale: u32) -> ColumnDef {
|
||||||
|
money_len(col, precision, scale).unique_key().take()
|
||||||
|
}
|
||||||
|
|
||||||
|
pub fn money<T: IntoIden>(col: T) -> ColumnDef {
|
||||||
|
ColumnDef::new(col).money().not_null().take()
|
||||||
|
}
|
||||||
|
|
||||||
|
pub fn money_null<T: IntoIden>(col: T) -> ColumnDef {
|
||||||
|
ColumnDef::new(col).money().null().take()
|
||||||
|
}
|
||||||
|
|
||||||
|
pub fn money_uniq<T: IntoIden>(col: T) -> ColumnDef {
|
||||||
|
money(col).unique_key().take()
|
||||||
|
}
|
||||||
|
|
||||||
|
pub fn json<T: IntoIden>(col: T) -> ColumnDef {
|
||||||
|
ColumnDef::new(col).json().not_null().take()
|
||||||
|
}
|
||||||
|
|
||||||
|
pub fn json_null<T: IntoIden>(col: T) -> ColumnDef {
|
||||||
|
ColumnDef::new(col).json().null().take()
|
||||||
|
}
|
||||||
|
|
||||||
|
pub fn json_uniq<T: IntoIden>(col: T) -> ColumnDef {
|
||||||
|
json(col).unique_key().take()
|
||||||
|
}
|
||||||
|
|
||||||
|
pub fn json_binary<T: IntoIden>(col: T) -> ColumnDef {
|
||||||
|
ColumnDef::new(col).json_binary().not_null().take()
|
||||||
|
}
|
||||||
|
|
||||||
|
pub fn json_binary_null<T: IntoIden>(col: T) -> ColumnDef {
|
||||||
|
ColumnDef::new(col).json_binary().null().take()
|
||||||
|
}
|
||||||
|
|
||||||
|
pub fn json_binary_uniq<T: IntoIden>(col: T) -> ColumnDef {
|
||||||
|
json_binary(col).unique_key().take()
|
||||||
|
}
|
||||||
|
|
||||||
|
pub fn uuid<T: IntoIden>(col: T) -> ColumnDef {
|
||||||
|
ColumnDef::new(col).uuid().not_null().take()
|
||||||
|
}
|
||||||
|
|
||||||
|
pub fn uuid_null<T: IntoIden>(col: T) -> ColumnDef {
|
||||||
|
ColumnDef::new(col).uuid().null().take()
|
||||||
|
}
|
||||||
|
|
||||||
|
pub fn uuid_uniq<T: IntoIden>(col: T) -> ColumnDef {
|
||||||
|
uuid(col).unique_key().take()
|
||||||
|
}
|
||||||
|
|
||||||
|
pub fn custom<T: IntoIden>(col: T, name: T) -> ColumnDef {
|
||||||
|
ColumnDef::new(col).custom(name).not_null().take()
|
||||||
|
}
|
||||||
|
|
||||||
|
pub fn custom_null<T: IntoIden>(col: T, name: T) -> ColumnDef {
|
||||||
|
ColumnDef::new(col).custom(name).null().take()
|
||||||
|
}
|
||||||
|
|
||||||
|
pub fn enumeration<T, N, S, V>(col: T, name: N, variants: V) -> ColumnDef
|
||||||
|
where
|
||||||
|
T: IntoIden,
|
||||||
|
N: IntoIden,
|
||||||
|
S: IntoIden,
|
||||||
|
V: IntoIterator<Item = S>,
|
||||||
|
{
|
||||||
|
ColumnDef::new(col)
|
||||||
|
.enumeration(name, variants)
|
||||||
|
.not_null()
|
||||||
|
.take()
|
||||||
|
}
|
||||||
|
|
||||||
|
pub fn enumeration_null<T, N, S, V>(col: T, name: N, variants: V) -> ColumnDef
|
||||||
|
where
|
||||||
|
T: IntoIden,
|
||||||
|
N: IntoIden,
|
||||||
|
S: IntoIden,
|
||||||
|
V: IntoIterator<Item = S>,
|
||||||
|
{
|
||||||
|
ColumnDef::new(col)
|
||||||
|
.enumeration(name, variants)
|
||||||
|
.null()
|
||||||
|
.take()
|
||||||
|
}
|
||||||
|
|
||||||
|
pub fn enumeration_uniq<T, N, S, V>(col: T, name: N, variants: V) -> ColumnDef
|
||||||
|
where
|
||||||
|
T: IntoIden,
|
||||||
|
N: IntoIden,
|
||||||
|
S: IntoIden,
|
||||||
|
V: IntoIterator<Item = S>,
|
||||||
|
{
|
||||||
|
enumeration(col, name, variants).unique_key().take()
|
||||||
|
}
|
||||||
|
|
||||||
|
pub fn array<T: IntoIden>(col: T, elem_type: ColumnType) -> ColumnDef {
|
||||||
|
ColumnDef::new(col).array(elem_type).not_null().take()
|
||||||
|
}
|
||||||
|
|
||||||
|
pub fn array_null<T: IntoIden>(col: T, elem_type: ColumnType) -> ColumnDef {
|
||||||
|
ColumnDef::new(col).array(elem_type).null().take()
|
||||||
|
}
|
||||||
|
|
||||||
|
pub fn array_uniq<T: IntoIden>(col: T, elem_type: ColumnType) -> ColumnDef {
|
||||||
|
array(col, elem_type).unique_key().take()
|
||||||
|
}
|
||||||
|
|
||||||
|
/// Add timestamp columns (`CreatedAt` and `UpdatedAt`) to an existing table.
|
||||||
|
pub fn timestamps(t: TableCreateStatement) -> TableCreateStatement {
|
||||||
|
let mut t = t;
|
||||||
|
t.col(timestamp(GeneralIds::CreatedAt).default(Expr::current_timestamp()))
|
||||||
|
.col(timestamp(GeneralIds::UpdatedAt).default(Expr::current_timestamp()))
|
||||||
|
.take()
|
||||||
|
}
|
||||||
|
|
||||||
|
/// Create an Alias.
|
||||||
|
pub fn name<T: Into<String>>(name: T) -> Alias {
|
||||||
|
Alias::new(name)
|
||||||
|
}
|
||||||
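The helpers above compose into concise migration definitions: each returns an owned `ColumnDef`, so they can be passed straight to `TableCreateStatement::col`, and `timestamps` appends the audit columns at the end. As an illustrative sketch (the `Users` identifiers are hypothetical, and the helpers plus the SeaQuery types are assumed to be in scope via the crate prelude):

```rust
use sea_orm_migration::prelude::*;

// Hypothetical table identifiers for illustration only.
#[derive(Iden)]
enum Users {
    Table,
    Id,
    Email,
    Bio,
}

fn create_users_table() -> TableCreateStatement {
    // `timestamps` wraps the statement to add CreatedAt/UpdatedAt columns.
    timestamps(
        Table::create()
            .table(Users::Table)
            .col(integer(Users::Id).auto_increment().primary_key())
            .col(string_len_uniq(Users::Email, 255)) // VARCHAR(255) NOT NULL UNIQUE
            .col(text_null(Users::Bio)) // nullable TEXT
            .take(),
    )
}
```

Note the naming convention throughout the module: the bare name builds a `NOT NULL` column, the `_null` suffix a nullable one, and `_uniq` layers `UNIQUE KEY` on top of the `NOT NULL` variant.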
packages/pagetop-seaorm/src/db/migration/seaql_migrations.rs (new file, 15 lines)
use sea_orm::entity::prelude::*;

#[derive(Clone, Debug, PartialEq, Eq, DeriveEntityModel)]
// The name of the migration table can be overridden via the
// `MigratorTrait::migration_table_name` method.
#[sea_orm(table_name = "seaql_migrations")]
pub struct Model {
    #[sea_orm(primary_key, auto_increment = false)]
    pub version: String,
    pub applied_at: i64,
}

#[derive(Copy, Clone, Debug, EnumIter, DeriveRelation)]
pub enum Relation {}

impl ActiveModelBehavior for ActiveModel {}
packages/pagetop-seaorm/src/lib.rs (new file, 31 lines)
use pagetop::prelude::*;

use std::sync::LazyLock;

pub mod config;
pub mod db;

/// The package Prelude.
pub mod prelude {
    pub use crate::db::*;
    pub use crate::install_migrations;
}

include_locales!(LOCALES_SEAORM);

/// Implements [`PackageTrait`](pagetop::core::package::PackageTrait) and the package-specific API.
pub struct SeaORM;

impl PackageTrait for SeaORM {
    fn name(&self) -> L10n {
        L10n::t("package_name", &LOCALES_SEAORM)
    }

    fn description(&self) -> L10n {
        L10n::t("package_description", &LOCALES_SEAORM)
    }

    fn init(&self) {
        LazyLock::force(&db::DBCONN);
    }
}
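The `init` implementation forces the lazily initialized `db::DBCONN`, so the database connection is established when the package starts rather than on first query. An application would opt in by declaring `SeaORM` among its package dependencies; the following sketch assumes PageTop's `PackageRef`/`dependencies` API, which may differ across versions:

```rust
use pagetop::prelude::*;
use pagetop_seaorm::SeaORM;

struct MyApp;

impl PackageTrait for MyApp {
    // Declaring SeaORM as a dependency ensures its init() runs,
    // which forces db::DBCONN and opens the configured connection.
    fn dependencies(&self) -> Vec<PackageRef> {
        vec![&SeaORM]
    }
}
```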
packages/pagetop-seaorm/src/locale/en-US/package.ftl (new file, 2 lines)
package_name = SeaORM support
package_description = Integrate SeaORM as the database framework for PageTop applications.
packages/pagetop-seaorm/src/locale/es-ES/package.ftl (new file, 2 lines)
package_name = Soporte a SeaORM
package_description = Integra SeaORM como framework de base de datos para aplicaciones PageTop.